modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
Tato-21/RL_Unit1 | Tato-21 | 2025-09-02T03:54:51Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2025-09-02T03:54:35Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 289.09 +/- 11.11
      name: mean_reward
      verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub (filename is assumed) and load it.
checkpoint = load_from_hub(repo_id="Tato-21/RL_Unit1", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
omerbkts/blockassist-bc-insectivorous_bold_lion_1756785037 | omerbkts | 2025-09-02T03:51:00Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "insectivorous bold lion", "arxiv:2504.07091", "region:us"] | null | 2025-09-02T03:50:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1756783274 | coelacanthxyz | 2025-09-02T03:49:25Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "finicky thriving grouse", "arxiv:2504.07091", "region:us"] | null | 2025-09-02T03:49:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky thriving grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tiopuiter/blockassist-bc-amphibious_knobby_leopard_1756784905 | tiopuiter | 2025-09-02T03:48:45Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious knobby leopard", "arxiv:2504.07091", "region:us"] | null | 2025-09-02T03:48:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious knobby leopard
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
forstseh/blockassist-bc-arctic_soaring_heron_1756784436 | forstseh | 2025-09-02T03:46:38Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "arctic soaring heron", "arxiv:2504.07091", "region:us"] | null | 2025-09-02T03:46:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- arctic soaring heron
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbkts/blockassist-bc-insectivorous_bold_lion_1756784647 | omerbkts | 2025-09-02T03:44:28Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "insectivorous bold lion", "arxiv:2504.07091", "region:us"] | null | 2025-09-02T03:44:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Andra76/blockassist-bc-deadly_enormous_butterfly_1756783775 | Andra76 | 2025-09-02T03:40:24Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly enormous butterfly", "arxiv:2504.07091", "region:us"] | null | 2025-09-02T03:39:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly enormous butterfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/llama-3-meerkat-70b-v1.0-GGUF | mradermacher | 2025-09-02T03:39:58Z | 0 | 0 | transformers | ["transformers", "gguf", "medical", "small LM", "instruction-tuned", "usmle", "synthetic data", "en", "base_model:dmis-lab/llama-3-meerkat-70b-v1.0", "base_model:quantized:dmis-lab/llama-3-meerkat-70b-v1.0", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational"] | null | 2025-09-01T13:54:57Z |
---
base_model: dmis-lab/llama-3-meerkat-70b-v1.0
language:
- en
library_name: transformers
license: cc-by-nc-4.0
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- medical
- small LM
- instruction-tuned
- usmle
- synthetic data
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/dmis-lab/llama-3-meerkat-70b-v1.0
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#llama-3-meerkat-70b-v1.0-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
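A minimal sketch of that concatenation step, assuming the two Q6_K part files are plain byte splits (as the linked README describes) and already sit in the working directory:
```python
# Hedged sketch: join split GGUF parts by simple byte concatenation.
import shutil

parts = [
    "llama-3-meerkat-70b-v1.0.Q6_K.gguf.part1of2",
    "llama-3-meerkat-70b-v1.0.Q6_K.gguf.part2of2",
]
with open("llama-3-meerkat-70b-v1.0.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # append each part in order
```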
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama-3-meerkat-70b-v1.0-GGUF/resolve/main/llama-3-meerkat-70b-v1.0.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-meerkat-70b-v1.0-GGUF/resolve/main/llama-3-meerkat-70b-v1.0.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-meerkat-70b-v1.0-GGUF/resolve/main/llama-3-meerkat-70b-v1.0.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-meerkat-70b-v1.0-GGUF/resolve/main/llama-3-meerkat-70b-v1.0.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-meerkat-70b-v1.0-GGUF/resolve/main/llama-3-meerkat-70b-v1.0.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-meerkat-70b-v1.0-GGUF/resolve/main/llama-3-meerkat-70b-v1.0.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-meerkat-70b-v1.0-GGUF/resolve/main/llama-3-meerkat-70b-v1.0.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-meerkat-70b-v1.0-GGUF/resolve/main/llama-3-meerkat-70b-v1.0.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-meerkat-70b-v1.0-GGUF/resolve/main/llama-3-meerkat-70b-v1.0.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/llama-3-meerkat-70b-v1.0-GGUF/resolve/main/llama-3-meerkat-70b-v1.0.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/llama-3-meerkat-70b-v1.0-GGUF/resolve/main/llama-3-meerkat-70b-v1.0.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/llama-3-meerkat-70b-v1.0-GGUF/resolve/main/llama-3-meerkat-70b-v1.0.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/llama-3-meerkat-70b-v1.0-GGUF/resolve/main/llama-3-meerkat-70b-v1.0.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/DeepSeek-GRM-16B-GGUF | mradermacher | 2025-09-02T03:39:50Z | 0 | 0 | transformers | ["transformers", "gguf", "zh", "en", "dataset:openbmb/UltraFeedback", "dataset:NCSOFT/offsetbias", "dataset:Skywork/Skywork-Reward-Preference-80K-v0.2", "dataset:nvidia/HelpSteer2", "base_model:BBQGOD/DeepSeek-GRM-16B", "base_model:quantized:BBQGOD/DeepSeek-GRM-16B", "license:other", "endpoints_compatible", "region:us", "conversational"] | null | 2025-09-02T00:03:56Z |
---
base_model: BBQGOD/DeepSeek-GRM-16B
datasets:
- openbmb/UltraFeedback
- NCSOFT/offsetbias
- Skywork/Skywork-Reward-Preference-80K-v0.2
- nvidia/HelpSteer2
language:
- zh
- en
library_name: transformers
license: other
license_link: LICENSE
license_name: deepseek
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/BBQGOD/DeepSeek-GRM-16B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#DeepSeek-GRM-16B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/DeepSeek-GRM-16B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-GRM-16B-GGUF/resolve/main/DeepSeek-GRM-16B.Q2_K.gguf) | Q2_K | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-GRM-16B-GGUF/resolve/main/DeepSeek-GRM-16B.Q3_K_S.gguf) | Q3_K_S | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-GRM-16B-GGUF/resolve/main/DeepSeek-GRM-16B.Q3_K_M.gguf) | Q3_K_M | 8.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-GRM-16B-GGUF/resolve/main/DeepSeek-GRM-16B.Q3_K_L.gguf) | Q3_K_L | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-GRM-16B-GGUF/resolve/main/DeepSeek-GRM-16B.IQ4_XS.gguf) | IQ4_XS | 8.7 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-GRM-16B-GGUF/resolve/main/DeepSeek-GRM-16B.Q4_K_S.gguf) | Q4_K_S | 9.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-GRM-16B-GGUF/resolve/main/DeepSeek-GRM-16B.Q4_K_M.gguf) | Q4_K_M | 10.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-GRM-16B-GGUF/resolve/main/DeepSeek-GRM-16B.Q5_K_S.gguf) | Q5_K_S | 11.2 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-GRM-16B-GGUF/resolve/main/DeepSeek-GRM-16B.Q5_K_M.gguf) | Q5_K_M | 12.0 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-GRM-16B-GGUF/resolve/main/DeepSeek-GRM-16B.Q6_K.gguf) | Q6_K | 14.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-GRM-16B-GGUF/resolve/main/DeepSeek-GRM-16B.Q8_0.gguf) | Q8_0 | 16.8 | fast, best quality |
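As one way to run the single-file quants above, here is a hedged sketch using llama-cpp-python (my choice of runtime, not the author's; any GGUF-compatible loader works), fetching a quant directly from this repo:
```python
# Hedged sketch: load one of the quants above with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/DeepSeek-GRM-16B-GGUF",
    filename="DeepSeek-GRM-16B.Q4_K_M.gguf",  # any quant from the table
    n_ctx=4096,
)
result = llm("Explain what a generative reward model does.", max_tokens=64)
print(result["choices"][0]["text"])
```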
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/70B_unstruct-GGUF | mradermacher | 2025-09-02T03:39:48Z | 0 | 0 | transformers | ["transformers", "gguf", "mergekit", "merge", "en", "base_model:Jolly-Q/70B_unstruct", "base_model:quantized:Jolly-Q/70B_unstruct", "endpoints_compatible", "region:us", "conversational"] | null | 2025-09-01T16:59:40Z |
---
base_model: Jolly-Q/70B_unstruct
language:
- en
library_name: transformers
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Jolly-Q/70B_unstruct
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#70B_unstruct-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/70B_unstruct-GGUF/resolve/main/70B_unstruct.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/70B_unstruct-GGUF/resolve/main/70B_unstruct.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/70B_unstruct-GGUF/resolve/main/70B_unstruct.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/70B_unstruct-GGUF/resolve/main/70B_unstruct.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/70B_unstruct-GGUF/resolve/main/70B_unstruct.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/70B_unstruct-GGUF/resolve/main/70B_unstruct.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/70B_unstruct-GGUF/resolve/main/70B_unstruct.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/70B_unstruct-GGUF/resolve/main/70B_unstruct.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/70B_unstruct-GGUF/resolve/main/70B_unstruct.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/70B_unstruct-GGUF/resolve/main/70B_unstruct.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/70B_unstruct-GGUF/resolve/main/70B_unstruct.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/70B_unstruct-GGUF/resolve/main/70B_unstruct.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/70B_unstruct-GGUF/resolve/main/70B_unstruct.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756784214 | liukevin666 | 2025-09-02T03:38:14Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us"] | null | 2025-09-02T03:37:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbkts/blockassist-bc-insectivorous_bold_lion_1756784257 | omerbkts | 2025-09-02T03:38:01Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "insectivorous bold lion", "arxiv:2504.07091", "region:us"] | null | 2025-09-02T03:37:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
david3621/blockassist-bc-gentle_meek_cat_1756782911 | david3621 | 2025-09-02T03:37:44Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle meek cat", "arxiv:2504.07091", "region:us"] | null | 2025-09-02T03:30:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle meek cat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-insectivorous_bold_lion_1756784107 | akirafudo | 2025-09-02T03:36:01Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "insectivorous bold lion", "arxiv:2504.07091", "region:us"] | null | 2025-09-02T03:35:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
amandacute/blockassist-bc-amphibious_plump_ram_1756784106 | amandacute | 2025-09-02T03:35:45Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious plump ram", "arxiv:2504.07091", "region:us"] | null | 2025-09-02T03:35:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious plump ram
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kmpartner/k5pcmlra2-test | kmpartner | 2025-09-02T03:34:46Z | 233 | 0 | peft | ["peft", "tensorboard", "diffusers", "safetensors", "arxiv:1910.09700", "base_model:segmind/Segmind-Vega", "base_model:adapter:segmind/Segmind-Vega", "region:us"] | null | 2025-08-09T06:08:24Z |
---
library_name: peft
base_model: segmind/Segmind-Vega
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
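The template above is unfilled; as a hedged starting point, here is one plausible loading sketch inferred from the card metadata (base model segmind/Segmind-Vega, diffusers/peft tags). The weight format is an assumption, so verify against the repo's files:
```python
# Hedged sketch only: assumes this repo holds diffusers-format LoRA weights
# trained against the segmind/Segmind-Vega base named in the metadata.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "segmind/Segmind-Vega", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("kmpartner/k5pcmlra2-test")  # adapter repo from this card
image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("sample.png")
```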
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.9.0
|
ChenWu98/numina_qwen_2.5_sft_combine_v1_source_split_0 | ChenWu98 | 2025-09-02T03:34:02Z | 0 | 0 | transformers | ["transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:Qwen/Qwen2.5-1.5B", "base_model:finetune:Qwen/Qwen2.5-1.5B", "endpoints_compatible", "region:us"] | null | 2025-09-02T03:33:04Z |
---
base_model: Qwen/Qwen2.5-1.5B
library_name: transformers
model_name: numina_qwen_2.5_sft_combine_v1_source_split_0
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for numina_qwen_2.5_sft_combine_v1_source_split_0
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_sft_combine_v1_source_split_0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/pyqm8q99)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
HZCDLUT/MoE_Adapters_pp_CLIP_vitL_DIL | HZCDLUT | 2025-09-02T03:30:04Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-09-02T03:30:04Z |
---
license: apache-2.0
---
|
acidjp/blockassist-bc-pesty_extinct_prawn_1756781425 | acidjp | 2025-09-02T03:28:39Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pesty extinct prawn", "arxiv:2504.07091", "region:us"] | null | 2025-09-02T03:28:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pesty extinct prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
parm-at-straker/MyGemmaNPC | parm-at-straker | 2025-09-02T03:28:19Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "gemma3_text", "text-generation", "generated_from_trainer", "sft", "trl", "conversational", "base_model:google/gemma-3-270m-it", "base_model:finetune:google/gemma-3-270m-it", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-02T03:24:41Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: MyGemmaNPC
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="parm-at-straker/MyGemmaNPC", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.22.1
- Transformers: 4.55.4
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
amandacute/blockassist-bc-amphibious_plump_ram_1756783646 | amandacute | 2025-09-02T03:28:02Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious plump ram", "arxiv:2504.07091", "region:us"] | null | 2025-09-02T03:27:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious plump ram
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nick1880/blockassist-bc-barky_powerful_falcon_1756783551 | nick1880 | 2025-09-02T03:26:48Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "barky powerful falcon", "arxiv:2504.07091", "region:us"] | null | 2025-09-02T03:26:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- barky powerful falcon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
amandacute/blockassist-bc-amphibious_plump_ram_1756783538 | amandacute | 2025-09-02T03:26:12Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious plump ram", "arxiv:2504.07091", "region:us"] | null | 2025-09-02T03:26:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious plump ram
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
moyixiao/Qwen3-0.6B-GRPO-f16 | moyixiao | 2025-09-02T03:25:57Z | 7 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-27T17:23:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
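The template above is unfilled; assuming a standard causal-LM checkpoint (which the qwen3/text-generation tags suggest), a hedged starting point might look like:
```python
# Hedged sketch: standard transformers text generation for this checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "moyixiao/Qwen3-0.6B-GRPO-f16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Briefly explain GRPO.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```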
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
runchat/lora-24cf691f-1d30-4f05-a39c-70053b2a66cd-pwa6iw | runchat | 2025-09-02T03:24:06Z | 0 | 0 | diffusers | ["diffusers", "flux", "lora", "text-to-image", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2025-09-02T03:23:59Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
base_model: black-forest-labs/FLUX.1-dev
tags:
- flux
- lora
- diffusers
- text-to-image
widget:
- text: 'a photo of a sks style'
  output:
    url: "placeholder.jpg"
---
# Flux LoRA: sks
This is a LoRA (Low-Rank Adaptation) model for Flux.1-dev fine-tuned on images with the trigger word `sks`.
## Files
- `pytorch_lora_weights.safetensors`: Diffusers format (use with diffusers library)
- `pytorch_lora_weights_webui.safetensors`: Kohya format (use with AUTOMATIC1111, ComfyUI, etc.)
## Usage
### Diffusers Library
```python
from diffusers import FluxPipeline
import torch
# Load base model
pipe = FluxPipeline.from_pretrained(
"black-forest-labs/FLUX.1-dev",
torch_dtype=torch.bfloat16
)
# Load LoRA weights (diffusers format)
pipe.load_lora_weights("runchat/lora-24cf691f-1d30-4f05-a39c-70053b2a66cd-pwa6iw", weight_name="pytorch_lora_weights.safetensors")
pipe = pipe.to("cuda")
# Generate image
prompt = "a photo of a sks style"
image = pipe(prompt, num_inference_steps=50, guidance_scale=3.5).images[0]
image.save("output.png")
```
### WebUI (AUTOMATIC1111, ComfyUI, etc.)
Download the `pytorch_lora_weights_webui.safetensors` file and place it in your WebUI's LoRA directory.
Use the trigger word `sks` in your prompts.
## Training Details
- Base model: black-forest-labs/FLUX.1-dev
- Training steps: 500
- Learning rate: 0.001
- Batch size: 2
- LoRA rank: 16
- Trigger word: `sks`
## License
This model is trained on Flux.1-dev and inherits its non-commercial license. Please see the [license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md) for usage restrictions.
|
akirafudo/blockassist-bc-insectivorous_bold_lion_1756783371 | akirafudo | 2025-09-02T03:23:18Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "insectivorous bold lion", "arxiv:2504.07091", "region:us"] | null | 2025-09-02T03:23:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vivekmakoday/yatayat | vivekmakoday | 2025-09-02T03:22:24Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-09-02T03:22:24Z |
---
license: apache-2.0
---
|
ccchot/HAPPYGANG_FLUX | ccchot | 2025-09-02T03:21:42Z | 0 | 0 | diffusers | ["diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2025-09-02T03:04:11Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: HAPPYGANG
---
# Happygang_Flux
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `HAPPYGANG` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "HAPPYGANG",
"lora_weights": "https://huggingface.co/ccchot/HAPPYGANG_FLUX/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ccchot/HAPPYGANG_FLUX', weight_name='lora.safetensors')
image = pipeline('HAPPYGANG').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/ccchot/HAPPYGANG_FLUX/discussions) to add images that show off what you’ve made with this LoRA.
|
NeoChen1024/gemma-3n-E4B-it-FP8_DYNAMIC | NeoChen1024 | 2025-09-02T03:18:34Z | 15 | 0 | transformers | ["transformers", "safetensors", "gemma3n", "image-text-to-text", "automatic-speech-recognition", "automatic-speech-translation", "audio-text-to-text", "video-text-to-text", "conversational", "arxiv:1905.07830", "arxiv:1905.10044", "arxiv:1911.11641", "arxiv:1904.09728", "arxiv:1705.03551", "arxiv:1911.01547", "arxiv:1907.10641", "arxiv:1903.00161", "arxiv:2210.03057", "arxiv:2502.12404", "arxiv:2411.19799", "arxiv:2009.03300", "arxiv:2502.21228", "arxiv:2311.12022", "arxiv:2403.07974", "arxiv:2108.07732", "arxiv:2107.03374", "base_model:google/gemma-3n-E4B-it", "base_model:quantized:google/gemma-3n-E4B-it", "license:gemma", "endpoints_compatible", "compressed-tensors", "region:us"] | image-text-to-text | 2025-07-15T11:00:09Z |
---
license: gemma
library_name: transformers
pipeline_tag: image-text-to-text
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
  To access Gemma on Hugging Face, you’re required to review and agree to
  Google’s usage license. To do this, please ensure you’re logged in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3n-E4B-it
tags:
- automatic-speech-recognition
- automatic-speech-translation
- audio-text-to-text
- video-text-to-text
---
# FP8 Dynamic Quantization of Gemma 3n E4B IT model
> [!Note]
> This repository corresponds to the launch version of Gemma 3n E4B IT (Instruct), to be used with Hugging Face `transformers`,
> supporting text, audio, and vision (image and video) inputs.
>
> Gemma 3n models have multiple architecture innovations:
> * They are available in two sizes based on [effective parameters](https://ai.google.dev/gemma/docs/gemma-3n#parameters). While the raw parameter count of this model is 8B, the architecture design allows the model to be run with a memory footprint comparable to a traditional 4B model by offloading low-utilization matrices from the accelerator.
> * They use a MatFormer architecture that allows nesting sub-models within the E4B model. We provide one sub-model (an [E2B](https://huggingface.co/google/gemma-3n-E2B-it)), or you can access a spectrum of custom-sized models using the [Mix-and-Match method](https://goo.gle/gemma3n-matformer-lab).
>
> Learn more about these techniques in the [technical blog post](https://developers.googleblog.com/en/introducing-gemma-3n-developer-guide)
> and the [Gemma documentation](https://ai.google.dev/gemma/docs/gemma-3n).
# Gemma 3n model card
**Model Page**: [Gemma 3n](https://ai.google.dev/gemma/docs/gemma-3n)
**Resources and Technical Documentation**:
- [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
- [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma-3n)
- [Gemma on HuggingFace](https://huggingface.co/collections/google/gemma-3n-685065323f5984ef315c93f4)
- [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma3n)
**Terms of Use**: [Terms](https://ai.google.dev/gemma/terms)\
**Authors**: Google DeepMind
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
Gemma 3n models are designed for efficient execution on low-resource devices.
They are capable of multimodal input, handling text, image, video, and audio
input, and generating text outputs, with open weights for pre-trained and
instruction-tuned variants. These models were trained with data in over 140
spoken languages.
Gemma 3n models use selective parameter activation technology to reduce resource
requirements. This technique allows the models to operate at an effective size
of 2B and 4B parameters, which is lower than the total number of parameters they
contain. For more information on Gemma 3n's efficient parameter management
technology, see the
[Gemma 3n](https://ai.google.dev/gemma/docs/gemma-3n#parameters)
page.
### Inputs and outputs
- **Input:**
- Text string, such as a question, a prompt, or a document to be
summarized
- Images, normalized to 256x256, 512x512, or 768x768 resolution
and encoded to 256 tokens each
- Audio data encoded to 6.25 tokens per second from a single channel
- Total input context of 32K tokens
- **Output:**
- Generated text in response to the input, such as an answer to a
question, analysis of image content, or a summary of a document
- Total output length up to 32K tokens, subtracting the request
input tokens
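To make those budgets concrete, here is a small back-of-envelope sketch (my own illustration, using only the per-modality figures listed above) of how much input context a mixed prompt consumes:
```python
# Back-of-envelope only: figures taken from the bullet list above.
IMAGE_TOKENS = 256              # per image at 256/512/768 px
AUDIO_TOKENS_PER_SECOND = 6.25  # single-channel audio
CONTEXT_TOKENS = 32_000         # "32K" input context (exact figure may differ)

def remaining_context(n_images: int, audio_seconds: float, text_tokens: int) -> float:
    """Tokens left for output after a mixed multimodal prompt."""
    used = n_images * IMAGE_TOKENS + audio_seconds * AUDIO_TOKENS_PER_SECOND + text_tokens
    return CONTEXT_TOKENS - used

# e.g. two images + 30 s of audio + a 500-token question:
print(remaining_context(2, 30.0, 500))  # prints 30800.5
```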
### Usage
Below, there are some code snippets on how to get quickly started with running
the model. First, install the Transformers library. Gemma 3n is supported
starting from transformers 4.53.0.
```sh
$ pip install -U transformers
```
Then, copy the snippet from the section that is relevant for your use case.
#### Running with the `pipeline` API
You can initialize the model and processor for inference with `pipeline` as
follows.
```python
from transformers import pipeline
import torch
pipe = pipeline(
"image-text-to-text",
model="google/gemma-3n-e4b-it",
device="cuda",
torch_dtype=torch.bfloat16,
)
```
With instruction-tuned models, you need to use chat templates to process your
inputs first. Then, you can pass them to the pipeline.
```python
messages = [
{
"role": "system",
"content": [{"type": "text", "text": "You are a helpful assistant."}]
},
{
"role": "user",
"content": [
{"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
{"type": "text", "text": "What animal is on the candy?"}
]
}
]
output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
# Okay, let's take a look!
# Based on the image, the animal on the candy is a **turtle**.
# You can see the shell shape and the head and legs.
```
#### Running the model on a single GPU
```python
from transformers import AutoProcessor, Gemma3nForConditionalGeneration
from PIL import Image
import requests
import torch
model_id = "google/gemma-3n-e4b-it"
model = Gemma3nForConditionalGeneration.from_pretrained(model_id, device_map="auto", torch_dtype=torch.bfloat16,).eval()
processor = AutoProcessor.from_pretrained(model_id)
messages = [
{
"role": "system",
"content": [{"type": "text", "text": "You are a helpful assistant."}]
},
{
"role": "user",
"content": [
{"type": "image", "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
{"type": "text", "text": "Describe this image in detail."}
]
}
]
inputs = processor.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device)
input_len = inputs["input_ids"].shape[-1]
with torch.inference_mode():
generation = model.generate(**inputs, max_new_tokens=100, do_sample=False)
generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
# **Overall Impression:** The image is a close-up shot of a vibrant garden scene,
# focusing on a cluster of pink cosmos flowers and a busy bumblebee.
# It has a slightly soft, natural feel, likely captured in daylight.
```
### Citation
```
@article{gemma_3n_2025,
title={Gemma 3n},
url={https://ai.google.dev/gemma/docs/gemma-3n},
publisher={Google DeepMind},
author={Gemma Team},
year={2025}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset that includes a wide variety of sources
totalling approximately 11 trillion tokens. The knowledge cutoff date for the
training data was June 2024. Here are the key components:
- **Web Documents**: A diverse collection of web text ensures the model
is exposed to a broad range of linguistic styles, topics, and vocabulary.
The training dataset includes content in over 140 languages.
- **Code**: Exposing the model to code helps it to learn the syntax and
patterns of programming languages, which improves its ability to generate
code and understand code-related questions.
- **Mathematics**: Training on mathematical text helps the model learn
logical reasoning, symbolic representation, and to address mathematical queries.
- **Images**: A wide range of images enables the model to perform image
analysis and visual data extraction tasks.
- **Audio**: A diverse set of sound samples enables the model to recognize
speech, transcribe text from recordings, and identify information in audio data.
The combination of these diverse data sources is crucial for training a
powerful multimodal model that can handle a wide variety of different tasks and
data formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
- **CSAM Filtering**: Rigorous CSAM (Child Sexual Abuse Material)
filtering was applied at multiple stages in the data preparation process to
ensure the exclusion of harmful and illegal content.
- **Sensitive Data Filtering**: As part of making Gemma pre-trained models
safe and reliable, automated techniques were used to filter out certain
personal information and other sensitive data from training sets.
- **Additional methods**: Filtering based on content quality and safety in
line with
[our policies](https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf).
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using [Tensor Processing Unit
(TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv4p, TPUv5p
and TPUv5e). Training generative models requires significant computational
power. TPUs, designed specifically for matrix operations common in machine
learning, offer several advantages in this domain:
- **Performance**: TPUs are specifically designed to handle the massive
computations involved in training generative models. They can speed up
training considerably compared to CPUs.
- **Memory**: TPUs often come with large amounts of high-bandwidth memory,
allowing for the handling of large models and batch sizes during training.
This can lead to better model quality.
- **Scalability**: TPU Pods (large clusters of TPUs) provide a scalable
solution for handling the growing complexity of large foundation models.
You can distribute training across multiple TPU devices for faster and more
efficient processing.
- **Cost-effectiveness**: In many scenarios, TPUs can provide a more
cost-effective solution for training large models compared to CPU-based
infrastructure, especially when considering the time and resources saved
due to faster training.
These advantages are aligned with
[Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).
### Software
Training was done using [JAX](https://github.com/jax-ml/jax) and
[ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models. ML
Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
foundation models, including large language models like these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models](https://goo.gle/gemma2report):
*"the 'single controller' programming model of Jax and Pathways allows a single
Python process to orchestrate the entire training run, dramatically simplifying
the development workflow."*
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated at full precision (float32) against a large
collection of different datasets and metrics to cover different aspects of
content generation. Evaluation results marked with **IT** are for
instruction-tuned models. Evaluation results marked with **PT** are for
pre-trained models.
#### Reasoning and factuality
| Benchmark | Metric | n-shot | E2B PT | E4B PT |
| ------------------------------ |----------------|----------|:--------:|:--------:|
| [HellaSwag][hellaswag] | Accuracy | 10-shot | 72.2 | 78.6 |
| [BoolQ][boolq] | Accuracy | 0-shot | 76.4 | 81.6 |
| [PIQA][piqa] | Accuracy | 0-shot | 78.9 | 81.0 |
| [SocialIQA][socialiqa] | Accuracy | 0-shot | 48.8 | 50.0 |
| [TriviaQA][triviaqa] | Accuracy | 5-shot | 60.8 | 70.2 |
| [Natural Questions][naturalq] | Accuracy | 5-shot | 15.5 | 20.9 |
| [ARC-c][arc] | Accuracy | 25-shot | 51.7 | 61.6 |
| [ARC-e][arc] | Accuracy | 0-shot | 75.8 | 81.6 |
| [WinoGrande][winogrande] | Accuracy | 5-shot | 66.8 | 71.7 |
| [BIG-Bench Hard][bbh] | Accuracy | few-shot | 44.3 | 52.9 |
| [DROP][drop] | Token F1 score | 1-shot | 53.9 | 60.8 |
[hellaswag]: https://arxiv.org/abs/1905.07830
[boolq]: https://arxiv.org/abs/1905.10044
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[arc]: https://arxiv.org/abs/1911.01547
[winogrande]: https://arxiv.org/abs/1907.10641
[bbh]: https://paperswithcode.com/dataset/bbh
[drop]: https://arxiv.org/abs/1903.00161
#### Multilingual
| Benchmark | Metric | n-shot | E2B IT | E4B IT |
| ------------------------------------|-------------------------|----------|:--------:|:--------:|
| [MGSM][mgsm] | Accuracy | 0-shot | 53.1 | 60.7 |
| [WMT24++][wmt24pp] (ChrF) | Character-level F-score | 0-shot | 42.7 | 50.1 |
| [Include][include] | Accuracy | 0-shot | 38.6 | 57.2 |
| [MMLU][mmlu] (ProX) | Accuracy | 0-shot | 8.1 | 19.9 |
| [OpenAI MMLU][openai-mmlu] | Accuracy | 0-shot | 22.3 | 35.6 |
| [Global-MMLU][global-mmlu] | Accuracy | 0-shot | 55.1 | 60.3 |
| [ECLeKTic][eclektic] | ECLeKTic score | 0-shot | 2.5 | 1.9 |
[mgsm]: https://arxiv.org/abs/2210.03057
[wmt24pp]: https://arxiv.org/abs/2502.12404v1
[include]:https://arxiv.org/abs/2411.19799
[mmlu]: https://arxiv.org/abs/2009.03300
[openai-mmlu]: https://huggingface.co/datasets/openai/MMMLU
[global-mmlu]: https://huggingface.co/datasets/CohereLabs/Global-MMLU
[eclektic]: https://arxiv.org/abs/2502.21228
#### STEM and code
| Benchmark | Metric | n-shot | E2B IT | E4B IT |
| ------------------------------------|--------------------------|----------|:--------:|:--------:|
| [GPQA][gpqa] Diamond | RelaxedAccuracy/accuracy | 0-shot | 24.8 | 23.7 |
| [LiveCodeBench][lcb] v5 | pass@1 | 0-shot | 18.6 | 25.7 |
| Codegolf v2.2 | pass@1 | 0-shot | 11.0 | 16.8 |
| [AIME 2025][aime-2025] | Accuracy | 0-shot | 6.7 | 11.6 |
[gpqa]: https://arxiv.org/abs/2311.12022
[lcb]: https://arxiv.org/abs/2403.07974
[aime-2025]: https://www.vals.ai/benchmarks/aime-2025-05-09
#### Additional benchmarks
| Benchmark | Metric | n-shot | E2B IT | E4B IT |
| ------------------------------------ |------------|----------|:--------:|:--------:|
| [MMLU][mmlu] | Accuracy | 0-shot | 60.1 | 64.9 |
| [MBPP][mbpp] | pass@1 | 3-shot | 56.6 | 63.6 |
| [HumanEval][humaneval] | pass@1 | 0-shot | 66.5 | 75.0 |
| [LiveCodeBench][lcb] | pass@1 | 0-shot | 13.2 | 13.2 |
| HiddenMath | Accuracy | 0-shot | 27.7 | 37.7 |
| [Global-MMLU-Lite][global-mmlu-lite] | Accuracy | 0-shot | 59.0 | 64.5 |
| [MMLU][mmlu] (Pro) | Accuracy | 0-shot | 40.5 | 50.6 |
[gpqa]: https://arxiv.org/abs/2311.12022
[mbpp]: https://arxiv.org/abs/2108.07732
[humaneval]: https://arxiv.org/abs/2107.03374
[lcb]: https://arxiv.org/abs/2403.07974
[global-mmlu-lite]: https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
- **Child Safety**: Evaluation of text-to-text and image-to-text prompts
  covering child safety policies, including child sexual abuse and
  exploitation.
- **Content Safety**: Evaluation of text-to-text and image-to-text prompts
  covering safety policies including harassment, violence and gore, and hate
  speech.
- **Representational Harms**: Evaluation of text-to-text and image-to-text
  prompts covering safety policies including bias, stereotyping, and harmful
  associations or inaccuracies.
In addition to development-level evaluations, we conduct "assurance
evaluations", which are our 'arms-length' internal evaluations for
responsibility governance decision making. They are conducted separately from
the model development team, to inform decision making about release. High-level
findings are fed back to the model team, but prompt sets are held out to prevent
overfitting and preserve the results' ability to inform decision making. Notable
assurance evaluation results are reported to our Responsibility & Safety Council
as part of release review.
### Evaluation Results
For all areas of safety testing, we saw safe levels of performance across the
categories of child safety, content safety, and representational harms relative
to previous Gemma models. All testing was conducted without safety filters to
evaluate the model capabilities and behaviors. For text-to-text, image-to-text,
and audio-to-text, and across all model sizes, the model produced minimal policy
violations, and showed significant improvements over previous Gemma models'
performance with respect to high-severity violations. A limitation of our
evaluations was that they included primarily English-language prompts.
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open generative models have a wide range of applications across various
industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
- Content Creation and Communication
- **Text Generation**: Generate creative text formats such as
poems, scripts, code, marketing copy, and email drafts.
- **Chatbots and Conversational AI**: Power conversational
interfaces for customer service, virtual assistants, or interactive
applications.
- **Text Summarization**: Generate concise summaries of a text
corpus, research papers, or reports.
- **Image Data Extraction**: Extract, interpret, and summarize
visual data for text communications.
- **Audio Data Extraction**: Transcribe spoken language, translate speech
to text in other languages, and analyze sound-based data.
- Research and Education
- **Natural Language Processing (NLP) and Generative Model
  Research**: These models can serve as a foundation for researchers to
  experiment with generative models and NLP techniques, develop
  algorithms, and contribute to the advancement of the field.
- **Language Learning Tools**: Support interactive language
learning experiences, aiding in grammar correction or providing writing
practice.
- **Knowledge Exploration**: Assist researchers in exploring large
bodies of data by generating summaries or answering questions about
specific topics.
### Limitations
- Training Data
- The quality and diversity of the training data significantly
influence the model's capabilities. Biases or gaps in the training data
can lead to limitations in the model's responses.
- The scope of the training dataset determines the subject areas
the model can handle effectively.
- Context and Task Complexity
- Models are better at tasks that can be framed with clear
prompts and instructions. Open-ended or highly complex tasks might be
challenging.
- A model's performance can be influenced by the amount of context
provided (longer context generally leads to better outputs, up to a
certain point).
- Language Ambiguity and Nuance
- Natural language is inherently complex. Models might struggle
to grasp subtle nuances, sarcasm, or figurative language.
- Factual Accuracy
- Models generate responses based on information they learned
from their training datasets, but they are not knowledge bases. They
may generate incorrect or outdated factual statements.
- Common Sense
- Models rely on statistical patterns in language. They might
lack the ability to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of generative models raises several ethical concerns. In
creating an open model, we have carefully considered the following:
- Bias and Fairness
- Generative models trained on large-scale, real-world text and image data
can reflect socio-cultural biases embedded in the training material.
These models underwent careful scrutiny; input data pre-processing is
described and posterior evaluations are reported in this card.
- Misinformation and Misuse
- Generative models can be misused to generate text that is
false, misleading, or harmful.
- Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit](https://ai.google.dev/responsible).
- Transparency and Accountability
- This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
- A responsibly developed open model offers the opportunity to
share innovation by making generative model technology accessible to
developers and researchers across the AI ecosystem.
Risks identified and mitigations:
- **Perpetuation of biases**: Continuous monitoring (using evaluation
  metrics and human review) and the exploration of de-biasing techniques
  during model training, fine-tuning, and other use cases are encouraged.
- **Generation of harmful content**: Mechanisms and guidelines for content
safety are essential. Developers are encouraged to exercise caution and
implement appropriate content safety safeguards based on their specific
product policies and application use cases.
- **Misuse for malicious purposes**: Technical limitations and developer
  and end-user education can help mitigate malicious applications of
generative models. Educational resources and reporting mechanisms for users
to flag misuse are provided. Prohibited uses of Gemma models are outlined
in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
- **Privacy violations**: Models were trained on data filtered for removal of
certain personal information and other sensitive data. Developers are
encouraged to adhere to privacy regulations with privacy-preserving
techniques.
### Benefits
At the time of release, this family of models provides high-performance open
generative model implementations designed from the ground up for responsible AI
development, compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably sized open
model alternatives.
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756782929
|
liukevin666
| 2025-09-02T03:16:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T03:16:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kjydb/lerobot_test_162
|
kjydb
| 2025-09-02T03:16:21Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:kjydb/lerobot_test_162",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-02T03:16:03Z |
---
base_model: lerobot/smolvla_base
datasets: kjydb/lerobot_test_162
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- smolvla
- lerobot
- robotics
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
*Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`.*
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
* **License:** apache-2.0
|
akirafudo/blockassist-bc-insectivorous_bold_lion_1756782949
|
akirafudo
| 2025-09-02T03:16:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T03:16:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbektass/blockassist-bc-insectivorous_bold_lion_1756782831
|
omerbektass
| 2025-09-02T03:14:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T03:14:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/CharGen-v3-beta-rl-83-s0-GGUF
|
mradermacher
| 2025-09-02T03:12:18Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:CharGen/CharGen-v3-beta-rl-83-s0",
"base_model:quantized:CharGen/CharGen-v3-beta-rl-83-s0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-01T21:02:21Z |
---
base_model: CharGen/CharGen-v3-beta-rl-83-s0
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/CharGen/CharGen-v3-beta-rl-83-s0
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#CharGen-v3-beta-rl-83-s0-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
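As a quick sketch of running one of these quants locally — assuming the `llama-cpp-python` bindings and that the Q4_K_S file from the table below has already been downloaded; the prompt and parameters are purely illustrative:
```python
# Minimal sketch: load a downloaded GGUF quant with llama-cpp-python.
# Assumes `pip install llama-cpp-python`; the filename matches the
# Q4_K_S row in the Provided Quants table below.
from llama_cpp import Llama

llm = Llama(
    model_path="CharGen-v3-beta-rl-83-s0.Q4_K_S.gguf",  # local path to the quant
    n_ctx=4096,  # context window; lower this to fit smaller memory budgets
)
out = llm("Write a short character description:", max_tokens=128)
print(out["choices"][0]["text"])
```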
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CharGen-v3-beta-rl-83-s0-GGUF/resolve/main/CharGen-v3-beta-rl-83-s0.Q2_K.gguf) | Q2_K | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/CharGen-v3-beta-rl-83-s0-GGUF/resolve/main/CharGen-v3-beta-rl-83-s0.Q3_K_S.gguf) | Q3_K_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/CharGen-v3-beta-rl-83-s0-GGUF/resolve/main/CharGen-v3-beta-rl-83-s0.Q3_K_M.gguf) | Q3_K_M | 11.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CharGen-v3-beta-rl-83-s0-GGUF/resolve/main/CharGen-v3-beta-rl-83-s0.Q3_K_L.gguf) | Q3_K_L | 12.5 | |
| [GGUF](https://huggingface.co/mradermacher/CharGen-v3-beta-rl-83-s0-GGUF/resolve/main/CharGen-v3-beta-rl-83-s0.IQ4_XS.gguf) | IQ4_XS | 13.0 | |
| [GGUF](https://huggingface.co/mradermacher/CharGen-v3-beta-rl-83-s0-GGUF/resolve/main/CharGen-v3-beta-rl-83-s0.Q4_K_S.gguf) | Q4_K_S | 13.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CharGen-v3-beta-rl-83-s0-GGUF/resolve/main/CharGen-v3-beta-rl-83-s0.Q4_K_M.gguf) | Q4_K_M | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CharGen-v3-beta-rl-83-s0-GGUF/resolve/main/CharGen-v3-beta-rl-83-s0.Q5_K_S.gguf) | Q5_K_S | 16.4 | |
| [GGUF](https://huggingface.co/mradermacher/CharGen-v3-beta-rl-83-s0-GGUF/resolve/main/CharGen-v3-beta-rl-83-s0.Q5_K_M.gguf) | Q5_K_M | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/CharGen-v3-beta-rl-83-s0-GGUF/resolve/main/CharGen-v3-beta-rl-83-s0.Q6_K.gguf) | Q6_K | 19.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/CharGen-v3-beta-rl-83-s0-GGUF/resolve/main/CharGen-v3-beta-rl-83-s0.Q8_0.gguf) | Q8_0 | 25.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
kdh2b/Exposure-slot
|
kdh2b
| 2025-09-02T03:12:02Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-02T02:56:36Z |
---
license: apache-2.0
---
<p align="center">
<h1 align="center">Exposure-slot: Exposure-centric representations learning with Slot-in-Slot Attention for Region-aware Exposure Correction (Official)</h1>
<p align="center">
<a href="https://github.com/dgjung0220">Donggoo Jung</a>*,
<a href="https://github.com/kdhRick2222">Daehyun Kim</a>*,
<a href="https://scholar.google.com/citations?hl=ko&user=I_5aoAwAAAAJ">Guanghui Wang</a>,
<a href="https://sites.google.com/view/lliger9/">Tae Hyun Kim</a>†.
(*Equal Contribution, †Corresponding author)
</p>
<h2 align="center">CVPR 2025</h2>
<h3 align="center">
<!-- GitHub Project -->
<a href="https://github.com/kdhRick2222/Exposure-slot" target="_blank"><img src="https://img.shields.io/badge/GitHub-181717?logo=github&logoColor=white"></a>
<!-- CVPR Paper -->
<a href="https://openaccess.thecvf.com/content/CVPR2025/papers/Jung_Exposure-slot_Exposure-centric_Representations_Learning_with_Slot-in-Slot_Attention_for_Region-aware_Exposure_CVPR_2025_paper.pdf" target="_blank"><img src="https://img.shields.io/badge/CVPR%20Paper-003B6F?logo=readthedocs&logoColor=white"></a>
<!-- Hugging Face Demo -->
<a href="https://huggingface.co/kdh2b/Exposure-slot/tree/main" target="_blank"><img src="https://img.shields.io/badge/🤗%20HuggingFace-FFAC45?logo=huggingface&logoColor=white"></a>
</h3>
</p>
This repository contains the official PyTorch implementation of "**_Exposure-slot_**: *Exposure-centric representations learning with Slot-in-Slot Attention for Region-aware Exposure Correction*" accepted at **CVPR 2025.**
<div align="center">
<img src="images/concept_figure.png" width="500px" />
</div>
**Exposure-slot** is the first approach to leverage the *Slot Attention* mechanism for optimized exposure-specific feature partitioning. We introduce a **slot-in-slot attention** that enables sophisticated feature partitioning and learning, along with exposure-aware prompts that enhance the exposure-centric characteristics of each image feature. We provide validation code, training code, and pre-trained weights for three benchmark datasets (**MSEC, SICE, LCDP**).
## Setting
Please follow these steps to set up the repository.
### 1. Clone the Repository
```
git clone https://github.com/kdhRick2222/Exposure-slot.git
cd Exposure-slot
```
### 2. Download Pre-trained models and Official Checkpoints
We utilize pre-trained models from [Exposure-slot_ckpt.zip](https://1drv.ms/u/c/1acaeb9b8ad3b4e8/ESoJibo6AeBNpjmZjVYWBqcBo1RC2pXZO3S13wEwiMqZQg?e=LQkgJo).
- Place the pre-trained models into the `ckpts/` directory.
### 3. Prepare Data
For training and validating our model, we used the SICE, MSEC, and LCDP datasets.
- ### SICE dataset
We downloaded the SICE dataset from [here](https://github.com/csjcai/SICE).
```
python prepare_SICE.py
```
This generates `./Dataset_txt/SICE_Train.txt` and `./Dataset_txt/SICE_Test.txt` for training and validation.
- ### MSEC dataset
We downloaded the MSEC dataset from [here](https://github.com/mahmoudnafifi/Exposure_Correction).
```
python prepare_MSEC.py
```
This generates `./Dataset_txt/MSEC_Train.txt` and `./Dataset_txt/MSEC_Test.txt` for training and validation.
- ### LCDP dataset
We downloaded the LCDP dataset from [here](https://github.com/onpix/LCDPNet).
```
python prepare_LCDP.py
```
This generates `./Dataset_txt/LCDP_Train.txt` and `./Dataset_txt/LCDP_Test.txt` for training and validation.
## Inference and Evaluation
We provide *2-level* and *3-level* Exposure-slot models for each dataset (SICE, MSEC, LCDP).
```
python test.py --level=2 --dataset="MSEC"
```
## Training
```
python train.py --gpu_num=0 --level=2 --dataset="MSEC"
```
## Overall directory
```
├── ckpts
│ ├── LCDP_level2.pth
│ ├── LCDP_level3.pth
│ ├── MSEC_level2.pth
│ ├── MSEC_level3.pth
│ ├── SICE_level2.pth
│ └── SICE_level3.pth
│
├── config
│ ├── basic.py
│
├── data
│ ├── dataloaders.py
│ └── datasets.py
|
├── Dataset_txt
│ ├── LCDP_Train.txt
│ ├── LCDP_Test.txt
│ ├── MSEC_Train.txt
│ ├── MSEC_Test.txt
│ ├── SICE_Train.txt
│ └── SICE_Test.txt
|
├── utils
│ ├── scheduler_util.py
│ └── util.py
|
├── network_level2.py
├── network_level3.py
├── prepare_LCDP.py
├── prepare_MSEC.py
├── prepare_SICE.py
├── test.py
└── train.py
```
## Citation
If you find our work useful in your research, please cite:
```
@inproceedings{jung2025Exposureslot,
title={Exposure-slot: Exposure-centric representations learning with Slot-in-Slot Attention for Region-aware Exposure Correction},
  author={Donggoo Jung and Daehyun Kim and Guanghui Wang and Tae Hyun Kim},
booktitle={Computer Vision and Pattern Recognition (CVPR)},
year={2025}
}
```
|
wotihe/Affine-5CrA3zu5xeQHnAXFfcM76ttxRK5fCTsWpfgXqaZhAZj81Kjw
|
wotihe
| 2025-09-02T03:10:23Z | 0 | 0 | null |
[
"safetensors",
"gpt_oss",
"8-bit",
"mxfp4",
"region:us"
] | null | 2025-09-02T03:08:36Z |
# Affine
Mine open reasoning.
[Affine Discord](https://discord.com/invite/3T9X4Yn23e)
## Introduction
Affine is an incentivized RL environment which pays miners who make incremental improvements on a set of tasks (for instance, program abduction or coding). The mechanism is sybil-proof (you can't cheat by deploying multiple miners), decoy-proof (you can't cheat by packing models into certain environments), copy-proof (you can't cheat by stealing models), and overfitting-proof (you can't cheat by overfitting to a single env).
How does Affine work? Affine validators incentivize miners to submit models to Subnet 64 on Bittensor (a.k.a. Chutes), where they are inference-load-balanced and publicly available. These models are evaluated on a set of RL environments, with validators looking for the model which dominates the Pareto frontier -- namely, the model which outcompetes all other models on all envs (see `af validator`). The network is winner-take-all: miners are forced to copy, download, and improve the Pareto-frontier model.
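As a rough illustration of the dominance criterion above — a hypothetical sketch, not the validator's actual code — a model sits on the Pareto frontier when no other model matches-or-beats it on every env:
```python
# Hypothetical sketch of Pareto dominance over per-env scores;
# the real logic lives behind `af validator` in the Affine repo.
def dominates(a: dict, b: dict) -> bool:
    """True if a scores >= b on every env and > b on at least one."""
    return all(a[e] >= b[e] for e in a) and any(a[e] > b[e] for e in a)

scores = {  # illustrative numbers only
    "miner1": {"SAT": 0.71, "ABDUCTION": 0.64, "DEDUCTION": 0.58},
    "miner2": {"SAT": 0.69, "ABDUCTION": 0.61, "DEDUCTION": 0.55},
}
frontier = [m for m in scores
            if not any(dominates(scores[o], scores[m]) for o in scores if o != m)]
print(frontier)  # models no one else dominates -> ['miner1']
```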
Why Affine? Directed incentives for RL have never been achieved. The ability to direct intelligence and aggregate the work-effort of a large non-permissioned group of individuals on RL tasks will unlock fast advancement in intelligence. We intend to commoditize reasoning (intelligence's highest form) and break the intelligence sound barrier.
## Installation
```bash
# Install uv Astral
curl -LsSf https://astral.sh/uv/install.sh | sh
# Clone and install Affine
git clone https://github.com/AffineFoundation/affine.git
cd affine
uv venv && source .venv/bin/activate && uv pip install -e .
# Verify installation
af
```
## Validating
Set env vars, including your Chutes API key.
```bash
# Copy .env and fill out validator items
cp .env.example .env
```
(Recommended): Run the validator with docker and watchtower autoupdate.
```bash
# Run the validator with watchtower.
docker-compose down && docker-compose pull && docker-compose up -d && docker-compose logs -f
```
Run the validator using the local override (build local image) + base compose
```bash
docker compose -f docker-compose.yml -f docker-compose.local.yml down --remove-orphans
docker compose -f docker-compose.yml -f docker-compose.local.yml up -d --build --remove-orphans
docker compose -f docker-compose.yml -f docker-compose.local.yml logs -f
```
Run the validator locally
```bash
# Start the validator with debug.
af -vv validate
```
## Mining
IMPORTANT: you require a ***developer-enabled account*** on Chutes to mine. Normal API keys cannot deploy chutes right now.
1. Set env vars.
```bash
# Copy .env and fill out validator items
cp .env.example .env
```
2. Miners need a Chutes developer account (`chutes.ai`)
```bash
chutes register
```
3. Register your miner to Affine (S120).
```bash
btcli subnet register --wallet.name <your cold> --wallet.hotkey <your hot>
```
4. Pull a model off the network.
```bash
af -vvv pull <uid to pull> --model_path <e.g. ./my_model>
```
5. Improve the model
```bash
... magic RL stuff ...
```
6. Push the model to your miner.
```bash
af -vvv push --coldkey <your cold> --hotkey <your hot> --model_path <e.g. ./my_model>
```
## SDK
Affine is also an SDK you can use to generate challenges and evaluate models on envs.
```python
import asyncio
import affine as af

# Optionally turn on logging
af.trace(); af.debug(); af.info()

async def main():
    # Get all miner info, or only for UID = 5
    miners = await af.get_miners()
    miner = await af.get_miners( 5 )
    # Generate a SAT challenge
    chal = await af.SAT.generate()
    # Generate a bunch.
    chals = await af.ABDUCTION().many( 10 )
    chals = await af.DEDUCTION().many( 10 )
    # Query the model directly.
    # NOTE: A CHUTES_API_KEY .env value is required for this command.
    response = await af.query( chal.prompt, model = miner.model )
    # Evaluate the response
    evaluation = chal.evaluate( response )
    print( evaluation.score )
    # Async generator of results from the last 100 blocks.
    async for res in af.rollouts(100):
        print(res)  # Result objects

asyncio.run(main())
```
|
ROBOTIS/ffw_bg2_rev4_PickMultiCoffee_Env3_Task1_1_4
|
ROBOTIS
| 2025-09-02T03:10:09Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:ROBOTIS/ffw_bg2_rev4_PickMultiCoffee_Env3_Task1_1",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-02T03:09:57Z |
---
datasets: ROBOTIS/ffw_bg2_rev4_PickMultiCoffee_Env3_Task1_1
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- act
- lerobot
- robotics
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
mooperyou/blockassist-bc-beaked_frisky_ox_1756782551
|
mooperyou
| 2025-09-02T03:09:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"beaked frisky ox",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T03:09:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- beaked frisky ox
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbektass/blockassist-bc-insectivorous_bold_lion_1756782482
|
omerbektass
| 2025-09-02T03:08:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T03:08:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Lean-conjecturer-GGUF
|
mradermacher
| 2025-09-02T03:04:23Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Slim205/Lean-conjecturer",
"base_model:quantized:Slim205/Lean-conjecturer",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-02T02:55:05Z |
---
base_model: Slim205/Lean-conjecturer
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Slim205/Lean-conjecturer
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Lean-conjecturer-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
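To fetch a single quant programmatically, a small sketch using the `huggingface_hub` client (the filename matches the Q4_K_S row under Provided Quants):
```python
# Sketch: download one quant file from this repo via huggingface_hub.
# Assumes `pip install huggingface_hub`.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Lean-conjecturer-GGUF",
    filename="Lean-conjecturer.Q4_K_S.gguf",
)
print(path)  # local cache path, ready for llama.cpp or similar runtimes
```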
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Lean-conjecturer-GGUF/resolve/main/Lean-conjecturer.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Lean-conjecturer-GGUF/resolve/main/Lean-conjecturer.Q3_K_S.gguf) | Q3_K_S | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Lean-conjecturer-GGUF/resolve/main/Lean-conjecturer.Q3_K_M.gguf) | Q3_K_M | 3.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Lean-conjecturer-GGUF/resolve/main/Lean-conjecturer.Q3_K_L.gguf) | Q3_K_L | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Lean-conjecturer-GGUF/resolve/main/Lean-conjecturer.IQ4_XS.gguf) | IQ4_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Lean-conjecturer-GGUF/resolve/main/Lean-conjecturer.Q4_K_S.gguf) | Q4_K_S | 4.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Lean-conjecturer-GGUF/resolve/main/Lean-conjecturer.Q4_K_M.gguf) | Q4_K_M | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Lean-conjecturer-GGUF/resolve/main/Lean-conjecturer.Q5_K_S.gguf) | Q5_K_S | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Lean-conjecturer-GGUF/resolve/main/Lean-conjecturer.Q5_K_M.gguf) | Q5_K_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Lean-conjecturer-GGUF/resolve/main/Lean-conjecturer.Q6_K.gguf) | Q6_K | 5.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Lean-conjecturer-GGUF/resolve/main/Lean-conjecturer.Q8_0.gguf) | Q8_0 | 7.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Lean-conjecturer-GGUF/resolve/main/Lean-conjecturer.f16.gguf) | f16 | 13.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
amandacute/blockassist-bc-amphibious_plump_ram_1756782186
|
amandacute
| 2025-09-02T03:03:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious plump ram",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T03:03:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious plump ram
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vangard703/output_stage2_v3_1100K_vlm_200K_fast_10
|
vangard703
| 2025-09-02T03:00:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-09-02T02:43:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
thefirstgoku/19A_w13_scaleUp_l3
|
thefirstgoku
| 2025-09-02T02:59:29Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-02T02:58:51Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
dsagasdgds/blockassist-bc-unseen_camouflaged_komodo_1756781567
|
dsagasdgds
| 2025-09-02T02:59:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"unseen camouflaged komodo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T02:58:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- unseen camouflaged komodo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pempekmangedd/blockassist-bc-patterned_sturdy_dolphin_1756780395
|
pempekmangedd
| 2025-09-02T02:58:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"patterned sturdy dolphin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T02:58:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- patterned sturdy dolphin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
HenriqueLz/bert-large-portuguese-fakerecogna2-extrativa-elections
|
HenriqueLz
| 2025-09-02T02:58:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-02T02:57:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ssourav15/vaibesync-user-matching-model
|
ssourav15
| 2025-09-02T02:58:06Z | 0 | 0 | null |
[
"dating",
"user-matching",
"recommendation-system",
"pytorch-lightning",
"region:us"
] | null | 2025-09-02T02:58:02Z |
---
title: VaibeSync User Matching Model
emoji: 💕
colorFrom: pink
colorTo: purple
sdk: pytorch
tags:
- dating
- user-matching
- recommendation-system
- pytorch-lightning
---
# VaibeSync User Matching Model
This model powers the VaibeSync dating app's intelligent user matching system.
## Model Details
- **Framework**: PyTorch Lightning
- **Task**: User compatibility prediction
- **Architecture**: Two-tower neural network
- **Size**: 0.4 MB
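For context on the two-tower architecture named above, the sketch below shows the general pattern: two independent encoders produce user embeddings that are compared for a compatibility score. The class name, layer sizes, and 64-dim input features are illustrative assumptions, not the shipped checkpoint:
```python
# Illustrative two-tower matcher (NOT the actual VaibeSync model):
# each tower embeds one user's feature vector; compatibility is the
# cosine similarity between the two embeddings.
import torch
import torch.nn as nn

class TwoTowerMatcher(nn.Module):
    def __init__(self, feat_dim: int = 64, emb_dim: int = 32):
        super().__init__()
        def tower():
            return nn.Sequential(
                nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, emb_dim)
            )
        self.user_tower, self.candidate_tower = tower(), tower()

    def forward(self, user, candidate):
        u = self.user_tower(user)            # embed the querying user
        c = self.candidate_tower(candidate)  # embed the candidate match
        return torch.cosine_similarity(u, c, dim=-1)  # compatibility score

model = TwoTowerMatcher()
score = model(torch.randn(1, 64), torch.randn(1, 64))
print(score)  # one score per user/candidate pair
```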
## Usage
This model is automatically downloaded and used by the VaibeSync ML API deployed on Railway.
```python
from models.user_match_two_tower import UserMatchingModel

# Load the trained Lightning checkpoint and switch to inference mode
model = UserMatchingModel.load_from_checkpoint("last.ckpt")
model.eval()
```
## Railway Deployment
Set these environment variables in your Railway deployment:
```
MODEL_STORAGE_TYPE=huggingface
HF_MODEL_REPO=ssourav15/vaibesync-user-matching-model
```
The model will be automatically downloaded on first startup.
|
mooperyou/blockassist-bc-alert_melodic_swan_1756781827
|
mooperyou
| 2025-09-02T02:57:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"alert melodic swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T02:57:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- alert melodic swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
xinnn32/blockassist-bc-meek_winged_caterpillar_1756781706
|
xinnn32
| 2025-09-02T02:56:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"meek winged caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T02:56:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek winged caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/MIA-GRPO-MATH-Qwen-3b-GGUF
|
mradermacher
| 2025-09-02T02:55:41Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:RLMIA/MIA-GRPO-MATH-Qwen-3b",
"base_model:quantized:RLMIA/MIA-GRPO-MATH-Qwen-3b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-02T02:47:15Z |
---
base_model: RLMIA/MIA-GRPO-MATH-Qwen-3b
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/RLMIA/MIA-GRPO-MATH-Qwen-3b
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#MIA-GRPO-MATH-Qwen-3b-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
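If a quant ever ships as multiple parts, concatenating them byte-for-byte restores the single GGUF — a sketch with illustrative part names (the files in this repo are single-part):
```python
# Sketch: byte-concatenate a multi-part GGUF into one file.
# Part names below are illustrative; see TheBloke's READMEs for details.
import shutil

parts = [
    "MIA-GRPO-MATH-Qwen-3b.Q8_0.gguf.part1of2",
    "MIA-GRPO-MATH-Qwen-3b.Q8_0.gguf.part2of2",
]
with open("MIA-GRPO-MATH-Qwen-3b.Q8_0.gguf", "wb") as out:
    for p in parts:
        with open(p, "rb") as f:
            shutil.copyfileobj(f, out)  # stream each part in order
```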
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MIA-GRPO-MATH-Qwen-3b-GGUF/resolve/main/MIA-GRPO-MATH-Qwen-3b.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/MIA-GRPO-MATH-Qwen-3b-GGUF/resolve/main/MIA-GRPO-MATH-Qwen-3b.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/MIA-GRPO-MATH-Qwen-3b-GGUF/resolve/main/MIA-GRPO-MATH-Qwen-3b.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MIA-GRPO-MATH-Qwen-3b-GGUF/resolve/main/MIA-GRPO-MATH-Qwen-3b.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/MIA-GRPO-MATH-Qwen-3b-GGUF/resolve/main/MIA-GRPO-MATH-Qwen-3b.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/MIA-GRPO-MATH-Qwen-3b-GGUF/resolve/main/MIA-GRPO-MATH-Qwen-3b.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MIA-GRPO-MATH-Qwen-3b-GGUF/resolve/main/MIA-GRPO-MATH-Qwen-3b.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MIA-GRPO-MATH-Qwen-3b-GGUF/resolve/main/MIA-GRPO-MATH-Qwen-3b.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/MIA-GRPO-MATH-Qwen-3b-GGUF/resolve/main/MIA-GRPO-MATH-Qwen-3b.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/MIA-GRPO-MATH-Qwen-3b-GGUF/resolve/main/MIA-GRPO-MATH-Qwen-3b.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MIA-GRPO-MATH-Qwen-3b-GGUF/resolve/main/MIA-GRPO-MATH-Qwen-3b.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MIA-GRPO-MATH-Qwen-3b-GGUF/resolve/main/MIA-GRPO-MATH-Qwen-3b.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756781633
|
liukevin666
| 2025-09-02T02:55:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T02:54:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ChenWu98/numina_qwen_2.5_sft_combine_v1_source_weighted_alpha4.0_split_0_normalize
|
ChenWu98
| 2025-09-02T02:55:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-02T02:53:02Z |
---
base_model: Qwen/Qwen2.5-1.5B
library_name: transformers
model_name: numina_qwen_2.5_sft_combine_v1_source_weighted_alpha4.0_split_0_normalize
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for numina_qwen_2.5_sft_combine_v1_source_weighted_alpha4.0_split_0_normalize
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_sft_combine_v1_source_weighted_alpha4.0_split_0_normalize", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/xcptyume)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ngophong/blockassist-bc-agile_stealthy_dog_1756781571
|
ngophong
| 2025-09-02T02:54:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"agile stealthy dog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T02:53:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- agile stealthy dog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbkts/blockassist-bc-insectivorous_bold_lion_1756781584
|
omerbkts
| 2025-09-02T02:53:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T02:53:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-insectivorous_bold_lion_1756781454
|
akirafudo
| 2025-09-02T02:51:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T02:51:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tiopuiter/blockassist-bc-arctic_giant_ape_1756781368
|
tiopuiter
| 2025-09-02T02:50:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"arctic giant ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T02:49:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- arctic giant ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lightningpal/epiderm2
|
lightningpal
| 2025-09-02T02:48:41Z | 88 | 0 |
transformers
|
[
"transformers",
"safetensors",
"resnet",
"image-classification",
"vision",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-08-21T02:32:21Z |
---
pipeline_tag: image-classification
library_name: transformers
tags:
- image-classification
- vision
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [Fernando Hidalgo Lecaros]
- **Model type:** [ImageClassification]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model:** [ResNet50]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
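In place of the missing snippet, here is a minimal sketch assuming standard 🤗 `pipeline` usage for an image classifier (`example.jpg` is a placeholder path):
```python
from transformers import pipeline

# Load the classifier and run it on a local image
classifier = pipeline("image-classification", model="lightningpal/epiderm2")
print(classifier("example.jpg"))  # top predicted labels with scores
```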
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/InternVL3_5-30B-A3B-i1-GGUF
|
mradermacher
| 2025-09-02T02:47:12Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"internvl",
"custom_code",
"multilingual",
"dataset:OpenGVLab/MMPR-v1.2",
"dataset:OpenGVLab/MMPR-Tiny",
"base_model:OpenGVLab/InternVL3_5-30B-A3B",
"base_model:quantized:OpenGVLab/InternVL3_5-30B-A3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-01T21:21:01Z |
---
base_model: OpenGVLab/InternVL3_5-30B-A3B
datasets:
- OpenGVLab/MMPR-v1.2
- OpenGVLab/MMPR-Tiny
language:
- multilingual
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- internvl
- custom_code
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/OpenGVLab/InternVL3_5-30B-A3B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#InternVL3_5-30B-A3B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/InternVL3_5-30B-A3B-GGUF
**This is a vision model - mmproj files (if any) will be in the [static repository](https://huggingface.co/mradermacher/InternVL3_5-30B-A3B-GGUF).**
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
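As a minimal loading sketch (assuming `llama-cpp-python` supports this architecture, and using a quant file name taken from the table below):
```python
from llama_cpp import Llama

# Load a single-file quant; adjust the path to the file you downloaded
llm = Llama(
    model_path="InternVL3_5-30B-A3B.i1-Q4_K_M.gguf",
    n_gpu_layers=-1,  # offload all layers to GPU if available
    n_ctx=4096,
)
out = llm("Describe GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```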
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-30B-A3B-i1-GGUF/resolve/main/InternVL3_5-30B-A3B.imatrix.gguf) | imatrix | 0.2 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-30B-A3B-i1-GGUF/resolve/main/InternVL3_5-30B-A3B.i1-IQ1_S.gguf) | i1-IQ1_S | 6.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-30B-A3B-i1-GGUF/resolve/main/InternVL3_5-30B-A3B.i1-IQ1_M.gguf) | i1-IQ1_M | 7.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-30B-A3B-i1-GGUF/resolve/main/InternVL3_5-30B-A3B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-30B-A3B-i1-GGUF/resolve/main/InternVL3_5-30B-A3B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-30B-A3B-i1-GGUF/resolve/main/InternVL3_5-30B-A3B.i1-IQ2_S.gguf) | i1-IQ2_S | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-30B-A3B-i1-GGUF/resolve/main/InternVL3_5-30B-A3B.i1-IQ2_M.gguf) | i1-IQ2_M | 10.3 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-30B-A3B-i1-GGUF/resolve/main/InternVL3_5-30B-A3B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 10.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-30B-A3B-i1-GGUF/resolve/main/InternVL3_5-30B-A3B.i1-Q2_K.gguf) | i1-Q2_K | 11.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-30B-A3B-i1-GGUF/resolve/main/InternVL3_5-30B-A3B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 11.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-30B-A3B-i1-GGUF/resolve/main/InternVL3_5-30B-A3B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 12.7 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-30B-A3B-i1-GGUF/resolve/main/InternVL3_5-30B-A3B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 13.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-30B-A3B-i1-GGUF/resolve/main/InternVL3_5-30B-A3B.i1-IQ3_S.gguf) | i1-IQ3_S | 13.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-30B-A3B-i1-GGUF/resolve/main/InternVL3_5-30B-A3B.i1-IQ3_M.gguf) | i1-IQ3_M | 13.6 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-30B-A3B-i1-GGUF/resolve/main/InternVL3_5-30B-A3B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 14.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-30B-A3B-i1-GGUF/resolve/main/InternVL3_5-30B-A3B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 16.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-30B-A3B-i1-GGUF/resolve/main/InternVL3_5-30B-A3B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 16.5 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-30B-A3B-i1-GGUF/resolve/main/InternVL3_5-30B-A3B.i1-Q4_0.gguf) | i1-Q4_0 | 17.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-30B-A3B-i1-GGUF/resolve/main/InternVL3_5-30B-A3B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 17.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-30B-A3B-i1-GGUF/resolve/main/InternVL3_5-30B-A3B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 18.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-30B-A3B-i1-GGUF/resolve/main/InternVL3_5-30B-A3B.i1-Q4_1.gguf) | i1-Q4_1 | 19.3 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-30B-A3B-i1-GGUF/resolve/main/InternVL3_5-30B-A3B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-30B-A3B-i1-GGUF/resolve/main/InternVL3_5-30B-A3B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 21.8 | |
| [GGUF](https://huggingface.co/mradermacher/InternVL3_5-30B-A3B-i1-GGUF/resolve/main/InternVL3_5-30B-A3B.i1-Q6_K.gguf) | i1-Q6_K | 25.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
ngophong/blockassist-bc-agile_stealthy_dog_1756781080
|
ngophong
| 2025-09-02T02:46:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"agile stealthy dog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T02:45:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- agile stealthy dog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-insectivorous_bold_lion_1756781084
|
akirafudo
| 2025-09-02T02:45:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T02:45:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756780988
|
liukevin666
| 2025-09-02T02:44:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T02:44:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Spemercurial/Reinforce-CartPole-v1
|
Spemercurial
| 2025-09-02T02:44:14Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-02T02:44:05Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
mooperyou/blockassist-bc-beaked_frisky_ox_1756780982
|
mooperyou
| 2025-09-02T02:43:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"beaked frisky ox",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T02:43:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- beaked frisky ox
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/DeepSeek-GRM-27B-GGUF
|
mradermacher
| 2025-09-02T02:43:36Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"zh",
"en",
"dataset:openbmb/UltraFeedback",
"dataset:NCSOFT/offsetbias",
"dataset:Skywork/Skywork-Reward-Preference-80K-v0.2",
"dataset:nvidia/HelpSteer2",
"base_model:BBQGOD/DeepSeek-GRM-27B",
"base_model:quantized:BBQGOD/DeepSeek-GRM-27B",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-02T00:37:20Z |
---
base_model: BBQGOD/DeepSeek-GRM-27B
datasets:
- openbmb/UltraFeedback
- NCSOFT/offsetbias
- Skywork/Skywork-Reward-Preference-80K-v0.2
- nvidia/HelpSteer2
language:
- zh
- en
library_name: transformers
license: gemma
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/BBQGOD/DeepSeek-GRM-27B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#DeepSeek-GRM-27B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/DeepSeek-GRM-27B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
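For example, split quants can be concatenated back into a single file before loading (the part file names below are assumptions):
```bash
# Join the parts in order; the result is a normal single-file GGUF
cat DeepSeek-GRM-27B.Q8_0.gguf.part1of2 \
    DeepSeek-GRM-27B.Q8_0.gguf.part2of2 > DeepSeek-GRM-27B.Q8_0.gguf
```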
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-GRM-27B-GGUF/resolve/main/DeepSeek-GRM-27B.Q2_K.gguf) | Q2_K | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-GRM-27B-GGUF/resolve/main/DeepSeek-GRM-27B.Q3_K_S.gguf) | Q3_K_S | 12.3 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-GRM-27B-GGUF/resolve/main/DeepSeek-GRM-27B.Q3_K_M.gguf) | Q3_K_M | 13.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-GRM-27B-GGUF/resolve/main/DeepSeek-GRM-27B.Q3_K_L.gguf) | Q3_K_L | 14.6 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-GRM-27B-GGUF/resolve/main/DeepSeek-GRM-27B.IQ4_XS.gguf) | IQ4_XS | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-GRM-27B-GGUF/resolve/main/DeepSeek-GRM-27B.Q4_K_S.gguf) | Q4_K_S | 15.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-GRM-27B-GGUF/resolve/main/DeepSeek-GRM-27B.Q4_K_M.gguf) | Q4_K_M | 16.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-GRM-27B-GGUF/resolve/main/DeepSeek-GRM-27B.Q5_K_S.gguf) | Q5_K_S | 19.0 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-GRM-27B-GGUF/resolve/main/DeepSeek-GRM-27B.Q5_K_M.gguf) | Q5_K_M | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-GRM-27B-GGUF/resolve/main/DeepSeek-GRM-27B.Q6_K.gguf) | Q6_K | 22.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-GRM-27B-GGUF/resolve/main/DeepSeek-GRM-27B.Q8_0.gguf) | Q8_0 | 29.0 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
quickmt/quickmt-th-en
|
quickmt
| 2025-09-02T02:43:15Z | 0 | 0 | null |
[
"translation",
"en",
"th",
"dataset:quickmt/quickmt-train.th-en",
"license:cc-by-4.0",
"model-index",
"region:us"
] |
translation
| 2025-09-02T00:41:30Z |
---
language:
- en
- th
tags:
- translation
license: cc-by-4.0
datasets:
- quickmt/quickmt-train.th-en
model-index:
- name: quickmt-th-en
results:
- task:
name: Translation tha-eng
type: translation
args: tha-eng
dataset:
name: flores101-devtest
type: flores_101
args: tha_Thai eng_Latn devtest
metrics:
- name: BLEU
type: bleu
value: 29.32
- name: CHRF
type: chrf
value: 58.4
- name: COMET
type: comet
value: 87.15
---
# `quickmt-th-en` Neural Machine Translation Model
`quickmt-th-en` is a reasonably fast and reasonably accurate neural machine translation model for translation from `th` into `en`.
## Try it on our Huggingface Space
Give it a try before downloading here: https://huggingface.co/spaces/quickmt/QuickMT-Demo
## Model Information
* Trained using [`eole`](https://github.com/eole-nlp/eole)
* 195M parameter transformer 'big' with 8 encoder layers and 2 decoder layers
* 20k separate SentencePiece vocabularies
* Exported to [CTranslate2](https://github.com/OpenNMT/CTranslate2) format for fast inference
* Training data: https://huggingface.co/datasets/quickmt/quickmt-train.th-en/tree/main
See the `eole` model configuration in this repository for further details, and the `eole-model` directory for the raw `eole` (PyTorch) model.
## Usage with `quickmt`
You must install the NVIDIA CUDA toolkit first if you want to do GPU inference.
Next, install the `quickmt` python library and download the model:
```bash
git clone https://github.com/quickmt/quickmt.git
pip install ./quickmt/
quickmt-model-download quickmt/quickmt-th-en ./quickmt-th-en
```
Finally, use the model in Python:
```python
from quickmt import Translator
# Auto-detects GPU, set to "cpu" to force CPU inference
t = Translator("./quickmt-th-en/", device="auto")
# Translate - set beam size to 1 for faster speed (but lower quality)
sample_text = 'ดร.เอฮุด อูร์ ศาสตราจารย์แพทยศาสตร์แห่งมหาวิทยาลัยดัลเฮาซีในแฮลิแฟกซ์ รัฐโนวาสโกเชีย และประธานแผนกคลินิกและวิทยาศาสตร์แห่งสมาคมโรคเบาหวานแคนาดาได้กล่าวเตือนว่าการวิจัยนี้ยังอยู่ในระยะแรกเริ่มเท่านั้น'
t(sample_text, beam_size=5)
```
> 'Dr. Ehud Ur, Professor of Medicine at the University of Dalhousie in Halifax, Nova Scotia, and Chairman of the Clinical and Science Department of the Canadian Diabetes Association, warned that the research is only in the early stages.'
```python
# Get alternative translations by sampling
# You can pass any cTranslate2 `translate_batch` arguments
t([sample_text], sampling_temperature=1.2, beam_size=1, sampling_topk=50, sampling_topp=0.9)
```
> 'Dr Ehud Ur, medical professor of the University of Dalhousi in Halifax, Nova Scotia and president of the Clinical and Scientific Department of the Canadian Diabetic Association, warned that the research is in its early stages.'
The model is in `ctranslate2` format, and the tokenizers are `sentencepiece`, so you can use `ctranslate2` directly instead of through `quickmt`. It is also possible to get this model to work with e.g. [LibreTranslate](https://libretranslate.com/) which also uses `ctranslate2` and `sentencepiece`.
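For illustration, a direct `ctranslate2` + `sentencepiece` sketch (the tokenizer file names inside the model directory are assumptions):
```python
import ctranslate2
import sentencepiece as spm

translator = ctranslate2.Translator("./quickmt-th-en/", device="auto")
sp_src = spm.SentencePieceProcessor(model_file="./quickmt-th-en/src.spm.model")
sp_tgt = spm.SentencePieceProcessor(model_file="./quickmt-th-en/tgt.spm.model")

# Tokenize, translate, detokenize
tokens = sp_src.encode(sample_text, out_type=str)
results = translator.translate_batch([tokens], beam_size=5)
print(sp_tgt.decode(results[0].hypotheses[0]))
```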
## Metrics
`bleu` and `chrf2` are calculated with [sacrebleu](https://github.com/mjpost/sacrebleu) on the [Flores200 `devtest` test set](https://huggingface.co/datasets/facebook/flores) ("tha_Thai"->"eng_Latn"). `comet22` with the [`comet`](https://github.com/Unbabel/COMET) library and the [default model](https://huggingface.co/Unbabel/wmt22-comet-da). "Time (s)" is the time in seconds to translate the flores-devtest dataset (1012 sentences) on an RTX 4070s GPU with batch size 32 (faster speed is possible using a larger batch size).
| | bleu | chrf2 | comet22 | Time (s) |
|:---------------------------------|-------:|--------:|----------:|-----------:|
| quickmt/quickmt-th-en | 29.32 | 58.4 | 87.15 | 1.34 |
| Helsinki-NLP/opus-mt-th-en | 19.76 | 48.86 | 81.59 | 3.84 |
| facebook/nllb-200-distilled-600M | 26.54 | 54.97 | 85.26 | 22.27 |
| facebook/nllb-200-distilled-1.3B | 29.38 | 57 | 86.59 | 39.43 |
| facebook/m2m100_418M | 16.57 | 47.88 | 77.69 | 20.1 |
| facebook/m2m100_1.2B | 21.71 | 52.63 | 82.51 | 37.8 |
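As a concrete example, the `bleu` and `chrf2` scores above can be reproduced with a sacrebleu call along these lines (`hyp.txt` and `ref.txt` are placeholder file names, one sentence per line):
```bash
sacrebleu ref.txt -i hyp.txt -m bleu chrf
```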
|
mooperyou/blockassist-bc-alert_melodic_swan_1756780911
|
mooperyou
| 2025-09-02T02:42:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"alert melodic swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T02:41:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- alert melodic swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vera6/sn105_denoising_8
|
vera6
| 2025-09-02T02:42:10Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-02T00:02:43Z |
DENOISING speech enhancement model
|
BootesVoid/cmezz3a8r07axsr53rngvsz3u_cmf1x3lg009eqsr53m305w8ns
|
BootesVoid
| 2025-09-02T02:42:08Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-02T02:42:07Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ARGENTINA
---
# Cmezz3A8R07Axsr53Rngvsz3U_Cmf1X3Lg009Eqsr53M305W8Ns
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ARGENTINA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "ARGENTINA",
"lora_weights": "https://huggingface.co/BootesVoid/cmezz3a8r07axsr53rngvsz3u_cmf1x3lg009eqsr53m305w8ns/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmezz3a8r07axsr53rngvsz3u_cmf1x3lg009eqsr53m305w8ns', weight_name='lora.safetensors')
image = pipeline('ARGENTINA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
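As one hedged example of weighting, the loaded LoRA can be fused into the base weights at reduced strength (`0.8` is an arbitrary illustrative value):
```py
# Optional: bake the LoRA into the base model at reduced strength
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('ARGENTINA').images[0]
```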
## Training details
- Steps: 2500
- Learning rate: 9e-05
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmezz3a8r07axsr53rngvsz3u_cmf1x3lg009eqsr53m305w8ns/discussions) to add images that show off what you’ve made with this LoRA.
|
acidjp/blockassist-bc-pesty_extinct_prawn_1756778314
|
acidjp
| 2025-09-02T02:41:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pesty extinct prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T02:41:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pesty extinct prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbkts/blockassist-bc-insectivorous_bold_lion_1756780833
|
omerbkts
| 2025-09-02T02:40:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T02:40:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-insectivorous_bold_lion_1756780710
|
akirafudo
| 2025-09-02T02:38:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T02:38:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ngophong/blockassist-bc-agile_stealthy_dog_1756780611
|
ngophong
| 2025-09-02T02:38:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"agile stealthy dog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T02:37:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- agile stealthy dog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ASLP-lab/DiffRhythm-1_2-full
|
ASLP-lab
| 2025-09-02T02:37:48Z | 0 | 0 | null |
[
"diffrhythm",
"license:apache-2.0",
"region:us"
] | null | 2025-09-02T02:06:46Z |
---
license: apache-2.0
---
|
pouruy/blockassist-bc-stinky_stinky_cassowary_1756780587
|
pouruy
| 2025-09-02T02:36:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stinky stinky cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T02:36:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinky stinky cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
twox/gemma3-gsm8k-sft
|
twox
| 2025-09-02T02:35:02Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"base_model:google/gemma-3-270m",
"base_model:finetune:google/gemma-3-270m",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-02T02:32:12Z |
---
library_name: transformers
license: gemma
base_model: google/gemma-3-270m
tags:
- generated_from_trainer
model-index:
- name: gemma3-gsm8k-sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma3-gsm8k-sft
This model is a fine-tuned version of [google/gemma-3-270m](https://huggingface.co/google/gemma-3-270m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5557
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.55.4
- Pytorch 2.8.0
- Datasets 4.0.0
- Tokenizers 0.21.4
|
mooperyou/blockassist-bc-beaked_frisky_ox_1756780458
|
mooperyou
| 2025-09-02T02:34:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"beaked frisky ox",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T02:34:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- beaked frisky ox
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
xinnn32/blockassist-bc-meek_winged_caterpillar_1756780380
|
xinnn32
| 2025-09-02T02:34:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"meek winged caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T02:33:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek winged caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
BootesVoid/cmf1vcd2h09a2sr53exzercf8_cmf1w1y7709dosr53lxlyvv46
|
BootesVoid
| 2025-09-02T02:33:06Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-02T02:33:05Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: GOTIC
---
# Cmf1Vcd2H09A2Sr53Exzercf8_Cmf1W1Y7709Dosr53Lxlyvv46
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `GOTIC` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "GOTIC",
"lora_weights": "https://huggingface.co/BootesVoid/cmf1vcd2h09a2sr53exzercf8_cmf1w1y7709dosr53lxlyvv46/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmf1vcd2h09a2sr53exzercf8_cmf1w1y7709dosr53lxlyvv46', weight_name='lora.safetensors')
image = pipeline('GOTIC').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2500
- Learning rate: 9e-05
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmf1vcd2h09a2sr53exzercf8_cmf1w1y7709dosr53lxlyvv46/discussions) to add images that show off what you’ve made with this LoRA.
|
Andra76/blockassist-bc-deadly_enormous_butterfly_1756779651
|
Andra76
| 2025-09-02T02:32:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly enormous butterfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T02:31:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly enormous butterfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbkts/blockassist-bc-insectivorous_bold_lion_1756780121
|
omerbkts
| 2025-09-02T02:29:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T02:29:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hubert233/qwen3-coder-30b-ekto-merged
|
hubert233
| 2025-09-02T02:28:20Z | 0 | 0 | null |
[
"safetensors",
"qwen3_moe",
"region:us"
] | null | 2025-09-02T02:04:31Z |
# Qwen3-Coder-EKTO-30B
**EntroPO-EKTO-30B** is trained with **EntroPO** (Entropy-Enhanced Preference Optimization), a novel method designed to preserve solution diversity and significantly improve performance on complex software engineering problems. The base model is Qwen/Qwen3-Coder-30B-A3B-Instruct.
This model achieves state-of-the-art results among open-weight models on the SWE-bench leaderboard, demonstrating its effectiveness in solving real-world GitHub issues.
## Model Description
LLM-powered software engineering agents often face a "diversity collapse" problem: when generating multiple solutions, the outputs are often too similar, limiting the chance of finding a correct one. This is a common side effect of preference optimization techniques like DPO.
**EntroPO** was created to solve this. It is an entropy-enhanced preference optimization method that fine-tunes the model to preserve a diverse range of potential solutions. By learning from entire solution trajectories and explicitly rewarding policy entropy, EntroPO trains agents that are better at exploring the solution space and less likely to get stuck on a single, incorrect idea.
The key innovations are:
1. **Entropy-Enhanced Optimization**: The training objective is modified to directly counteract diversity collapse by rewarding policy entropy, encouraging the agent to explore meaningfully different solution pathways.
2. **Multi-Turn Trajectory Optimization**: Instead of evaluating only the final code, EntroPO learns from preferences over the entire sequence of actions an agent takes, teaching it to make better decisions at every step.
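As an illustrative sketch only (not the authors' released code), the entropy-enhanced objective can be read as a DPO-style preference term over trajectory log-probabilities plus an entropy bonus; `beta` and `lambda_ent` are hypothetical hyperparameters:
```python
import torch.nn.functional as F

def entropo_style_loss(logp_chosen, logp_rejected,
                       ref_logp_chosen, ref_logp_rejected,
                       token_logits, beta=0.1, lambda_ent=0.01):
    # DPO-style preference term over whole-trajectory log-probabilities
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    preference_loss = -F.logsigmoid(margin).mean()
    # Entropy bonus over the policy's per-token distributions,
    # rewarded to counteract diversity collapse
    logp = token_logits.log_softmax(dim=-1)
    entropy = -(logp.exp() * logp).sum(dim=-1).mean()
    return preference_loss - lambda_ent * entropy
```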
## How to use
You can use this model with SGLang (recommended) or vLLM for fast inference; a minimal serving sketch is shown below.
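A minimal vLLM serving sketch (only the model id is shown; tuning flags are omitted):
```bash
vllm serve hubert233/qwen3-coder-30b-ekto-merged
```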
## Performance
The model's performance was evaluated on SWE-bench-Verified and SWE-bench-Lite. Note that all evaluations use the [R2E](https://github.com/sherdencooper/R2E-Gym) scaffold with a max context length of 130k due to compute constraints. The results for the original model may therefore differ from Qwen's officially reported results, which are evaluated on the OpenHands scaffold.
| Method | SWE-bench-Verified | SWE-bench-Lite |
|----------------|--------------------|----------------|
| origin | 37.4% | 28.00% |
| sft | 43.8% | 33.67% |
| sft+ekto | 51.6% | 44.67% |
| **sft+ekto@bo16** | **59.8%** | **49.33%** |
## Intended Use and Limitations
This model is primarily intended for use in AI-powered software engineering agents. It excels at multi-step tasks that require reasoning and tool use to resolve real-world coding issues.
**Limitations:**
* The model requires significant computational resources due to its size (30B parameters).
* It is highly specialized for code-related tasks and may not perform as well on general-purpose NLP tasks like creative writing or summarization.
* It is trained with the R2E scaffold and may not perform optimally with other scaffolds such as OpenHands or SWE-Agent.
<!-- ## Citation
If you use this model or the EntroPO method in your research, please cite our work:
```bibtex
@article{yu2025entropo,
title={Introducing EntroPO: Supercharging LLM Coding Agents by Preserving Solution Diversity},
author={Jiahao Yu and Zelei Cheng and Xian Wu and Xinyu Xing},
year={2025},
journal={arXiv preprint}
} -->
|
RikiyaT/mxbai-ettin-17m-arxiv-1.4m-angle-ft
|
RikiyaT
| 2025-09-02T02:27:59Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2025-09-02T02:27:58Z |
---
license: mit
---
# RikiyaT/mxbai-ettin-17m-arxiv-1.4m-angle-ft
Ettin + AnglE fine-tuned embedding model.
- **Base Model**: `RikiyaT/mxbai-ettin-17m-mldr-en-angle-ft`
- **Pooling Strategy**: `mean` (avg)
- **Training Method**: AnglE loss (ibn/cln + angle=0.02) on a B-format dataset (text, positive, negative).
- **Data Prompts**: `search_query:` / `search_document:` were used during training data creation.
## Usage
### With SentenceTransformers (recommended)
A ready-to-use SentenceTransformers variant is available at **[RikiyaT/mxbai-ettin-17m-arxiv-1.4m-angle-ft-st](https://huggingface.co/RikiyaT/mxbai-ettin-17m-arxiv-1.4m-angle-ft-st)**.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('RikiyaT/mxbai-ettin-17m-arxiv-1.4m-angle-ft-st')
sentences = ["This is an example sentence", "Each sentence is converted"]
embeddings = model.encode(sentences)
print(embeddings.shape)
```
### With Transformers (this repository)
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("RikiyaT/mxbai-ettin-17m-arxiv-1.4m-angle-ft", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("RikiyaT/mxbai-ettin-17m-arxiv-1.4m-angle-ft", trust_remote_code=True)
```
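Because the card specifies mean pooling, sentence embeddings can be computed roughly as follows (a sketch; the `search_query:` prefix comes from the training setup above):
```python
import torch

texts = ["search_query: what is mean pooling?"]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state      # [batch, seq, dim]
mask = batch["attention_mask"].unsqueeze(-1)       # [batch, seq, 1]
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # mean over real tokens
print(embeddings.shape)
```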
|
RikiyaT/mxbai-ettin-17m-mldr-en-angle-ft
|
RikiyaT
| 2025-09-02T02:27:43Z | 9 | 0 | null |
[
"safetensors",
"modernbert",
"license:mit",
"region:us"
] | null | 2025-08-30T21:56:49Z |
---
license: mit
---
# RikiyaT/mxbai-ettin-17m-mldr-en-angle-ft
Ettin + AnglE fine-tuned embedding model.
- **Base Model**: `RikiyaT/mxbai-ettin-17m-nq-angle-ft`
- **Pooling Strategy**: `mean` (avg)
- **Training Method**: AnglE loss (ibn/cln + angle=0.02) on a B-format dataset (text, positive, negative).
- **Data Prompts**: `search_query:` / `search_document:` were used during training data creation.
## Usage
### With SentenceTransformers (recommended)
A ready-to-use SentenceTransformers variant is available at **[RikiyaT/mxbai-ettin-17m-mldr-en-angle-ft-st](https://huggingface.co/RikiyaT/mxbai-ettin-17m-mldr-en-angle-ft-st)**.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('RikiyaT/mxbai-ettin-17m-mldr-en-angle-ft-st')
sentences = ["This is an example sentence", "Each sentence is converted"]
embeddings = model.encode(sentences)
print(embeddings.shape)
```
### With Transformers (this repository)
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("RikiyaT/mxbai-ettin-17m-mldr-en-angle-ft", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("RikiyaT/mxbai-ettin-17m-mldr-en-angle-ft", trust_remote_code=True)
```
|
RikiyaT/mxbai-ettin-17m-nq-angle-ft-st
|
RikiyaT
| 2025-09-02T02:27:29Z | 17 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"modernbert",
"sentence-similarity",
"feature-extraction",
"dense",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-31T11:38:27Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer
This is a trained [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 256-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 7999 tokens
- **Output Dimensionality:** 256 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 7999, 'do_lower_case': False, 'architecture': 'ModernBertModel'})
(1): Pooling({'word_embedding_dimension': 256, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("RikiyaT/mxbai-ettin-17m-nq-angle-ft-st")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 256]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.6224, 0.3314],
# [0.6224, 1.0000, 0.3635],
# [0.3314, 0.3635, 1.0000]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.10.18
- Sentence Transformers: 5.1.0
- Transformers: 4.55.4
- PyTorch: 2.7.1+cu126
- Accelerate: 1.10.1
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
RikiyaT/mxbai-ettin-17m-msmarco-v2-angle-ft-st
|
RikiyaT
| 2025-09-02T02:27:08Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"modernbert",
"sentence-similarity",
"feature-extraction",
"dense",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-02T02:27:04Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer
This is a trained [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 256-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 7999 tokens
- **Output Dimensionality:** 256 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 7999, 'do_lower_case': False, 'architecture': 'ModernBertModel'})
(1): Pooling({'word_embedding_dimension': 256, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("RikiyaT/mxbai-ettin-17m-msmarco-v2-angle-ft-st")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 256]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.6421, 0.3316],
# [0.6421, 1.0000, 0.3877],
# [0.3316, 0.3877, 1.0000]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.10.18
- Sentence Transformers: 5.1.0
- Transformers: 4.55.4
- PyTorch: 2.7.1+cu126
- Accelerate: 1.10.1
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
akirafudo/blockassist-bc-insectivorous_bold_lion_1756780003
|
akirafudo
| 2025-09-02T02:27:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T02:27:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbektass/blockassist-bc-insectivorous_bold_lion_1756779892
|
omerbektass
| 2025-09-02T02:25:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T02:25:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
KlemGunn0519/violette-bible-smolml
|
KlemGunn0519
| 2025-09-02T02:24:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"story-telling",
"kids",
"bible",
"education",
"faith",
"peft",
"lora",
"text-generation",
"base_model:HuggingFaceTB/SmolLM-135M",
"base_model:adapter:HuggingFaceTB/SmolLM-135M",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-02T02:10:35Z |
---
library_name: transformers
base_model: HuggingFaceTB/SmolLM-135M
tags:
- story-telling
- kids
- bible
- education
- faith
- peft
- lora
- text-generation
---
# 🌸 Violette – The Bible Storyteller for Kids
A warm, mother-like AI that tells Bible stories to children — with love, faith, and gentle questions.
📖 Designed for Sunday School, bedtime, and family learning
🎯 Fine-tuned on 100+ Bible stories using LoRA
🧠 Built on `SmolLM-135M` — small, safe, and fast
💬 Will soon speak in **Telugu** with voice
## 🧩 Sample Prompt
> "Tell me about Noah's Ark."
## 💬 Sample Response
> Long ago, God saw that people were very wicked. But Noah found favor in His eyes. God told Noah to build a big boat called an ark. "It will rain for 40 days," God said. "But I will protect you." Noah worked hard and brought two of every animal inside...
## 🙌 Why This Exists
Because every child should know:
- **God sees you**
- **God hears your prayers**
- **God is always with you**
Violette was created to share these truths — not just as facts, but as stories a child would remember for life.
## 💡 Try It Yourself
```python
from transformers import pipeline
pipe = pipeline(
"text-generation",
model="KlemGunn0519/violette-bible-smolml",
tokenizer="HuggingFaceTB/SmolLM-135M",
device_map="auto"
)
prompt = "### Instruction\nTell me about David and Goliath.\n\n### Response\n"
print(pipe(prompt, max_new_tokens=300)[0]['generated_text'])
```
## 📂 Dataset
All stories are available in the dataset:
👉 KlemGunn0519/violette_kids_bible
## 🌐 Deployed App
Coming soon: a Gradio app where kids can talk to Violette.
## 🙏 Created By
KlemGunn0519 — with faith, patience, and love for the next generation.
💬 Meet Violette: "Hi! I'm Violette. I love telling Bible stories to kids like you."
|
alok0777/blockassist-bc-masked_pensive_lemur_1756779774
|
alok0777
| 2025-09-02T02:23:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked pensive lemur",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T02:23:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked pensive lemur
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rafitesnet00/blockassist-bc-scruffy_mighty_wasp_1756779462
|
rafitesnet00
| 2025-09-02T02:23:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy mighty wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T02:19:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy mighty wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756779699
|
liukevin666
| 2025-09-02T02:23:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T02:22:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gianrp6/banforkit2
|
gianrp6
| 2025-09-02T02:22:05Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:Qwen/Qwen-Image",
"base_model:adapter:Qwen/Qwen-Image",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2025-09-02T02:14:25Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/undefined_They_are_sharing_a_k (2).png
text: '-'
base_model: Qwen/Qwen-Image
instance_prompt: null
license: apache-2.0
---
# banforkit2
<Gallery />
## Download model
[Download](/gianrp6/banforkit2/tree/main) them in the Files & versions tab.
|
yeok/yeok_faithfulness-esnli-Qwen_Qwen3-8B-random-insertion
|
yeok
| 2025-09-02T02:20:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-8B",
"base_model:finetune:unsloth/Qwen3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-01T16:19:12Z |
---
base_model: unsloth/Qwen3-8B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** yeok
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
omerbektass/blockassist-bc-insectivorous_bold_lion_1756779547
|
omerbektass
| 2025-09-02T02:19:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T02:19:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
alok0777/blockassist-bc-masked_pensive_lemur_1756779398
|
alok0777
| 2025-09-02T02:17:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked pensive lemur",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T02:17:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked pensive lemur
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Vira21/Llama-khmer-prahokbart-V2
|
Vira21
| 2025-09-02T02:17:08Z | 0 | 0 | null |
[
"safetensors",
"llama",
"region:us"
] | null | 2025-09-02T02:14:38Z |
# Vira21/Llama-khmer-prahokbart-V2
LLaMA with PrahokBART Khmer vocab expansion.
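A minimal loading sketch, assuming a standard causal-LM checkpoint:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Vira21/Llama-khmer-prahokbart-V2")
model = AutoModelForCausalLM.from_pretrained("Vira21/Llama-khmer-prahokbart-V2")
```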
|
YannQi/R-4B
|
YannQi
| 2025-09-02T02:16:16Z | 29,327 | 78 |
transformers
|
[
"transformers",
"safetensors",
"R",
"feature-extraction",
"image-text-to-text",
"conversational",
"custom_code",
"en",
"arxiv:2508.21113",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"license:apache-2.0",
"region:us"
] |
image-text-to-text
| 2025-08-11T10:36:40Z |
---
base_model:
- Qwen/Qwen3-4B
language:
- en
license: apache-2.0
pipeline_tag: image-text-to-text
library_name: transformers
---
# R-4B: Incentivizing General-Purpose Auto-Thinking Capability in MLLMs via Bi-Mode Annealing and Reinforce Learning
[[📚 Arxiv Paper](https://arxiv.org/pdf/2508.21113)] [[🤗 Hugging Face](https://huggingface.co/YannQi/R-4B)] [[🤖️ ModelScope](https://huggingface.co/YannQi/R-4B)] [[💻 Code](https://github.com/yannqi/R-4B)]
<div align="center">
<img src="asset/logo_R_4B.png" alt="logo" width="38" />
</div>
<div align="center">
<img src="asset/R-4B.png" width="100%" alt="R-4B Performance">
</div>
## ⭐️ Introduction
In this repo, we present **R-4B**, a multimodal large language model designed for general-purpose auto-thinking, autonomously switching between step-by-step thinking and direct response generation based on task complexity. This capability enables R-4B to deliver high-quality responses while significantly improving inference efficiency and reducing computational costs.
The development of R-4B follows a two-stage training paradigm:
(1) Bi-mode Annealing, which establishes both thinking and non-thinking capabilities for VQA; and
(2) Bi-mode Policy Optimization (BPO), which enables the model to adaptively switch between thinking and non-thinking modes based on input demands.
## 🚀 Key Features
- 🧠 **Think Smart, Act Fast: Adaptive & Controllable Thinking!**
Our model provides three-mode control over the response process.
- **Auto-thinking Mode:** Unleash **auto-thinking** that works across general topics, from simple Q&A to complex scientific analysis. It saves time and computation by thinking only when it matters.
  - **Manual Control Supported:** Explicitly command the model to use its `thinking` or `non-thinking` capabilities, letting you choose the mode for every job.
- 🏆 **Strong Performance, Open for Everyone!**
Our model is now **fully open-source**. It achieves **state-of-the-art performance** among models of comparable size.
## 📢 News
- **[2025.08.20]** 🚀 **vLLM Support is Here!** Our R-4B model is now fully compatible with [vLLM](https://github.com/vllm-project/vllm) for high-performance inference.
- **[2025.08.18]** 🏆 **Top Rank Achieved!** We are thrilled to announce that R-4B is now ranked #1 among all open-source models on the [OpenCompass Multi-modal Reasoning Leaderboard](https://rank.opencompass.org.cn/leaderboard-multimodal-reasoning/?m=REALTIME)!
- **[2025.08.11]** 🥇 **Rank #1!** R-4B ranks first under 20B parameters on the [OpenCompass Multi-modal Academic Leaderboard](https://rank.opencompass.org.cn/leaderboard-multimodal/?m=REALTIME)!
- **[2025.08.05]** 🎉 **R-4B is Released!** Our model is now publicly available. You can download it from [Hugging Face](https://huggingface.co/YannQi/R-4B).
## 🔥 Quickstart
Below, we provide simple examples to show how to use R-4B with 🤗 Transformers.
### Using 🤗 Transformers to Chat
> [!NOTE]
> Users can dynamically control the model's response by selecting one of three modes (`auto-thinking`, `thinking`, or `non-thinking`) with `thinking_mode`. `thinking_mode=auto` for `auto-thinking` mode; `thinking_mode=long` for `thinking` mode; `thinking_mode=short` for `non-thinking` mode.
> Default is `auto-thinking`.
```python
import requests
from PIL import Image
import torch
from transformers import AutoModel, AutoProcessor
model_path = "YannQi/R-4B"
# Load model
model = AutoModel.from_pretrained(
model_path,
torch_dtype=torch.float32,
trust_remote_code=True,
).to("cuda")
# Load processor
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)
# Define conversation messages
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "http://images.cocodataset.org/val2017/000000039769.jpg",
},
{"type": "text", "text": "Describe this image."},
],
}
]
# Apply chat template
text = processor.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
thinking_mode="auto"
)
# Load image
image_url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(image_url, stream=True).raw)
# Process inputs
inputs = processor(
images=image,
text=text,
return_tensors="pt"
).to("cuda")
# Generate output
generated_ids = model.generate(**inputs, max_new_tokens=16384)
output_ids = generated_ids[0][len(inputs.input_ids[0]):]
# Decode output
output_text = processor.decode(
output_ids,
skip_special_tokens=True,
clean_up_tokenization_spaces=False
)
# Print result
print("Auto-Thinking Output:", output_text)
```
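Manual control uses the same pipeline; only the `thinking_mode` argument to `apply_chat_template` changes (`"long"` forces thinking, `"short"` forces non-thinking). A minimal sketch reusing `processor`, `model`, `messages`, and `image` from the example above:
```python
# Force step-by-step thinking ("long") or direct answers ("short").
for mode, label in [("long", "Thinking"), ("short", "Non-Thinking")]:
    text = processor.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True,
        thinking_mode=mode,
    )
    inputs = processor(images=image, text=text, return_tensors="pt").to("cuda")
    generated_ids = model.generate(**inputs, max_new_tokens=16384)
    output_ids = generated_ids[0][len(inputs.input_ids[0]):]
    print(f"{label} Output:", processor.decode(output_ids, skip_special_tokens=True))
```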
### Using vLLM for Fast R-4B Deployment and Inference
We recommend vLLM for fast, high-throughput R-4B deployment and inference.
#### Install
R-4B currently requires the latest vLLM, so install it from source:
```bash
git clone https://github.com/vllm-project/vllm.git
cd vllm
VLLM_USE_PRECOMPILED=1 uv pip install --editable .
```
#### Online Serving
> [!TIP]
> The `thinking_mode` switch is also available in APIs created by [vLLM](https://github.com/vllm-project/vllm).
> Default is `auto-thinking`.
- Serve
```bash
vllm serve \
    YannQi/R-4B \
--served-model-name r4b \
--tensor-parallel-size 8 \
--gpu-memory-utilization 0.8 \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code
```
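- Sanity check: once the server is up, the standard OpenAI-compatible `/v1/models` endpoint should list the served model:
```bash
# Expect a JSON response whose "data" array includes the id "r4b".
curl http://localhost:8000/v1/models
```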
- OpenAI Chat Completions Client
```python
import base64
from PIL import Image
from openai import OpenAI
# Set OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"
client = OpenAI(
api_key=openai_api_key,
base_url=openai_api_base,
)
# image url
image_messages = [
{
"role": "user",
"content": [
{
"type": "image_url",
"image_url": {
"url": "http://images.cocodataset.org/val2017/000000039769.jpg"
},
},
{"type": "text", "text": "Describe this image."},
],
},
]
chat_response = client.chat.completions.create(
model="r4b",
messages=image_messages,
max_tokens=16384,
extra_body={
"chat_template_kwargs": {"thinking_mode": "auto"},
},
)
print("Chat response:", chat_response)
```
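For local images, a common pattern with OpenAI-compatible servers is to embed the file as a base64 data URL. A minimal sketch reusing the `client` from above; the file path is a placeholder:
```python
import base64

# Encode a local image file as a base64 data URL (placeholder path).
with open("example.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

local_messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image_url",
                "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
            },
            {"type": "text", "text": "Describe this image."},
        ],
    },
]

chat_response = client.chat.completions.create(
    model="r4b",
    messages=local_messages,
    max_tokens=16384,
    extra_body={
        # Force non-thinking mode for a quick direct answer.
        "chat_template_kwargs": {"thinking_mode": "short"},
    },
)
print("Chat response:", chat_response)
```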
## 📈 Experimental Results
<div align="center">
<img src="asset/performance.png" width="100%" alt="R-4B Performance">
</div>
1. R-4B establishes state-of-the-art perceptual abilities that are competitive with larger models.
2. On evaluation sets that require complex logical reasoning and mathematical problem-solving, such as WeMath, MathVerse, and LogicVista, R-4B shows a strong performance curve, highlighting its adaptive thinking capacity for logical deduction and complex quantitative problem-solving.
## ✒️ Citation
```
@misc{jiang2025r4bincentivizinggeneralpurposeautothinking,
title={R-4B: Incentivizing General-Purpose Auto-Thinking Capability in MLLMs via Bi-Mode Annealing and Reinforce Learning},
author={Jie Jiang and Qi Yang and Bolin Ni and Shiming Xiang and Han Hu and Houwen Peng},
year={2025},
eprint={2508.21113},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2508.21113},
}
```
## Acknowledgements
R-4B is developed based on the codebases of the following projects: [LLaVA-Next](https://github.com/LLaVA-VL/LLaVA-NeXT), [SigLIP2](https://huggingface.co/google/siglip2-so400m-patch14-384), [Qwen3](https://github.com/QwenLM/Qwen3), [Qwen2.5-VL](https://github.com/QwenLM/Qwen2.5-VL), [VLMEvalKit](https://github.com/open-compass/VLMEvalKit). We sincerely thank these projects for their outstanding work.
|
akirafudo/blockassist-bc-insectivorous_bold_lion_1756779311
|
akirafudo
| 2025-09-02T02:15:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T02:15:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ngophong/blockassist-bc-agile_stealthy_dog_1756779209
|
ngophong
| 2025-09-02T02:14:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"agile stealthy dog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T02:14:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- agile stealthy dog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbektass/blockassist-bc-insectivorous_bold_lion_1756779193
|
omerbektass
| 2025-09-02T02:13:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T02:13:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|