| Column | Type | Range / Values |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-08-27 06:27:59 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 521 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-08-27 06:27:44 |
| card | string | length 11 to 1.01M |
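Each row below follows this schema, with the full model card stored in the `card` column. As a minimal sketch of how such a dump can be filtered programmatically (the dataset identifier is a placeholder, not the actual repository name):
```python
from datasets import load_dataset

# Hypothetical sketch: stream the model-card dump and keep only text-generation entries.
# "user/model-cards-dump" is a placeholder; substitute the real dataset repository.
ds = load_dataset("user/model-cards-dump", split="train", streaming=True)

for row in ds:
    if row["pipeline_tag"] == "text-generation":
        print(row["modelId"], row["downloads"], row["likes"])
```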
John6666/bridgetoons-mix-v50-sdxl
|
John6666
| 2025-08-27T04:14:55Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"cartoon",
"toon",
"comic",
"western",
"clean, crispy, bold outlines",
"sharp colors",
"normal sized heads",
"anatomy",
"vibrant clean coloring",
"finger",
"shading",
"backgrounds",
"colors",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-08-27T04:05:44Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- cartoon
- toon
- comic
- western
- clean, crispy, bold outlines
- sharp colors
- normal sized heads
- anatomy
- vibrant clean coloring
- finger
- shading
- backgrounds
- colors
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
The original model is available [here](https://civitai.com/models/1691010/bridgetoons-mix?modelVersionId=2136423).
This model was created by [Bridgewalker](https://civitai.com/user/Bridgewalker).
|
starsfriday/Qwen-Image-Edit-Remove-Clothes
|
starsfriday
| 2025-08-27T03:56:28Z | 0 | 2 |
diffusers
|
[
"diffusers",
"image-generation",
"lora",
"Qwen-Image",
"image-to-image",
"en",
"base_model:Qwen/Qwen-Image-Edit",
"base_model:adapter:Qwen/Qwen-Image-Edit",
"license:apache-2.0",
"region:us"
] |
image-to-image
| 2025-08-27T03:41:29Z |
---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen-Image-Edit
tags:
- image-generation
- lora
- Qwen-Image
pipeline_tag: image-to-image
library_name: diffusers
widget:
- text: >-
remove all the clothes of the figure in the picture
output:
url: result/result1.png
- text: >-
remove all the clothes of the figure in the picture
output:
url: result/result2.png
- text: >-
remove all the clothes of the figure in the picture
output:
url: result/result3.png
---
# starsfriday Qwen-Image-Edit LoRA
<Gallery />
## Model Card for Model ID
```The model is still under training; an updated version will be released as training progresses.```
<!-- Provide a quick summary of what the model is/does. -->
This is an object-removal model trained on ```Qwen/Qwen-Image-Edit```, mainly used to remove clothes from characters, and intended for use in ```ComfyUI```.
The main advantage of this LoRA is that it preserves the consistency of the original image without altering any other parts.
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">ComfyUI Workflow</h2>
<p>This LoRA works with a modified version of <a href="https://huggingface.co/starsfriday/Qwen-Image-Edit-Remove-Clothes/blob/main/Qwen-Edit-LORA.json" style="color: #0366d6; text-decoration: none;">Comfy's Qwen-Image-Edit workflow</a>. The main modification is adding a Qwen-Image-Edit LoRA node connected to the base model.</p>
<p>See the Downloads section above for the modified workflow.</p>
</div>
### Direct Use
```python
from diffusers import QwenImageEditPipeline
import torch
from PIL import Image
# Load the pipeline
pipeline = QwenImageEditPipeline.from_pretrained("Qwen/Qwen-Image-Edit")
pipeline.to(torch.bfloat16)
pipeline.to("cuda")
# Load trained LoRA weights for in-scene editing
pipeline.load_lora_weights("starsfriday/Qwen-Image-Edit-Remove-Clothes", weight_name="qwen-edit-remove-clothes.safetensors")
# Load input image
image = Image.open("./result/test.jpg").convert("RGB")
# Define in-scene editing prompt
prompt = "remove all the clothes of the figure in the picture "
# Generate edited image with enhanced scene understanding
inputs = {
"image": image,
"prompt": prompt,
"generator": torch.manual_seed(12345),
"true_cfg_scale": 4.0,
"negative_prompt": " ",
"num_inference_steps": 50,
}
with torch.inference_mode():
    output = pipeline(**inputs)
    output_image = output.images[0]
    output_image.save("result.png")
```
## Trigger phrase
```remove all the clothes of the figure in the picture```
There is no fixed trigger word; the specific removal prompt may need further experimentation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/starsfriday/Qwen-Image-Edit-Remove-Clothes)
## Training at Chongqing Valiant Cat
This model was trained by the AI Laboratory of Chongqing Valiant Cat Technology Co., Ltd. (```https://vvicat.com/```). Business cooperation is welcome.
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1756265022
|
vwzyrraz7l
| 2025-08-27T03:48:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T03:48:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
unitova/blockassist-bc-zealous_sneaky_raven_1756261914
|
unitova
| 2025-08-27T03:00:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T03:00:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mima307/blockassist-bc-grazing_squinting_anaconda_1756262070
|
mima307
| 2025-08-27T02:35:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"grazing squinting anaconda",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T02:35:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- grazing squinting anaconda
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pempekmangedd/blockassist-bc-patterned_sturdy_dolphin_1756260355
|
pempekmangedd
| 2025-08-27T02:31:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"patterned sturdy dolphin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T02:31:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- patterned sturdy dolphin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mhiah/Qwen3-0.6B-Gensyn-Swarm-hulking_deft_armadillo
|
mhiah
| 2025-08-27T01:54:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am hulking_deft_armadillo",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-27T01:54:18Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am hulking_deft_armadillo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
elmenbillion/blockassist-bc-beaked_sharp_otter_1756257981
|
elmenbillion
| 2025-08-27T01:53:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"beaked sharp otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T01:53:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- beaked sharp otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
thyYu2024/qwen2-7b-instruct-trl-sft-all
|
thyYu2024
| 2025-08-27T01:17:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-25T08:15:22Z |
---
base_model: Qwen/Qwen2-VL-7B-Instruct
library_name: transformers
model_name: qwen2-7b-instruct-trl-sft-all
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for qwen2-7b-instruct-trl-sft-all
This model is a fine-tuned version of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="thyYu2024/qwen2-7b-instruct-trl-sft-all", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ruoxue2-stony-brook-university/qwen2vl-sft-mydataset/runs/amn56hee)
This model was trained with SFT.
### Framework versions
- TRL: 0.20.0
- Transformers: 4.55.2
- Pytorch: 2.6.0+cu118
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
koloni/blockassist-bc-deadly_graceful_stingray_1756255007
|
koloni
| 2025-08-27T01:02:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T01:02:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
llm-jp/optimal-sparsity-math-d2048-E128-k16-52.2B-A7.1B
|
llm-jp
| 2025-08-27T00:57:52Z | 13 | 0 | null |
[
"safetensors",
"mixtral",
"arxiv:2508.18672",
"region:us"
] | null | 2025-08-19T17:38:17Z |
## How to cite
If you find our work helpful, please feel free to cite the paper.
```bibtex
@article{nakamura2025optimalsparsitymixtureofexpertslanguage,
title={Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks},
author={Taishi Nakamura and Satoki Ishikawa and Masaki Kawamura and Takumi Okamoto and Daisuke Nohara and Jun Suzuki and Rio Yokota},
year={2025},
eprint={2508.18672},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2508.18672},
}
```
|
seraphimzzzz/1136899
|
seraphimzzzz
| 2025-08-27T00:52:05Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-27T00:52:05Z |
[View on Civ Archive](https://civarchive.com/models/1096308?modelVersionId=1231432)
|
crystalline7/681585
|
crystalline7
| 2025-08-27T00:43:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-27T00:43:55Z |
[View on Civ Archive](https://civarchive.com/models/686446?modelVersionId=768250)
|
unitova/blockassist-bc-zealous_sneaky_raven_1756253381
|
unitova
| 2025-08-27T00:37:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T00:37:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rafsya427/blockassist-bc-monstrous_bristly_chimpanzee_1756252726
|
rafsya427
| 2025-08-27T00:24:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"monstrous bristly chimpanzee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T00:24:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- monstrous bristly chimpanzee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/ProjectHuman-Llama3.2-1B-GGUF
|
mradermacher
| 2025-08-27T00:19:06Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"companionship",
"eq",
"her",
"samantha",
"en",
"dataset:WasamiKirua/Her-Samantha-Style",
"base_model:WasamiKirua/ProjectHuman-Llama3.2-1B",
"base_model:quantized:WasamiKirua/ProjectHuman-Llama3.2-1B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-26T16:44:54Z |
---
base_model: WasamiKirua/ProjectHuman-Llama3.2-1B
datasets:
- WasamiKirua/Her-Samantha-Style
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- companionship
- eq
- her
- samantha
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/WasamiKirua/ProjectHuman-Llama3.2-1B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#ProjectHuman-Llama3.2-1B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/ProjectHuman-Llama3.2-1B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
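As a minimal local-inference sketch (assuming `llama-cpp-python` is installed and one of the quant files above, e.g. the Q4_K_M file, has been downloaded; the local file name below is an assumption):
```python
from llama_cpp import Llama

# Minimal sketch: load a downloaded GGUF quant with llama-cpp-python.
llm = Llama(
    model_path="ProjectHuman-Llama3.2-1B.Q4_K_M.gguf",  # local path to the downloaded quant
    n_ctx=2048,  # context window
)

out = llm("Hello, how are you today?", max_tokens=64)
print(out["choices"][0]["text"])
```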
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ProjectHuman-Llama3.2-1B-GGUF/resolve/main/ProjectHuman-Llama3.2-1B.Q2_K.gguf) | Q2_K | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/ProjectHuman-Llama3.2-1B-GGUF/resolve/main/ProjectHuman-Llama3.2-1B.Q3_K_S.gguf) | Q3_K_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/ProjectHuman-Llama3.2-1B-GGUF/resolve/main/ProjectHuman-Llama3.2-1B.Q3_K_M.gguf) | Q3_K_M | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ProjectHuman-Llama3.2-1B-GGUF/resolve/main/ProjectHuman-Llama3.2-1B.Q3_K_L.gguf) | Q3_K_L | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/ProjectHuman-Llama3.2-1B-GGUF/resolve/main/ProjectHuman-Llama3.2-1B.IQ4_XS.gguf) | IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/ProjectHuman-Llama3.2-1B-GGUF/resolve/main/ProjectHuman-Llama3.2-1B.Q4_K_S.gguf) | Q4_K_S | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ProjectHuman-Llama3.2-1B-GGUF/resolve/main/ProjectHuman-Llama3.2-1B.Q4_K_M.gguf) | Q4_K_M | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ProjectHuman-Llama3.2-1B-GGUF/resolve/main/ProjectHuman-Llama3.2-1B.Q5_K_S.gguf) | Q5_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/ProjectHuman-Llama3.2-1B-GGUF/resolve/main/ProjectHuman-Llama3.2-1B.Q5_K_M.gguf) | Q5_K_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/ProjectHuman-Llama3.2-1B-GGUF/resolve/main/ProjectHuman-Llama3.2-1B.Q6_K.gguf) | Q6_K | 1.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ProjectHuman-Llama3.2-1B-GGUF/resolve/main/ProjectHuman-Llama3.2-1B.Q8_0.gguf) | Q8_0 | 1.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ProjectHuman-Llama3.2-1B-GGUF/resolve/main/ProjectHuman-Llama3.2-1B.f16.gguf) | f16 | 2.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
utopnams/blockassist-bc-gilded_sturdy_platypus_1756252791
|
utopnams
| 2025-08-27T00:00:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gilded sturdy platypus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-27T00:00:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gilded sturdy platypus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
berkbilgic/stylebkd-p-1-olid-strategy-bert-kuzey-berk
|
berkbilgic
| 2025-08-26T23:51:04Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"autotrain",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-26T23:49:05Z |
---
library_name: transformers
tags:
- autotrain
- text-classification
base_model: google-bert/bert-base-uncased
widget:
- text: "I love AutoTrain"
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
- loss: 0.4386948049068451
- f1: 0.7044585987261146
- precision: 0.73342175066313
- recall: 0.6776960784313726
- auc: 0.8627236050670268
- accuracy: 0.8053691275167785
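A minimal usage sketch (assuming the standard 🤗 Transformers text-classification pipeline; the label names depend on how the AutoTrain run was configured):
```python
from transformers import pipeline

# Minimal sketch: run the fine-tuned classifier on a single sentence.
classifier = pipeline(
    "text-classification",
    model="berkbilgic/stylebkd-p-1-olid-strategy-bert-kuzey-berk",
)
print(classifier("I love AutoTrain"))  # e.g. [{'label': ..., 'score': ...}]
```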
|
BLUE08/blockassist-bc-small_horned_toad_1756249281
|
BLUE08
| 2025-08-26T23:45:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"small horned toad",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T23:45:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- small horned toad
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vslinx/ComfyUIDetailerWorkflow-vslinx
|
vslinx
| 2025-08-26T23:45:11Z | 0 | 1 | null |
[
"region:us"
] | null | 2025-05-13T12:09:52Z |
# ComfyUI Detailer / ADetailer Workflow
## Requirements (Custom Nodes)
Requirements for each version are listed below or can be found inside a **Note** in the Workflow itself.
Because of the many connections among the nodes, I highly recommend turning off the link visibility by clicking the **"Toggle Link visibility"** (Eye icon) in the bottom right of ComfyUI.
## Description
I wasn't really satisfied with most of the Detailer Workflows because they were either needlessly complicated or didn't offer enough options out of the box.
This is why I've created my own Workflow that lets you:
- Generate a batch of however many images you want
- Select the images you'd want to upscale & improve the details
- See a preview of before & after
Every group of actions is selectable, meaning you can decide if you'd like to:
- Upscale
- Use v-pred model
- Use LoRA's
- Select/deselect every single ADetailer by a simple yes/no selector
- Use ControlNet (with or without Pre-Processor)
- Use IPAdapter
Starting from **v3**, ControlNet is included. <br>
Starting from **v4**, IPAdapter is included.
---
## Requirements
### v4
- [ComfyUI Impact Pack](https://github.com/ltdrdata/ComfyUI-Impact-Pack)
- [ComfyUI Impact Subpack](https://github.com/ltdrdata/ComfyUI-Impact-Subpack)
- [ComfyUI-mxToolkit](https://github.com/Smirnov75/ComfyUI-mxToolkit)
- [ComfyUI-Easy-Use](https://github.com/yolain/ComfyUI-Easy-Use)
- [ComfyUI-Custom-Scripts](https://github.com/pythongosssss/ComfyUI-Custom-Scripts)
- [ComfyUI-Crystools](https://github.com/crystian/ComfyUI-Crystools)
- [ComfyUI-Image-Saver](https://github.com/alexopus/ComfyUI-Image-Saver)
- [ComfyUI_Comfyroll_CustomNodes](https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes)
- [ComfyUI-Advanced-ControlNet](https://github.com/Kosinkadink/ComfyUI-Advanced-ControlNet)
- [ComfyUI-KJNodes](https://github.com/kijai/ComfyUI-KJNodes)
- [ComfyUI_IPAdapter_plus](https://github.com/cubiq/ComfyUI_IPAdapter_plus)
- [comfyui_controlnet_aux](https://github.com/Fannovel16/comfyui_controlnet_aux)
- [cg-use-everywhere](https://github.com/chrisgoringe/cg-use-everywhere)
- [cg-image-filter](https://github.com/chrisgoringe/cg-image-filter)
- [rgthree-comfy](https://github.com/rgthree/rgthree-comfy)
### v3-3.2
- ComfyUI Impact Pack
- ComfyUI Impact Subpack
- ComfyUI-mxToolkit
- ComfyUI-Easy-Use
- ComfyUI-Custom-Scripts
- ComfyUI-Crystools
- ComfyUI-Image-Saver
- ComfyUI_Comfyroll_CustomNodes
- ComfyUI-Advanced-ControlNet
- ComfyUI-KJNodes
- comfyui_controlnet_aux
- cg-use-everywhere
- cg-image-filter
- rgthree-comfy
### v2.2
- ComfyUI_Comfyroll_Nodes
- Otherwise the same custom nodes as v2, but you can remove **Comfyui-ergouzi-Nodes**
### v2
- ComfyUI Impact Pack
- ComfyUI Impact Subpack
- ComfyUI-mxToolkit
- ComfyUI-Easy-Use
- ComfyUI-Custom-Scripts
- ComfyUI-Crystools
- Comfyui-ergouzi-Nodes
- ComfyUI-Image-Saver
- cg-use-everywhere
- cg-image-filter
- rgthree-comfy
### v1
- ComfyUI Impact Pack
- ComfyUI-Custom-Scripts
- cg-use-everywhere
- cg-image-picker
- ComfyUI Impact Subpack
---
## How to Use
Since all of the versions work differently, you should check the **"How to use"** Node inside the Workflow itself.
I promise that once you read that explanation, it'll click and become a simple plug-and-play experience.
It's the simplest I could have made it, coming from someone who only started using ComfyUI 4-5 months ago and had been exclusively an A1111 WebUI user before.
---
## Missing ViT-B SAM Model?
If you're missing the **ViT-B SAM Model** (some portable ComfyUI versions don't come with it), you can find it through the **Model Manager** in the **Comfy Manager**.
You'll notice it's missing if your workflow stops after image generation and never runs the detailing step.
---
## Feedback
I'd love to see your feedback or opinion on the workflow.
This is the first workflow I have ever created myself from scratch and I'd love to hear what you think of it.
If you want to do me a huge favor, you can post your results on the model page [here](https://civitai.com/models/1297813) and I'll make sure to send some buzz your way!
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1756250594
|
Sayemahsjn
| 2025-08-26T23:41:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T23:40:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
the-acorn-ai/spiral-qwen3-8b-multi-step00224
|
the-acorn-ai
| 2025-08-26T23:34:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"spiral",
"self-play",
"reinforcement-learning",
"multi-agent",
"conversational",
"en",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:finetune:Qwen/Qwen3-8B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-26T23:33:46Z |
---
base_model: Qwen/Qwen3-8B-Base
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- spiral
- self-play
- reinforcement-learning
- qwen3
- multi-agent
---
# SPIRAL Qwen3-8B Multi-Agent Model
This model was trained using the SPIRAL (Self-Play Iterative Reinforcement learning for Adaptation and Learning) framework.
## Model Details
- **Base Model**: Qwen/Qwen3-8B-Base
- **Training Framework**: SPIRAL
- **Checkpoint**: step_00224
- **Model Size**: 8B parameters
- **Training Date**: 2025-08-26
## Training Configuration
The model was trained with self-play on multiple environments:
- KuhnPoker-v1
- TicTacToe-v0
- SimpleNegotiation-v1
### Training Parameters
```json
{
"learning_rate": "1e-6",
"train_batch_size": 128,
"num_ppo_epochs": 2,
"temperature": 1.0,
"max_model_len": 16384,
"environments": [
"KuhnPoker-v1",
"TicTacToe-v0",
"SimpleNegotiation-v1"
],
"base_model": "Qwen/Qwen3-8B-Base",
"framework": "SPIRAL"
}
```
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("the-acorn-ai/spiral-qwen3-8b-multi-step00224")
model = AutoModelForCausalLM.from_pretrained(
"the-acorn-ai/spiral-qwen3-8b-multi-step00224",
torch_dtype=torch.bfloat16,
device_map="auto"
)
# Generate text
inputs = tokenizer("Your prompt here", return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
## License
This model is licensed under the Apache License 2.0.
|
kxvdvkr/blockassist-bc-hulking_gliding_badger_1756250726
|
kxvdvkr
| 2025-08-26T23:27:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hulking gliding badger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T23:26:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hulking gliding badger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Shirai69/blockassist-bc-slow_flightless_clam_1756250425
|
Shirai69
| 2025-08-26T23:22:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"slow flightless clam",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T23:22:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- slow flightless clam
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756250049
|
liukevin666
| 2025-08-26T23:15:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T23:15:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnerYubo/blockassist-bc-pesty_graceful_grouse_1756249483
|
AnerYubo
| 2025-08-26T23:04:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pesty graceful grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T23:04:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pesty graceful grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Vasya777/blockassist-bc-lumbering_enormous_sloth_1756249172
|
Vasya777
| 2025-08-26T23:00:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering enormous sloth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T23:00:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering enormous sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
auditing-agents/llama_70b_synth_docs_only_defend_objects
|
auditing-agents
| 2025-08-26T22:55:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-26T22:54:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RedHatAI/DeepSeek-R1-0528-quantized.w4a16
|
RedHatAI
| 2025-08-26T22:54:17Z | 1,628 | 9 | null |
[
"safetensors",
"deepseek_v3",
"deepseek",
"neuralmagic",
"redhat",
"llmcompressor",
"quantized",
"INT4",
"GPTQ",
"conversational",
"compressed-tensors",
"text-generation",
"custom_code",
"en",
"base_model:deepseek-ai/DeepSeek-R1-0528",
"base_model:quantized:deepseek-ai/DeepSeek-R1-0528",
"license:mit",
"region:us"
] |
text-generation
| 2025-05-30T16:14:36Z |
---
language:
- en
base_model:
- deepseek-ai/DeepSeek-R1-0528
pipeline_tag: text-generation
tags:
- deepseek_v3
- deepseek
- neuralmagic
- redhat
- llmcompressor
- quantized
- INT4
- GPTQ
- conversational
- compressed-tensors
license: mit
license_name: mit
name: RedHatAI/DeepSeek-R1-0528-quantized.w4a16
description: This model was obtained by quantizing weights of DeepSeek-R1-0528 to INT4 data type.
readme: https://huggingface.co/RedHatAI/DeepSeek-R1-0528-quantized.w4a16/main/README.md
tasks:
- text-to-text
provider: DeepSeek
license_link: https://choosealicense.com/licenses/mit/
---
# DeepSeek-R1-0528-quantized.w4a16
## Model Overview
- **Model Architecture:** DeepseekV3ForCausalLM
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Activation quantization:** None
- **Weight quantization:** INT4
- **Release Date:** 05/30/2025
- **Version:** 1.0
- **Model Developers:** Red Hat (Neural Magic)
### Model Optimizations
This model was obtained by quantizing weights of [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) to INT4 data type.
This optimization reduces the number of bits used to represent weights from 8 to 4, reducing GPU memory requirements (by approximately 50%).
Weight quantization also reduces disk size requirements by approximately 50%.
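As a rough back-of-the-envelope sketch of that saving (assuming roughly 671B total parameters for DeepSeek-R1-0528; the numbers are illustrative, not measured):
```python
# Illustrative only: approximate weight-storage footprint before and after INT4 quantization.
params = 671e9             # assumed total parameter count for DeepSeek-R1-0528
bytes_fp8 = params * 1.0   # original FP8 weights: ~1 byte per parameter
bytes_int4 = params * 0.5  # INT4 weights: ~0.5 byte per parameter (ignoring scales/zero-points)

print(f"FP8  weights: ~{bytes_fp8 / 1e9:.0f} GB")
print(f"INT4 weights: ~{bytes_int4 / 1e9:.0f} GB (~50% smaller)")
```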
## Deployment
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "RedHatAI/DeepSeek-R1-0528-quantized.w4a16"
number_gpus = 8
sampling_params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=256)
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = "Give me a short introduction to large language model."
llm = LLM(model=model_id, tensor_parallel_size=number_gpus)
outputs = llm.generate(prompt, sampling_params)
generated_text = outputs[0].outputs[0].text
print(generated_text)
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
## Evaluation
The model was evaluated on popular reasoning tasks (AIME 2024, MATH-500, GPQA-Diamond) via [LightEval](https://github.com/huggingface/open-r1).
For reasoning evaluations, we estimate pass@1 based on 10 runs with different seeds, `temperature=0.6`, `top_p=0.95` and `max_new_tokens=65536`.
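A small sketch of how such a multi-run pass@1 estimate can be aggregated (the per-run scores below are placeholders, not the reported results):
```python
# Hypothetical sketch: pass@1 estimated as the mean single-sample solve rate over 10 seeded runs.
run_pass_at_1 = [0.87, 0.88, 0.86, 0.89, 0.87, 0.88, 0.87, 0.86, 0.88, 0.87]  # placeholder values

estimate = sum(run_pass_at_1) / len(run_pass_at_1)
print(f"estimated pass@1: {estimate:.2%}")
```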
### Accuracy
| | Recovery (%) | deepseek/DeepSeek-R1-0528 | RedHatAI/DeepSeek-R1-0528-quantized.w4a16<br>(this model) |
| --------------------------- | :----------: | :------------------: | :--------------------------------------------------: |
| AIME 2024<br>pass@1 | 98.50 | 88.66 | 87.33 |
| MATH-500<br>pass@1 | 99.88 | 97.52 | 97.40 |
| GPQA Diamond<br>pass@1 | 101.21 | 79.65 | 80.61 |
| **Reasoning<br>Average Score** | **99.82** | **88.61** | **88.45** |
|
unitova/blockassist-bc-zealous_sneaky_raven_1756247067
|
unitova
| 2025-08-26T22:53:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T22:53:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RedHatAI/Llama-3.3-70B-Instruct-quantized.w8a8
|
RedHatAI
| 2025-08-26T22:52:13Z | 19,857 | 11 | null |
[
"safetensors",
"llama",
"facebook",
"meta",
"llama-3",
"int8",
"vllm",
"chat",
"neuralmagic",
"llmcompressor",
"conversational",
"8-bit precision",
"compressed-tensors",
"text-generation",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:quantized:meta-llama/Llama-3.3-70B-Instruct",
"license:llama3.3",
"8-bit",
"region:us"
] |
text-generation
| 2025-01-20T18:17:58Z |
---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
base_model:
- meta-llama/Llama-3.3-70B-Instruct
pipeline_tag: text-generation
tags:
- llama
- facebook
- meta
- llama-3
- int8
- vllm
- chat
- neuralmagic
- llmcompressor
- conversational
- 8-bit precision
- compressed-tensors
license: llama3.3
license_name: llama3.3
name: RedHatAI/Llama-3.3-70B-Instruct-quantized.w8a8
description: This model was obtained by quantizing the weights and activations of Llama-3.3-70B-Instruct to INT8 data type.
readme: https://huggingface.co/RedHatAI/Llama-3.3-70B-Instruct-quantized.w8a8/main/README.md
tasks:
- text-to-text
provider: Meta
license_link: https://github.com/meta-llama/llama-models/blob/main/models/llama3_3/LICENSE
---
<h1 style="display: flex; align-items: center; gap: 10px; margin: 0;">
Llama-3.3-70B-Instruct-quantized.w8a8
<img src="https://www.redhat.com/rhdc/managed-files/Catalog-Validated_model_0.png" alt="Model Icon" width="40" style="margin: 0; padding: 0;" />
</h1>
<a href="https://www.redhat.com/en/products/ai/validated-models" target="_blank" style="margin: 0; padding: 0;">
<img src="https://www.redhat.com/rhdc/managed-files/Validated_badge-Dark.png" alt="Validated Badge" width="250" style="margin: 0; padding: 0;" />
</a>
## Model Overview
- **Model Architecture:** Llama
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Activation quantization:** INT8
- **Weight quantization:** INT8
- **Intended Use Cases:** Intended for commercial and research use in multiple languages. Similarly to [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct), this model is intended for assistant-like chat.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws).
- **Release Date:** 01/20/2025
- **Version:** 1.0
- **Model Developers:** Neural Magic
Quantized version of [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct).
It was evaluated on several tasks to assess its quality in comparison to the unquantized model, including multiple-choice, math reasoning, and open-ended text generation.
Llama-3.3-70B-Instruct-quantized.w8a8 achieves 99.4% recovery for OpenLLM v1 (using Meta's prompting when available) and 100% for both HumanEval and HumanEval+ pass@1.
### Model Optimizations
This model was obtained by quantizing the weights and activations of [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) to INT8 data type.
This optimization reduces the number of bits used to represent weights and activations from 16 to 8, reducing GPU memory requirements (by approximately 50%) and increasing matrix-multiply compute throughput (by approximately 2x).
Weight quantization also reduces disk size requirements by approximately 50%.
Only weights and activations of the linear operators within transformers blocks are quantized.
Weights are quantized with a symmetric static per-channel scheme, where a fixed linear scaling factor is applied between INT8 and floating point representations for each output channel dimension.
Activations are quantized with a symmetric dynamic per-token scheme, computing a linear scaling factor at runtime for each token between INT8 and floating point representations.
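As an illustrative sketch of those two schemes (not the actual llm-compressor implementation), the scales can be computed roughly as follows:
```python
import torch

def quantize_weights_per_channel(w: torch.Tensor):
    # Symmetric static per-channel scheme: one scale per output channel (row), fixed ahead of time.
    scale = w.abs().amax(dim=1, keepdim=True) / 127.0
    w_int8 = torch.clamp(torch.round(w / scale), -128, 127).to(torch.int8)
    return w_int8, scale

def quantize_activations_per_token(x: torch.Tensor):
    # Symmetric dynamic per-token scheme: one scale per token, computed at runtime.
    scale = x.abs().amax(dim=-1, keepdim=True) / 127.0
    x_int8 = torch.clamp(torch.round(x / scale), -128, 127).to(torch.int8)
    return x_int8, scale

w_int8, w_scale = quantize_weights_per_channel(torch.randn(4096, 4096))
x_int8, x_scale = quantize_activations_per_token(torch.randn(2, 16, 4096))
```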
## Deployment
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8"
number_gpus = 1
max_model_len = 8192
sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
prompts = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
llm = LLM(model=model_id, tensor_parallel_size=number_gpus, max_model_len=max_model_len)
outputs = llm.generate(prompts, sampling_params)
generated_text = outputs[0].outputs[0].text
print(generated_text)
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
<details>
<summary>Deploy on <strong>Red Hat AI Inference Server</strong></summary>
```bash
podman run --rm -it --device nvidia.com/gpu=all -p 8000:8000 \
--ipc=host \
--env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
--env "HF_HUB_OFFLINE=0" -v ~/.cache/vllm:/home/vllm/.cache \
--name=vllm \
registry.access.redhat.com/rhaiis/rh-vllm-cuda \
vllm serve \
--tensor-parallel-size 8 \
--max-model-len 32768 \
--enforce-eager --model RedHatAI/Llama-3.3-70B-Instruct-quantized.w8a8
```
See [Red Hat AI Inference Server documentation](https://docs.redhat.com/en/documentation/red_hat_ai_inference_server/) for more details.
</details>
<details>
<summary>Deploy on <strong>Red Hat Enterprise Linux AI</strong></summary>
```bash
# Download model from Red Hat Registry via docker
# Note: This downloads the model to ~/.cache/instructlab/models unless --model-dir is specified.
ilab model download --repository docker://registry.redhat.io/rhelai1/llama-3-3-70b-instruct-quantized-w8a8:1.5
```
```bash
# Serve model via ilab
ilab model serve --model-path ~/.cache/instructlab/models/llama-3-3-70b-instruct-quantized-w8a8
# Chat with model
ilab model chat --model ~/.cache/instructlab/models/llama-3-3-70b-instruct-quantized-w8a8
```
See [Red Hat Enterprise Linux AI documentation](https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.4) for more details.
</details>
<details>
<summary>Deploy on <strong>Red Hat Openshift AI</strong></summary>
```yaml
# Setting up vllm server with ServingRuntime
# Save as: vllm-servingruntime.yaml
apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
  name: vllm-cuda-runtime # OPTIONAL CHANGE: set a unique name
  annotations:
    openshift.io/display-name: vLLM NVIDIA GPU ServingRuntime for KServe
    opendatahub.io/recommended-accelerators: '["nvidia.com/gpu"]'
  labels:
    opendatahub.io/dashboard: 'true'
spec:
  annotations:
    prometheus.io/port: '8080'
    prometheus.io/path: '/metrics'
  multiModel: false
  supportedModelFormats:
    - autoSelect: true
      name: vLLM
  containers:
    - name: kserve-container
      image: quay.io/modh/vllm:rhoai-2.20-cuda # CHANGE if needed. If AMD: quay.io/modh/vllm:rhoai-2.20-rocm
      command:
        - python
        - -m
        - vllm.entrypoints.openai.api_server
      args:
        - "--port=8080"
        - "--model=/mnt/models"
        - "--served-model-name={{.Name}}"
      env:
        - name: HF_HOME
          value: /tmp/hf_home
      ports:
        - containerPort: 8080
          protocol: TCP
```
```yaml
# Attach model to vllm server. This is an NVIDIA template
# Save as: inferenceservice.yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  annotations:
    openshift.io/display-name: llama-3-3-70b-instruct-quantized-w8a8 # OPTIONAL CHANGE
    serving.kserve.io/deploymentMode: RawDeployment
  name: llama-3-3-70b-instruct-quantized-w8a8 # specify model name. This value will be used to invoke the model in the payload
  labels:
    opendatahub.io/dashboard: 'true'
spec:
  predictor:
    maxReplicas: 1
    minReplicas: 1
    model:
      modelFormat:
        name: vLLM
      name: ''
      resources:
        limits:
          cpu: '2' # this is model specific
          memory: 8Gi # this is model specific
          nvidia.com/gpu: '1' # this is accelerator specific
        requests: # same comment for this block
          cpu: '1'
          memory: 4Gi
          nvidia.com/gpu: '1'
      runtime: vllm-cuda-runtime # must match the ServingRuntime name above
      storageUri: oci://registry.redhat.io/rhelai1/modelcar-llama-3-3-70b-instruct-quantized-w8a8:1.5
    tolerations:
      - effect: NoSchedule
        key: nvidia.com/gpu
        operator: Exists
```
```bash
# make sure first to be in the project where you want to deploy the model
# oc project <project-name>
# apply both resources to run model
# Apply the ServingRuntime
oc apply -f vllm-servingruntime.yaml
# Apply the InferenceService
oc apply -f inferenceservice.yaml
```
```bash
# Replace <inference-service-name> and <cluster-ingress-domain> below:
# - Run `oc get inferenceservice` to find your URL if unsure.
# Call the server using curl:
curl https://<inference-service-name>-predictor-default.<cluster-ingress-domain>/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
    "model": "llama-3-3-70b-instruct-quantized-w8a8",
"stream": true,
"stream_options": {
"include_usage": true
},
"max_tokens": 1,
"messages": [
{
"role": "user",
"content": "How can a bee fly when its wings are so small?"
}
]
}'
```
See [Red Hat Openshift AI documentation](https://docs.redhat.com/en/documentation/red_hat_openshift_ai/2025) for more details.
</details>
## Creation
This model was created using the [llm-compressor](https://github.com/vllm-project/llm-compressor) library, as presented in the code snippet below.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from datasets import Dataset
from llmcompressor.transformers import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier
import random
model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
num_samples = 1024
max_seq_len = 8192
tokenizer = AutoTokenizer.from_pretrained(model_id)
max_token_id = len(tokenizer.get_vocab()) - 1
input_ids = [[random.randint(0, max_token_id) for _ in range(max_seq_len)] for _ in range(num_samples)]
attention_mask = num_samples * [max_seq_len * [1]]
ds = Dataset.from_dict({"input_ids": input_ids, "attention_mask": attention_mask})
recipe = GPTQModifier(
targets="Linear",
scheme="W8A8",
ignore=["lm_head"],
dampening_frac=0.01,
)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto",
)
oneshot(
model=model,
dataset=ds,
recipe=recipe,
max_seq_length=max_seq_len,
num_calibration_samples=num_samples,
)
model.save_pretrained("Llama-3.3-70B-Instruct-quantized.w8a8")
```
## Evaluation
This model was evaluated on the well-known OpenLLM v1, OpenLLM v2, HumanEval, and HumanEval+ benchmarks.
In all cases, model outputs were generated with the [vLLM](https://docs.vllm.ai/en/stable/) engine.
OpenLLM v1 and v2 evaluations were conducted using Neural Magic's fork of [lm-evaluation-harness](https://github.com/neuralmagic/lm-evaluation-harness/tree/llama_3.1_instruct) (branch llama_3.1_instruct).
This version of the lm-evaluation-harness includes versions of MMLU, ARC-Challenge and GSM-8K that match the prompting style of [Meta-Llama-3.1-Instruct-evals](https://huggingface.co/datasets/meta-llama/Meta-Llama-3.1-8B-Instruct-evals) and a few fixes to OpenLLM v2 tasks.
HumanEval and HumanEval+ evaluations were conducted using Neural Magic's fork of the [EvalPlus](https://github.com/neuralmagic/evalplus) repository.
### Accuracy
<table>
<tr>
<th>Category
</th>
<th>Benchmark
</th>
<th>Llama-3.3-70B-Instruct
</th>
<th>Llama-3.3-70B-Instruct-quantized.w8a8 (this model)
</th>
<th>Recovery
</th>
</tr>
<tr>
<td rowspan="8" ><strong>OpenLLM v1</strong>
</td>
<td>MMLU (5-shot)
</td>
<td>81.60
</td>
<td>81.19
</td>
<td>99.5%
</td>
</tr>
<tr>
<td>MMLU (CoT, 0-shot)
</td>
<td>86.58
</td>
<td>85.92
</td>
<td>99.2%
</td>
</tr>
<tr>
<td>ARC Challenge (0-shot)
</td>
<td>49.23
</td>
<td>48.04
</td>
<td>97.6%
</td>
</tr>
<tr>
<td>GSM-8K (CoT, 8-shot, strict-match)
</td>
<td>94.16
</td>
<td>94.01
</td>
<td>99.8%
</td>
</tr>
<tr>
<td>Hellaswag (10-shot)
</td>
<td>86.49
</td>
<td>86.47
</td>
<td>100.0%
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>84.77
</td>
<td>83.74
</td>
<td>98.8%
</td>
</tr>
<tr>
<td>TruthfulQA (0-shot, mc2)
</td>
<td>62.75
</td>
<td>63.09
</td>
<td>99.5%
</td>
</tr>
<tr>
<td><strong>Average</strong>
</td>
<td><strong>77.94</strong>
</td>
<td><strong>77.49</strong>
</td>
<td><strong>99.4%</strong>
</td>
</tr>
<tr>
<td rowspan="7" ><strong>OpenLLM v2</strong>
</td>
<td>MMLU-Pro (5-shot)
</td>
<td>51.89
</td>
<td>51.59
</td>
<td>99.7%
</td>
</tr>
<tr>
<td>IFEval (0-shot)
</td>
<td>90.89
</td>
<td>90.68
</td>
<td>99.4%
</td>
</tr>
<tr>
<td>BBH (3-shot)
</td>
<td>63.15
</td>
<td>62.54
</td>
<td>99.0%
</td>
</tr>
<tr>
<td>Math-lvl-5 (4-shot)
</td>
<td>0.17
</td>
<td>0.00
</td>
<td>N/A
</td>
</tr>
<tr>
<td>GPQA (0-shot)
</td>
<td>46.10
</td>
<td>46.44
</td>
<td>100.8%
</td>
</tr>
<tr>
<td>MuSR (0-shot)
</td>
<td>44.35
</td>
<td>44.34
</td>
<td>100.0%
</td>
</tr>
<tr>
<td><strong>Average</strong>
</td>
<td><strong>49.42</strong>
</td>
<td><strong>49.27</strong>
</td>
<td><strong>99.7%</strong>
</td>
</tr>
<tr>
<td rowspan="2" ><strong>Coding</strong>
</td>
<td>HumanEval pass@1
</td>
<td>83.20
</td>
<td>83.30
</td>
<td>100.1%
</td>
</tr>
<tr>
<td>HumanEval+ pass@1
</td>
<td>78.40
</td>
<td>78.60
</td>
<td>100.3%
</td>
</tr>
<tr>
<td rowspan="9" ><strong>Multilingual</strong>
</td>
<td>Portuguese MMLU (5-shot)
</td>
<td>79.76
</td>
<td>79.47
</td>
<td>99.6%
</td>
</tr>
<tr>
<td>Spanish MMLU (5-shot)
</td>
<td>79.33
</td>
<td>79.23
</td>
<td>99.9%
</td>
</tr>
<tr>
<td>Italian MMLU (5-shot)
</td>
<td>79.15
</td>
<td>78.80
</td>
<td>99.6%
</td>
</tr>
<tr>
<td>German MMLU (5-shot)
</td>
<td>77.94
</td>
<td>77.92
</td>
<td>100.0%
</td>
</tr>
<tr>
<td>French MMLU (5-shot)
</td>
<td>75.69
</td>
<td>75.79
</td>
<td>100.1%
</td>
</tr>
<tr>
<td>Hindi MMLU (5-shot)
</td>
<td>73.81
</td>
<td>73.49
</td>
<td>99.6%
</td>
</tr>
<tr>
<td>Thai MMLU (5-shot)
</td>
<td>71.97
</td>
<td>71.44
</td>
<td>99.2%
</td>
</tr>
</table>
### Reproduction
The results were obtained using the following commands:
#### MMLU
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=3850,max_gen_toks=10,tensor_parallel_size=1 \
--tasks mmlu_llama_3.1_instruct \
--fewshot_as_multiturn \
--apply_chat_template \
--num_fewshot 5 \
--batch_size auto
```
#### MMLU-CoT
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=4064,max_gen_toks=1024,tensor_parallel_size=1 \
--tasks mmlu_cot_0shot_llama_3.1_instruct \
--apply_chat_template \
--num_fewshot 0 \
--batch_size auto
```
#### ARC-Challenge
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=3940,max_gen_toks=100,tensor_parallel_size=1 \
--tasks arc_challenge_llama_3.1_instruct \
--apply_chat_template \
--num_fewshot 0 \
--batch_size auto
```
#### GSM-8K
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=4096,max_gen_toks=1024,tensor_parallel_size=1 \
--tasks gsm8k_cot_llama_3.1_instruct \
--fewshot_as_multiturn \
--apply_chat_template \
--num_fewshot 8 \
--batch_size auto
```
#### Hellaswag
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
--tasks hellaswag \
--num_fewshot 10 \
--batch_size auto
```
#### Winogrande
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
--tasks winogrande \
--num_fewshot 5 \
--batch_size auto
```
#### TruthfulQA
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
--tasks truthfulqa \
--num_fewshot 0 \
--batch_size auto
```
#### OpenLLM v2
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=4096,tensor_parallel_size=1,enable_chunked_prefill=True \
--apply_chat_template \
--fewshot_as_multiturn \
--tasks leaderboard \
--batch_size auto
```
#### MMLU Portuguese
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=3850,max_gen_toks=10,tensor_parallel_size=1 \
--tasks mmlu_pt_llama_3.1_instruct \
--fewshot_as_multiturn \
--apply_chat_template \
--num_fewshot 5 \
--batch_size auto
```
#### MMLU Spanish
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=3850,max_gen_toks=10,tensor_parallel_size=1 \
--tasks mmlu_es_llama_3.1_instruct \
--fewshot_as_multiturn \
--apply_chat_template \
--num_fewshot 5 \
--batch_size auto
```
#### MMLU Italian
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=3850,max_gen_toks=10,tensor_parallel_size=1 \
--tasks mmlu_it_llama_3.1_instruct \
--fewshot_as_multiturn \
--apply_chat_template \
--num_fewshot 5 \
--batch_size auto
```
#### MMLU German
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=3850,max_gen_toks=10,tensor_parallel_size=1 \
--tasks mmlu_de_llama_3.1_instruct \
--fewshot_as_multiturn \
--apply_chat_template \
--num_fewshot 5 \
--batch_size auto
```
#### MMLU French
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=3850,max_gen_toks=10,tensor_parallel_size=1 \
--tasks mmlu_fr_llama_3.1_instruct \
--fewshot_as_multiturn \
--apply_chat_template \
--num_fewshot 5 \
--batch_size auto
```
#### MMLU Hindi
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=3850,max_gen_toks=10,tensor_parallel_size=1 \
--tasks mmlu_hi_llama_3.1_instruct \
--fewshot_as_multiturn \
--apply_chat_template \
--num_fewshot 5 \
--batch_size auto
```
#### MMLU Thai
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=3850,max_gen_toks=10,tensor_parallel_size=1 \
--tasks mmlu_th_llama_3.1_instruct \
--fewshot_as_multiturn \
--apply_chat_template \
--num_fewshot 5 \
--batch_size auto
```
#### HumanEval and HumanEval+
##### Generation
```
python3 codegen/generate.py \
--model neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8 \
--bs 16 \
--temperature 0.2 \
--n_samples 50 \
--root "." \
--dataset humaneval
```
##### Sanitization
```
python3 evalplus/sanitize.py \
humaneval/neuralmagic-ent--Llama-3.3-70B-Instruct-quantized.w8a8_vllm_temp_0.2
```
##### Evaluation
```
evalplus.evaluate \
--dataset humaneval \
--samples humaneval/neuralmagic-ent--Llama-3.3-70B-Instruct-quantized.w8a8_vllm_temp_0.2-sanitized
```
|
weruior/blockassist-bc-prickly_hulking_sandpiper_1756248648
|
weruior
| 2025-08-26T22:51:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"prickly hulking sandpiper",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T22:50:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- prickly hulking sandpiper
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jwsouza2025/fl_project
|
jwsouza2025
| 2025-08-26T22:48:37Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2025-08-26T22:37:37Z |
---
license: mit
---
# Federated Learning for Fuel Consumption Prediction
A Federated Learning (FL) system for predicting fuel consumption from OBD sensor data of different vehicles, while keeping the data private on each client.
## 📋 Overview
This project implements a Federated Learning system using the Flower framework, where:
- **3 clients** (Ubuntu) represent different vehicles with their local data
- **1 server** (Windows) coordinates training without accessing the raw data
- An LSTM model performs time-series forecasting of consumption (P_kW)
- Multiple aggregation strategies: FedAvg, FedAdam, FedYogi, FedAdagrad
## 🏗️ System Architecture
```
┌─────────────────┐
│ Servidor (Win) │
│ 16GB RAM │
│ Porta: 8080 │
└────────┬────────┘
│
┌────┴────┬──────────┐
│ │ │
┌───▼───┐ ┌──▼───┐ ┌────▼───┐
│Cliente│ │Cliente│ │Cliente │
│ 1 │ │ 2 │ │ 3 │
│Ubuntu │ │Ubuntu│ │Ubuntu │
│ 8GB │ │ 8GB │ │ 8GB │
└───────┘ └──────┘ └────────┘
```
## 📁 Project Structure
```
fl_project/
├── data/                    # Vehicle data (not versioned)
│ ├── client_1/              # Routes for vehicle 1
│ │ ├── percurso_1.csv
│ │ ├── percurso_2.csv
│ │ └── ...
│ ├── client_2/              # Routes for vehicle 2
│ └── client_3/              # Routes for vehicle 3
├── server.py                # FL server code
├── client.py                # FL client code
├── utils.py                 # LSTM model and helper functions
├── analysis_tool.py         # Post-training analysis tool
├── run.sh                   # Script for local execution
├── run_all_strategies.sh    # Script to test all strategies
├── requirements.txt         # Python dependencies
└── README.md                # This file
```
## 🔧 System Requirements
### Minimum Hardware
- **Server**: 8 GB RAM (16 GB recommended)
- **Clients**: 4 GB RAM each (8 GB recommended)
- **Network**: stable connection between server and clients
### Software
- **Python**: 3.10 - 3.11
- **Operating System**:
- Server: Windows 10/11 or Linux
- Clients: Ubuntu 20.04/22.04
## 📦 Installation
### 1. Clone the Repository
```bash
git clone https://github.com/seu-usuario/fl_project.git
cd fl_project
```
### 2. Create a Virtual Environment
**On Ubuntu (clients):**
```bash
python3 -m venv venv
source venv/bin/activate
```
**On Windows (server):**
```powershell
python -m venv venv
.\venv\Scripts\activate
```
### 3. Install Dependencies
```bash
pip install -r requirements.txt
```
### 4. Prepare the Data
Organize each vehicle's data in the following structure:
```
data/
├── client_1/    # Vehicle 1 data
├── client_2/    # Vehicle 2 data
└── client_3/    # Vehicle 3 data
```
**Expected CSV format:**
- Main columns: `vehicle_speed`, `engine_rpm`, `accel_x`, `accel_y`, `P_kW`, `dt`
- Each file represents a different route
- A minimum of 2 routes per client is recommended (see the validation sketch below)
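As a quick sanity check, a minimal script along these lines (hypothetical, not part of the repository) can confirm that every route CSV exposes the expected columns:

```python
# Hypothetical helper: verify that each route CSV has the columns expected by utils.py.
from pathlib import Path
import pandas as pd

EXPECTED = {"vehicle_speed", "engine_rpm", "accel_x", "accel_y", "P_kW", "dt"}

for csv_path in sorted(Path("data/client_1").glob("*.csv")):
    cols = set(pd.read_csv(csv_path, nrows=0).columns)  # read header only
    missing = EXPECTED - cols
    print(csv_path.name, "OK" if not missing else f"missing columns: {sorted(missing)}")
```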
## 🚀 Running in a Distributed Environment
### Network Configuration
1. **Find the Windows server's IP address:**
```powershell
ipconfig
```
Look for the IPv4 Address (e.g., 192.168.1.100)
2. **Test connectivity from the Ubuntu clients:**
```bash
ping 192.168.1.100
```
### Step 1: Start the Server (Windows)
```powershell
# Activate the virtual environment
.\venv\Scripts\activate
# Run the server
python server.py --strategy fedavg --rounds 15
# Or with custom parameters
python server.py --strategy fedadam --rounds 20 --min-clients 3
```
The server starts on port 8080 and waits for the clients to connect.
### Step 2: Start the Clients (Ubuntu)
**On each Ubuntu machine, run in a separate terminal:**
**Client 1:**
```bash
# Activate the virtual environment
source venv/bin/activate
# Run client 1
python client.py --client-id 1 --server-address 192.168.1.100:8080 --prediction-length 10
```
**Client 2:**
```bash
source venv/bin/activate
python client.py --client-id 2 --server-address 192.168.1.100:8080 --prediction-length 10
```
**Client 3:**
```bash
source venv/bin/activate
python client.py --client-id 3 --server-address 192.168.1.100:8080 --prediction-length 10
```
### Monitoring
Progress is displayed in real time:
- **Server**: shows completed rounds and global metrics
- **Clients**: display local training/validation losses
## 📊 Analyzing the Results
### After Training
1. **Run the automatic analysis:**
```bash
python analysis_tool.py --results-dir results
```
2. **Generated visualizations (PDFs):**
- `performance_analysis_*.pdf`: full performance analysis
- `convergence_analysis_*.pdf`: convergence metrics
- `heatmap_performance_*.pdf`: temporal performance heatmap
- `comparative_analysis.pdf`: comparison across strategies
- `client_evolution_analysis.pdf`: per-client evolution
3. **Saved metrics:**
- `results/detailed_metrics_*.csv`: complete data
- `results/summary_report.json`: consolidated report
- `metrics/client_*/metrics_history.json`: per-client history
## 🔬 Aggregation Strategies
| Strategy | Description | When to Use |
|------------|-----------|-------------|
| **FedAvg** | Simple weighted averaging | Homogeneous data |
| **FedAdam** | Adaptive server-side optimization | Faster convergence |
| **FedYogi** | Adam with variance control | Heterogeneous data |
| **FedAdagrad** | Adaptive learning rate | Sparse data |
### Compare All Strategies
```bash
# Linux/Ubuntu
chmod +x run_all_strategies.sh
./run_all_strategies.sh 15 10
# Windows (using Git Bash or WSL)
bash run_all_strategies.sh 15 10
```
## 🛠️ Troubleshooting
### Connection Error
**Problem**: clients cannot connect to the server
**Solutions**:
1. Check the Windows firewall:
```powershell
# Allow port 8080
netsh advfirewall firewall add rule name="FL Server" dir=in action=allow protocol=TCP localport=8080
```
2. Confirm that the server is running:
```powershell
netstat -an | findstr :8080
```
### Memory Error
**Problem**: out of memory during training
**Solutions**:
1. Reduce the batch_size in `utils.py`
2. Decrease sequence_length or prediction_length
3. Use fewer epochs per round
### Insufficient Data
**Problem**: "conjunto de treino ou teste vazio" (empty training or test set)
**Solutions**:
1. Check that there is enough data in `data/client_X/`
2. Adjust sequence_length and prediction_length
3. Confirm that the CSVs have the expected columns
## 📈 Key Parameters
### Server.py
- `--strategy`: aggregation strategy (fedavg, fedadam, etc.)
- `--rounds`: number of FL rounds (default: 10)
- `--min-clients`: minimum number of clients to start (default: 3)
### Client.py
- `--client-id`: client ID (1, 2, or 3)
- `--server-address`: server IP:port address
- `--prediction-length`: number of future steps to predict (default: 10)
### Utils.py (internal settings)
- `sequence_length`: input window (default: 60)
- `batch_size`: batch size (default: 32)
- `learning_rate`: learning rate (default: 1e-5)
## 📝 Development Notes
### LSTM Model
- Input: 6 features (speed, RPM, accelerations, consumption, time)
- Hidden size: 50 units
- Output: prediction of N future consumption steps (P_kW); see the sketch below
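A minimal sketch of this architecture (illustrative only; the authoritative definition lives in `utils.py`, and details such as the number of LSTM layers are assumptions here):

```python
# Illustrative sketch; the real model is defined in utils.py.
import torch
import torch.nn as nn

class ConsumptionLSTM(nn.Module):
    def __init__(self, n_features: int = 6, hidden_size: int = 50, prediction_length: int = 10):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, prediction_length)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, sequence_length, n_features)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])  # (batch, prediction_length) future P_kW values
```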
### Data Split
- 80% for training
- 20% for validation
- MinMaxScaler normalization per client
### Metrics
- Loss: MSE (Mean Squared Error)
- Evaluation: per client and global
- Convergence: variance across clients
## 🤝 Contributing
1. Fork the project
2. Create your feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
## 📄 License
Distributed under the MIT license. See `LICENSE` for more information.
## 👥 Authors
- José Wilson C. Souza
- Erick Andrade Borba
- João Alfredo Cal Braz
## 🙏 Acknowledgments
- [Flower Framework](https://flower.dev/) - Federated Learning framework
- [PyTorch](https://pytorch.org/) - deep learning framework
- Data collected via OBD Link
---
|
RedHatAI/gemma-2-9b-it-FP8
|
RedHatAI
| 2025-08-26T22:46:03Z | 2,805 | 5 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"gemma",
"fp8",
"vllm",
"conversational",
"text-generation-inference",
"en",
"base_model:google/gemma-2-9b-it",
"base_model:quantized:google/gemma-2-9b-it",
"license:gemma",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-07-08T15:10:07Z |
---
language:
- en
base_model:
- google/gemma-2-9b-it
pipeline_tag: text-generation
tags:
- gemma
- gemma2
- fp8
- vllm
- conversational
- text-generation-inference
license: gemma
license_name: gemma
name: RedHatAI/gemma-2-9b-it-FP8
description: This model was obtained by quantizing the weights and activations of gemma-2-9b-it to FP8 data type.
readme: https://huggingface.co/RedHatAI/gemma-2-9b-it-FP8/main/README.md
tasks:
- text-to-text
provider: Google
license_link: https://ai.google.dev/gemma/terms
---
<h1 style="display: flex; align-items: center; gap: 10px; margin: 0;">
gemma-2-9b-it-FP8
<img src="https://www.redhat.com/rhdc/managed-files/Catalog-Validated_model_0.png" alt="Model Icon" width="40" style="margin: 0; padding: 0;" />
</h1>
<a href="https://www.redhat.com/en/products/ai/validated-models" target="_blank" style="margin: 0; padding: 0;">
<img src="https://www.redhat.com/rhdc/managed-files/Validated_badge-Dark.png" alt="Validated Badge" width="250" style="margin: 0; padding: 0;" />
</a>
## Model Overview
- **Model Architecture:** Gemma 2
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Weight quantization:** FP8
- **Activation quantization:** FP8
- **Intended Use Cases:** Intended for commercial and research use in English. Similarly to [gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it), this model is intended for assistant-like chat.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English.
- **Release Date:** 7/8/2024
- **Version:** 1.0
- **License(s):** [gemma](https://ai.google.dev/gemma/terms)
- **Model Developers:** Neural Magic (Red Hat)
Quantized version of [gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it).
It achieves an average score of 73.49 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 73.23.
### Model Optimizations
This model was obtained by quantizing the weights and activations of [gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it) to FP8 data type, ready for inference with vLLM >= 0.5.1.
This optimization reduces the number of bits per parameter from 16 to 8, reducing the disk size and GPU memory requirements by approximately 50%.
Only the weights and activations of the linear operators within transformers blocks are quantized. Symmetric per-tensor quantization is applied, in which a single linear scaling maps the FP8 representations of the quantized weights and activations.
[AutoFP8](https://github.com/neuralmagic/AutoFP8) is used for quantization with a single instance of every token in random order.
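For intuition, the per-tensor scheme can be sketched as below (an illustrative toy example, not the fused AutoFP8/vLLM kernels; it assumes PyTorch ≥ 2.1 for the `float8_e4m3fn` dtype):

```python
# Toy illustration of symmetric per-tensor FP8 (E4M3) quantization.
import torch

FP8_E4M3_MAX = 448.0  # largest finite value representable in torch.float8_e4m3fn

def quantize_per_tensor_fp8(x: torch.Tensor):
    scale = x.abs().max() / FP8_E4M3_MAX        # one scale for the whole tensor
    x_fp8 = (x / scale).to(torch.float8_e4m3fn)
    return x_fp8, scale

def dequantize(x_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return x_fp8.to(torch.float32) * scale      # reference reconstruction

w = torch.randn(4096, 4096)                     # e.g. a linear-layer weight
w_fp8, w_scale = quantize_per_tensor_fp8(w)
```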
## Deployment
### Use with vLLM
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "RedHatAI/gemma-2-9b-it-FP8"
sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [
{"role": "user", "content": "Who are you? Please respond in pirate speak!"},
]
prompts = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
llm = LLM(model=model_id)
outputs = llm.generate(prompts, sampling_params)
generated_text = outputs[0].outputs[0].text
print(generated_text)
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
<details>
<summary>Deploy on <strong>Red Hat AI Inference Server</strong></summary>
```bash
podman run --rm -it --device nvidia.com/gpu=all -p 8000:8000 \
--ipc=host \
--env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
--env "HF_HUB_OFFLINE=0" -v ~/.cache/vllm:/home/vllm/.cache \
--name=vllm \
registry.access.redhat.com/rhaiis/rh-vllm-cuda \
vllm serve \
--tensor-parallel-size 8 \
--max-model-len 32768 \
--enforce-eager --model RedHatAI/gemma-2-9b-it-FP8
```
See [Red Hat AI Inference Server documentation](https://docs.redhat.com/en/documentation/red_hat_ai_inference_server/) for more details.
</details>
<details>
<summary>Deploy on <strong>Red Hat Enterprise Linux AI</strong></summary>
```bash
# Download model from Red Hat Registry via docker
# Note: This downloads the model to ~/.cache/instructlab/models unless --model-dir is specified.
ilab model download --repository docker://registry.redhat.io/rhelai1/gemma-2-9b-it-FP8:1.5
```
```bash
# Serve model via ilab
ilab model serve --model-path ~/.cache/instructlab/models/gemma-2-9b-it-FP8
# Chat with model
ilab model chat --model ~/.cache/instructlab/models/gemma-2-9b-it-FP8
```
See [Red Hat Enterprise Linux AI documentation](https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.4) for more details.
</details>
<details>
<summary>Deploy on <strong>Red Hat Openshift AI</strong></summary>
```yaml
# Setting up vllm server with ServingRuntime
# Save as: vllm-servingruntime.yaml
apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
name: vllm-cuda-runtime # OPTIONAL CHANGE: set a unique name
annotations:
openshift.io/display-name: vLLM NVIDIA GPU ServingRuntime for KServe
opendatahub.io/recommended-accelerators: '["nvidia.com/gpu"]'
labels:
opendatahub.io/dashboard: 'true'
spec:
annotations:
prometheus.io/port: '8080'
prometheus.io/path: '/metrics'
multiModel: false
supportedModelFormats:
- autoSelect: true
name: vLLM
containers:
- name: kserve-container
image: quay.io/modh/vllm:rhoai-2.20-cuda # CHANGE if needed. If AMD: quay.io/modh/vllm:rhoai-2.20-rocm
command:
- python
- -m
- vllm.entrypoints.openai.api_server
args:
- "--port=8080"
- "--model=/mnt/models"
- "--served-model-name={{.Name}}"
env:
- name: HF_HOME
value: /tmp/hf_home
ports:
- containerPort: 8080
protocol: TCP
```
```yaml
# Attach model to vllm server. This is an NVIDIA template
# Save as: inferenceservice.yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
annotations:
openshift.io/display-name: gemma-2-9b-it-FP8 # OPTIONAL CHANGE
serving.kserve.io/deploymentMode: RawDeployment
name: gemma-2-9b-it-FP8 # specify model name. This value will be used to invoke the model in the payload
labels:
opendatahub.io/dashboard: 'true'
spec:
predictor:
maxReplicas: 1
minReplicas: 1
model:
modelFormat:
name: vLLM
name: ''
resources:
limits:
cpu: '2' # this is model specific
memory: 8Gi # this is model specific
nvidia.com/gpu: '1' # this is accelerator specific
requests: # same comment for this block
cpu: '1'
memory: 4Gi
nvidia.com/gpu: '1'
runtime: vllm-cuda-runtime # must match the ServingRuntime name above
storageUri: oci://registry.redhat.io/rhelai1/modelcar-gemma-2-9b-it-FP8:1.5
tolerations:
- effect: NoSchedule
key: nvidia.com/gpu
operator: Exists
```
```bash
# make sure first to be in the project where you want to deploy the model
# oc project <project-name>
# apply both resources to run model
# Apply the ServingRuntime
oc apply -f vllm-servingruntime.yaml
# Apply the InferenceService
oc apply -f inferenceservice.yaml
```
```bash
# Replace <inference-service-name> and <cluster-ingress-domain> below:
# - Run `oc get inferenceservice` to find your URL if unsure.
# Call the server using curl:
curl https://<inference-service-name>-predictor-default.<domain>/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "gemma-2-9b-it-FP8",
"stream": true,
"stream_options": {
"include_usage": true
},
"max_tokens": 1,
"messages": [
{
"role": "user",
"content": "How can a bee fly when its wings are so small?"
}
]
}'
```
See [Red Hat Openshift AI documentation](https://docs.redhat.com/en/documentation/red_hat_openshift_ai/2025) for more details.
</details>
## Creation
This model was created by applying [AutoFP8 with calibration samples from ultrachat](https://github.com/neuralmagic/AutoFP8/blob/147fa4d9e1a90ef8a93f96fc7d9c33056ddc017a/example_dataset.py), as presented in the code snippet below.
Although AutoFP8 was used for this particular model, Neural Magic is transitioning to using [llm-compressor](https://github.com/vllm-project/llm-compressor) which supports several quantization schemes and models not supported by AutoFP8.
```python
from datasets import load_dataset
from transformers import AutoTokenizer
import numpy as np
import torch
from auto_fp8 import AutoFP8ForCausalLM, BaseQuantizeConfig
MODEL_DIR = "google/gemma-2-9b-it"
final_model_dir = MODEL_DIR.split("/")[-1]
CONTEXT_LENGTH = 4096
NUM_SAMPLES = 512
NUM_REPEATS = 1
pretrained_model_dir = MODEL_DIR
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_dir, use_fast=True, model_max_length=CONTEXT_LENGTH)
tokenizer.pad_token = tokenizer.eos_token
tokenizer_num_tokens = len(list(tokenizer.get_vocab().values()))
total_token_samples = NUM_REPEATS * tokenizer_num_tokens
num_random_samp = -(-total_token_samples // CONTEXT_LENGTH)
input_ids = np.tile(np.arange(tokenizer_num_tokens), NUM_REPEATS + 1)[:num_random_samp * CONTEXT_LENGTH]
np.random.shuffle(input_ids)
input_ids = input_ids.reshape(num_random_samp, CONTEXT_LENGTH)
input_ids = torch.tensor(input_ids, dtype=torch.int64).to("cuda")
quantize_config = BaseQuantizeConfig(
quant_method="fp8",
activation_scheme="static",
)
examples = input_ids
model = AutoFP8ForCausalLM.from_pretrained(pretrained_model_dir, quantize_config=quantize_config)
model.quantize(examples)
quantized_model_dir = f"{final_model_dir}-FP8"
model.save_quantized(quantized_model_dir)
```
## Evaluation
The model was evaluated on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) leaderboard tasks (version 1) with the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/383bbd54bc621086e05aa1b030d8d4d5635b25e6) (commit 383bbd54bc621086e05aa1b030d8d4d5635b25e6) and the [vLLM](https://docs.vllm.ai/en/stable/) engine, using the following command:
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/gemma-2-9b-it-FP8",dtype=auto,gpu_memory_utilization=0.4,add_bos_token=True,max_model_len=4096 \
--tasks openllm \
--batch_size auto
```
### Accuracy
#### Open LLM Leaderboard evaluation scores
<table>
<tr>
<td><strong>Benchmark</strong>
</td>
<td><strong>gemma-2-9b-it</strong>
</td>
<td><strong>gemma-2-9b-it-FP8(this model)</strong>
</td>
<td><strong>Recovery</strong>
</td>
</tr>
<tr>
<td>MMLU (5-shot)
</td>
<td>72.28
</td>
<td>71.99
</td>
<td>99.59%
</td>
</tr>
<tr>
<td>ARC Challenge (25-shot)
</td>
<td>71.50
</td>
<td>71.50
</td>
<td>100.0%
</td>
</tr>
<tr>
<td>GSM-8K (5-shot, strict-match)
</td>
<td>76.26
</td>
<td>76.87
</td>
<td>100.7%
</td>
</tr>
<tr>
<td>Hellaswag (10-shot)
</td>
<td>81.91
</td>
<td>81.70
</td>
<td>99.74%
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>77.11
</td>
<td>78.37
</td>
<td>101.6%
</td>
</tr>
<tr>
<td>TruthfulQA (0-shot)
</td>
<td>60.32
</td>
<td>60.52
</td>
<td>100.3%
</td>
</tr>
<tr>
<td><strong>Average</strong>
</td>
<td><strong>73.23</strong>
</td>
<td><strong>73.49</strong>
</td>
<td><strong>100.36%</strong>
</td>
</tr>
</table>
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1756246770
|
Sayemahsjn
| 2025-08-26T22:39:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T22:39:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rettertop/blockassist-bc-roaring_flightless_ibis_1756247777
|
rettertop
| 2025-08-26T22:36:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring flightless ibis",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T22:36:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring flightless ibis
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bboppp/blockassist-bc-iridescent_mangy_warthog_1756247472
|
bboppp
| 2025-08-26T22:31:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"iridescent mangy warthog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T22:31:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- iridescent mangy warthog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rettertop/blockassist-bc-iridescent_aquatic_parrot_1756247099
|
rettertop
| 2025-08-26T22:25:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"iridescent aquatic parrot",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T22:24:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- iridescent aquatic parrot
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rpotham/ft-ef2dca41-47a7-2025-08-26-22-16-34
|
rpotham
| 2025-08-26T22:21:38Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-1.7B",
"base_model:adapter:Qwen/Qwen3-1.7B",
"region:us"
] | null | 2025-08-26T22:20:29Z |
---
base_model: Qwen/Qwen3-1.7B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
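In the absence of repository-specific instructions, a hedged sketch is shown below, assuming this repo hosts a PEFT (LoRA-style) adapter for the `Qwen/Qwen3-1.7B` base model listed above:

```python
# Hedged sketch: load the base model and attach this PEFT adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-1.7B", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "rpotham/ft-ef2dca41-47a7-2025-08-26-22-16-34")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-1.7B")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```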
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756246778
|
ggozzy
| 2025-08-26T22:20:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T22:20:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rettertop/blockassist-bc-iridescent_aquatic_parrot_1756246707
|
rettertop
| 2025-08-26T22:18:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"iridescent aquatic parrot",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T22:18:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- iridescent aquatic parrot
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sensei-ml/simple_cnn_model.bin
|
sensei-ml
| 2025-08-26T22:17:54Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-08-26T22:17:38Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
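For orientation, the general `PyTorchModelHubMixin` pattern looks like the sketch below; the actual CNN architecture stored in this repo is not documented, so the class here is only a placeholder illustrating the save/load round trip:

```python
# Placeholder architecture; the real "simple_cnn_model" definition is not published.
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class SimpleCNN(nn.Module, PyTorchModelHubMixin):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.head = nn.Linear(16, num_classes)

    def forward(self, x):
        feats = self.conv(x).relu().mean(dim=(2, 3))  # global average pooling
        return self.head(feats)

model = SimpleCNN()
model.save_pretrained("simple_cnn_local")         # writes config.json + model.safetensors
restored = SimpleCNN.from_pretrained("simple_cnn_local")
```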
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1756244899
|
lisaozill03
| 2025-08-26T22:15:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T22:15:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
weruior/blockassist-bc-miniature_mottled_fly_1756246353
|
weruior
| 2025-08-26T22:12:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"miniature mottled fly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T22:12:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- miniature mottled fly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DeathGodlike/DellaMix-12B_EXL3
|
DeathGodlike
| 2025-08-26T22:07:20Z | 0 | 0 |
safetensors
|
[
"safetensors",
"exl3",
"4-bit",
"6-bit",
"8-bit",
"text-generation",
"base_model:yamatazen/DellaMix-12B",
"base_model:quantized:yamatazen/DellaMix-12B",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-26T22:07:19Z |
---
license: apache-2.0
base_model:
- yamatazen/DellaMix-12B
base_model_relation: quantized
pipeline_tag: text-generation
library_name: safetensors
tags:
- exl3
- 4-bit
- 6-bit
- 8-bit
---
## EXL3 quants: [ [H8-4.0BPW](https://huggingface.co/DeathGodlike/DellaMix-12B_EXL3/tree/H8-4.0BPW) | [H8-6.0BPW](https://huggingface.co/DeathGodlike/DellaMix-12B_EXL3/tree/H8-6.0BPW) | [H8-8.0BPW](https://huggingface.co/DeathGodlike/DellaMix-12B_EXL3/tree/H8-8.0BPW) ]
# Original model: [DellaMix-12B](https://huggingface.co/yamatazen/DellaMix-12B) by [yamatazen](https://huggingface.co/yamatazen)
|
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-HessianMaskToken-0.001-v2_6675
|
luckeciano
| 2025-08-26T22:04:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-26T16:16:23Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-HessianMaskToken-0.001-v2_6675
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-HessianMaskToken-0.001-v2_6675
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-HessianMaskToken-0.001-v2_6675", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/zarntwff)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Amirhossein75/speech-intensity-wav2vec
|
Amirhossein75
| 2025-08-26T22:03:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"speech",
"asr",
"audio-regression",
"multitask-learning",
"whisper",
"gradio",
"sagemaker",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:facebook/wav2vec2-base-960h",
"base_model:finetune:facebook/wav2vec2-base-960h",
"license:mit",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-26T07:00:01Z |
---
library_name: transformers
pipeline_tag: automatic-speech-recognition
tags:
- speech
- asr
- audio-regression
- multitask-learning
- wav2vec2
- whisper
- gradio
- sagemaker
datasets:
- librispeech_asr
- mozilla-foundation/common_voice_13_0
base_model:
- facebook/wav2vec2-base-960h
license: mit
language: en
---
# Model Card for `amirhossein-yousefi/speech2text-intensity-regression-wav2vec`
**Summary:** An end-to-end speech model that jointly performs **automatic speech recognition (ASR)** and **voice intensity regression** from the same input audio, built on **Wav2Vec2‑CTC** with a regression head.
## Model Details
### Model Description
- **Developed by:** Amirhossein Yousefi
- **Model type:** Multitask speech model (ASR + scalar intensity regression).
- `facebook/wav2vec2-base-960h` (CTC) + attention‑masked mean pooling regressor
- **Language(s):** English (depends on chosen dataset/splits)
- **License:** MIT
- **Finetuned from:** `facebook/wav2vec2-base-960h`
### Model Sources
- **Repository:** https://github.com/amirhossein-yousefi/speech2text-intensity-regression-wav2vec
- **Demo:** Gradio script in `app/gradio_app.py`
## Uses
### Direct Use
- Transcribe English speech to text (ASR) and simultaneously estimate **normalized intensity** for the same audio clip.
- Interactive inference via CLI or Gradio.
### Downstream Use
- Domain‑specific fine‑tuning for ASR while keeping the intensity head.
- Use intensity as an auxiliary signal for VAD thresholds, diarization heuristics, or UX analytics.
### Out‑of‑Scope Use
- Safety‑critical applications without human review.
- Treating the intensity output as perceptual loudness or emotion/affect; it is **RMS dBFS‑derived** and sensitive to mic gain/environment.
## Bias, Risks, and Limitations
- **Dataset bias:** Default training on LibriSpeech (read audiobooks) may not reflect conversational or accented speech.
- **Device & environment sensitivity:** Intensity depends on microphone, distance, and preprocessing.
- **Domain shift:** Degradation is expected on far‑field/noisy/multilingual inputs without adaptation.
### Recommendations
- Calibrate or post‑normalize intensity for your capture setup.
- Report WER and regression errors by domain (mic type, SNR buckets, etc.). Keep a human in the loop for sensitive deployments.
## How to Get Started with the Model
### Environment
```bash
python -m venv .venv
source .venv/bin/activate # Windows: .venv\Scripts\activate
pip install -r requirements.txt
```
### Train (Whisper backbone)
```bash
python -m src.speech_mtl.training.train_whisper --model_name openai/whisper-small --language en --dataset librispeech_asr --train_split train.clean.100 --eval_split validation.clean --text_column text --num_train_epochs 1 --output_dir outputs/whisper_small_mtl
```
### Train (Wav2Vec2‑CTC backbone)
```bash
python -m src.speech_mtl.training.train_wav2vec2 --model_name facebook/wav2vec2-base-960h --dataset librispeech_asr --train_split train.clean.100 --eval_split validation.clean --text_column text --max_train_samples 1000 --max_eval_samples 150 --num_train_epochs 1 --output_dir outputs/wav2vec2_base_mtl
```
### Evaluate
```bash
python -m src.speech_mtl.eval.evaluate --whisper_model_dir outputs/whisper_small_mtl --wav2vec2_model_dir outputs/wav2vec2_base_mtl --dataset librispeech_asr --split test.clean --text_column text
```
### Inference (CLI)
```bash
python -m src.speech_mtl.inference.predict --model whisper --checkpoint outputs/whisper_small_mtl --audio path/to/audio.wav
```
### Gradio Demo
```bash
python app/gradio_app.py --model whisper --checkpoint outputs/whisper_small_mtl
# or
python app/gradio_app.py --model wav2vec2 --checkpoint outputs/wav2vec2_base_mtl
```
## Training Details
### Training Data
- **Default:** `librispeech_asr` (`train.clean.100`; eval on `validation.clean` / `test.clean`).
- **Optional:** `mozilla-foundation/common_voice_13_0` via `--dataset` and `--language`.
**Intensity targets:** computed from audio RMS dBFS bounded to `[-60, 0]`, then normalized to `[0, 1]`:
```text
norm_intensity = (dbfs + 60) / 60
```
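A minimal sketch of this computation (assumed form; the repository's implementation may differ in details such as epsilon handling):

```python
# Assumed sketch of the intensity-target computation described above.
import numpy as np

def intensity_target(waveform: np.ndarray, eps: float = 1e-12) -> float:
    rms = np.sqrt(np.mean(np.square(waveform.astype(np.float64))) + eps)
    dbfs = 20.0 * np.log10(rms)                   # 0 dBFS = full-scale RMS of 1.0
    dbfs = float(np.clip(dbfs, -60.0, 0.0))       # bound to [-60, 0]
    return (dbfs + 60.0) / 60.0                   # normalize to [0, 1]
```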
### Training Procedure
#### Preprocessing
- Load/resample to 16 kHz per backbone requirements.
- Compute intensity labels from raw audio; LUFS (via `pyloudnorm`) can be used as an alternative.
#### Training Hyperparameters
- **Training regime:** fp16 mixed precision when available; batch size and LR configured via `configs/*.yaml`.
#### Speeds, Sizes, Times
- Example single‑epoch fine‑tuned weights are linked in the repo README (`training-logs/` contains logs).
## Evaluation
### Testing Data, Factors & Metrics
- **Testing Data:** LibriSpeech `test.clean` by default; optionally Common Voice.
- **Factors:** noise level, microphone/domain, utterance length.
- **Metrics:**
- **ASR:** Word Error Rate (WER)
- **Intensity regression:** MAE, MSE, and R²
### Results
## 📊 Training Logs & Metrics
- **Total FLOPs (training):** `11,971,980,681,992,470,000`
- **Training runtime:** `9,579.8516` seconds for 3 epochs
- **Logging:** TensorBoard-compatible logs in `src/checkpoint/logs`
You can monitor training live by pointing TensorBoard at the log directory, e.g. `tensorboard --logdir src/checkpoint/logs`.
## ✅ Full Metrics
### 🔎 Highlights
- **Validation WER (↓):** **12.897%** _(0.128966 as fraction)_
- **Validation Loss:** **21.7842**
- Fast eval throughput: **17.05 samples/s** • **4.264 steps/s**
> **WER** from `jiwer.wer` (fraction in \[0,1\]; percent shown for readability).
> This run uses a **CTC** objective for ASR and an auxiliary **intensity** head (multi‑task), but only ASR metrics were logged during evaluation.
#### Validation (Dev)
| Metric | Value |
|---|---|
| **Loss** | **21.7842** |
| **WER (↓)** | **0.128966** _(12.897%)_ |
| **Runtime (s)** | **158.5324** _(≈ 2m 39s)_ |
| **Samples / s** | **17.050** |
| **Steps / s** | **4.264** |
| **Epoch** | **2.8** |
#### Training Summary
| Metric | Value |
|---|---|
| **Train Loss** | **227.4951** |
| **Runtime (s)** | **9,579.8514** _(≈ 2h 39m 40s)_ |
| **Samples / s** | **8.937** |
| **Steps / s** | **0.559** |
| **Epochs** | **3.0** |
---
#### Summary
Multitask objective = ASR loss + intensity regression loss (weight controlled by `--lambda_intensity`).
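A hedged sketch of how such a combined objective is typically formed (the repository's training loop may differ in detail):

```python
# Illustrative combination of the ASR (CTC) loss and the intensity MSE loss.
import torch.nn.functional as F

def multitask_loss(ctc_loss, predicted_intensity, target_intensity, lambda_intensity=1.0):
    intensity_loss = F.mse_loss(predicted_intensity, target_intensity)
    return ctc_loss + lambda_intensity * intensity_loss
```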
## Model Examination
Inspect encoder representations/saliency to see which frames contribute most to intensity prediction.
## Environmental Impact
- **Hardware Type:** Laptop GPU
- **GPU:** NVIDIA GeForce RTX 3080 Ti Laptop (16 GB VRAM)
## Technical Specifications
### Model Architecture and Objective
- **Wav2Vec2‑CTC variant:** Transformer encoder with CTC head for ASR + attention‑masked mean‑pooled regressor.
### Compute Infrastructure
- **Hardware:** Laptop with NVIDIA RTX 3080 Ti (16 GB).
- **Software:** Python, PyTorch, Hugging Face `transformers`/`datasets`, Gradio.
## Citation
If you build on this work, please cite the repository.
**BibTeX:**
```bibtex
@misc{yousefi2025speechmtl,
title = {Speech Multitask End-to-End (ASR + Intensity Regression)},
author = {Yousefi, Amirhossein},
year = {2025},
howpublished = {GitHub repository},
url = {https://github.com/amirhossein-yousefi/speech2text-intensity-regression-wav2vec}
}
```
**APA:**
Yousefi, A. (2025). *Speech Multitask End‑to‑End (ASR + Intensity Regression)* [Computer software]. GitHub. https://github.com/amirhossein-yousefi/speech2text-intensity-regression-wav2vec
## More Information
- Configs: `configs/wav2vec2_base.yaml`
- Deployment: Amazon SageMaker packaging/inference under `sagemaker/`
## Model Card Contact
Please open an issue in the GitHub repository.
|
amphion/TaDiCodec-TTS-AR-Qwen2.5-3B
|
amphion
| 2025-08-26T21:55:57Z | 16 | 2 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"Speech-Tokenizer",
"Text-to-Speech",
"text-to-speech",
"en",
"zh",
"ja",
"fr",
"de",
"ko",
"arxiv:2508.16790",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2025-08-22T21:05:28Z |
---
language:
- en
- zh
- ja
- fr
- de
- ko
license: apache-2.0
pipeline_tag: text-to-speech
tags:
- Speech-Tokenizer
- Text-to-Speech
library_name: transformers
---
# 🚀 TaDiCodec
We introduce the **T**ext-**a**ware **Di**ffusion Transformer Speech **Codec** (TaDiCodec), a novel approach to speech tokenization that employs end-to-end optimization for quantization and reconstruction through a **diffusion autoencoder**, while integrating **text guidance** into the diffusion decoder to enhance reconstruction quality and achieve **optimal compression**. TaDiCodec achieves an extremely low frame rate of **6.25 Hz** and a corresponding bitrate of **0.0875 kbps** with a single-layer codebook for **24 kHz speech**, while maintaining superior performance on critical speech generation evaluation metrics such as Word Error Rate (WER), speaker similarity (SIM), and speech quality (UTMOS).
[](https://github.com/HeCheng0625/Diffusion-Speech-Tokenizer)
[](https://arxiv.org/abs/2508.16790)
[](https://tadicodec.github.io/)
[](https://www.python.org/)
[](https://pytorch.org/)
[](https://huggingface.co/amphion/TaDiCodec)
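For reference, the reported frame rate and bitrate are mutually consistent; dividing one by the other gives the number of bits carried by each token of the single-layer codebook (a back-of-the-envelope derivation from the stated numbers, not an additional claim from the paper):

$$
0.0875\ \text{kbps} \div 6.25\ \text{Hz} = 14\ \text{bits per token} \quad\Rightarrow\quad 2^{14} = 16384\ \text{codebook entries}
$$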
# 🤗 Pre-trained Models
## 📦 Model Zoo - Ready to Use!
*Download our pre-trained models for instant inference*
## 🎵 TaDiCodec
| Model | 🤗 Hugging Face | 👷 Status |
|:-----:|:---------------:|:------:|
| **🚀 TaDiCodec** | [](https://huggingface.co/amphion/TaDiCodec) | ✅ |
| **🚀 TaDiCodec-old** | [](https://huggingface.co/amphion/TaDiCodec-old) | 🚧 |
*Note: TaDiCodec-old is the old version of TaDiCodec; the TaDiCodec-TTS-AR-Phi-3.5-4B model is based on TaDiCodec-old.*
## 🎤 TTS Models
| Model | Type | LLM | 🤗 Hugging Face | 👷 Status |
|:-----:|:----:|:---:|:---------------:|:-------------:|
| **🤖 TaDiCodec-TTS-AR-Qwen2.5-0.5B** | AR | Qwen2.5-0.5B-Instruct | [](https://huggingface.co/amphion/TaDiCodec-TTS-AR-Qwen2.5-0.5B) | ✅ |
| **🤖 TaDiCodec-TTS-AR-Qwen2.5-3B** | AR | Qwen2.5-3B-Instruct | [](https://huggingface.co/amphion/TaDiCodec-TTS-AR-Qwen2.5-3B) | ✅ |
| **🤖 TaDiCodec-TTS-AR-Phi-3.5-4B** | AR | Phi-3.5-mini-instruct | [](https://huggingface.co/amphion/TaDiCodec-TTS-AR-Phi-3.5-4B) | 🚧 |
| **🌊 TaDiCodec-TTS-MGM** | MGM | - | [](https://huggingface.co/amphion/TaDiCodec-TTS-MGM) | ✅ |
## 🔧 Quick Model Usage
```python
# 🤗 Load from Hugging Face
from models.tts.tadicodec.inference_tadicodec import TaDiCodecPipline
from models.tts.llm_tts.inference_llm_tts import TTSInferencePipeline
from models.tts.llm_tts.inference_mgm_tts import MGMInferencePipeline
# Load TaDiCodec tokenizer, it will automatically download the model from Hugging Face for the first time
tokenizer = TaDiCodecPipline.from_pretrained("amphion/TaDiCodec")
# Load AR TTS model, it will automatically download the model from Hugging Face for the first time
tts_model = TTSInferencePipeline.from_pretrained("amphion/TaDiCodec-TTS-AR-Qwen2.5-3B")
# Load MGM TTS model, it will automatically download the model from Hugging Face for the first time
tts_model = MGMInferencePipeline.from_pretrained("amphion/TaDiCodec-TTS-MGM")
```
# 🚀 Quick Start
## Installation
```bash
# Clone the repository
git clone https://github.com/HeCheng0625/Diffusion-Speech-Tokenizer.git
cd Diffusion-Speech-Tokenizer
# Install dependencies
bash env.sh
```
## Basic Usage
**Please refer to the [use_examples](https://github.com/HeCheng0625/Diffusion-Speech-Tokenizer/tree/main/use_examples) folder for more detailed usage examples.**
### Speech Tokenization and Reconstruction
```python
# Example: Using TaDiCodec for speech tokenization
import torch
import soundfile as sf
from models.tts.tadicodec.inference_tadicodec import TaDiCodecPipline
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
pipe = TaDiCodecPipline.from_pretrained(ckpt_dir="./ckpt/TaDiCodec", device=device)
# Text of the prompt audio
prompt_text = "In short, we embarked on a mission to make America great again, for all Americans."
# Text of the target audio
target_text = "But to those who knew her well, it was a symbol of her unwavering determination and spirit."
# Input audio path of the prompt audio
prompt_speech_path = "./use_examples/test_audio/trump_0.wav"
# Input audio path of the target audio
speech_path = "./use_examples/test_audio/trump_1.wav"
rec_audio = pipe(
text=target_text,
speech_path=speech_path,
prompt_text=prompt_text,
prompt_speech_path=prompt_speech_path
)
sf.write("./use_examples/test_audio/trump_rec.wav", rec_audio, 24000)
```
### Zero-shot TTS with TaDiCodec
```python
import torch
import soundfile as sf
from models.tts.llm_tts.inference_llm_tts import TTSInferencePipeline
# from models.tts.llm_tts.inference_mgm_tts import MGMInferencePipeline
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Create AR TTS pipeline
pipeline = TTSInferencePipeline.from_pretrained(
tadicodec_path="./ckpt/TaDiCodec",
llm_path="./ckpt/TaDiCodec-TTS-AR-Qwen2.5-3B",
device=device,
)
# Inference on single sample, you can also use the MGM TTS pipeline
audio = pipeline(
text="但是 to those who 知道 her well, it was a 标志 of her unwavering 决心 and spirit.", # code-switching cases are supported
prompt_text="In short, we embarked on a mission to make America great again, for all Americans.",
prompt_speech_path="./use_examples/test_audio/trump_0.wav",
)
sf.write("./use_examples/test_audio/lm_tts_output.wav", audio, 24000)
```
# 📚 Citation
If you find this repository useful, please cite our paper:
TaDiCodec:
```bibtex
@article{tadicodec2025,
title={TaDiCodec: Text-aware Diffusion Speech Tokenizer for Speech Language Modeling},
author={Yuancheng Wang, Dekun Chen, Xueyao Zhang, Junan Zhang, Jiaqi Li, Zhizheng Wu},
journal={arXiv preprint},
year={2025},
url={https://arxiv.org/abs/2508.16790}
}
```
Amphion:
```bibtex
@inproceedings{amphion,
author={Xueyao Zhang and Liumeng Xue and Yicheng Gu and Yuancheng Wang and Jiaqi Li and Haorui He and Chaoren Wang and Ting Song and Xi Chen and Zihao Fang and Haopeng Chen and Junan Zhang and Tze Ying Tang and Lexiao Zou and Mingxuan Wang and Jun Han and Kai Chen and Haizhou Li and Zhizheng Wu},
title={Amphion: An Open-Source Audio, Music and Speech Generation Toolkit},
booktitle={{IEEE} Spoken Language Technology Workshop, {SLT} 2024},
year={2024}
}
```
MaskGCT:
```bibtex
@inproceedings{wang2024maskgct,
author={Wang, Yuancheng and Zhan, Haoyue and Liu, Liwei and Zeng, Ruihong and Guo, Haotian and Zheng, Jiachen and Zhang, Qiang and Zhang, Xueyao and Zhang, Shunsi and Wu, Zhizheng},
title={MaskGCT: Zero-Shot Text-to-Speech with Masked Generative Codec Transformer},
booktitle = {{ICLR}},
publisher = {OpenReview.net},
year = {2025}
}
```
# 🙏 Acknowledgments
- **MGM-based TTS** is built upon [MaskGCT](https://github.com/open-mmlab/Amphion/tree/main/models/tts/maskgct).
- **Vocos vocoder** is built upon [Vocos](https://github.com/gemelo-ai/vocos).
- **NAR Llama-style transformers** is built upon [transformers](https://github.com/huggingface/transformers).
- **(Binary Spherical Quantization) BSQ** is built upon [vector-quantize-pytorch](https://github.com/lucidrains/vector-quantize-pytorch) and [bsq-vit](https://github.com/zhaoyue-zephyrus/bsq-vit).
- **Training codebase** is built upon [Amphion](https://github.com/open-mmlab/Amphion) and [accelerate](https://github.com/huggingface/accelerate).
|
mradermacher/N1-GGUF
|
mradermacher
| 2025-08-26T21:52:54Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"en",
"base_model:GoofyLM/N1",
"base_model:quantized:GoofyLM/N1",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-08-26T21:49:50Z |
---
base_model: GoofyLM/N1
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/GoofyLM/N1
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#N1-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/N1-GGUF/resolve/main/N1.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/N1-GGUF/resolve/main/N1.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/N1-GGUF/resolve/main/N1.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/N1-GGUF/resolve/main/N1.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/N1-GGUF/resolve/main/N1.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/N1-GGUF/resolve/main/N1.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/N1-GGUF/resolve/main/N1.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/N1-GGUF/resolve/main/N1.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/N1-GGUF/resolve/main/N1.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/N1-GGUF/resolve/main/N1.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/N1-GGUF/resolve/main/N1.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/N1-GGUF/resolve/main/N1.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
MowVNB/blockassist-bc-feline_grazing_macaw_1756244122
|
MowVNB
| 2025-08-26T21:50:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"feline grazing macaw",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T21:49:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- feline grazing macaw
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bboppp/blockassist-bc-shiny_hardy_stork_1756244973
|
bboppp
| 2025-08-26T21:49:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"shiny hardy stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T21:49:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- shiny hardy stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Mitchins/retnet-literary-explicitness-classifier
|
Mitchins
| 2025-08-26T21:43:03Z | 0 | 0 | null |
[
"safetensors",
"RetNet",
"arxiv:2307.08621",
"arxiv:2006.03654",
"arxiv:1708.02002",
"region:us"
] | null | 2025-08-26T21:42:19Z |
# RetNet Explicitness Classifier
A high-performance RetNet model for classifying text content by explicitness level, designed for large-scale content moderation and filtering applications.
## 🚀 Model Overview
| **Attribute** | **Value** |
|---------------|-----------|
| **Model Type** | RetNet (Linear Attention) |
| **Parameters** | 45,029,943 |
| **Task** | 7-class text classification |
| **Performance** | 74.4% accuracy, 63.9% macro F1 |
| **Speed** | 1,574 paragraphs/second |
| **Training Time** | 4.9 hours |
## 📊 Performance Comparison
| **Model** | **Parameters** | **Accuracy** | **Macro F1** | **Speed** | **Architecture** |
|-----------|----------------|--------------|--------------|-----------|------------------|
| DeBERTa-v3-small | ~44M | 82.3%* | 75.8%* | ~500 p/s | O(n²) attention |
| **RetNet** | **45M** | **74.4%** | **63.9%** | **1,574 p/s** | **O(n) linear** |
*Results on different data splits. RetNet offers a 3x speed advantage with competitive performance.
## 🏷️ Classification Labels
The model classifies text into 7 categories of explicitness:
1. **NON-EXPLICIT** - Safe, general audience content
2. **SUGGESTIVE** - Mild romantic or suggestive themes
3. **SEXUAL-REFERENCE** - References to sexual topics without explicit detail
4. **EXPLICIT-SEXUAL** - Graphic sexual content
5. **EXPLICIT-OFFENSIVE** - Strong profanity and offensive language
6. **EXPLICIT-VIOLENT** - Graphic violence and disturbing content
7. **EXPLICIT-DISCLAIMER** - Content warnings and disclaimers
## 🚀 Quick Start
### Installation
```bash
# Install dependencies
pip install torch transformers safetensors
```
### Basic Usage
```python
from test_model import RetNetExplicitnessClassifier
# Initialize classifier
classifier = RetNetExplicitnessClassifier()
# Classify single text
result = classifier.classify("Your text here...")
print(f"Category: {result['predicted_class']}")
print(f"Confidence: {result['confidence']:.3f}")
# Batch classification for better performance
texts = ["Text 1", "Text 2", "Text 3"]
results = classifier.classify_batch(texts)
```
### Test the Model
```bash
python test_model.py
```
## 📁 Model Files
```
retnet-explicitness-classifier/
├── README.md # This file
├── config.json # Model configuration
├── model.py # RetNet architecture code
├── model.safetensors # Trained model weights (SafeTensors format)
├── model_metadata.json # Model metadata
├── retnet_training_results.json # Training metrics
└── test_model.py # Test script and API
```
## 🏗️ Architecture Details
### RetNet Advantages
- **Linear O(n) attention** vs traditional O(n²) transformers
- **3x faster inference** - ideal for high-throughput applications
- **Memory efficient** for long sequences
- **Parallel training** with recurrent inference capabilities
### Model Configuration
```json
{
"model_dim": 512,
"num_layers": 6,
"num_heads": 8,
"max_length": 512,
"vocab_size": 50257
}
```
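If you want to load the weights directly rather than through `test_model.py`, a minimal sketch is shown below. It assumes `config.json` uses the keys shown above, that you run from the repository directory so `model.py` is importable, and that the `ProductionRetNet` constructor matches the signature listed under Technical Implementation:
```python
import json
from safetensors.torch import load_file
from model import ProductionRetNet  # architecture code shipped in model.py

# Build the model from the repository's config.json
with open("config.json") as f:
    cfg = json.load(f)

model = ProductionRetNet(
    vocab_size=cfg["vocab_size"],
    dim=cfg["model_dim"],
    num_layers=cfg["num_layers"],
    num_heads=cfg["num_heads"],
    num_classes=7,
    max_length=cfg["max_length"],
)

# Load the SafeTensors checkpoint and switch to inference mode
model.load_state_dict(load_file("model.safetensors"))
model.eval()
```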
## 📈 Training Details
### Dataset
- **Total samples**: 119,023 paragraphs
- **Training**: 101,771 samples (85.5%)
- **Validation**: 11,304 samples (9.5%)
- **Holdout**: 5,948 samples (5.0%)
- **Data source**: Literary content with GPT-4 annotations
### Training Configuration
- **Epochs**: 5
- **Batch size**: 32
- **Learning rate**: 1e-4
- **Loss function**: Focal Loss (γ=2.0) for class imbalance (see the sketch after this list)
- **Optimizer**: AdamW with cosine scheduling
- **Hardware**: Apple Silicon (MPS)
- **Duration**: 4.9 hours
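For reference, a minimal multi-class focal loss in PyTorch is sketched below with γ=2.0. This is illustrative only; the actual training code may differ in details such as per-class α weights:
```python
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    """Multi-class focal loss: down-weights well-classified examples by (1 - p_t)^gamma."""
    log_probs = F.log_softmax(logits, dim=-1)                       # [batch, num_classes]
    log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)   # log p_t of the true class
    pt = log_pt.exp()
    return (-((1.0 - pt) ** gamma) * log_pt).mean()

# Quick check on random data (7 classes, batch of 4)
logits = torch.randn(4, 7)
targets = torch.tensor([0, 3, 2, 6])
print(focal_loss(logits, targets))
```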
### Performance Metrics (Holdout Set)
| **Class** | **Precision** | **Recall** | **F1-Score** | **Support** |
|-----------|---------------|------------|--------------|-------------|
| EXPLICIT-DISCLAIMER | 1.00 | 0.93 | 0.96 | 57 |
| EXPLICIT-OFFENSIVE | 0.70 | 0.76 | 0.73 | 1,208 |
| EXPLICIT-SEXUAL | 0.85 | 0.91 | 0.88 | 1,540 |
| EXPLICIT-VIOLENT | 0.58 | 0.25 | 0.35 | 73 |
| NON-EXPLICIT | 0.75 | 0.83 | 0.79 | 2,074 |
| SEXUAL-REFERENCE | 0.61 | 0.37 | 0.46 | 598 |
| SUGGESTIVE | 0.38 | 0.26 | 0.30 | 398 |
| **Macro Average** | **0.70** | **0.61** | **0.64** | **5,948** |
## ⚡ Performance Benchmarks
### Speed Comparison
- **RetNet**: 1,574 paragraphs/second
- **Book processing**: ~8-15 books/second (assuming 100-200 paragraphs/book)
- **Million book processing**: ~19-31 hours
- **Memory usage**: Optimized for batch processing
### Use Cases
✅ **Ideal for:**
- Large-scale content filtering (millions of documents)
- Real-time content moderation
- High-throughput publishing pipelines
- Content recommendation systems
⚠️ **Consider alternatives for:**
- Maximum accuracy requirements (use DeBERTa)
- Small-scale applications where speed isn't critical
- Academic research requiring state-of-the-art performance
## 🔧 Technical Implementation
### RetNet Architecture
```python
class ProductionRetNet(nn.Module):
def __init__(self, vocab_size=50257, dim=512, num_layers=6,
num_heads=8, num_classes=7, max_length=512):
# FastRetentionMechanism with linear attention
# Rotary positional encoding
# Pre-layer normalization
# Classification head with dropout
```
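The retention mechanism can be read as a linear-attention recurrence. The single-head sketch below is illustrative only (it is not the repository's `FastRetentionMechanism`, and it omits rotary encoding and multi-head handling), but it shows why inference cost is O(n) in sequence length:
```python
import torch

def retention_recurrent(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor, decay: float = 0.9) -> torch.Tensor:
    """Recurrent form of retention for one head.

    q, k, v: [seq_len, dim]. The running state accumulates decay-weighted
    outer products k_t^T v_t, so each step costs O(dim^2) regardless of t.
    """
    seq_len, dim = q.shape
    state = torch.zeros(dim, dim)
    outputs = []
    for t in range(seq_len):
        state = decay * state + torch.outer(k[t], v[t])
        outputs.append(q[t] @ state)
    return torch.stack(outputs)

# Shape check: 8 positions, 16-dim head
q = k = v = torch.randn(8, 16)
print(retention_recurrent(q, k, v).shape)  # torch.Size([8, 16])
```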
### Key Features
- **Rotary positional encoding** for better position awareness
- **Fast retention mechanism** replacing traditional attention
- **Layer normalization** for stable training
- **Focal loss** to handle class imbalance
- **Gradient clipping** for training stability
## 🚀 Production Deployment
### Docker Example
```dockerfile
FROM python:3.9-slim
COPY retnet-explicitness-classifier/ /app/
WORKDIR /app
RUN pip install torch transformers safetensors fastapi uvicorn
EXPOSE 8000
CMD ["python", "-m", "uvicorn", "api:app", "--host", "0.0.0.0"]
```
### API Endpoint Example
```python
from fastapi import FastAPI
from test_model import RetNetExplicitnessClassifier
app = FastAPI()
classifier = RetNetExplicitnessClassifier()
@app.post("/classify")
async def classify_text(text: str):
return classifier.classify(text)
```
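A quick client-side check against this endpoint (assuming the server is running locally on port 8000; a bare `str` parameter like `text` is read from the query string by FastAPI):
```python
import requests

resp = requests.post(
    "http://localhost:8000/classify",
    params={"text": "Sample paragraph to score."},
)
print(resp.json())  # e.g. {'predicted_class': ..., 'confidence': ..., ...}
```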
## 📚 Citation
If you use this model in your research, please cite:
```bibtex
@misc{retnet_explicitness_2024,
title={RetNet for Explicitness Classification: Linear Attention for High-Throughput Content Moderation},
author={Claude Code Assistant},
year={2024},
note={Production-scale RetNet implementation for 7-class explicitness classification}
}
```
## 📄 License
This model is released for research and educational purposes. Please ensure compliance with content moderation guidelines and applicable laws when using for production applications.
## 🔗 Related Work
- [RetNet: Retentive Network: A Successor to Transformer for Large Language Models](https://arxiv.org/abs/2307.08621)
- [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654)
- [Focal Loss for Dense Object Detection](https://arxiv.org/abs/1708.02002)
---
**Model Version**: 1.0
**Last Updated**: August 2024
**Framework**: PyTorch 2.0+
**Minimum Python**: 3.8+
|
AnerYubo/blockassist-bc-shaggy_elusive_giraffe_1756244417
|
AnerYubo
| 2025-08-26T21:40:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"shaggy elusive giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T21:40:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- shaggy elusive giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756243728
|
ggozzy
| 2025-08-26T21:30:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T21:29:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
popouy/blockassist-bc-extinct_pale_chinchilla_1756243574
|
popouy
| 2025-08-26T21:26:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"extinct pale chinchilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T21:26:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- extinct pale chinchilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
popouy/blockassist-bc-wary_darting_platypus_1756243312
|
popouy
| 2025-08-26T21:22:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wary darting platypus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T21:21:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wary darting platypus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Llama-3-natsuki-ddlc-8b-v1-GGUF
|
mradermacher
| 2025-08-26T21:18:16Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"dataset:922-CA/NaChA_v1",
"base_model:922-CA/Llama-3-natsuki-ddlc-8b-v1",
"base_model:quantized:922-CA/Llama-3-natsuki-ddlc-8b-v1",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2025-08-26T20:36:55Z |
---
base_model: 922-CA/Llama-3-natsuki-ddlc-8b-v1
datasets:
- 922-CA/NaChA_v1
language:
- en
library_name: transformers
license: llama3
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/922-CA/Llama-3-natsuki-ddlc-8b-v1
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Llama-3-natsuki-ddlc-8b-v1-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-natsuki-ddlc-8b-v1-GGUF/resolve/main/Llama-3-natsuki-ddlc-8b-v1.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-natsuki-ddlc-8b-v1-GGUF/resolve/main/Llama-3-natsuki-ddlc-8b-v1.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-natsuki-ddlc-8b-v1-GGUF/resolve/main/Llama-3-natsuki-ddlc-8b-v1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-natsuki-ddlc-8b-v1-GGUF/resolve/main/Llama-3-natsuki-ddlc-8b-v1.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-natsuki-ddlc-8b-v1-GGUF/resolve/main/Llama-3-natsuki-ddlc-8b-v1.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-natsuki-ddlc-8b-v1-GGUF/resolve/main/Llama-3-natsuki-ddlc-8b-v1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-natsuki-ddlc-8b-v1-GGUF/resolve/main/Llama-3-natsuki-ddlc-8b-v1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-natsuki-ddlc-8b-v1-GGUF/resolve/main/Llama-3-natsuki-ddlc-8b-v1.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-natsuki-ddlc-8b-v1-GGUF/resolve/main/Llama-3-natsuki-ddlc-8b-v1.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-natsuki-ddlc-8b-v1-GGUF/resolve/main/Llama-3-natsuki-ddlc-8b-v1.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-natsuki-ddlc-8b-v1-GGUF/resolve/main/Llama-3-natsuki-ddlc-8b-v1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-natsuki-ddlc-8b-v1-GGUF/resolve/main/Llama-3-natsuki-ddlc-8b-v1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/ScienceON_v1_sft-GGUF
|
mradermacher
| 2025-08-26T21:18:16Z | 5 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:gsjang/ScienceON_v1_sft",
"base_model:quantized:gsjang/ScienceON_v1_sft",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-26T06:19:11Z |
---
base_model: gsjang/ScienceON_v1_sft
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/gsjang/ScienceON_v1_sft
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#ScienceON_v1_sft-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ScienceON_v1_sft-GGUF/resolve/main/ScienceON_v1_sft.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/ScienceON_v1_sft-GGUF/resolve/main/ScienceON_v1_sft.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/ScienceON_v1_sft-GGUF/resolve/main/ScienceON_v1_sft.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/ScienceON_v1_sft-GGUF/resolve/main/ScienceON_v1_sft.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ScienceON_v1_sft-GGUF/resolve/main/ScienceON_v1_sft.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/ScienceON_v1_sft-GGUF/resolve/main/ScienceON_v1_sft.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ScienceON_v1_sft-GGUF/resolve/main/ScienceON_v1_sft.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/ScienceON_v1_sft-GGUF/resolve/main/ScienceON_v1_sft.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/ScienceON_v1_sft-GGUF/resolve/main/ScienceON_v1_sft.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ScienceON_v1_sft-GGUF/resolve/main/ScienceON_v1_sft.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ScienceON_v1_sft-GGUF/resolve/main/ScienceON_v1_sft.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/ScienceON_v1_sft-GGUF/resolve/main/ScienceON_v1_sft.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/ScienceON_v1_sft-GGUF/resolve/main/ScienceON_v1_sft.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ScienceON_v1_sft-GGUF/resolve/main/ScienceON_v1_sft.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ScienceON_v1_sft-GGUF/resolve/main/ScienceON_v1_sft.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
eusuf01/blockassist-bc-smooth_humming_butterfly_1756242992
|
eusuf01
| 2025-08-26T21:17:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"smooth humming butterfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T21:17:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- smooth humming butterfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
VSIPhan/MyGemmaNPC
|
VSIPhan
| 2025-08-26T21:12:50Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-26T21:04:22Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: MyGemmaNPC
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="VSIPhan/MyGemmaNPC", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.4
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ishish/cornelius-qlora
|
ishish
| 2025-08-26T21:12:23Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-24T10:05:50Z |
---
base_model: Qwen/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: cornelius-qlora
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for cornelius-qlora
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ishish/cornelius-qlora", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.3.1+cu121
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
bah63843/blockassist-bc-plump_fast_antelope_1756242669
|
bah63843
| 2025-08-26T21:12:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T21:11:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
streaver91/Qwen3-4B-LORA
|
streaver91
| 2025-08-26T21:05:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-4B-Base",
"base_model:finetune:unsloth/Qwen3-4B-Base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-26T21:05:23Z |
---
base_model: unsloth/Qwen3-4B-Base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** streaver91
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-4B-Base
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
nabilwalidrafi/medgemma-brain-cancer-10epoch
|
nabilwalidrafi
| 2025-08-26T21:04:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-08-26T05:28:48Z |
---
base_model: google/medgemma-4b-it
library_name: transformers
model_name: medgemma-brain-cancer-10epoch
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for medgemma-brain-cancer-10epoch
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nabilwalidrafi/medgemma-brain-cancer-10epoch", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.4
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
capungmerah627/blockassist-bc-stinging_soaring_porcupine_1756240194
|
capungmerah627
| 2025-08-26T20:58:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stinging soaring porcupine",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T20:58:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinging soaring porcupine
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lemonhat/Llama-3.2-3B-Instruct-t1_25k_v2_tag5
|
lemonhat
| 2025-08-26T20:57:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-26T20:51:43Z |
---
library_name: transformers
license: other
base_model: meta-llama/Llama-3.2-3B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: t1_25k_v2_tag5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t1_25k_v2_tag5
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) on the t1_25k_v2_tag5 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3038
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3537 | 0.0833 | 100 | 0.4035 |
| 0.3871 | 0.1667 | 200 | 0.3606 |
| 0.321 | 0.25 | 300 | 0.3481 |
| 0.3558 | 0.3333 | 400 | 0.3372 |
| 0.3775 | 0.4167 | 500 | 0.3321 |
| 0.3283 | 0.5 | 600 | 0.3225 |
| 0.3371 | 0.5833 | 700 | 0.3186 |
| 0.3005 | 0.6667 | 800 | 0.3113 |
| 0.3223 | 0.75 | 900 | 0.3080 |
| 0.3302 | 0.8333 | 1000 | 0.3047 |
| 0.2852 | 0.9167 | 1100 | 0.3041 |
| 0.2686 | 1.0 | 1200 | 0.3038 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Kazuki1450/Qwen2.5-1.5B_lightr1_3_EN_4096_1p0_0p0_1p0_sft_1p0_0p0_1p0_grpo
|
Kazuki1450
| 2025-08-26T20:53:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:mveroe/Qwen2.5-1.5B_lightr1_3_EN_4096_1p0_0p0_1p0_sft",
"base_model:finetune:mveroe/Qwen2.5-1.5B_lightr1_3_EN_4096_1p0_0p0_1p0_sft",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-26T10:38:53Z |
---
base_model: mveroe/Qwen2.5-1.5B_lightr1_3_EN_4096_1p0_0p0_1p0_sft
library_name: transformers
model_name: Qwen2.5-1.5B_lightr1_3_EN_4096_1p0_0p0_1p0_sft_1p0_0p0_1p0_grpo
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen2.5-1.5B_lightr1_3_EN_4096_1p0_0p0_1p0_sft_1p0_0p0_1p0_grpo
This model is a fine-tuned version of [mveroe/Qwen2.5-1.5B_lightr1_3_EN_4096_1p0_0p0_1p0_sft](https://huggingface.co/mveroe/Qwen2.5-1.5B_lightr1_3_EN_4096_1p0_0p0_1p0_sft).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Kazuki1450/Qwen2.5-1.5B_lightr1_3_EN_4096_1p0_0p0_1p0_sft_1p0_0p0_1p0_grpo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.7.1+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
rettertop/blockassist-bc-tiny_fierce_bee_1756241582
|
rettertop
| 2025-08-26T20:53:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tiny fierce bee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T20:53:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tiny fierce bee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
CeciGonSer/translation_pu_es_biblia_hel
|
CeciGonSer
| 2025-08-26T20:46:51Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-26T20:46:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
baqee/blockassist-bc-horned_placid_shrew_1756240911
|
baqee
| 2025-08-26T20:43:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"horned placid shrew",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T20:43:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- horned placid shrew
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Wyldworld/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-sharp_bipedal_jellyfish
|
Wyldworld
| 2025-08-26T20:43:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am sharp_bipedal_jellyfish",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-26T20:08:56Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am sharp_bipedal_jellyfish
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
pempekmangedd/blockassist-bc-patterned_sturdy_dolphin_1756239532
|
pempekmangedd
| 2025-08-26T20:43:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"patterned sturdy dolphin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T20:42:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- patterned sturdy dolphin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
eusuf01/blockassist-bc-smooth_humming_butterfly_1756240710
|
eusuf01
| 2025-08-26T20:39:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"smooth humming butterfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T20:39:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- smooth humming butterfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756240677
|
ggozzy
| 2025-08-26T20:39:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T20:39:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gensynme/blockassist-bc-quiet_beaked_bee_1756240700
|
gensynme
| 2025-08-26T20:38:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quiet beaked bee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T20:38:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quiet beaked bee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Mavrixea/blockassist-bc-meek_stubby_falcon_1756239004
|
Mavrixea
| 2025-08-26T20:37:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"meek stubby falcon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T20:37:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek stubby falcon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
0xFarzad/gemma-3
|
0xFarzad
| 2025-08-26T20:35:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3_text",
"en",
"base_model:unsloth/gemma-3-270m-it",
"base_model:finetune:unsloth/gemma-3-270m-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-26T20:31:18Z |
---
base_model: unsloth/gemma-3-270m-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** 0xFarzad
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-270m-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
chainway9/blockassist-bc-untamed_quick_eel_1756238560
|
chainway9
| 2025-08-26T20:29:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T20:29:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fxcore57/blockassist-bc-gliding_running_bobcat_1756239468
|
fxcore57
| 2025-08-26T20:18:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gliding running bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T20:18:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gliding running bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mlfoundations-cua-dev/ui_tars_7b_easyr1_10k_hard_qwen7b_easy_gta1-4MP
|
mlfoundations-cua-dev
| 2025-08-26T20:17:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"llama-factory",
"full",
"generated_from_trainer",
"base_model:ByteDance-Seed/UI-TARS-1.5-7B",
"base_model:finetune:ByteDance-Seed/UI-TARS-1.5-7B",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-26T20:13:59Z |
---
library_name: transformers
license: other
base_model: ByteDance-Seed/UI-TARS-1.5-7B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: ui_tars_7b_easyr1_10k_hard_qwen7b_easy_gta1-4MP_lr_1_0e-06_bs_1_epochs_1.0_max_pixels_4000000_deepspeed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ui_tars_7b_easyr1_10k_hard_qwen7b_easy_gta1-4MP_lr_1_0e-06_bs_1_epochs_1.0_max_pixels_4000000_deepspeed
This model is a fine-tuned version of [ByteDance-Seed/UI-TARS-1.5-7B](https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B) on the easyr1-10k-hard-qwen7b-easy-gta1-4MP dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
eusuf01/blockassist-bc-smooth_humming_butterfly_1756239382
|
eusuf01
| 2025-08-26T20:17:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"smooth humming butterfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T20:16:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- smooth humming butterfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
CatkinChen/nethack-vae
|
CatkinChen
| 2025-08-26T20:16:24Z | 840 | 0 | null |
[
"pytorch",
"MultiModalHackVAE",
"nethack",
"reinforcement-learning",
"variational-autoencoder",
"representation-learning",
"multimodal",
"world-modeling",
"feature-extraction",
"en",
"license:mit",
"region:us"
] |
feature-extraction
| 2025-08-04T14:14:08Z |
---
license: mit
language: en
tags:
- nethack
- reinforcement-learning
- variational-autoencoder
- representation-learning
- multimodal
- world-modeling
pipeline_tag: feature-extraction
---
# MultiModalHackVAE
A multi-modal Variational Autoencoder trained on NetHack game states for representation learning.
## Model Description
This model is a MultiModalHackVAE that learns compact representations of NetHack game states by processing:
- Game character grids (21x79)
- Color information
- Game statistics (blstats)
- Message text
- Bag of glyphs
- Hero information (role, race, gender, alignment)
## Model Details
- **Model Type**: Multi-modal Variational Autoencoder
- **Framework**: PyTorch
- **Dataset**: NetHack Learning Dataset
- **Latent Dimensions**: 96
- **Low-rank Dimensions**: 0
## Usage
```python
from train import load_model_from_huggingface
import torch
# Load the model
model = load_model_from_huggingface("CatkinChen/nethack-vae")
# Example usage with synthetic data
batch_size = 1
game_chars = torch.randint(32, 127, (batch_size, 21, 79))
game_colors = torch.randint(0, 16, (batch_size, 21, 79))
blstats = torch.randn(batch_size, 27)
msg_tokens = torch.randint(0, 128, (batch_size, 256))
hero_info = torch.randint(0, 10, (batch_size, 4))
with torch.no_grad():
output = model(
glyph_chars=game_chars,
glyph_colors=game_colors,
blstats=blstats,
msg_tokens=msg_tokens,
hero_info=hero_info
)
latent_mean = output['mu']
latent_logvar = output['logvar']
lowrank_factors = output['lowrank_factors']
```
## Training
This model was trained using adaptive loss weighting with:
- Embedding warm-up for quick convergence
- Gradual raw reconstruction focus
- KL beta annealing for better latent structure (see the sketch below)
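As a rough illustration of the KL beta annealing mentioned above, a minimal linear warm-up schedule could look like the sketch below (the schedule actually used during training may differ):
```python
def kl_beta(step: int, warmup_steps: int = 10_000, beta_max: float = 1.0) -> float:
    """Ramp the KL weight from 0 to beta_max over warmup_steps, then hold it."""
    return beta_max * min(1.0, step / warmup_steps)

# Per-step VAE objective with the annealed KL term:
#   loss = reconstruction_loss + kl_beta(step) * kl_divergence
```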
## Citation
If you use this model, please consider citing:
```bibtex
@misc{nethack-vae,
title={MultiModalHackVAE: Multi-modal Variational Autoencoder for NetHack},
author={Xu Chen},
year={2025},
url={https://huggingface.co/CatkinChen/nethack-vae}
}
```
|
eusuf01/blockassist-bc-smooth_humming_butterfly_1756239307
|
eusuf01
| 2025-08-26T20:16:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"smooth humming butterfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T20:15:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- smooth humming butterfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756239153
|
ggozzy
| 2025-08-26T20:13:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T20:13:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mohbensakhri81/Ninja
|
mohbensakhri81
| 2025-08-26T20:11:05Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-08-26T20:11:05Z |
---
license: bigscience-openrail-m
---
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756238898
|
ggozzy
| 2025-08-26T20:09:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T20:09:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
popouy/blockassist-bc-curious_rugged_mandrill_1756238867
|
popouy
| 2025-08-26T20:08:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"curious rugged mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T20:07:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- curious rugged mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
b1n1yam/addisAI_Finetune
|
b1n1yam
| 2025-08-26T20:06:57Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-26T14:58:06Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: addisAI_Finetune
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for addisAI_Finetune
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="b1n1yam/addisAI_Finetune", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
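The exact training script is not included in this card. For orientation, a minimal TRL SFT setup for a Gemma-style chat model could look like the sketch below; the dataset and hyperparameters are placeholders, not the ones used for this checkpoint.
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset: any chat-style dataset with a "messages" column works here
dataset = load_dataset("trl-lib/Capybara", split="train")

training_args = SFTConfig(
    output_dir="addisAI_Finetune",
    per_device_train_batch_size=2,  # placeholder hyperparameters
    num_train_epochs=1,
)

trainer = SFTTrainer(
    model="google/gemma-3-270m-it",  # base model named in this card
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```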
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
VIDEOS-18-Bfiu-head-viral-video-Clip-hq/Original.New.full.videos.Bfiu-head.Viral.Video.Official.Tutorial
|
VIDEOS-18-Bfiu-head-viral-video-Clip-hq
| 2025-08-26T20:06:34Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-26T20:06:02Z |
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/)
|
motza0025/blockassist-bc-keen_scavenging_llama_1756237198
|
motza0025
| 2025-08-26T20:06:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen scavenging llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T20:06:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen scavenging llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vpakarinen/aino-chat-3.8b-v1
|
vpakarinen
| 2025-08-26T20:06:22Z | 4 | 0 | null |
[
"safetensors",
"phi3",
"custom_code",
"en",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:finetune:microsoft/Phi-3.5-mini-instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-08-25T11:34:40Z |
---
license: apache-2.0
language:
- en
base_model:
- microsoft/Phi-3.5-mini-instruct
---
Aino-Chat is a fine-tuned conversational AI designed to be a concise, reliable, and helpful assistant.
This model is a full fine-tune of microsoft/Phi-3.5-mini-instruct, a powerful 3.8B-parameter model.
v1 was trained on 500 high-quality examples.
Note: the ChatML prompt template and a temperature of 0.6 are recommended.
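A minimal generation sketch following that recommendation; the chat template shipped with the tokenizer is assumed to handle the ChatML formatting, and the sampling settings besides temperature are illustrative:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vpakarinen/aino-chat-3.8b-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

messages = [{"role": "user", "content": "Give me three tips for writing concise emails."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# Sample with the recommended temperature of 0.6
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.6)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```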
|
Egor-N/blockassist-bc-vicious_stubby_bear_1756236945
|
Egor-N
| 2025-08-26T20:04:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vicious stubby bear",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T20:04:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vicious stubby bear
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnerYubo/blockassist-bc-hunting_long_mallard_1756238616
|
AnerYubo
| 2025-08-26T20:03:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hunting long mallard",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T20:03:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hunting long mallard
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
youuotty/blockassist-bc-untamed_aquatic_antelope_1756238505
|
youuotty
| 2025-08-26T20:02:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed aquatic antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T20:01:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed aquatic antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lilTAT/blockassist-bc-gentle_rugged_hare_1756238466
|
lilTAT
| 2025-08-26T20:02:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle rugged hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T20:01:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1756236742
|
kojeklollipop
| 2025-08-26T20:01:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T20:01:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756238449
|
Dejiat
| 2025-08-26T20:01:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T20:01:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
eusuf01/blockassist-bc-smooth_humming_butterfly_1756238360
|
eusuf01
| 2025-08-26T20:00:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"smooth humming butterfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T20:00:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- smooth humming butterfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mlfoundations-cua-dev/qwen2_5vl_7b_easyr1_10k_hard_qwen7b_easy_gta1-4MP_deepspeed_freeze_vision_tower
|
mlfoundations-cua-dev
| 2025-08-26T19:57:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"llama-factory",
"full",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-26T19:53:49Z |
---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-VL-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: qwen2_5vl_7b_easyr1_10k_hard_qwen7b_easy_gta1-4MP_lr_1_0e-06_bs_1_epochs_1.0_max_pixels_4000000_deepspeed_freeze_vision_tower
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2_5vl_7b_easyr1_10k_hard_qwen7b_easy_gta1-4MP_lr_1_0e-06_bs_1_epochs_1.0_max_pixels_4000000_deepspeed_freeze_vision_tower
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) on the easyr1-10k-hard-qwen7b-easy-gta1-4MP dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 64
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1 (a rough sketch of this schedule is shown after the list)
- num_epochs: 1.0
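As a point of reference, the cosine schedule with a 0.1 warmup ratio corresponds roughly to the sketch below; the total step count is a placeholder, since the real value depends on the dataset size and the effective batch size (1 per device across 8 GPUs, i.e. 8).
```python
import torch
from transformers import get_cosine_schedule_with_warmup

total_steps = 1_250                    # placeholder: len(dataset) / total_train_batch_size
warmup_steps = int(0.1 * total_steps)  # lr_scheduler_warmup_ratio = 0.1

params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.AdamW(params, lr=1e-06, betas=(0.9, 0.999), eps=1e-08)
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=warmup_steps, num_training_steps=total_steps
)

lrs = []
for step in range(total_steps):
    optimizer.step()
    scheduler.step()
    lrs.append(scheduler.get_last_lr()[0])
# lrs rises linearly to 1e-06 over the first 10% of steps, then decays along a cosine curve
```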
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
eusuf01/blockassist-bc-smooth_humming_butterfly_1756238147
|
eusuf01
| 2025-08-26T19:56:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"smooth humming butterfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T19:56:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- smooth humming butterfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|