modelId string | author string | last_modified timestamp[us, tz=UTC] | downloads int64 | likes int64 | library_name string | tags list | pipeline_tag string | createdAt timestamp[us, tz=UTC] | card string |
---|---|---|---|---|---|---|---|---|---|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756155775
|
Dejiat
| 2025-08-25T21:03:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T21:03:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1756154652
|
Sayemahsjn
| 2025-08-25T21:03:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T21:03:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
boopmoor/blockassist-bc-sedate_rabid_puffin_1756155750
|
boopmoor
| 2025-08-25T21:02:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sedate rabid puffin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T21:02:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sedate rabid puffin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
qwersdfvg/blockassist-bc-omnivorous_soaring_pigeon_1756155746
|
qwersdfvg
| 2025-08-25T21:02:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"omnivorous soaring pigeon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T21:02:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- omnivorous soaring pigeon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
prithivMLmods/Pyxidis-Manim-CodeGen-1.7B
|
prithivMLmods
| 2025-08-25T21:02:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"code",
"trl",
"conversational",
"en",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-25T13:32:31Z |
---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-1.7B
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- code
- trl
---

# **Pyxidis-Manim-CodeGen-1.7B (Experimental)**
> **Pyxidis-Manim-CodeGen-1.7B** is an **experimental math-animation coding model** fine-tuned from **Qwen/Qwen3-1.7B** on **Manim-CodeGen code traces**.
> It is specialized for **Python-based mathematical animations with Manim**, making it ideal for educators, researchers, and developers working on math visualization and animation pipelines.
> [!note]
> GGUF: [https://huggingface.co/prithivMLmods/Pyxidis-Manim-CodeGen-1.7B-GGUF](https://huggingface.co/prithivMLmods/Pyxidis-Manim-CodeGen-1.7B-GGUF)
---
## **Key Features**
1. **Manim-Specific Code Generation**
Trained on **Manim-CodeGen traces**, optimized for **Python-based animation scripting** of mathematical concepts and visual proofs.
2. **Math + Code Synergy**
Generates step-by-step **math derivations with corresponding animation code**, bridging symbolic reasoning with visualization.
3. **Animation Workflow Optimization**
Provides structured code for **scenes, transformations, graphs, and equations** in Manim, reducing boilerplate and debugging effort.
4. **Python-Centric Reasoning**
Produces **clean, modular, and reusable Python code**, supporting educational and research-driven animation pipelines.
5. **Structured Output Mastery**
Capable of outputting in **Python**, **Markdown**, and **LaTeX**, ideal for tutorials, educational notebooks, and automated video generation workflows.
6. **Lightweight but Specialized**
Focused on **Manim coding efficiency** while maintaining a deployable footprint for **GPU clusters** and **research labs**.
---
## **Quickstart with Transformers**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Pyxidis-Manim-CodeGen-1.7B"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Write a Manim script to animate the Pythagorean theorem using squares on the triangle's sides."

messages = [
    {"role": "system", "content": "You are a Python coding assistant specialized in Manim-based math animations."},
    {"role": "user", "content": prompt}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
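For context, a response to the prompt above would take the shape of a Manim scene class. The following is a simplified, hand-written sketch (assuming the Manim community edition is installed via `pip install manim`), not actual model output:

```python
from manim import Scene, Square, Text, Create, Write, DOWN

class PythagoreanSketch(Scene):
    def construct(self):
        # A complete answer would construct squares on all three sides of a
        # right triangle; this sketch only shows the basic scene structure.
        square = Square(side_length=2)
        caption = Text("a^2 + b^2 = c^2").next_to(square, DOWN)
        self.play(Create(square))
        self.play(Write(caption))
```

Rendering a scene like this uses the Manim CLI, e.g. `manim -pql script.py PythagoreanSketch`.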
---
## **Intended Use**
* **Manim-based math animation coding** for research, teaching, and content creation
* **Educational visualization assistant** to convert math problems into animations
* **Python tutoring tool** for math-heavy animation workflows
* **Prototype generator** for interactive STEM video content
## **Limitations**
* Experimental model – may generate code that requires manual debugging
* Limited to **Manim coding workflows**; not a general-purpose code assistant
* May not handle **complex multi-scene projects** without iterative refinement
* Prioritizes structured math + animation reasoning and is less optimized for general dialogue
|
sonspeed/bartpho-word-cpo-summarize-vietgpt-256
|
sonspeed
| 2025-08-25T21:02:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"cpo",
"trl",
"arxiv:2401.08417",
"base_model:sonspeed/bartpho-vietgpt",
"base_model:finetune:sonspeed/bartpho-vietgpt",
"endpoints_compatible",
"region:us"
] | null | 2025-08-25T14:41:29Z |
---
base_model: sonspeed/bartpho-vietgpt
library_name: transformers
model_name: bartpho-word-cpo-summarize-vietgpt-256
tags:
- generated_from_trainer
- cpo
- trl
licence: license
---
# Model Card for bartpho-word-cpo-summarize-vietgpt-256
This model is a fine-tuned version of [sonspeed/bartpho-vietgpt](https://huggingface.co/sonspeed/bartpho-vietgpt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline

# Note: bartpho is an mBART (encoder-decoder) model fine-tuned for Vietnamese
# summarization, so the text2text-generation pipeline is the appropriate entry
# point rather than the chat-style text-generation call from the auto-generated
# template. The input text below is a placeholder.
summarizer = pipeline("text2text-generation", model="sonspeed/bartpho-word-cpo-summarize-vietgpt-256", device="cuda")
document = "..."  # a Vietnamese article or other long text to summarize
output = summarizer(document, max_new_tokens=128)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/sonspeed-hanoi-university-of-science-and-technology/bartpho-summarization-cpotrl/runs/b2szxtmk)
This model was trained with CPO, a method introduced in [Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation](https://huggingface.co/papers/2401.08417).
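As a hedged illustration of this setup, a CPO fine-tune with TRL looks roughly like the sketch below; the dataset name and hyperparameters are placeholders, not the settings used for this model:

```python
from datasets import load_dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from trl import CPOConfig, CPOTrainer

model = AutoModelForSeq2SeqLM.from_pretrained("sonspeed/bartpho-vietgpt")
tokenizer = AutoTokenizer.from_pretrained("sonspeed/bartpho-vietgpt")

# A CPO dataset needs "prompt", "chosen" and "rejected" columns.
dataset = load_dataset("your-username/your-preference-dataset", split="train")  # placeholder

trainer = CPOTrainer(
    model=model,
    args=CPOConfig(output_dir="bartpho-word-cpo-summarize", per_device_train_batch_size=2),
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```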
### Framework versions
- TRL: 0.21.0
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite CPO as:
```bibtex
@inproceedings{xu2024contrastive,
title = {{Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation}},
author = {Haoran Xu and Amr Sharaf and Yunmo Chen and Weiting Tan and Lingfeng Shen and Benjamin Van Durme and Kenton Murray and Young Jin Kim},
year = 2024,
booktitle = {Forty-first International Conference on Machine Learning, {ICML} 2024, Vienna, Austria, July 21-27, 2024},
publisher = {OpenReview.net},
url = {https://openreview.net/forum?id=51iwkioZpn}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
golopper/blockassist-bc-screeching_snorting_caribou_1756155676
|
golopper
| 2025-08-25T21:01:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"screeching snorting caribou",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T21:01:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- screeching snorting caribou
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756155627
|
Dejiat
| 2025-08-25T21:00:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T21:00:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756155593
|
bah63843
| 2025-08-25T21:00:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T21:00:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
shadowvibec/blockassist-bc-swift_pudgy_squirrel_1756155508
|
shadowvibec
| 2025-08-25T20:59:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"swift pudgy squirrel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:58:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- swift pudgy squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mohda/blockassist-bc-regal_fierce_hummingbird_1756155463
|
mohda
| 2025-08-25T20:59:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"regal fierce hummingbird",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:58:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal fierce hummingbird
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
prithivMLmods/Pyxidis-Manim-CodeGen-1.7B-GGUF
|
prithivMLmods
| 2025-08-25T20:58:20Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen3",
"text-generation-inference",
"text-generation",
"en",
"base_model:prithivMLmods/Pyxidis-Manim-CodeGen-1.7B",
"base_model:quantized:prithivMLmods/Pyxidis-Manim-CodeGen-1.7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-25T13:41:41Z |
---
license: apache-2.0
language:
- en
base_model:
- prithivMLmods/Pyxidis-Manim-CodeGen-1.7B
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
---
# **Pyxidis-Manim-CodeGen-1.7B-GGUF**
> **Pyxidis-Manim-CodeGen-1.7B** is an **experimental math-animation coding model** fine-tuned from **Qwen/Qwen3-1.7B** on **Manim-CodeGen code traces**.
> It is specialized for **Python-based mathematical animations with Manim**, making it ideal for educators, researchers, and developers working on math visualization and animation pipelines.
## Model Files
| File Name | Quant Type | File Size |
| - | - | - |
| Pyxidis-Manim-CodeGen-1.7B.BF16.gguf | BF16 | 3.45 GB |
| Pyxidis-Manim-CodeGen-1.7B.F16.gguf | F16 | 3.45 GB |
| Pyxidis-Manim-CodeGen-1.7B.F32.gguf | F32 | 6.89 GB |
| Pyxidis-Manim-CodeGen-1.7B.Q2_K.gguf | Q2_K | 778 MB |
| Pyxidis-Manim-CodeGen-1.7B.Q3_K_L.gguf | Q3_K_L | 1 GB |
| Pyxidis-Manim-CodeGen-1.7B.Q3_K_M.gguf | Q3_K_M | 940 MB |
| Pyxidis-Manim-CodeGen-1.7B.Q3_K_S.gguf | Q3_K_S | 867 MB |
| Pyxidis-Manim-CodeGen-1.7B.Q4_0.gguf | Q4_0 | 1.05 GB |
| Pyxidis-Manim-CodeGen-1.7B.Q4_1.gguf | Q4_1 | 1.14 GB |
| Pyxidis-Manim-CodeGen-1.7B.Q4_K.gguf | Q4_K | 1.11 GB |
| Pyxidis-Manim-CodeGen-1.7B.Q4_K_M.gguf | Q4_K_M | 1.11 GB |
| Pyxidis-Manim-CodeGen-1.7B.Q4_K_S.gguf | Q4_K_S | 1.06 GB |
| Pyxidis-Manim-CodeGen-1.7B.Q5_0.gguf | Q5_0 | 1.23 GB |
| Pyxidis-Manim-CodeGen-1.7B.Q5_1.gguf | Q5_1 | 1.32 GB |
| Pyxidis-Manim-CodeGen-1.7B.Q5_K.gguf | Q5_K | 1.26 GB |
| Pyxidis-Manim-CodeGen-1.7B.Q5_K_M.gguf | Q5_K_M | 1.26 GB |
| Pyxidis-Manim-CodeGen-1.7B.Q5_K_S.gguf | Q5_K_S | 1.23 GB |
| Pyxidis-Manim-CodeGen-1.7B.Q6_K.gguf | Q6_K | 1.42 GB |
| Pyxidis-Manim-CodeGen-1.7B.Q8_0.gguf | Q8_0 | 1.83 GB |
## Quants Usage
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

|
madbro/blockassist-bc-whistling_curious_puffin_1756155459
|
madbro
| 2025-08-25T20:58:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"whistling curious puffin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:58:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whistling curious puffin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756155289
|
liukevin666
| 2025-08-25T20:58:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:55:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nnilayy/dreamer-binary-valence-LOSO-Subject-12
|
nnilayy
| 2025-08-25T20:58:04Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-08-25T20:58:01Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
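For reference, models pushed this way are typically reloaded by mixing `PyTorchModelHubMixin` into the original `nn.Module`; a minimal sketch follows (the class name and constructor arguments are hypothetical, since the architecture is not documented here):

```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class MyModel(nn.Module, PyTorchModelHubMixin):  # hypothetical architecture
    def __init__(self, hidden_size: int = 128, num_classes: int = 2):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        return self.classifier(x)

# Reload the pushed weights (requires the matching class definition):
# model = MyModel.from_pretrained("nnilayy/dreamer-binary-valence-LOSO-Subject-12")
```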
|
usamachopra/UCPToo1
|
usamachopra
| 2025-08-25T20:56:08Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-25T20:56:08Z |
---
license: apache-2.0
---
|
golopper/blockassist-bc-deft_silent_flamingo_1756155347
|
golopper
| 2025-08-25T20:55:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deft silent flamingo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:55:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deft silent flamingo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756155323
|
Dejiat
| 2025-08-25T20:55:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:55:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756155257
|
ggozzy
| 2025-08-25T20:55:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:55:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756155254
|
bah63843
| 2025-08-25T20:55:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:54:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
biswac2021/blockassist-bc-wiry_patterned_clam_1756155171
|
biswac2021
| 2025-08-25T20:53:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry patterned clam",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:53:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry patterned clam
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
anuragabhi5/blockassist-bc-mute_gilded_macaw_1756155145
|
anuragabhi5
| 2025-08-25T20:53:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute gilded macaw",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:53:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute gilded macaw
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
murasaki35/headshot
|
murasaki35
| 2025-08-25T20:53:03Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-25T14:30:34Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: jc
---
# Headshot
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `jc` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate

input = {
    "prompt": "jc",
    "lora_weights": "https://huggingface.co/murasaki35/headshot/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('murasaki35/headshot', weight_name='lora.safetensors')
image = pipeline('jc').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
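As a hedged sketch of LoRA weighting and fusing with this adapter (the `lora_scale` value below is arbitrary):

```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('murasaki35/headshot', weight_name='lora.safetensors')

# Scale the LoRA's influence and bake it into the base weights.
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('jc').images[0]
pipeline.unfuse_lora()  # restore the original base weights
```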
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/murasaki35/headshot/discussions) to add images that show off what you’ve made with this LoRA.
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756155136
|
Dejiat
| 2025-08-25T20:52:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:52:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
q10/Qwen3-8B-Base-FP8
|
q10
| 2025-08-25T20:51:39Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"qwen3",
"text-generation",
"torchao",
"conversational",
"en",
"arxiv:2507.16099",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:quantized:Qwen/Qwen3-8B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-25T00:42:40Z |
---
base_model: Qwen/Qwen3-8B-Base
tags:
- transformers
- torchao
- qwen3
license: apache-2.0
language:
- en
---
# FP8 Qwen/Qwen3-8B-Base model
- **Developed by:** q10
- **License:** apache-2.0
- **Quantized from Model :** Qwen/Qwen3-8B-Base
- **Quantization Method :** FP8
# Inference with vLLM
Install vllm nightly and torchao nightly to get some recent changes:
```
pip install vllm --pre --extra-index-url https://wheels.vllm.ai/nightly
pip install torchao
```
## Serving
Then we can serve with the following command:
```Shell
# Server
export MODEL=q10/Qwen3-8B-Base-FP8
VLLM_DISABLE_COMPILE_CACHE=1 vllm serve $MODEL --tokenizer $MODEL -O3
```
```Shell
# Client
curl http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
  "model": "q10/Qwen3-8B-Base-FP8",
  "messages": [
    {"role": "user", "content": "Give me a short introduction to large language models."}
  ],
  "temperature": 0.6,
  "top_p": 0.95,
  "top_k": 20,
  "max_tokens": 32768
}'
```
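The server can also be queried from Python with any OpenAI-compatible client; a minimal sketch (assumes `pip install openai`; the `api_key` value is a dummy, since vLLM does not check it unless one is configured):

```python
from openai import OpenAI

# Point the client at the local vLLM server started above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="q10/Qwen3-8B-Base-FP8",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    temperature=0.6,
    top_p=0.95,
    max_tokens=1024,
)
print(resp.choices[0].message.content)
```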
Note: please use `VLLM_DISABLE_COMPILE_CACHE=1` to disable the compile cache when running this code, e.g. `VLLM_DISABLE_COMPILE_CACHE=1 python example.py`, since there are some issues with the composability of compile in vLLM and torchao; this is expected to be resolved in PyTorch 2.8.
# Inference with Transformers
Install the required packages:
```Shell
pip install git+https://github.com/huggingface/transformers@main
pip install torchao
pip install torch
pip install accelerate
```
Example:
```Py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "q10/Qwen3-8B-Base-FP8"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# parsing thinking content
try:
    # rindex finding 151668 (</think>)
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("thinking content:", thinking_content)
print("content:", content)
```
# Quantization Recipe
Install the required packages:
```Shell
pip install git+https://github.com/huggingface/transformers@main
pip install --pre torchao --index-url https://download.pytorch.org/whl/nightly/cu126
pip install torch
pip install accelerate
```
Use the following code to get the quantized model:
```Py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TorchAoConfig

model_id = "Qwen/Qwen3-8B-Base"
model_to_quantize = "Qwen/Qwen3-8B-Base"

from torchao.quantization import Float8DynamicActivationFloat8WeightConfig, PerRow

quant_config = Float8DynamicActivationFloat8WeightConfig(granularity=PerRow())
quantization_config = TorchAoConfig(quant_type=quant_config)
quantized_model = AutoModelForCausalLM.from_pretrained(model_to_quantize, device_map="auto", torch_dtype=torch.bfloat16, quantization_config=quantization_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Push to hub
USER_ID = "YOUR_USER_ID"
MODEL_NAME = model_id.split("/")[-1]
save_to = f"{USER_ID}/{MODEL_NAME}-FP8"
quantized_model.push_to_hub(save_to, safe_serialization=False)
tokenizer.push_to_hub(save_to)

# Manual Testing
prompt = "Hey, are you conscious? Can you talk to me?"
messages = [
    {
        "role": "system",
        "content": "",
    },
    {"role": "user", "content": prompt},
]
templated_prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
print("Prompt:", prompt)
print("Templated prompt:", templated_prompt)
inputs = tokenizer(
    templated_prompt,
    return_tensors="pt",
).to("cuda")
generated_ids = quantized_model.generate(**inputs, max_new_tokens=128)
output_text = tokenizer.batch_decode(
    generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print("Response:", output_text[0][len(prompt):])
```
Note: to `push_to_hub` you need to run
```Shell
pip install -U "huggingface_hub[cli]"
huggingface-cli login
```
and use a token with write access, from https://huggingface.co/settings/tokens
# Model Quality
We rely on [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate the quality of the quantized model. Here we only run mmlu as a sanity check.
| Benchmark | Qwen/Qwen3-8B-Base | q10/Qwen3-8B-Base-FP8 |
|-----------|--------------------|-----------------------|
| mmlu      | To be filled       | To be filled          |
<details>
<summary> Reproduce Model Quality Results </summary>
Need to install lm-eval from source:
https://github.com/EleutherAI/lm-evaluation-harness#install
## baseline
```Shell
lm_eval --model hf --model_args pretrained=Qwen/Qwen3-8B-Base --tasks mmlu --device cuda:0 --batch_size 8
```
## FP8
```Shell
export MODEL=q10/Qwen3-8B-Base-FP8
lm_eval --model hf --model_args pretrained=$MODEL --tasks mmlu --device cuda:0 --batch_size 8
```
</details>
# Peak Memory Usage
## Results
| Benchmark        | Qwen/Qwen3-8B-Base | q10/Qwen3-8B-Base-FP8       |
|------------------|--------------------|-----------------------------|
| Peak Memory (GB) | To be filled       | To be filled (?% reduction) |
<details>
<summary> Reproduce Peak Memory Usage Results </summary>
We can use the following code to get a sense of peak memory usage during inference:
```Py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TorchAoConfig

# use "Qwen/Qwen3-8B-Base" or "q10/Qwen3-8B-Base-FP8"
model_id = "q10/Qwen3-8B-Base-FP8"
quantized_model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_id)

torch.cuda.reset_peak_memory_stats()

prompt = "Hey, are you conscious? Can you talk to me?"
messages = [
    {
        "role": "system",
        "content": "",
    },
    {"role": "user", "content": prompt},
]
templated_prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
print("Prompt:", prompt)
print("Templated prompt:", templated_prompt)
inputs = tokenizer(
    templated_prompt,
    return_tensors="pt",
).to("cuda")
generated_ids = quantized_model.generate(**inputs, max_new_tokens=128)
output_text = tokenizer.batch_decode(
    generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print("Response:", output_text[0][len(prompt):])

mem = torch.cuda.max_memory_reserved() / 1e9
print(f"Peak Memory Usage: {mem:.02f} GB")
```
</details>
# Model Performance
## Results (A100 machine)
| Benchmark (Latency)    | Qwen/Qwen3-8B-Base | q10/Qwen3-8B-Base-FP8 |
|------------------------|--------------------|-----------------------|
| latency (batch_size=1) | ?s                 | ?s (?x speedup)       |
<details>
<summary> Reproduce Model Performance Results </summary>
## Setup
Get vllm source code:
```Shell
git clone [email protected]:vllm-project/vllm.git
```
Install vllm
```
VLLM_USE_PRECOMPILED=1 pip install --editable .
```
Run the benchmarks under `vllm` root folder:
## benchmark_latency
### baseline
```Shell
export MODEL=Qwen/Qwen3-8B-Base
python benchmarks/benchmark_latency.py --input-len 256 --output-len 256 --model $MODEL --batch-size 1
```
### FP8
```Shell
export MODEL=q10/Qwen3-8B-Base-FP8
VLLM_DISABLE_COMPILE_CACHE=1 python benchmarks/benchmark_latency.py --input-len 256 --output-len 256 --model $MODEL --batch-size 1
```
## benchmark_serving
We benchmarked the throughput in a serving environment.
Download sharegpt dataset:
```Shell
wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json
```
Other datasets can be found in: https://github.com/vllm-project/vllm/tree/main/benchmarks
Note: you can change the number of prompts to be benchmarked with `--num-prompts` argument for `benchmark_serving` script.
### baseline
Server:
```Shell
export MODEL=Qwen/Qwen3-8B-Base
vllm serve $MODEL --tokenizer $MODEL -O3
```
Client:
```Shell
export MODEL=Qwen/Qwen3-8B-Base
python benchmarks/benchmark_serving.py --backend vllm --dataset-name sharegpt --tokenizer $MODEL --dataset-path ./ShareGPT_V3_unfiltered_cleaned_split.json --model $MODEL --num-prompts 1
```
### FP8
Server:
```Shell
export MODEL=q10/Qwen3-8B-Base-FP8
VLLM_DISABLE_COMPILE_CACHE=1 vllm serve $MODEL --tokenizer $MODEL -O3 --pt-load-map-location cuda:0
```
Client:
```Shell
export MODEL=q10/Qwen3-8B-Base-FP8
python benchmarks/benchmark_serving.py --backend vllm --dataset-name sharegpt --tokenizer $MODEL --dataset-path ./ShareGPT_V3_unfiltered_cleaned_split.json --model $MODEL --num-prompts 1
```
</details>
# Paper: TorchAO: PyTorch-Native Training-to-Serving Model Optimization
The model's quantization is powered by **TorchAO**, a framework presented in the paper [TorchAO: PyTorch-Native Training-to-Serving Model Optimization](https://huggingface.co/papers/2507.16099).
**Abstract:** We present TorchAO, a PyTorch-native model optimization framework leveraging quantization and sparsity to provide an end-to-end, training-to-serving workflow for AI models. TorchAO supports a variety of popular model optimization techniques, including FP8 quantized training, quantization-aware training (QAT), post-training quantization (PTQ), and 2:4 sparsity, and leverages a novel tensor subclass abstraction to represent a variety of widely-used, backend agnostic low precision data types, including INT4, INT8, FP8, MXFP4, MXFP6, and MXFP8. TorchAO integrates closely with the broader ecosystem at each step of the model optimization pipeline, from pre-training (TorchTitan) to fine-tuning (TorchTune, Axolotl) to serving (HuggingFace, vLLM, SGLang, ExecuTorch), connecting an otherwise fragmented space in a single, unified workflow. TorchAO has enabled recent launches of the quantized Llama 3.2 1B/3B and LlamaGuard3-8B models and is open-source at this https URL .
# Resources
* **Official TorchAO GitHub Repository:** [https://github.com/pytorch/ao](https://github.com/pytorch/ao)
* **TorchAO Documentation:** [https://docs.pytorch.org/ao/stable/index.html](https://docs.pytorch.org/ao/stable/index.html)
# Disclaimer
PyTorch has not performed safety evaluations or red teamed the quantized models. Performance characteristics, outputs, and behaviors may differ from the original models. Users are solely responsible for selecting appropriate use cases, evaluating and mitigating for accuracy, safety, and fairness, ensuring security, and complying with all applicable laws and regulations.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the licenses the models are released under, including any limitations of liability or disclaimers of warranties provided therein.
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756155018
|
ggozzy
| 2025-08-25T20:51:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:51:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
indoempatnol/blockassist-bc-fishy_wary_swan_1756153398
|
indoempatnol
| 2025-08-25T20:51:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:51:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tayfun26/blockassist-bc-squinting_freckled_grouse_1756155030
|
tayfun26
| 2025-08-25T20:51:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"squinting freckled grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:51:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- squinting freckled grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1756153469
|
mang3dd
| 2025-08-25T20:50:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:50:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Genuine-Zeth-4B-GGUF
|
mradermacher
| 2025-08-25T20:50:05Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"lora",
"sft",
"trl",
"unsloth",
"fine-tuned",
"en",
"dataset:theprint/Gentle-Pushback-8.5k-alpaca",
"base_model:theprint/Genuine-Zeth-4B",
"base_model:adapter:theprint/Genuine-Zeth-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-25T20:03:13Z |
---
base_model: theprint/Genuine-Zeth-4B
datasets:
- theprint/Gentle-Pushback-8.5k-alpaca
language: en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- lora
- sft
- transformers
- trl
- unsloth
- fine-tuned
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: 1 -->
static quants of https://huggingface.co/theprint/Genuine-Zeth-4B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Genuine-Zeth-4B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Genuine-Zeth-4B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
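As a hedged illustration, any of the quants listed below can also be loaded directly with llama-cpp-python (assumes `pip install llama-cpp-python` and a local download of the Q4_K_M file from this repo):

```python
from llama_cpp import Llama

# Load the locally downloaded quant; n_ctx is an arbitrary context size.
llm = Llama(model_path="Genuine-Zeth-4B.Q4_K_M.gguf", n_ctx=4096)
out = llm("Summarize what GGUF quantization does in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```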
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Genuine-Zeth-4B-GGUF/resolve/main/Genuine-Zeth-4B.Q2_K.gguf) | Q2_K | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Genuine-Zeth-4B-GGUF/resolve/main/Genuine-Zeth-4B.Q3_K_S.gguf) | Q3_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Genuine-Zeth-4B-GGUF/resolve/main/Genuine-Zeth-4B.Q3_K_M.gguf) | Q3_K_M | 2.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Genuine-Zeth-4B-GGUF/resolve/main/Genuine-Zeth-4B.Q3_K_L.gguf) | Q3_K_L | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Genuine-Zeth-4B-GGUF/resolve/main/Genuine-Zeth-4B.IQ4_XS.gguf) | IQ4_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Genuine-Zeth-4B-GGUF/resolve/main/Genuine-Zeth-4B.Q4_K_S.gguf) | Q4_K_S | 2.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Genuine-Zeth-4B-GGUF/resolve/main/Genuine-Zeth-4B.Q4_K_M.gguf) | Q4_K_M | 3.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Genuine-Zeth-4B-GGUF/resolve/main/Genuine-Zeth-4B.Q5_K_S.gguf) | Q5_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Genuine-Zeth-4B-GGUF/resolve/main/Genuine-Zeth-4B.Q5_K_M.gguf) | Q5_K_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Genuine-Zeth-4B-GGUF/resolve/main/Genuine-Zeth-4B.Q6_K.gguf) | Q6_K | 3.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Genuine-Zeth-4B-GGUF/resolve/main/Genuine-Zeth-4B.Q8_0.gguf) | Q8_0 | 4.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Genuine-Zeth-4B-GGUF/resolve/main/Genuine-Zeth-4B.f16.gguf) | f16 | 9.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756154915
|
Dejiat
| 2025-08-25T20:49:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:49:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756154879
|
bah63843
| 2025-08-25T20:48:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:48:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mohda/blockassist-bc-regal_fierce_hummingbird_1756154841
|
mohda
| 2025-08-25T20:48:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"regal fierce hummingbird",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:48:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal fierce hummingbird
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lautan/blockassist-bc-gentle_patterned_goat_1756153329
|
lautan
| 2025-08-25T20:48:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle patterned goat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:48:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle patterned goat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
znaal/blockassist-bc-tropical_grunting_salamander_1756154726
|
znaal
| 2025-08-25T20:47:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tropical grunting salamander",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:47:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tropical grunting salamander
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
y22ma/gemma3-270m-router
|
y22ma
| 2025-08-25T20:46:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/gemma-3-270m-it",
"base_model:finetune:unsloth/gemma-3-270m-it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-25T20:45:15Z |
---
base_model: unsloth/gemma-3-270m-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** y22ma
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-270m-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756154708
|
Dejiat
| 2025-08-25T20:45:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:45:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lemonhat/Qwen2.5-7B-Instruct-t1_5k_v1_tag5_hermes
|
lemonhat
| 2025-08-25T20:44:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-25T20:33:21Z |
---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: t1_5k_v1_tag5_hermes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t1_5k_v1_tag5_hermes
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the t1_5k_v1_tag5_hermes dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.2918 | 0.6897 | 100 | 0.2926 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
bah63843/blockassist-bc-plump_fast_antelope_1756154613
|
bah63843
| 2025-08-25T20:44:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:44:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756154531
|
Dejiat
| 2025-08-25T20:42:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:42:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
iacmc85/llama3ponto2-1b-train07
|
iacmc85
| 2025-08-25T20:41:57Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-25T20:30:28Z |
---
base_model: unsloth/llama-3.2-1b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** iacmc85
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
silverside/HAC_d
|
silverside
| 2025-08-25T20:41:47Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-25T18:18:15Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ANML_STYLE
---
# Hac_D
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ANML_STYLE` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate

input = {
    "prompt": "ANML_STYLE",
    "lora_weights": "https://huggingface.co/silverside/HAC_d/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('silverside/HAC_d', weight_name='lora.safetensors')
image = pipeline('ANML_STYLE').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0001
- LoRA rank: 32
## Contribute your own examples
You can use the [community tab](https://huggingface.co/silverside/HAC_d/discussions) to add images that show off what you’ve made with this LoRA.
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756154389
|
Dejiat
| 2025-08-25T20:40:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:40:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
boopmoor/blockassist-bc-soft_keen_trout_1756154358
|
boopmoor
| 2025-08-25T20:39:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"soft keen trout",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:39:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- soft keen trout
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756154300
|
ggozzy
| 2025-08-25T20:39:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:39:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756154265
|
bah63843
| 2025-08-25T20:38:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:38:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pempekmangedd/blockassist-bc-patterned_sturdy_dolphin_1756152871
|
pempekmangedd
| 2025-08-25T20:38:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"patterned sturdy dolphin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:38:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- patterned sturdy dolphin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
QuantStack/InternVL3_5-1B-Pretrained-gguf
|
QuantStack
| 2025-08-25T20:37:14Z | 0 | 0 | null |
[
"gguf",
"base_model:OpenGVLab/InternVL3_5-1B-Pretrained",
"base_model:quantized:OpenGVLab/InternVL3_5-1B-Pretrained",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-25T20:25:59Z |
---
license: apache-2.0
base_model:
- OpenGVLab/InternVL3_5-1B-Pretrained
---
This is basically a test to see whether conversion and inference in llama.cpp work fine.
It seems to work, though I won't add more quant sizes for now.
Since this is merely a quantization of the original model, the original model's license still applies!
|
madbro/blockassist-bc-whistling_curious_puffin_1756154184
|
madbro
| 2025-08-25T20:37:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"whistling curious puffin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:37:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whistling curious puffin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Phlor/mary
|
Phlor
| 2025-08-25T20:36:47Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-25T20:11:37Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: mary
---
# Mary
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `mary` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate

input = {
    "prompt": "mary",
    "lora_weights": "https://huggingface.co/Phlor/mary/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Phlor/mary', weight_name='lora.safetensors')
image = pipeline('mary').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Phlor/mary/discussions) to add images that show off what you’ve made with this LoRA.
|
g-assismoraes/Qwen3-4B-Base-interp-perm-alpha0.5-var-hatebr
|
g-assismoraes
| 2025-08-25T20:35:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-25T20:20:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
QuantStack/InternVL3_5-1B-Instruct-gguf
|
QuantStack
| 2025-08-25T20:34:56Z | 0 | 0 | null |
[
"gguf",
"base_model:OpenGVLab/InternVL3_5-1B-Instruct",
"base_model:quantized:OpenGVLab/InternVL3_5-1B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-25T19:43:41Z |
---
license: apache-2.0
base_model:
- OpenGVLab/InternVL3_5-1B-Instruct
---
This is basically a test to see whether the conversion and inference in llama.cpp work fine.
It seems to work, though I won't add more quant sizes for now.
Since this is merely a quantization of the original model, the license of the original model still applies!
|
0xStarChaser/blockassist-bc-feathered_foraging_cod_1756153899
|
0xStarChaser
| 2025-08-25T20:33:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"feathered foraging cod",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:32:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- feathered foraging cod
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
znaal/blockassist-bc-tropical_grunting_salamander_1756153774
|
znaal
| 2025-08-25T20:32:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tropical grunting salamander",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:32:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tropical grunting salamander
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ectorbravo/blockassist-bc-rapid_freckled_cod_1756153873
|
ectorbravo
| 2025-08-25T20:31:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rapid freckled cod",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:31:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rapid freckled cod
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756153822
|
ggozzy
| 2025-08-25T20:31:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:31:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
0xFarzad/model_checkpoint
|
0xFarzad
| 2025-08-25T20:31:31Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-25T20:08:55Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: model_checkpoint
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for model_checkpoint
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="0xFarzad/model_checkpoint", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
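For orientation, a minimal TRL SFT sketch consistent with the framework versions below; the dataset and hyperparameters here are placeholders, not the actual training configuration:
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; the real training data is not documented in this card.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="google/gemma-3-270m-it",
    train_dataset=dataset,
    args=SFTConfig(output_dir="model_checkpoint"),
)
trainer.train()
```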
### Framework versions
- TRL: 0.19.0
- Transformers: 4.53.1
- Pytorch: 2.7.1+cu126
- Datasets: 3.6.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Chukky10z/blockassist-bc-mammalian_jumping_cougar_1756153822
|
Chukky10z
| 2025-08-25T20:31:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mammalian jumping cougar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:30:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mammalian jumping cougar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fbaldassarri/EleutherAI_pythia-12b-deduped-autoround-int8-gs64-asym
|
fbaldassarri
| 2025-08-25T20:30:59Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"pytorch",
"causal-lm",
"pythia",
"autoround",
"intel-autoround",
"auto-round",
"intel",
"woq",
"eleutheraI",
"text-generation",
"en",
"dataset:EleutherAI/pile",
"base_model:EleutherAI/pythia-12b-deduped",
"base_model:quantized:EleutherAI/pythia-12b-deduped",
"license:apache-2.0",
"8-bit",
"region:us"
] |
text-generation
| 2025-08-25T19:55:18Z |
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- autoround
- intel-autoround
- auto-round
- intel
- woq
- eleutheraI
license: apache-2.0
model_name: Pythia 12b deduped
base_model: EleutherAI/pythia-12b-deduped
inference: false
model_creator: EleutherAI
datasets:
- EleutherAI/pile
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: fbaldassarri
---
## Model Information
Quantized version of [EleutherAI/pythia-12b-deduped](https://huggingface.co/EleutherAI/pythia-12b-deduped) using torch.float32 for quantization tuning.
- 8 bits (INT8)
- group size = 64
- Asymmetrical Quantization
- Method WoQ: SignRound (AutoRound algorithm)
Fast and low memory, 2-3X speedup (slight accuracy drop at W8G64)
Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.5.1
Note: this INT8 version of pythia-12b-deduped has been quantized to run inference on CPU.
## Replication Recipe
### Step 1 Install Requirements
I suggest installing the requirements into a dedicated Python virtualenv or a conda environment.
```
wget https://github.com/intel/auto-round/archive/refs/tags/v0.5.1.tar.gz
tar -xvzf v0.5.1.tar.gz
cd auto-round-0.5.1
pip install -r requirements-cpu.txt --upgrade
```
### Step 2 Build Intel AutoRound wheel from sources
```
pip install -vvv --no-build-isolation -e .[cpu]
```
### Step 3 Script for Quantization
```
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "EleutherAI/pythia-12b-deduped"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
from auto_round import AutoRound
bits, group_size, sym, device, amp = 8, 64, False, 'cpu', False
autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp)
autoround.quantize()
output_dir = "./AutoRound/EleutherAI_pythia-12b-deduped-autoround-int8-gs64-asym"
autoround.save_quantized(output_dir, format='auto_round', inplace=True)
```
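### Step 4 Inference Sketch (CPU)
A hedged follow-up over the quantized output directory; it assumes the installed auto-round runtime lets transformers resolve the `auto_round` format at load time:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

quantized_dir = "./AutoRound/EleutherAI_pythia-12b-deduped-autoround-int8-gs64-asym"
tokenizer = AutoTokenizer.from_pretrained(quantized_dir)
model = AutoModelForCausalLM.from_pretrained(quantized_dir, device_map="cpu")

inputs = tokenizer("The Pile is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```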
## License
[Apache 2.0 License](https://choosealicense.com/licenses/apache-2.0/)
## Disclaimer
This quantized model comes with no warranty. It has been developed only for research purposes.
|
shadowvibec/blockassist-bc-swift_pudgy_squirrel_1756153709
|
shadowvibec
| 2025-08-25T20:29:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"swift pudgy squirrel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:28:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- swift pudgy squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
madbro/blockassist-bc-whistling_curious_puffin_1756153703
|
madbro
| 2025-08-25T20:29:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"whistling curious puffin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:28:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whistling curious puffin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
chainway9/blockassist-bc-untamed_quick_eel_1756152011
|
chainway9
| 2025-08-25T20:28:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:27:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Rudra-madlads/blockassist-bc-jumping_swift_gazelle_1756153530
|
Rudra-madlads
| 2025-08-25T20:26:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"jumping swift gazelle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:26:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- jumping swift gazelle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Jadren/blockassist-bc-leggy_mute_tarantula_1756151506
|
Jadren
| 2025-08-25T20:26:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"leggy mute tarantula",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:26:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- leggy mute tarantula
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jyhtgyhg/New.full.videos.fooni.fun.Viral.Video.Official.Tutorial
|
jyhtgyhg
| 2025-08-25T20:25:45Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-25T20:21:35Z |
[🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/)
|
jyhtgyhg/WATCH.FULL.VIDEOS.AFRIN.ER.LINK.VIRAL.1.24.VIRAL.AFRIN.AR.LINK
|
jyhtgyhg
| 2025-08-25T20:25:42Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-25T20:21:24Z |
[🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/)
|
jyhtgyhg/New.full.videos.Mwaka.Halwiindi.Viral.Video.Official.Tutorial
|
jyhtgyhg
| 2025-08-25T20:25:36Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-25T20:21:14Z |
[🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/)
|
rvipitkirubbe/blockassist-bc-mottled_foraging_ape_1756151874
|
rvipitkirubbe
| 2025-08-25T20:25:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mottled foraging ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:25:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mottled foraging ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mohda/blockassist-bc-regal_fierce_hummingbird_1756153403
|
mohda
| 2025-08-25T20:24:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"regal fierce hummingbird",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:24:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal fierce hummingbird
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Goopua/blockassist-bc-invisible_mottled_aardvark_1756153372
|
Goopua
| 2025-08-25T20:24:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"invisible mottled aardvark",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:24:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- invisible mottled aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/OctoThinker-1B-Long-Base-GGUF
|
mradermacher
| 2025-08-25T20:23:58Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"dataset:OctoThinker/MegaMath-Web-Pro-Max",
"dataset:LLM360/MegaMath",
"base_model:OctoThinker/OctoThinker-1B-Long-Base",
"base_model:quantized:OctoThinker/OctoThinker-1B-Long-Base",
"license:llama3.2",
"endpoints_compatible",
"region:us"
] | null | 2025-08-25T20:05:28Z |
---
base_model: OctoThinker/OctoThinker-1B-Long-Base
datasets:
- OctoThinker/MegaMath-Web-Pro-Max
- LLM360/MegaMath
language:
- en
library_name: transformers
license: llama3.2
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/OctoThinker/OctoThinker-1B-Long-Base
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#OctoThinker-1B-Long-Base-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
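As a concrete (hedged) example, the `llama-cpp-python` binding can load one of the files from the table below; the file name here is the Q4_K_M quant listed there:
```python
# pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(model_path="OctoThinker-1B-Long-Base.Q4_K_M.gguf", n_ctx=2048)
out = llm("1 + 1 =", max_tokens=16)
print(out["choices"][0]["text"])
```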
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OctoThinker-1B-Long-Base-GGUF/resolve/main/OctoThinker-1B-Long-Base.Q2_K.gguf) | Q2_K | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/OctoThinker-1B-Long-Base-GGUF/resolve/main/OctoThinker-1B-Long-Base.Q3_K_S.gguf) | Q3_K_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/OctoThinker-1B-Long-Base-GGUF/resolve/main/OctoThinker-1B-Long-Base.Q3_K_M.gguf) | Q3_K_M | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OctoThinker-1B-Long-Base-GGUF/resolve/main/OctoThinker-1B-Long-Base.Q3_K_L.gguf) | Q3_K_L | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/OctoThinker-1B-Long-Base-GGUF/resolve/main/OctoThinker-1B-Long-Base.IQ4_XS.gguf) | IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/OctoThinker-1B-Long-Base-GGUF/resolve/main/OctoThinker-1B-Long-Base.Q4_K_S.gguf) | Q4_K_S | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OctoThinker-1B-Long-Base-GGUF/resolve/main/OctoThinker-1B-Long-Base.Q4_K_M.gguf) | Q4_K_M | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OctoThinker-1B-Long-Base-GGUF/resolve/main/OctoThinker-1B-Long-Base.Q5_K_S.gguf) | Q5_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/OctoThinker-1B-Long-Base-GGUF/resolve/main/OctoThinker-1B-Long-Base.Q5_K_M.gguf) | Q5_K_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/OctoThinker-1B-Long-Base-GGUF/resolve/main/OctoThinker-1B-Long-Base.Q6_K.gguf) | Q6_K | 1.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/OctoThinker-1B-Long-Base-GGUF/resolve/main/OctoThinker-1B-Long-Base.Q8_0.gguf) | Q8_0 | 1.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/OctoThinker-1B-Long-Base-GGUF/resolve/main/OctoThinker-1B-Long-Base.f16.gguf) | f16 | 2.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756153345
|
ggozzy
| 2025-08-25T20:23:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:23:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RandipR/phi3-booksum-summarizer
|
RandipR
| 2025-08-25T20:23:24Z | 0 | 0 |
transformers
|
[
"transformers",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-25T20:23:23Z |
---
base_model: unsloth/phi-3-mini-4k-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** RandipR
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
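A hedged inference sketch with Unsloth's loader; the prompt format is an assumption, since the card does not document one:
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "RandipR/phi3-booksum-summarizer", load_in_4bit=True
)
FastLanguageModel.for_inference(model)  # enable fast inference mode

inputs = tokenizer("Summarize: Once upon a time ...", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```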
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756153297
|
liukevin666
| 2025-08-25T20:22:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:22:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnonymousCS/populism_classifier_025
|
AnonymousCS
| 2025-08-25T20:22:35Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-25T18:37:25Z |
---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: populism_classifier_025
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# populism_classifier_025
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6063
- Accuracy: 0.9265
- 1-f1: 0.3636
- 1-recall: 0.4
- 1-precision: 0.3333
- Balanced Acc: 0.6778
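For a quick smoke test, the standard transformers pipeline applies; note the label names come from whatever mapping the trainer saved, which this card does not document:
```python
from transformers import pipeline

clf = pipeline("text-classification", model="AnonymousCS/populism_classifier_025")
print(clf("The corrupt elites have betrayed ordinary people."))
```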
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.3368 | 1.0 | 21 | 0.4284 | 0.9040 | 0.4182 | 0.6571 | 0.3067 | 0.7874 |
| 0.345 | 2.0 | 42 | 0.4344 | 0.8951 | 0.3396 | 0.5143 | 0.2535 | 0.7152 |
| 0.2746 | 3.0 | 63 | 0.6063 | 0.9265 | 0.3636 | 0.4 | 0.3333 | 0.6778 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
bah63843/blockassist-bc-plump_fast_antelope_1756153305
|
bah63843
| 2025-08-25T20:22:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:22:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
shadowvibec/blockassist-bc-swift_pudgy_squirrel_1756153310
|
shadowvibec
| 2025-08-25T20:22:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"swift pudgy squirrel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:22:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- swift pudgy squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IeBoytsov/ox-llms-sula-dpo-new
|
IeBoytsov
| 2025-08-25T20:21:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-25T19:10:42Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: transformers
parameters: 8.03B
model_name: ox-llms-sula-dpo-new
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for ox-llms-sula-dpo-new
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="IeBoytsov/ox-llms-sula-dpo-new", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ilyaboytsov1805/huggingface/runs/xrneznv4)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
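For orientation, a minimal TRL DPO sketch consistent with the framework versions below; the preference dataset here is a placeholder, not the actual training data:
```python
from datasets import load_dataset
from trl import DPOConfig, DPOTrainer

# Placeholder preference dataset; the real one is not documented in this card.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = DPOTrainer(
    model="meta-llama/Llama-3.1-8B-Instruct",
    train_dataset=dataset,
    args=DPOConfig(output_dir="ox-llms-sula-dpo-new"),
)
trainer.train()
```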
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.4
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
frhb/blockassist-bc-pale_quick_salmon_1756153179
|
frhb
| 2025-08-25T20:20:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pale quick salmon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:20:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pale quick salmon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
0xStarChaser/blockassist-bc-feathered_foraging_cod_1756153185
|
0xStarChaser
| 2025-08-25T20:20:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"feathered foraging cod",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:20:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- feathered foraging cod
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
youryoui/blockassist-bc-untamed_aquatic_antelope_1756153184
|
youryoui
| 2025-08-25T20:19:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed aquatic antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:19:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed aquatic antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1756151559
|
mang3dd
| 2025-08-25T20:18:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:18:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
TOTORONG/Deepseek-V3.1-mlx-3.12bit
|
TOTORONG
| 2025-08-25T20:17:48Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"deepseek_v3",
"text-generation",
"conversational",
"custom_code",
"base_model:deepseek-ai/DeepSeek-V3.1",
"base_model:quantized:deepseek-ai/DeepSeek-V3.1",
"license:mit",
"4-bit",
"region:us"
] |
text-generation
| 2025-08-25T19:56:56Z |
---
license: mit
library_name: mlx
base_model: deepseek-ai/DeepSeek-V3.1
pipeline_tag: text-generation
tags:
- mlx
---
# TOTORONG/deepseek_V3.1
This model [TOTORONG/deepseek_V3.1](https://huggingface.co/TOTORONG/deepseek_V3.1) was
converted to MLX format from [deepseek-ai/DeepSeek-V3.1](https://huggingface.co/deepseek-ai/DeepSeek-V3.1)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("TOTORONG/deepseek_V3.1")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
afrinhhg/wATCH.afrin.viral.video.original
|
afrinhhg
| 2025-08-25T20:17:36Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-25T20:12:48Z |
[🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/)
|
afrinhhg/VIRAL.afrin.Viral.Video.Fulls.Original.Video.Social.Media.X
|
afrinhhg
| 2025-08-25T20:17:28Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-25T20:15:12Z |
[🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/)
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1756151466
|
ihsanridzi
| 2025-08-25T20:17:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry flexible owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:17:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1756151265
|
sampingkaca72
| 2025-08-25T20:16:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:16:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756152867
|
ggozzy
| 2025-08-25T20:15:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:15:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nnilayy/dreamer-binary-valence-LOSO-Subject-11
|
nnilayy
| 2025-08-25T20:15:37Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-08-25T20:15:33Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
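Since the model class itself is not published here, the snippet below only illustrates the mixin's save/load round trip with a stand-in module (everything in it is hypothetical):
```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class MyModel(nn.Module, PyTorchModelHubMixin):  # stand-in architecture
    def __init__(self, hidden: int = 8):
        super().__init__()
        self.net = nn.Linear(hidden, 2)

model = MyModel(hidden=8)
model.save_pretrained("local-dir")        # writes config.json + model.safetensors
reloaded = MyModel.from_pretrained("local-dir")
```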
|
ReasoningTransferability/UniReason-Qwen3-14B-no-think-SFT
|
ReasoningTransferability
| 2025-08-25T20:15:22Z | 105 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"math-reasoning",
"transferability",
"Distill from Qwen3-32B-Instruct (non-thinking mode) through Reject Sampling",
"research-paper",
"conversational",
"en",
"dataset:math",
"dataset:reasoning",
"arxiv:2507.00432",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-05T00:54:44Z |
---
base_model: Qwen3-14B-Base
datasets:
- math
- reasoning
language: en
license: apache-2.0
pipeline_tag: text-generation
tags:
- text-generation
- math-reasoning
- transferability
- Distill from Qwen3-32B-Instruct (non-thinking mode) through Reject Sampling
- research-paper
- qwen3
arxiv: 2507.00432
library_name: transformers
---
# UniReason-Qwen3-14B-no-think-SFT
This model is associated with the research paper:
**"Does Math Reasoning Improve General LLM Capabilities? Understanding Transferability of LLM Reasoning"**
📄 **Paper**: [2507.00432](https://arxiv.org/abs/2507.00432)
📚 **Code**: [https://github.com/ReasoningTransfer/Transferability-of-LLM-Reasoning](https://github.com/ReasoningTransfer/Transferability-of-LLM-Reasoning)
## Model Description
This model is a version of Qwen3-14B-Base **distilled from Qwen3-32B-Instruct (non-thinking mode) through reject sampling**, focused on **math-reasoning** capabilities.
The model was developed as part of research investigating the transferability of mathematical reasoning skills to general language tasks.
### Key Research Questions Addressed:
- Does math reasoning training improve general LLM capabilities?
- How do different training methods (RL vs SFT) affect transferability?
- What is the trade-off between specialized math performance and general capabilities?
## Model Details
- **Base Model**: Qwen3-14B-Base
- **Training Method**: Distillation from Qwen3-32B-Instruct (non-thinking mode) through reject sampling
- **Primary Focus**: math-reasoning
- **Training Data**: Math-specific datasets
- **Architecture**: Transformer-based language model
- **Parameters**: 14B
## Training Details
### Training Method: Distillation from Qwen3-32B-Instruct (non-thinking mode) through reject sampling
Custom training methodology - see paper for details.
### Datasets Used
- Mathematical reasoning datasets
- See paper for complete dataset list
## Performance
### Math Reasoning Benchmarks
- **MATH**: See paper
- **AIME**: See paper
### General Capabilities
- **General QA**: See paper
- **Code Generation**: See paper
- **Instruction Following**: See paper
*For detailed performance metrics, please refer to the paper.*
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load model and tokenizer
model_name = "ReasoningTransferability/UniReason-Qwen3-14B-no-think-SFT"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map="auto"
)
# Example: Math reasoning
math_prompt = "Solve this step by step: What is the derivative of x^3 + 2x^2 - 5x + 1?"
inputs = tokenizer(math_prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=32768, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
# Example: General reasoning
general_prompt = "Explain the concept of supply and demand in economics."
inputs = tokenizer(general_prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=32768, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Limitations and Biases
- **Specialization Trade-offs**: As explored in the paper, models optimized for math reasoning may show reduced performance on general tasks
- **Training Method Dependencies**: Performance characteristics vary significantly between RL and SFT training approaches
- **Domain Transfer**: The extent of capability transfer from math to other domains is limited
- **Computational Requirements**: Model requires significant computational resources for inference
## Research Findings
Key findings from the associated paper:
1. **RL vs SFT**: RL-tuned models show better transfer to general domains compared to SFT-tuned models
2. **Capability Trade-offs**: Most math-specialized models fail to transfer gains to other domains
3. **Forgetting**: SFT-tuned models often forget general capabilities during math-focused training
## Ethical Considerations
- This model is intended for research purposes
- Users should be aware of potential biases in mathematical and general reasoning
- The model should not be used for making critical decisions without human oversight
- Consider the environmental impact of large model inference
## Citation
If you use this model in your research, please cite both the model and the associated paper:
```bibtex
@misc{huan2025doesmathreasoningimprove,
title={Does Math Reasoning Improve General LLM Capabilities? Understanding Transferability of LLM Reasoning},
author={Maggie Huan and Yuetai Li and Tuney Zheng and Xiaoyu Xu and Seungone Kim and Minxin Du and Radha Poovendran and Graham Neubig and Xiang Yue},
year={2025},
eprint={2507.00432},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2507.00432},
}
```
## Contact
For questions about this model or the associated research, please:
- Open an issue in this repository
- Contact the paper authors
- Reference the original paper: https://arxiv.org/abs/2507.00432
## Acknowledgments
This work builds upon the research presented in "Does Math Reasoning Improve General LLM Capabilities? Understanding Transferability of LLM Reasoning" and uses the Qwen3-14B-Base architecture as its foundation.
---
*Model uploaded on 2025-07-05*
|
alekgomez/falcon3b-ft-2508
|
alekgomez
| 2025-08-25T20:14:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:tiiuae/Falcon3-3B-Instruct",
"base_model:finetune:tiiuae/Falcon3-3B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-25T20:12:42Z |
---
base_model: tiiuae/Falcon3-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** alekgomez
- **License:** apache-2.0
- **Finetuned from model :** tiiuae/Falcon3-3B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756152635
|
liukevin666
| 2025-08-25T20:14:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:11:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756152856
|
Dejiat
| 2025-08-25T20:14:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:14:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ReasoningTransferability/UniReason-Qwen3-14B-RL
|
ReasoningTransferability
| 2025-08-25T20:14:21Z | 122 | 3 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"math-reasoning",
"transferability",
"RL-GRPO",
"research-paper",
"qwen",
"conversational",
"en",
"dataset:math",
"dataset:reasoning",
"arxiv:2507.00432",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-03T17:11:48Z |
---
base_model: qwen3-14b
datasets:
- math
- reasoning
language: en
license: apache-2.0
pipeline_tag: text-generation
tags:
- text-generation
- math-reasoning
- transferability
- RL-GRPO
- research-paper
- qwen
arxiv: 2507.00432
library_name: transformers
---
# UniReason-Qwen3-14B-RL
This model is associated with the research paper:
**"Does Math Reasoning Improve General LLM Capabilities? Understanding Transferability of LLM Reasoning"**
📄 **Paper**: [2507.00432](https://arxiv.org/abs/2507.00432)
💻 **Code**: [https://github.com/ReasoningTransfer/Transferability-of-LLM-Reasoning](https://github.com/ReasoningTransfer/Transferability-of-LLM-Reasoning)
## Abstract
Math reasoning has become the poster child of progress in large language models (LLMs), with new models rapidly surpassing human-level performance on benchmarks like MATH and AIME. But as math leaderboards improve week by week, it is worth asking: do these gains reflect broader problem-solving ability or just narrow overfitting?
## Model Description
This model is a **RL-GRPO**-tuned version of qwen3-14b focused on **math-reasoning** capabilities.
The model was developed as part of research investigating the transferability of mathematical reasoning skills to general language tasks.
### Key Research Questions Addressed:
- Does math reasoning training improve general LLM capabilities?
- How do different training methods (RL vs SFT) affect transferability?
- What is the trade-off between specialized math performance and general capabilities?
## Model Details
- **Base Model**: qwen3-14b
- **Training Method**: RL-GRPO
- **Primary Focus**: math-reasoning
- **Training Data**: Math-specific datasets
- **Architecture**: Transformer-based language model
- **Parameters**: 14B
## Training Details
### Training Method: RL-GRPO
Custom training methodology - see paper for details.
### Datasets Used
- Mathematical reasoning datasets
- See paper for complete dataset list
## Performance
### Math Reasoning Benchmarks
- **MATH**: See paper
- **AIME**: See paper
### General Capabilities
- **General QA**: See paper
- **Code Generation**: See paper
- **Instruction Following**: See paper
*For detailed performance metrics, please refer to the paper.*
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load model and tokenizer
model_name = "ReasoningTransferability/UniReason-Qwen3-14B-RL"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map="auto"
)
# Example: Math reasoning
math_prompt = "Solve this step by step: What is the derivative of x^3 + 2x^2 - 5x + 1?"
inputs = tokenizer(math_prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=512, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
# Example: General reasoning
general_prompt = "Explain the concept of supply and demand in economics."
inputs = tokenizer(general_prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=512, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Limitations and Biases
- **Specialization Trade-offs**: As explored in the paper, models optimized for math reasoning may show reduced performance on general tasks
- **Training Method Dependencies**: Performance characteristics vary significantly between RL and SFT training approaches
- **Domain Transfer**: The extent of capability transfer from math to other domains is limited
- **Computational Requirements**: Model requires significant computational resources for inference
## Research Findings
Key findings from the associated paper:
1. **RL vs SFT**: RL-tuned models show better transfer to general domains compared to SFT-tuned models
2. **Capability Trade-offs**: Most math-specialized models fail to transfer gains to other domains
3. **Forgetting**: SFT-tuned models often forget general capabilities during math-focused training
## Ethical Considerations
- This model is intended for research purposes
- Users should be aware of potential biases in mathematical and general reasoning
- The model should not be used for making critical decisions without human oversight
- Consider the environmental impact of large model inference
## Citation
If you use this model in your research, please cite both the model and the associated paper:
```bibtex
@misc{huan2025doesmathreasoningimprove,
title={Does Math Reasoning Improve General LLM Capabilities? Understanding Transferability of LLM Reasoning},
author={Maggie Huan and Yuetai Li and Tuney Zheng and Xiaoyu Xu and Seungone Kim and Minxin Du and Radha Poovendran and Graham Neubig and Xiang Yue},
year={2025},
eprint={2507.00432},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2507.00432},
}
```
## Contact
For questions about this model or the associated research, please:
- Open an issue in this repository
- Contact the paper authors
- Reference the original paper: https://arxiv.org/abs/2507.00432
## Acknowledgments
This work builds upon the research presented in "Does Math Reasoning Improve General LLM Capabilities? Understanding Transferability of LLM Reasoning" and uses the qwen3-14b architecture as its foundation.
---
*Model uploaded on 2025-07-03*
|
coastalcph/Qwen2.5-7B-4t_diff_sycophant_800exs
|
coastalcph
| 2025-08-25T20:13:25Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2025-08-25T20:10:51Z |
# Combined Task Vector Model
This model was created by combining task vectors from multiple fine-tuned models.
## Task Vector Computation
```python
t_1 = TaskVector("Qwen/Qwen2.5-7B-Instruct", "Qwen/Qwen2.5-7B-Instruct")
t_2 = TaskVector("Qwen/Qwen2.5-7B-Instruct", "coastalcph/Qwen2.5-7B-personality-non-sycophancy_800exs")
t_3 = TaskVector("Qwen/Qwen2.5-7B-Instruct", "coastalcph/Qwen2.5-7B-personality-sycophancy_800exs")
t_combined = 1.0 * t_1 + 4.0 * t_2 - 4.0 * t_3
new_model = t_combined.apply_to("Qwen/Qwen2.5-7B-Instruct", scaling_coef=1.0)
```
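For reference, a minimal sketch of what this arithmetic means at the state-dict level; the `TaskVector` helper used by the creation script is not shown here, so this re-implementation is an assumption:
```python
import torch
from transformers import AutoModelForCausalLM

def task_vector(base_id: str, tuned_id: str) -> dict[str, torch.Tensor]:
    # A task vector is the elementwise difference: finetuned - base.
    base = AutoModelForCausalLM.from_pretrained(base_id).state_dict()
    tuned = AutoModelForCausalLM.from_pretrained(tuned_id).state_dict()
    return {k: tuned[k] - base[k] for k in base}

def apply_task_vector(base_id: str, tv: dict[str, torch.Tensor], coef: float = 1.0):
    # Add the (scaled) combined vector back onto the base model's weights.
    model = AutoModelForCausalLM.from_pretrained(base_id)
    sd = model.state_dict()
    model.load_state_dict({k: sd[k] + coef * tv[k] for k in sd})
    return model
```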
## Models Used
- Base Model: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct
- Fine-tuned Model 1: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct
- Fine-tuned Model 2: https://huggingface.co/coastalcph/Qwen2.5-7B-personality-non-sycophancy_800exs
## Technical Details
- Creation Script Git Hash: d0db42d73be516ec04f0ecdc8003189e98b5f722
- Task Vector Method: Additive combination
- Args: {
"pretrained_model": "Qwen/Qwen2.5-7B-Instruct",
"finetuned_model1": "Qwen/Qwen2.5-7B-Instruct",
"finetuned_model2": "coastalcph/Qwen2.5-7B-personality-non-sycophancy_800exs",
"finetuned_model3": "coastalcph/Qwen2.5-7B-personality-sycophancy_800exs",
"output_model_name": "coastalcph/Qwen2.5-7B-4t_diff_sycophant_800exs",
"output_dir": "/projects/nlp/data/constanzam/weight-interp/task-vectors/math_non_sycophant_12Aug",
"scaling_coef": 1.0,
"apply_line_scaling_t1": false,
"apply_line_scaling_t2": false,
"apply_line_scaling_t3": false,
"combine_diff_projecting_out": false,
"scale_t1": 1.0,
"scale_t2": 4.0,
"scale_t3": 4.0
}
|
golopper/blockassist-bc-quiet_beaked_bee_1756152753
|
golopper
| 2025-08-25T20:12:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quiet beaked bee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:12:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quiet beaked bee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ucoyle/create-uc-lora
|
ucoyle
| 2025-08-25T20:12:23Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-08-25T19:15:24Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
devparagiri/Leyes-Ecuador-20250825-200051
|
devparagiri
| 2025-08-25T20:11:36Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gguf",
"llama",
"text-generation",
"autotrain",
"text-generation-inference",
"peft",
"conversational",
"dataset:devparagiri/dataset-Leyes-Ecuador-20250825-200051",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:quantized:meta-llama/Llama-3.2-3B-Instruct",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-25T20:04:04Z |
---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: meta-llama/Llama-3.2-3B-Instruct
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
datasets:
- devparagiri/dataset-Leyes-Ecuador-20250825-200051
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
thanobidex/blockassist-bc-colorful_shiny_hare_1756150997
|
thanobidex
| 2025-08-25T20:08:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"colorful shiny hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:08:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful shiny hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756152451
|
Dejiat
| 2025-08-25T20:07:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T20:07:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|