Dataset schema (one row per model; for strings and lists the min/max columns give observed lengths, for timestamps the observed range; fixed-vocabulary strings show their class count instead):

| column | dtype | min | max |
|:--|:--|:--|:--|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-02 18:52:31 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (533 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-02 18:52:05 |
| card | string (length) | 11 | 1.01M |
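Rows with exactly these fields can be regenerated programmatically. A minimal sketch, assuming the Hugging Face Hub API as the source (the precise dataset behind this dump is not stated; `huggingface_hub.HfApi.list_models()` returns the same metadata fields):

```python
# Sketch: list recent models with the same fields as this dump.
from huggingface_hub import HfApi

api = HfApi()
for m in api.list_models(sort="lastModified", direction=-1, limit=5):
    # ModelInfo carries the modelId/author/downloads/likes/library_name/
    # pipeline_tag/createdAt fields shown in the schema above.
    print(m.id, m.author, m.last_modified, m.downloads, m.likes,
          m.library_name, m.pipeline_tag, m.created_at)
```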
**entfane/math-virtuoso-7B** · author: entfane · downloads: 26 · likes: 0 · library: null · pipeline: text-generation · created: 2025-08-19T06:22:11Z · last modified: 2025-09-01T09:21:35Z
tags: [ "safetensors", "mistral", "text-generation", "conversational", "en", "dataset:TIGER-Lab/MathInstruct", "base_model:mistralai/Mistral-7B-v0.3", "base_model:finetune:mistralai/Mistral-7B-v0.3", "region:us" ]

Card:

---
datasets:
- TIGER-Lab/MathInstruct
language:
- en
base_model:
- mistralai/Mistral-7B-v0.3
pipeline_tag: text-generation
---

<img src="https://huggingface.co/entfane/math-virtuoso-7B/resolve/main/math-virtuoso.png" width="400" height="400"/>

# Math Virtuoso 7B

This model is a math-instruction fine-tuned version of Mistral 7B v0.3.

### Inference

```python
# pip install transformers accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "entfane/math-virtuoso-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

messages = [
    {"role": "user", "content": "What's the derivative of 2x^2?"}
]

# render the chat template to a prompt string, then tokenize
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
encoded_input = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**encoded_input, max_new_tokens=1024)
print(tokenizer.decode(output[0], skip_special_tokens=False))
```
**llm-jp/optimal-sparsity-code-d512-E128-k4-3.3B-A220M** · author: llm-jp · downloads: 8 · likes: 0 · library: null · pipeline: null · created: 2025-08-21T15:20:49Z · last modified: 2025-09-01T09:20:11Z
tags: [ "safetensors", "mixtral", "arxiv:2508.18672", "region:us" ]

Card:

## How to cite

If you find our work helpful, please feel free to cite the paper.

```bibtex
@article{nakamura2025optimalsparsitymixtureofexpertslanguage,
  title={Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks},
  author={Taishi Nakamura and Satoki Ishikawa and Masaki Kawamura and Takumi Okamoto and Daisuke Nohara and Jun Suzuki and Rio Yokota},
  year={2025},
  eprint={2508.18672},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2508.18672},
}
```
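The "mixtral" and "safetensors" tags on these llm-jp checkpoints suggest standard Mixtral-architecture weights, so they should load through the usual transformers path. An untested sketch (tokenizer availability, dtype, and the code prompt are assumptions; the repo names suggest code-trained models):

```python
# Sketch: load one of the optimal-sparsity MoE checkpoints as a plain causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "llm-jp/optimal-sparsity-code-d512-E128-k4-3.3B-A220M"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```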
**llm-jp/optimal-sparsity-code-d2048-E16-k2-7.1B-A1.5B** · author: llm-jp · downloads: 8 · likes: 0 · library: null · pipeline: null · created: 2025-08-21T15:30:13Z · last modified: 2025-09-01T09:19:58Z
tags: [ "safetensors", "mixtral", "arxiv:2508.18672", "region:us" ]
Card: identical "How to cite" section to the first llm-jp entry above (BibTeX for arXiv:2508.18672).
**llm-jp/optimal-sparsity-code-d1024-E256-k2-26.0B-A470M** · author: llm-jp · downloads: 8 · likes: 0 · library: null · pipeline: null · created: 2025-08-21T15:23:21Z · last modified: 2025-09-01T09:19:55Z
tags: [ "safetensors", "mixtral", "arxiv:2508.18672", "region:us" ]
Card: identical "How to cite" section to the first llm-jp entry above (BibTeX for arXiv:2508.18672).
**llm-jp/optimal-sparsity-code-d512-E16-k2-520M-A170M** · author: llm-jp · downloads: 8 · likes: 0 · library: null · pipeline: null · created: 2025-08-21T15:04:22Z · last modified: 2025-09-01T09:19:39Z
tags: [ "safetensors", "mixtral", "arxiv:2508.18672", "region:us" ]
Card: identical "How to cite" section to the first llm-jp entry above (BibTeX for arXiv:2508.18672).
**llm-jp/optimal-sparsity-code-d512-E8-k2-320M-A170M** · author: llm-jp · downloads: 26 · likes: 0 · library: null · pipeline: null · created: 2025-08-21T15:04:20Z · last modified: 2025-09-01T09:19:38Z
tags: [ "safetensors", "mixtral", "arxiv:2508.18672", "region:us" ]
Card: identical "How to cite" section to the first llm-jp entry above (BibTeX for arXiv:2508.18672).
**AD-DA/ICCV2025-RealADSim-ClosedLoop-DiffusionDrive** · author: AD-DA · downloads: 0 · likes: 0 · library: null · pipeline: null · created: 2025-08-31T13:09:46Z · last modified: 2025-09-01T09:17:08Z
tags: [ "safetensors", "region:us" ]

Card (frontmatter only, in the format of a Hugging Face Space config):

---
title: Test Hugsim Web Server
emoji: 📈
colorFrom: purple
colorTo: yellow
sdk: docker
pinned: false
---
**goptouy/blockassist-bc-alert_melodic_swan_1756718118** · author: goptouy · downloads: 0 · likes: 0 · library: null · pipeline: null · created: 2025-09-01T09:15:19Z · last modified: 2025-09-01T09:15:57Z
tags: [ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "alert melodic swan", "arxiv:2504.07091", "region:us" ]

Card:

---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- alert melodic swan
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
**chainway9/blockassist-bc-untamed_quick_eel_1756716520** · author: chainway9 · downloads: 0 · likes: 0 · library: null · pipeline: null · created: 2025-09-01T09:14:18Z · last modified: 2025-09-01T09:14:21Z
tags: [ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "untamed quick eel", "arxiv:2504.07091", "region:us" ]
Card: standard Gensyn BlockAssist card, identical to the first blockassist entry above except for the agent tag ("untamed quick eel").
**rnoozy/Qwen3-0.6B-Gensyn-Swarm-scruffy_robust_cockroach** · author: rnoozy · downloads: 124 · likes: 0 · library: transformers · pipeline: text-generation · created: 2025-08-27T19:09:47Z · last modified: 2025-09-01T09:13:57Z
tags: [ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am scruffy_robust_cockroach", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
Card:

---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am scruffy_robust_cockroach
---

# Model Card for Model ID

(Unmodified auto-generated 🤗 transformers model-card template; every section reads "[More Information Needed]".)
**xinnn32/blockassist-bc-meek_winged_caterpillar_1756717885** · author: xinnn32 · downloads: 0 · likes: 0 · library: null · pipeline: null · created: 2025-09-01T09:13:00Z · last modified: 2025-09-01T09:13:33Z
tags: [ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "meek winged caterpillar", "arxiv:2504.07091", "region:us" ]
Card: standard Gensyn BlockAssist card, identical to the first blockassist entry above except for the agent tag ("meek winged caterpillar").
**mmdrzada/bert-finetuned-ner** · author: mmdrzada · downloads: 0 · likes: 0 · library: transformers · pipeline: token-classification · created: 2025-09-01T08:52:54Z · last modified: 2025-09-01T09:10:38Z
tags: [ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]

Card:

---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: conll2003
      type: conll2003
      config: conll2003
      split: validation
      args: conll2003
    metrics:
    - name: Precision
      type: precision
      value: 0.9334767499586298
    - name: Recall
      type: recall
      value: 0.9493436553349041
    - name: F1
      type: f1
      value: 0.9413433458489778
    - name: Accuracy
      type: accuracy
      value: 0.9863572143403779
---

# bert-finetuned-ner

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set:
- Loss: 0.0587
- Precision: 0.9335
- Recall: 0.9493
- F1: 0.9413
- Accuracy: 0.9864

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: ADAMW_TORCH_FUSED with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0755 | 1.0 | 1756 | 0.0678 | 0.8943 | 0.9330 | 0.9133 | 0.9808 |
| 0.0353 | 2.0 | 3512 | 0.0685 | 0.9276 | 0.9426 | 0.9351 | 0.9843 |
| 0.0228 | 3.0 | 5268 | 0.0587 | 0.9335 | 0.9493 | 0.9413 | 0.9864 |

### Framework versions

- Transformers 4.55.4
- Pytorch 2.8.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.4
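The card reports metrics but no usage snippet. A minimal inference sketch for this NER model (the `aggregation_strategy` and the example sentence are illustrative choices, not from the card):

```python
# Sketch: run the fine-tuned model through the token-classification pipeline.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="mmdrzada/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-tokens into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```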
**EmilRyd/gpt-oss-20b-olympiads-ground-truth-false-on-policy-with-attack-100-100** · author: EmilRyd · downloads: 0 · likes: 0 · library: transformers · pipeline: text-generation · created: 2025-09-01T07:47:11Z · last modified: 2025-09-01T09:07:59Z
tags: [ "transformers", "safetensors", "gpt_oss", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
Card:

---
library_name: transformers
tags: []
---

# Model Card for Model ID

(Unmodified auto-generated 🤗 transformers model-card template; every section reads "[More Information Needed]".)
**goptouy/blockassist-bc-alert_melodic_swan_1756717545** · author: goptouy · downloads: 0 · likes: 0 · library: null · pipeline: null · created: 2025-09-01T09:05:46Z · last modified: 2025-09-01T09:06:28Z
tags: [ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "alert melodic swan", "arxiv:2504.07091", "region:us" ]
Card: identical Gensyn BlockAssist card to the first blockassist entry above (same agent tag, "alert melodic swan").
**sekirr/blockassist-bc-masked_tenacious_whale_1756717416** · author: sekirr · downloads: 0 · likes: 0 · library: null · pipeline: null · created: 2025-09-01T09:04:13Z · last modified: 2025-09-01T09:04:17Z
tags: [ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "masked tenacious whale", "arxiv:2504.07091", "region:us" ]
Card: standard Gensyn BlockAssist card, identical to the first blockassist entry above except for the agent tag ("masked tenacious whale").
**AnerYubo/blockassist-bc-pesty_graceful_grouse_1756717035** · author: AnerYubo · downloads: 0 · likes: 0 · library: null · pipeline: null · created: 2025-09-01T08:57:16Z · last modified: 2025-09-01T08:57:19Z
tags: [ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pesty graceful grouse", "arxiv:2504.07091", "region:us" ]
Card: standard Gensyn BlockAssist card, identical to the first blockassist entry above except for the agent tag ("pesty graceful grouse").
**Bearrr310/sft_verl_0901-sft550** · author: Bearrr310 · downloads: 0 · likes: 0 · library: transformers · pipeline: text-generation · created: 2025-09-01T08:48:51Z · last modified: 2025-09-01T08:49:52Z
tags: [ "transformers", "tensorboard", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "dataset:sft_verl_0901-sft300", "base_model:Qwen/Qwen2.5-1.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]

Card:

---
base_model: Qwen/Qwen2.5-1.5B-Instruct
datasets: sft_verl_0901-sft300
library_name: transformers
model_name: sft_verl_0901-sft550
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for sft_verl_0901-sft550

This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [sft_verl_0901-sft300](https://huggingface.co/datasets/sft_verl_0901-sft300) dataset. It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Bearrr310/sft_verl_0901-sft550", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with SFT.

### Framework versions

- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.4

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
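The card says the model was trained with SFT via TRL but gives no training snippet. A minimal sketch of what such a run looks like with TRL's `SFTTrainer` (the dataset here is a public placeholder rather than the card's `sft_verl_0901-sft300`, and all hyperparameters are illustrative):

```python
# Sketch: supervised fine-tuning with TRL, in the spirit of this card.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset
trainer = SFTTrainer(
    model="Qwen/Qwen2.5-1.5B-Instruct",  # the card's base model
    train_dataset=dataset,
    args=SFTConfig(output_dir="sft_verl_0901-sft550"),
)
trainer.train()
```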
**yaelahnal/blockassist-bc-mute_clawed_crab_1756716257** · author: yaelahnal · downloads: 0 · likes: 0 · library: null · pipeline: null · created: 2025-09-01T08:45:14Z · last modified: 2025-09-01T08:45:31Z
tags: [ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mute clawed crab", "arxiv:2504.07091", "region:us" ]
Card: standard Gensyn BlockAssist card, identical to the first blockassist entry above except for the agent tag ("mute clawed crab").
**faisu-eth/blockassist-bc-thick_twitchy_jackal_1756716135** · author: faisu-eth · downloads: 0 · likes: 0 · library: null · pipeline: null · created: 2025-09-01T08:42:43Z · last modified: 2025-09-01T08:43:02Z
tags: [ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thick twitchy jackal", "arxiv:2504.07091", "region:us" ]
Card: standard Gensyn BlockAssist card, identical to the first blockassist entry above except for the agent tag ("thick twitchy jackal").
**akirafudo/blockassist-bc-keen_fast_giraffe_1756716032** · author: akirafudo · downloads: 0 · likes: 0 · library: null · pipeline: null · created: 2025-09-01T08:40:49Z · last modified: 2025-09-01T08:40:53Z
tags: [ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
Card: standard Gensyn BlockAssist card, identical to the first blockassist entry above except for the agent tag ("keen fast giraffe").
**sekirr/blockassist-bc-masked_tenacious_whale_1756715747** · author: sekirr · downloads: 0 · likes: 0 · library: null · pipeline: null · created: 2025-09-01T08:36:23Z · last modified: 2025-09-01T08:36:26Z
tags: [ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "masked tenacious whale", "arxiv:2504.07091", "region:us" ]
Card: standard Gensyn BlockAssist card, identical to the first blockassist entry above except for the agent tag ("masked tenacious whale").
**hanchaow/QTuneVL1_5-3B** · author: hanchaow · downloads: 38 · likes: 1 · library: transformers · pipeline: image-text-to-text · created: 2025-07-31T11:28:01Z · last modified: 2025-09-01T08:34:55Z
tags: [ "transformers", "safetensors", "qwen2_5_vl", "image-to-text", "image-text-to-text", "conversational", "multilingual", "arxiv:2507.18071", "base_model:Qwen/Qwen2.5-VL-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]

Card:

---
license: apache-2.0
pipeline_tag: image-text-to-text
library_name: transformers
base_model:
- Qwen/Qwen2.5-VL-3B-Instruct
language:
- multilingual
---

# QTuneVL1.5-3B

Developed by the [Reconova AI Lab](https://www.reconova.com/) (leader: Jia Baozhi; team members: Wang Hanchao, Chen Mingmu, Lin Bingqi, et al.) and the [BDAA-Lab](https://dm.ustc.edu.cn/index.html).

# Introduction

We are pleased to introduce QTuneVL1.5-3B, the latest addition to the [Reconova AI Lab](https://www.reconova.com/)'s series of multimodal large language models. Built on [Qwen2.5-VL-3B](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct), its capabilities have been further enhanced through RLVR training with the recent [**GSPO**](https://arxiv.org/abs/2507.18071) algorithm. The model is trained mainly on reasoning datasets but retains proficiency on general tasks, achieving overall performance superior to the base model.

**Architecture**:
- ViT: QwenViT
- Projector: 2-layer MLP
- LLM: Qwen2.5-3B

# Evaluation

We evaluate on the eight benchmarks of the [OpenCompass](https://rank.opencompass.org.cn/leaderboard-multimodal) leaderboard using [VLMEvalKit](https://github.com/open-compass/VLMEvalKit): `MMBench_TEST_EN/CN_V11, MMStar, MMMU_VAL, MathVista_MINI, HallusionBench, AI2D_TEST, OCRBench, MMVet`. The results are shown below:

| | Avg | MMBench v1.1 | MMStar | MMMU | MathVista | HallusionBench | AI2D | OCRBench | MMVet |
|:-------------:|:----:|:------------:|:------:|:----:|:---------:|:--------------:|:----:|:--------:|:-----:|
| Qwen2.5-VL-3B | 64.8 | 77.1 | 55.3 | 51.2 | 60.1 | 48.6 | 81.5 | 83.2 | 61.4 |
| QTuneVL1-3B | **66.1 (+1.3)** | **77.3 (+0.2)** | **57.3 (+2.0)** | **53.6 (+2.4)** | **63.7 (+3.6)** | **49.4 (+0.8)** | 81.3 | **83.8 (+0.6)** | **62.5 (+1.1)** |

The reported results are based on our local runs and may differ slightly from the official ones.

# Copyright

We welcome suggestions to help us improve QTuneVL. For any query, please contact HanChao Wang: [email protected]. If you find something interesting, please also feel free to share with us through email or open an issue.
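The card reports benchmarks but no inference snippet. Since QTuneVL1.5-3B is a Qwen2.5-VL fine-tune, the standard Qwen2.5-VL loading path in recent transformers should apply; this is an untested sketch, and the image URL is a placeholder:

```python
# Sketch: image-text-to-text inference for a Qwen2.5-VL-based checkpoint.
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

repo = "hanchaow/QTuneVL1_5-3B"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(repo, torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained(repo)

messages = [{"role": "user", "content": [
    {"type": "image", "url": "https://example.com/demo.jpg"},  # placeholder image
    {"type": "text", "text": "Describe this image."},
]}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
# decode only the newly generated tokens
print(processor.batch_decode(out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0])
```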
**Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756715240** · author: Ferdi3425 · downloads: 0 · likes: 0 · library: null · pipeline: null · created: 2025-09-01T08:28:07Z · last modified: 2025-09-01T08:28:34Z
tags: [ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
Card: standard Gensyn BlockAssist card, identical to the first blockassist entry above except for the agent tag ("amphibious deadly otter").
**omerbektass/blockassist-bc-keen_fast_giraffe_1756714986** · author: omerbektass · downloads: 0 · likes: 0 · library: null · pipeline: null · created: 2025-09-01T08:23:22Z · last modified: 2025-09-01T08:24:01Z
tags: [ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
Card: standard Gensyn BlockAssist card, identical to the first blockassist entry above except for the agent tag ("keen fast giraffe").
**zgao3186/qwen25math7b-one-shot-em** · author: zgao3186 · downloads: 11 · likes: 1 · library: transformers · pipeline: text-generation · created: 2025-05-29T07:51:40Z · last modified: 2025-09-01T08:21:23Z
tags: [ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:2505.20282", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]

Card:

---
library_name: transformers
license: mit
pipeline_tag: text-generation
---

# One-shot Entropy Minimization

This model is described in the paper [One-shot Entropy Minimization](https://arxiv.org/abs/2505.20282). We trained 13,440 large language models and found that entropy minimization requires only a single unlabeled example and 10 optimization steps to achieve performance improvements comparable to, or even greater than, those obtained using thousands of examples and carefully designed rewards in rule-based reinforcement learning. This striking result may prompt a rethinking of post-training paradigms for large language models.

Code: https://github.com/zitian-gao/one-shot-em

Project page: https://www.notion.so/One-shot-Entropy-Minimization-202606db813b80639773f850f39246a5

### Installation

```bash
pip install torch transformers==4.47.1 accelerate deepspeed psutil pandas numpy wandb
```

### Reproducing One-shot EM Training (SOTA)

```bash
accelerate launch train.py \
  --model_name Qwen2.5-Math-7B \
  --model_path /path/to/Qwen2.5-Math-7B \
  --train_data dataset/1shot_rlvr/pi1_r1280.parquet \
  --effective_batch 64 \
  --micro_batch_size 2 \
  --temperature 0.5 \
  --learning_rate 2e-5 \
  --max_steps 50 \
  --log_steps 1 \
  --save_steps 1 \
  --run_name one_shot \
  --wandb_project one-shot-em
```

### Reproducing Multi-shot EM Training

```bash
accelerate launch train.py \
  --model_name Qwen2.5-Math-7B \
  --model_path /path/to/Qwen2.5-Math-7B \
  --train_data dataset/numina/numina_00.parquet \
  --effective_batch 64 \
  --micro_batch_size 2 \
  --temperature 0.5 \
  --learning_rate 2e-5 \
  --max_steps 50 \
  --log_steps 1 \
  --save_steps 1 \
  --run_name multi_shot \
  --wandb_project one-shot-em
```

### Evaluation

```bash
cd Qwen2.5-Eval/evaluation
bash sh/eval_all_math.sh
```

### Acknowledgements

Our dataset references and builds upon the following open-source contributions:
- [NuminaMath-CoT](https://huggingface.co/datasets/AI-MO/NuminaMath-CoT)
- [DeepScaler](https://github.com/agentica-project/deepscaler)
- [One-shot RLVR](https://github.com/ypwang61/One-Shot-RLVR/) – for data selection strategies
- [Qwen2.5-Eval](https://github.com/QwenLM/Qwen2.5-Math/) – for evaluation benchmarks

We sincerely thank the authors and maintainers of these projects for their excellent contributions to the research community!

### Citation

```bibtex
@misc{gao2025oneshotentropyminimization,
  title={One-shot Entropy Minimization},
  author={Zitian Gao and Lynx Chen and Joey Zhou and Bryan Dai},
  year={2025},
  eprint={2505.20282},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2505.20282},
}
```
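For intuition, the objective the card describes amounts to making the model's own next-token distributions peaky on unlabeled text. A toy sketch of that loss (batching, masking, and the paper's exact temperature handling are simplified away; the default temperature mirrors the training commands above):

```python
# Sketch: per-token predictive entropy; entropy minimization trains on its mean.
import torch
import torch.nn.functional as F

def token_entropy_loss(logits: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Mean entropy (in nats) of the next-token distributions."""
    logp = F.log_softmax(logits / temperature, dim=-1)
    return -(logp.exp() * logp).sum(dim=-1).mean()

# usage inside a training step (model and batch assumed):
#   loss = token_entropy_loss(model(**batch).logits)
#   loss.backward()
```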
**ChenWu98/numina_qwen_2.5_sft_combine_v1_identical_split_1** · author: ChenWu98 · downloads: 0 · likes: 0 · library: transformers · pipeline: null · created: 2025-09-01T08:16:21Z · last modified: 2025-09-01T08:17:16Z
tags: [ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:Qwen/Qwen2.5-1.5B", "base_model:finetune:Qwen/Qwen2.5-1.5B", "endpoints_compatible", "region:us" ]

Card:

---
base_model: Qwen/Qwen2.5-1.5B
library_name: transformers
model_name: numina_qwen_2.5_sft_combine_v1_identical_split_1
tags:
- generated_from_trainer
- sft
- trl
licence: license
---

# Model Card for numina_qwen_2.5_sft_combine_v1_identical_split_1

This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B). It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_sft_combine_v1_identical_split_1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/i0rn7qbz)

This model was trained with SFT.

### Framework versions

- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4

## Citations

Cite TRL as: (same TRL BibTeX block as in the sft_verl_0901-sft550 card above)
**mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF** · author: mradermacher · downloads: 0 · likes: 0 · library: transformers · pipeline: null · created: 2025-09-01T06:52:35Z · last modified: 2025-09-01T08:05:35Z
tags: [ "transformers", "gguf", "en", "base_model:EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k", "base_model:quantized:EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k", "endpoints_compatible", "region:us", "imatrix" ]

Card:

---
base_model: EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k
language:
- en
library_name: transformers
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags: []
---

## About

weighted/imatrix quants of https://huggingface.co/EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF).***

static quants are available at https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-IQ1_S.gguf) | i1-IQ1_S | 0.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-IQ1_M.gguf) | i1-IQ1_M | 0.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-IQ2_S.gguf) | i1-IQ2_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-IQ2_M.gguf) | i1-IQ2_M | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-Q2_K.gguf) | i1-Q2_K | 0.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-IQ3_S.gguf) | i1-IQ3_S | 0.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-IQ3_M.gguf) | i1-IQ3_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.1 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-Q4_0.gguf) | i1-Q4_0 | 1.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.1 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-Q4_1.gguf) | i1-Q4_1 | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF/resolve/main/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-Q6_K.gguf) | i1-Q6_K | 1.5 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
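To make the quant table concrete, here is one hedged way to run a file from this repo locally with `llama-cpp-python` (the file choice follows the "fast, recommended" row; the prompt and context size are illustrative):

```python
# Sketch: download one quant and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    "mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k-i1-GGUF",
    "SmolLM2-1.7B-magpie-ultra-v1.0-classification-431k.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
out = llm("Classify the sentiment: 'The battery died after a week.' ->", max_tokens=16)
print(out["choices"][0]["text"])
```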
**faisu-eth/blockassist-bc-thick_twitchy_jackal_1756712087** · author: faisu-eth · downloads: 0 · likes: 0 · library: null · pipeline: null · created: 2025-09-01T07:35:14Z · last modified: 2025-09-01T07:35:33Z
tags: [ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thick twitchy jackal", "arxiv:2504.07091", "region:us" ]
Card: standard Gensyn BlockAssist card, identical to the first blockassist entry above except for the agent tag ("thick twitchy jackal").
**VoilaRaj/81_g_SMKpCh** · author: VoilaRaj · downloads: 0 · likes: 0 · library: null · pipeline: any-to-any · created: 2025-09-01T07:32:28Z · last modified: 2025-09-01T07:32:56Z
tags: [ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]

Card:

---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---

This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
**GroomerG/blockassist-bc-vicious_pawing_badger_1756709933** · author: GroomerG · downloads: 0 · likes: 0 · library: null · pipeline: null · created: 2025-09-01T07:24:08Z · last modified: 2025-09-01T07:24:46Z
tags: [ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "vicious pawing badger", "arxiv:2504.07091", "region:us" ]
Card: standard Gensyn BlockAssist card, identical to the first blockassist entry above except for the agent tag ("vicious pawing badger").
**2hpsatt/blockassist-bc-huge_deft_eagle_1756711127** · author: 2hpsatt · downloads: 0 · likes: 0 · library: null · pipeline: null · created: 2025-09-01T07:19:29Z · last modified: 2025-09-01T07:19:39Z
tags: [ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "huge deft eagle", "arxiv:2504.07091", "region:us" ]
Card: standard Gensyn BlockAssist card, identical to the first blockassist entry above except for the agent tag ("huge deft eagle").
**arif696/blockassist-bc-regal_spotted_pelican_1756711000** · author: arif696 · downloads: 0 · likes: 0 · library: null · pipeline: null · created: 2025-09-01T07:17:44Z · last modified: 2025-09-01T07:18:30Z
tags: [ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal spotted pelican", "arxiv:2504.07091", "region:us" ]
Card: standard Gensyn BlockAssist card, identical to the first blockassist entry above except for the agent tag ("regal spotted pelican").
**SaurabhSharma220/sft-tiny-chatbot** · author: SaurabhSharma220 · downloads: 0 · likes: 0 · library: transformers · pipeline: null · created: 2025-09-01T07:04:09Z · last modified: 2025-09-01T07:17:46Z
tags: [ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "endpoints_compatible", "region:us" ]

Card:

---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: transformers
model_name: sft-tiny-chatbot
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for sft-tiny-chatbot

This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0). It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="SaurabhSharma220/sft-tiny-chatbot", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with SFT.

### Framework versions

- TRL: 0.22.1
- Transformers: 4.57.0.dev0
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.22.0

## Citations

Cite TRL as: (same TRL BibTeX block as in the sft_verl_0901-sft550 card above)
**Satram/QYA_150_Context** · author: Satram · downloads: 0 · likes: 0 · library: transformers · pipeline: null · created: 2025-08-29T08:46:35Z · last modified: 2025-09-01T07:16:33Z
tags: [ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]

Card:

---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** Satram
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
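The card names the 4-bit Unsloth base it was fine-tuned from but gives no loading snippet. A hedged sketch of loading that base with Unsloth (the sequence length and inference toggle are illustrative; the fine-tuned repo itself may load the same way):

```python
# Sketch: load the 4-bit base the card references via Unsloth.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to fast generation mode
```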
**Paul720810/codegemma-2b-sql-finetuned** · author: Paul720810 · downloads: 0 · likes: 0 · library: transformers · pipeline: text-generation · created: 2025-09-01T07:09:53Z · last modified: 2025-09-01T07:11:33Z
tags: [ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
Card:

---
library_name: transformers
tags: []
---

# Model Card for Model ID

(Unmodified auto-generated 🤗 transformers model-card template; every section reads "[More Information Needed]".)
**yuan571/phi-3.5-mini-0901-data5to64-32-32** · author: yuan571 · downloads: 0 · likes: 0 · library: transformers · pipeline: text-generation · created: 2025-09-01T06:03:13Z · last modified: 2025-09-01T06:26:41Z
tags: [ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]

Card:

---
base_model: unsloth/phi-3.5-mini-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---

# Uploaded finetuned model

- **Developed by:** yuan571
- **License:** apache-2.0
- **Finetuned from model:** unsloth/phi-3.5-mini-instruct-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
**hssnjfry/blockassist-bc-climbing_pouncing_dragonfly_1756707401** · author: hssnjfry · downloads: 0 · likes: 0 · library: null · pipeline: null · created: 2025-09-01T06:17:27Z · last modified: 2025-09-01T06:17:59Z
tags: [ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "climbing pouncing dragonfly", "arxiv:2504.07091", "region:us" ]
Card: standard Gensyn BlockAssist card, identical to the first blockassist entry above except for the agent tag ("climbing pouncing dragonfly").
**shubham75/MyGemmaNPC** · author: shubham75 · downloads: 0 · likes: 0 · library: transformers · pipeline: text-generation · created: 2025-09-01T06:08:37Z · last modified: 2025-09-01T06:15:14Z
tags: [ "transformers", "tensorboard", "safetensors", "gemma3_text", "text-generation", "generated_from_trainer", "sft", "trl", "conversational", "base_model:google/gemma-3-270m-it", "base_model:finetune:google/gemma-3-270m-it", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]

Card:

---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: MyGemmaNPC
tags:
- generated_from_trainer
- sft
- trl
licence: license
---

# Model Card for MyGemmaNPC

This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it). It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="shubham75/MyGemmaNPC", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with SFT.

### Framework versions

- TRL: 0.22.1
- Transformers: 4.55.4
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4

## Citations

Cite TRL as: (same TRL BibTeX block as in the sft_verl_0901-sft550 card above)
**akirafudo/blockassist-bc-keen_fast_giraffe_1756706981** · author: akirafudo · downloads: 0 · likes: 0 · library: null · pipeline: null · created: 2025-09-01T06:09:57Z · last modified: 2025-09-01T06:10:01Z
tags: [ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
Card: standard Gensyn BlockAssist card, identical to the first blockassist entry above except for the agent tag ("keen fast giraffe").
**omerbkts/blockassist-bc-keen_fast_giraffe_1756706590** · author: omerbkts · downloads: 0 · likes: 0 · library: null · pipeline: null · created: 2025-09-01T06:03:27Z · last modified: 2025-09-01T06:03:31Z
tags: [ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/Llama-3.1-8B-sft-SPIN-gpt4o-KTO-GGUF
mradermacher
2025-09-01T06:00:21Z
0
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "trl", "kto", "en", "base_model:AmberYifan/Llama-3.1-8B-sft-SPIN-gpt4o-KTO", "base_model:quantized:AmberYifan/Llama-3.1-8B-sft-SPIN-gpt4o-KTO", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-01T04:34:20Z
---
base_model: AmberYifan/Llama-3.1-8B-sft-SPIN-gpt4o-KTO
language:
- en
library_name: transformers
model_name: Llama-3.1-8B-sft-SPIN-gpt4o-KTO
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- generated_from_trainer
- trl
- kto
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/AmberYifan/Llama-3.1-8B-sft-SPIN-gpt4o-KTO

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Llama-3.1-8B-sft-SPIN-gpt4o-KTO-GGUF).***

weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-SPIN-gpt4o-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-SPIN-gpt4o-KTO.Q2_K.gguf) | Q2_K | 3.3 |  |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-SPIN-gpt4o-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-SPIN-gpt4o-KTO.Q3_K_S.gguf) | Q3_K_S | 3.8 |  |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-SPIN-gpt4o-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-SPIN-gpt4o-KTO.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-SPIN-gpt4o-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-SPIN-gpt4o-KTO.Q3_K_L.gguf) | Q3_K_L | 4.4 |  |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-SPIN-gpt4o-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-SPIN-gpt4o-KTO.IQ4_XS.gguf) | IQ4_XS | 4.6 |  |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-SPIN-gpt4o-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-SPIN-gpt4o-KTO.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-SPIN-gpt4o-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-SPIN-gpt4o-KTO.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-SPIN-gpt4o-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-SPIN-gpt4o-KTO.Q5_K_S.gguf) | Q5_K_S | 5.7 |  |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-SPIN-gpt4o-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-SPIN-gpt4o-KTO.Q5_K_M.gguf) | Q5_K_M | 5.8 |  |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-SPIN-gpt4o-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-SPIN-gpt4o-KTO.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-SPIN-gpt4o-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-SPIN-gpt4o-KTO.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-SPIN-gpt4o-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-SPIN-gpt4o-KTO.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
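As a minimal sketch of the usage the card points to — downloading one static quant and running it locally — assuming `huggingface_hub` and `llama-cpp-python` are installed (any GGUF runtime works; the filename is taken from the Q4_K_M row of the table above):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch a single quant file from the repo (about 5 GB for Q4_K_M).
path = hf_hub_download(
    repo_id="mradermacher/Llama-3.1-8B-sft-SPIN-gpt4o-KTO-GGUF",
    filename="Llama-3.1-8B-sft-SPIN-gpt4o-KTO.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```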
Xtoun/blockassist-bc-bristly_scaly_koala_1756704014
Xtoun
2025-09-01T05:39:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "bristly scaly koala", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T05:39:19Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - bristly scaly koala --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/llama3-diverce-ver1.0-i1-GGUF
mradermacher
2025-09-01T05:36:44Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:sel303/llama3-diverce-ver1.0", "base_model:quantized:sel303/llama3-diverce-ver1.0", "license:llama3", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-09-01T04:45:33Z
---
base_model: sel303/llama3-diverce-ver1.0
language:
- en
library_name: transformers
license: llama3
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/sel303/llama3-diverce-ver1.0

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#llama3-diverce-ver1.0-i1-GGUF).***

static quants are available at https://huggingface.co/mradermacher/llama3-diverce-ver1.0-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.0-i1-GGUF/resolve/main/llama3-diverce-ver1.0.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.0-i1-GGUF/resolve/main/llama3-diverce-ver1.0.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.0-i1-GGUF/resolve/main/llama3-diverce-ver1.0.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.0-i1-GGUF/resolve/main/llama3-diverce-ver1.0.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 |  |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.0-i1-GGUF/resolve/main/llama3-diverce-ver1.0.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 |  |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.0-i1-GGUF/resolve/main/llama3-diverce-ver1.0.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 |  |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.0-i1-GGUF/resolve/main/llama3-diverce-ver1.0.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 |  |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.0-i1-GGUF/resolve/main/llama3-diverce-ver1.0.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.0-i1-GGUF/resolve/main/llama3-diverce-ver1.0.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.0-i1-GGUF/resolve/main/llama3-diverce-ver1.0.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.0-i1-GGUF/resolve/main/llama3-diverce-ver1.0.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 |  |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.0-i1-GGUF/resolve/main/llama3-diverce-ver1.0.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.0-i1-GGUF/resolve/main/llama3-diverce-ver1.0.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.0-i1-GGUF/resolve/main/llama3-diverce-ver1.0.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 |  |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.0-i1-GGUF/resolve/main/llama3-diverce-ver1.0.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.0-i1-GGUF/resolve/main/llama3-diverce-ver1.0.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.0-i1-GGUF/resolve/main/llama3-diverce-ver1.0.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 |  |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.0-i1-GGUF/resolve/main/llama3-diverce-ver1.0.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.0-i1-GGUF/resolve/main/llama3-diverce-ver1.0.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.0-i1-GGUF/resolve/main/llama3-diverce-ver1.0.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.0-i1-GGUF/resolve/main/llama3-diverce-ver1.0.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.0-i1-GGUF/resolve/main/llama3-diverce-ver1.0.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 |  |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.0-i1-GGUF/resolve/main/llama3-diverce-ver1.0.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 |  |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.0-i1-GGUF/resolve/main/llama3-diverce-ver1.0.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 |  |
| [GGUF](https://huggingface.co/mradermacher/llama3-diverce-ver1.0-i1-GGUF/resolve/main/llama3-diverce-ver1.0.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
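A comparable sketch for these imatrix quants, again assuming `huggingface_hub` and `llama-cpp-python`; the filename comes from the i1-Q4_K_S row ("optimal size/speed/quality") in the table above:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/llama3-diverce-ver1.0-i1-GGUF",
    filename="llama3-diverce-ver1.0.i1-Q4_K_S.gguf",
)

# Plain completion-style call; the i1 quants run in any llama.cpp-based runtime.
llm = Llama(model_path=path, n_ctx=2048)
print(llm("Q: What is 2+2?\nA:", max_tokens=16)["choices"][0]["text"])
```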
arif696/blockassist-bc-regal_spotted_pelican_1756704855
arif696
2025-09-01T05:36:19Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal spotted pelican", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T05:35:20Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal spotted pelican --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
nick1880/blockassist-bc-barky_powerful_falcon_1756704775
nick1880
2025-09-01T05:33:47Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "barky powerful falcon", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T05:33:38Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - barky powerful falcon --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
arif696/blockassist-bc-regal_spotted_pelican_1756703361
arif696
2025-09-01T05:11:20Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal spotted pelican", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T05:10:25Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal spotted pelican --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ravikumar1728/yoda
ravikumar1728
2025-09-01T04:50:06Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gemma3_text", "text-generation", "generated_from_trainer", "sft", "trl", "conversational", "base_model:google/gemma-3-270m-it", "base_model:finetune:google/gemma-3-270m-it", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-31T06:36:39Z
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: Yoda
tags:
- generated_from_trainer
- sft
- trl
licence: license
---

# Model Card for Yoda

This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "Every word and phrase he speaks is true."
generator = pipeline("text-generation", model="ravikumar1728/yoda", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with SFT.

### Framework versions

- TRL: 0.22.1
- Transformers: 4.56.0
- Pytorch: 2.4.1+cu124
- Datasets: 4.0.0
- Tokenizers: 0.22.0

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
WenFengg/expert_14_k18_19
WenFengg
2025-09-01T04:44:12Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-09-01T04:43:31Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
Subha95/sentiment_model
Subha95
2025-09-01T04:43:43Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:Subha95/bengali-sentiment-model", "base_model:finetune:Subha95/bengali-sentiment-model", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-09-01T04:43:11Z
---
library_name: transformers
license: mit
base_model: Subha95/bengali-sentiment-model
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: sentiment_model
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# sentiment_model

This model is a fine-tuned version of [Subha95/bengali-sentiment-model](https://huggingface.co/Subha95/bengali-sentiment-model) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3044
- Accuracy: 0.5167
- F1: 0.8087

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

### Framework versions

- Transformers 4.55.4
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
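The card omits a usage snippet; a minimal one for this text-classification checkpoint follows (the example sentence is an assumption, and the returned label names depend on the model's config):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Subha95/sentiment_model")

# Bengali for "I really liked this movie!" -- an assumed example input.
print(classifier("এই সিনেমাটা আমার খুব ভালো লেগেছে!"))
```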
dongboklee/GenPRM-14B
dongboklee
2025-09-01T04:06:08Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B", "base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B", "region:us" ]
null
2025-09-01T04:05:49Z
---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
library_name: peft
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

### Framework versions

- PEFT 0.15.2
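The template above leaves usage empty; since the row's metadata identifies this as a PEFT adapter on deepseek-ai/DeepSeek-R1-Distill-Qwen-14B, here is a hedged loading sketch (assumes `peft`, `transformers`, and `accelerate` are installed; a 14B base needs substantial memory):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the DeepSeek-R1-Distill-Qwen-14B base and applies this adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    "dongboklee/GenPRM-14B",
    device_map="auto",   # requires accelerate
    torch_dtype="auto",
)
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-14B")
```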
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1756697800
vwzyrraz7l
2025-09-01T04:01:52Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tall hunting vulture", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T04:01:48Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tall hunting vulture --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
sister-hong-original-Viral-video-Clip/New.full.videos.sister.hong.Viral.Video.Official.Tutorial
sister-hong-original-Viral-video-Clip
2025-09-01T03:46:14Z
0
0
null
[ "region:us" ]
null
2025-09-01T03:46:00Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
GroomerG/blockassist-bc-vicious_pawing_badger_1756696544
GroomerG
2025-09-01T03:44:49Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "vicious pawing badger", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T03:44:45Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - vicious pawing badger --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
frozon/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-darting_masked_sparrow
frozon
2025-09-01T03:32:51Z
107
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am darting_masked_sparrow", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-30T02:12:22Z
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am darting_masked_sparrow
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
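Given the row's `transformers` metadata and `text-generation` pipeline tag, a hedged quick-start in the style of the other cards in this dump (untested against this specific checkpoint):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="frozon/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-darting_masked_sparrow",
)
output = generator(
    [{"role": "user", "content": "Introduce yourself."}],
    max_new_tokens=64,
    return_full_text=False,
)[0]
print(output["generated_text"])
```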
kalimoy/blockassist-bc-hulking_singing_dolphin_1756697320
kalimoy
2025-09-01T03:28:58Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "hulking singing dolphin", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T03:28:40Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - hulking singing dolphin --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
dgambettaphd/M_llm2_run2_gen9_S_doc1000_synt64_lr1e-04_acm_SYNLAST
dgambettaphd
2025-09-01T03:19:42Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-01T03:19:27Z
---
library_name: transformers
tags:
- unsloth
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
akunode/blockassist-bc-long_prickly_eel_1756695748
akunode
2025-09-01T03:03:09Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "long prickly eel", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T03:03:02Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - long prickly eel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
arif696/blockassist-bc-regal_spotted_pelican_1756694964
arif696
2025-09-01T02:50:37Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal spotted pelican", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T02:50:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal spotted pelican --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
akirafudo/blockassist-bc-keen_fast_giraffe_1756694775
akirafudo
2025-09-01T02:47:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T02:46:34Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Loder-S/blockassist-bc-sprightly_knobby_tiger_1756693091
Loder-S
2025-09-01T02:42:11Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sprightly knobby tiger", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T02:42:07Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sprightly knobby tiger --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1756692284
kojeklollipop
2025-09-01T02:31:33Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "spotted amphibious stork", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T02:31:29Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - spotted amphibious stork --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
tgg123/RIR-Resound-User-Study
tgg123
2025-09-01T02:09:15Z
0
0
null
[ "license:mit", "region:us" ]
null
2025-09-01T02:04:37Z
---
title: RIR Resound User Study
emoji: ⚡
colorFrom: yellow
colorTo: gray
sdk: gradio
sdk_version: 5.13.1
app_file: app.py
pinned: false
license: mit
short_description: Room RIR Spatial Audio Rendering User Study
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
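The actual `app.py` is not included in this dump; purely as a hypothetical illustration of the kind of Gradio rating-collection app the config and description suggest:

```python
import gradio as gr

def submit_rating(audio, realism):
    # Hypothetical handler: a real study app would log the response somewhere.
    return f"Recorded spatial-realism rating: {realism}"

demo = gr.Interface(
    fn=submit_rating,
    inputs=[
        gr.Audio(label="Rendered RIR stimulus"),
        gr.Slider(1, 5, step=1, label="Spatial realism"),
    ],
    outputs="text",
    title="RIR Resound User Study",
)

if __name__ == "__main__":
    demo.launch()
```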
sekirr/blockassist-bc-masked_tenacious_whale_1756692165
sekirr
2025-09-01T02:03:25Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "masked tenacious whale", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T02:03:21Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - masked tenacious whale --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kalimoy/blockassist-bc-dappled_stalking_yak_1756691690
kalimoy
2025-09-01T01:55:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "dappled stalking yak", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T01:54:51Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - dappled stalking yak --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
vendi11/blockassist-bc-placid_placid_llama_1756691336
vendi11
2025-09-01T01:49:38Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "placid placid llama", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T01:49:35Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - placid placid llama --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
liukevin666/blockassist-bc-yawning_striped_cassowary_1756691079
liukevin666
2025-09-01T01:45:51Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T01:45:31Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yawning striped cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/OpenAssisted-English-Meta_3_1_8B-GGUF
mradermacher
2025-09-01T01:43:38Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:Akshaykumarbm/OpenAssisted-English-Meta_3_1_8B", "base_model:quantized:Akshaykumarbm/OpenAssisted-English-Meta_3_1_8B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-31T22:47:45Z
---
base_model: Akshaykumarbm/OpenAssisted-English-Meta_3_1_8B
language:
- en
library_name: transformers
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags: []
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Akshaykumarbm/OpenAssisted-English-Meta_3_1_8B

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#OpenAssisted-English-Meta_3_1_8B-GGUF).***

weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OpenAssisted-English-Meta_3_1_8B-GGUF/resolve/main/OpenAssisted-English-Meta_3_1_8B.Q2_K.gguf) | Q2_K | 3.3 |  |
| [GGUF](https://huggingface.co/mradermacher/OpenAssisted-English-Meta_3_1_8B-GGUF/resolve/main/OpenAssisted-English-Meta_3_1_8B.Q3_K_S.gguf) | Q3_K_S | 3.8 |  |
| [GGUF](https://huggingface.co/mradermacher/OpenAssisted-English-Meta_3_1_8B-GGUF/resolve/main/OpenAssisted-English-Meta_3_1_8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OpenAssisted-English-Meta_3_1_8B-GGUF/resolve/main/OpenAssisted-English-Meta_3_1_8B.Q3_K_L.gguf) | Q3_K_L | 4.4 |  |
| [GGUF](https://huggingface.co/mradermacher/OpenAssisted-English-Meta_3_1_8B-GGUF/resolve/main/OpenAssisted-English-Meta_3_1_8B.IQ4_XS.gguf) | IQ4_XS | 4.6 |  |
| [GGUF](https://huggingface.co/mradermacher/OpenAssisted-English-Meta_3_1_8B-GGUF/resolve/main/OpenAssisted-English-Meta_3_1_8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenAssisted-English-Meta_3_1_8B-GGUF/resolve/main/OpenAssisted-English-Meta_3_1_8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenAssisted-English-Meta_3_1_8B-GGUF/resolve/main/OpenAssisted-English-Meta_3_1_8B.Q5_K_S.gguf) | Q5_K_S | 5.7 |  |
| [GGUF](https://huggingface.co/mradermacher/OpenAssisted-English-Meta_3_1_8B-GGUF/resolve/main/OpenAssisted-English-Meta_3_1_8B.Q5_K_M.gguf) | Q5_K_M | 5.8 |  |
| [GGUF](https://huggingface.co/mradermacher/OpenAssisted-English-Meta_3_1_8B-GGUF/resolve/main/OpenAssisted-English-Meta_3_1_8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/OpenAssisted-English-Meta_3_1_8B-GGUF/resolve/main/OpenAssisted-English-Meta_3_1_8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/OpenAssisted-English-Meta_3_1_8B-GGUF/resolve/main/OpenAssisted-English-Meta_3_1_8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
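Since these repos host many quant files, one way to pick one programmatically is to list them first — a sketch assuming only `huggingface_hub` is installed:

```python
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "mradermacher/OpenAssisted-English-Meta_3_1_8B-GGUF"

# Enumerate the available quants, then choose one from the table above.
gguf_files = sorted(f for f in list_repo_files(repo_id) if f.endswith(".gguf"))
print(gguf_files)

path = hf_hub_download(repo_id=repo_id, filename=gguf_files[0])
print("downloaded to", path)
```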
crystalline7/1400512
crystalline7
2025-09-01T01:19:56Z
0
0
null
[ "region:us" ]
null
2025-09-01T01:19:53Z
[View on Civ Archive](https://civarchive.com/models/1328694?modelVersionId=1500678)
jaredvoxworksai/orpheus_01_baseline_voice
jaredvoxworksai
2025-09-01T00:48:18Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/orpheus-3b-0.1-ft", "base_model:finetune:unsloth/orpheus-3b-0.1-ft", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-01T00:47:47Z
---
base_model: unsloth/orpheus-3b-0.1-ft
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** jaredvoxworksai
- **License:** apache-2.0
- **Finetuned from model:** unsloth/orpheus-3b-0.1-ft

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
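A hedged loading sketch via Unsloth, which the card says was used for training (`max_seq_length` and 4-bit loading are assumptions, not documented settings; note that Orpheus is a speech model, so text generation alone is not the end product):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "jaredvoxworksai/orpheus_01_baseline_voice",
    max_seq_length=2048,  # assumed value
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)

# Note: Orpheus emits audio codec tokens; turning generations into waveforms
# requires the separate Orpheus/SNAC decoding pipeline, which is not shown here.
```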
NahedDom/blockassist-bc-flapping_stocky_leopard_1756684946
NahedDom
2025-09-01T00:37:46Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "flapping stocky leopard", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T00:37:43Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - flapping stocky leopard --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
elmenbillion/blockassist-bc-beaked_sharp_otter_1756684748
elmenbillion
2025-09-01T00:26:02Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "beaked sharp otter", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T00:25:58Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - beaked sharp otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
akirafudo/blockassist-bc-keen_fast_giraffe_1756684563
akirafudo
2025-08-31T23:56:25Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T23:56:21Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
klmdr22/blockassist-bc-wild_loud_newt_1756682618
klmdr22
2025-08-31T23:24:20Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wild loud newt", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T23:24:17Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wild loud newt --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AnerYubo/blockassist-bc-snappy_tenacious_eagle_1756681735
AnerYubo
2025-08-31T23:08:58Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "snappy tenacious eagle", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T23:08:56Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - snappy tenacious eagle --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
golopper/blockassist-bc-sneaky_howling_eagle_1756681538
golopper
2025-08-31T23:06:45Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sneaky howling eagle", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T23:05:39Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sneaky howling eagle --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Stasonelison/blockassist-bc-howling_powerful_aardvark_1756678268
Stasonelison
2025-08-31T22:12:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "howling powerful aardvark", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T22:11:46Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - howling powerful aardvark --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
liukevin666/blockassist-bc-yawning_striped_cassowary_1756678172
liukevin666
2025-08-31T22:11:21Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T22:10:25Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yawning striped cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
capungmerah627/blockassist-bc-stinging_soaring_porcupine_1756676023
capungmerah627
2025-08-31T22:00:09Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stinging soaring porcupine", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T22:00:05Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stinging soaring porcupine --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
eusuf01/blockassist-bc-smooth_humming_butterfly_1756674796
eusuf01
2025-08-31T21:14:09Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "smooth humming butterfly", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T21:13:44Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - smooth humming butterfly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
eusuf01/blockassist-bc-smooth_humming_butterfly_1756673658
eusuf01
2025-08-31T20:55:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "smooth humming butterfly", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T20:54:46Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - smooth humming butterfly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
GroomerG/blockassist-bc-vicious_pawing_badger_1756672205
GroomerG
2025-08-31T20:53:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "vicious pawing badger", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T20:53:41Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - vicious pawing badger --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
akirafudo/blockassist-bc-keen_fast_giraffe_1756673395
akirafudo
2025-08-31T20:50:20Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T20:50:16Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
abarelka/ST_UST_3.1_8B_Base16
abarelka
2025-08-31T20:49:34Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-31T20:43:11Z
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---

# Uploaded finetuned model

- **Developed by:** abarelka
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
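As with the other Unsloth uploads in this dump, the card gives no usage snippet; a hedged `transformers` quick-start, assuming the uploaded weights load as a standard causal LM (`accelerate` is needed for `device_map="auto"`):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="abarelka/ST_UST_3.1_8B_Base16",
    torch_dtype="auto",
    device_map="auto",
)
print(generator("The quick brown fox", max_new_tokens=32)[0]["generated_text"])
```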
eusuf01/blockassist-bc-smooth_humming_butterfly_1756672605
eusuf01
2025-08-31T20:37:23Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "smooth humming butterfly", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T20:37:18Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - smooth humming butterfly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
malikka/blockassist-bc-dense_toothy_baboon_1756671543
malikka
2025-08-31T20:19:46Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "dense toothy baboon", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T20:19:35Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - dense toothy baboon --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
akirafudo/blockassist-bc-keen_fast_giraffe_1756671472
akirafudo
2025-08-31T20:18:53Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T20:18:15Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
yidingp/new_mislead_general
yidingp
2025-08-31T19:54:05Z
1
0
peft
[ "peft", "pytorch", "safetensors", "llama", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2025-08-28T04:21:20Z
---
base_model: meta-llama/Llama-2-7b-hf
library_name: peft
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

### Framework versions

- PEFT 0.7.1
malikka/blockassist-bc-dense_toothy_baboon_1756669990
malikka
2025-08-31T19:53:57Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "dense toothy baboon", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T19:53:40Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - dense toothy baboon --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
eusuf01/blockassist-bc-smooth_humming_butterfly_1756669812
eusuf01
2025-08-31T19:50:55Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "smooth humming butterfly", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T19:50:50Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - smooth humming butterfly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
cpatonn/NVIDIA-Nemotron-Nano-9B-v2-AWQ-8bit
cpatonn
2025-08-31T19:45:12Z
0
0
transformers
[ "transformers", "safetensors", "nvidia", "pytorch", "text-generation", "conversational", "en", "es", "fr", "de", "it", "ja", "dataset:nvidia/Nemotron-Post-Training-Dataset-v1", "dataset:nvidia/Nemotron-Post-Training-Dataset-v2", "dataset:nvidia/Nemotron-Pretraining-Dataset-sample", "dataset:nvidia/Nemotron-CC-v2", "dataset:nvidia/Nemotron-CC-Math-v1", "dataset:nvidia/Nemotron-Pretraining-SFT-v1", "arxiv:2504.03624", "arxiv:2508.14444", "arxiv:2412.02595", "base_model:nvidia/NVIDIA-Nemotron-Nano-9B-v2", "base_model:quantized:nvidia/NVIDIA-Nemotron-Nano-9B-v2", "license:other", "endpoints_compatible", "region:us" ]
text-generation
2025-08-31T15:23:20Z
--- license: other license_name: nvidia-open-model-license license_link: >- https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/ pipeline_tag: text-generation datasets: - nvidia/Nemotron-Post-Training-Dataset-v1 - nvidia/Nemotron-Post-Training-Dataset-v2 - nvidia/Nemotron-Pretraining-Dataset-sample - nvidia/Nemotron-CC-v2 - nvidia/Nemotron-CC-Math-v1 - nvidia/Nemotron-Pretraining-SFT-v1 language: - en - es - fr - de - it - ja library_name: transformers tags: - nvidia - pytorch track_downloads: true base_model_relation: quantized base_model: - nvidia/NVIDIA-Nemotron-Nano-9B-v2 --- # NVIDIA-Nemotron-Nano-9B-v2 ![](./accuracy_chart.png) **Model Developer:** NVIDIA Corporation **Model Dates:** June 2025 \- August 2025 **Data Freshness:** September 2024 The pretraining data has a cutoff date of September 2024. ## Model Overview NVIDIA-Nemotron-Nano-9B-v2 is a large language model (LLM) trained from scratch by NVIDIA, and designed as a unified model for both reasoning and non-reasoning tasks. It responds to user queries and tasks by first generating a reasoning trace and then concluding with a final response. The model's reasoning capabilities can be controlled via a system prompt. If the user prefers the model to provide its final answer without intermediate reasoning traces, it can be configured to do so, albeit with a slight decrease in accuracy for harder prompts that require reasoning. Conversely, allowing the model to generate reasoning traces first generally results in higher-quality final solutions to queries and tasks. The model uses a hybrid architecture consisting primarily of Mamba-2 and MLP layers combined with just four Attention layers. For the architecture, please refer to the [Nemotron-H tech report](https://arxiv.org/abs/2504.03624). The model was trained using [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) and [NeMo-RL](https://github.com/NVIDIA-NeMo/RL). The supported languages include: English, German, Spanish, French, Italian, and Japanese. Improved using Qwen. This model is ready for commercial use. ## License/Terms of Use GOVERNING TERMS: This trial service is governed by the [NVIDIA API Trial Terms of Service](https://assets.ngc.nvidia.com/products/api-catalog/legal/NVIDIA%20API%20Trial%20Terms%20of%20Service.pdf). Use of this model is governed by the [NVIDIA Open Model License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/). ## Evaluation Results ### Benchmark Results (Reasoning On) We evaluated our model in **Reasoning-On** mode across all benchmarks, except RULER, which is evaluated in **Reasoning-Off** mode. | Benchmark | Qwen3-8B | NVIDIA-Nemotron-Nano-9B-v2 | | :---- | ----: | ----: | | AIME25 | 69.3% | 72.1% | | MATH500 | 96.3% | 97.8% | | GPQA | 59.6% | 64.0% | | LCB | 59.5% | 71.1% | | BFCL v3 | 66.3% | 66.9% | | IFEval (Instruction Strict) | 89.4% | 90.3% | | HLE | 4.4% | 6.5% | | RULER (128K) | 74.1% | 78.9% | All evaluations were done using [NeMo-Skills](https://github.com/NVIDIA/NeMo-Skills). We published a [tutorial](https://nvidia.github.io/NeMo-Skills/tutorials/2025/08/22/reproducing-nvidia-nemotron-nano-9b-v2-evals/) with all details necessary to reproduce our evaluation results. ## Reasoning Budget Control This model supports runtime “thinking” budget control. During inference, the user can specify how many tokens the model is allowed to "think". 
![](./acc-vs-budget.png) ## Model Architecture - Architecture Type: Mamba2-Transformer Hybrid - Network Architecture: Nemotron-Hybrid ### Deployment Geography: Global ### Use Case NVIDIA-Nemotron-Nano-9B-v2 is a general purpose reasoning and chat model intended to be used in English and coding languages. Other non-English languages (German, French, Italian, Spanish and Japanese) are also supported. It is intended for developers designing AI Agent systems, chatbots, RAG systems, and other AI-powered applications, and is also suitable for typical instruction-following tasks. ### Release Date: 08/18/2025 - Huggingface 08/18/2025 via https://huggingface.co/nvidia/NVIDIA-Nemotron-Nano-9B-v2 - API Catalog 08/18/2025 via https://build.nvidia.com/nvidia/nvidia-nemotron-nano-9b-v2 ## References - [NVIDIA Nemotron Nano 2: An Accurate and Efficient Hybrid Mamba-Transformer Reasoning Model](https://arxiv.org/abs/2508.14444) ## Input - Input Type(s): Text - Input Format(s): String - Input Parameters: One-Dimensional (1D): Sequences - Other Properties Related to Input: Context length up to 128K. Supported languages include German, Spanish, French, Italian, Korean, Portuguese, Russian, Japanese, Chinese and English. ## Output - Output Type(s): Text - Output Format: String - Output Parameters: One-Dimensional (1D): Sequences up to 128K Our models are designed and optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions. ## Software Integration - Runtime Engine(s): NeMo 25.07.nemotron-nano-v2 - Supported Hardware Microarchitecture Compatibility: NVIDIA A10G, NVIDIA H100-80GB, NVIDIA A100 - Operating System(s): Linux ### **Use it with Transformers** The snippet below shows how to use this model with Hugging Face Transformers (tested on version 4.48.3). ``` import torch from transformers import AutoTokenizer, AutoModelForCausalLM # Load tokenizer and model tokenizer = AutoTokenizer.from_pretrained("nvidia/NVIDIA-Nemotron-Nano-9B-v2") model = AutoModelForCausalLM.from_pretrained( "nvidia/NVIDIA-Nemotron-Nano-9B-v2", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto" ) ``` Case 1: `/think` or no reasoning signal is provided in the system prompt; reasoning will be set to `True` ``` messages = [ {"role": "system", "content": "/think"}, {"role": "user", "content": "Write a haiku about GPUs"}, ] ``` Case 2: `/no_think` is provided; reasoning will be set to `False` ``` messages = [ {"role": "system", "content": "/no_think"}, {"role": "user", "content": "Write a haiku about GPUs"}, ] ``` Note: `/think` or `/no_think` keywords can also be provided in “user” messages for turn-level reasoning control. The rest of the inference snippet remains the same: ``` tokenized_chat = tokenizer.apply_chat_template( messages, tokenize=True, add_generation_prompt=True, return_tensors="pt" ).to(model.device) outputs = model.generate( tokenized_chat, max_new_tokens=32, eos_token_id=tokenizer.eos_token_id ) print(tokenizer.decode(outputs[0])) ``` We recommend setting `temperature` to `0.6` and `top_p` to `0.95` for reasoning True, using greedy search for reasoning False, and increasing `max_new_tokens` to `1024` or higher for reasoning True. ### **Use it with TRT-LLM** The snippet below shows how to use this model with TRT-LLM.
We tested this on the following [commit](https://github.com/NVIDIA/TensorRT-LLM/tree/46c5a564446673cdd0f56bcda938d53025b6d04e) and followed these [instructions](https://github.com/NVIDIA/TensorRT-LLM/blob/46c5a564446673cdd0f56bcda938d53025b6d04e/docs/source/installation/build-from-source-linux.md#option-2-build-tensorrt-llm-step-by-step) to build and install TRT-LLM in a docker container. ``` from tensorrt_llm import SamplingParams from tensorrt_llm._torch import LLM from tensorrt_llm._torch.pyexecutor.config import PyTorchConfig from tensorrt_llm.llmapi import KvCacheConfig from transformers import AutoTokenizer pytorch_config = PyTorchConfig( disable_overlap_scheduler=True, enable_trtllm_decoder=True ) kv_cache_config = KvCacheConfig( enable_block_reuse=False, ) ``` ``` model_id = "nvidia/NVIDIA-Nemotron-Nano-9B-v2" tokenizer = AutoTokenizer.from_pretrained(model_id) llm = LLM( model=model_id, max_seq_len=32768, max_batch_size=4, pytorch_backend_config=pytorch_config, kv_cache_config=kv_cache_config, tensor_parallel_size=8, ) messages = [ {"role": "system", "content": "/think"}, {"role": "user", "content": "Write a haiku about GPUs"}, ] prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) sampling_params = SamplingParams( max_tokens=512, temperature=0.6, top_p=0.95, add_special_tokens=False, ) outputs = llm.generate([prompt], sampling_params) print(outputs[0].outputs[0].text) ``` ### **Use it with vLLM** The snippet below shows how to use this model with vLLM. Use the latest version of vLLM and follow these instructions to build and install vLLM. ```shell pip install -U "vllm>=0.10.1" ``` Now you can run the server with: ```shell vllm serve nvidia/NVIDIA-Nemotron-Nano-9B-v2 \ --trust-remote-code \ --max-num-seqs 64 \ --mamba_ssm_cache_dtype float32 ``` Note: - Remember to add `--mamba_ssm_cache_dtype float32` to preserve accuracy. Without this option, the model’s accuracy may degrade. - If you encounter a CUDA OOM issue, try `--max-num-seqs 64` and consider lowering the value further if the error persists. Alternatively, you can use Docker to launch a vLLM server. ``` export TP_SIZE=1 # Adjust this value based on the number of GPUs you want to use docker run --runtime nvidia --gpus all \ -v ~/.cache/huggingface:/root/.cache/huggingface \ --env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \ -p 8000:8000 \ --ipc=host \ vllm/vllm-openai:v0.10.1 \ --model nvidia/NVIDIA-Nemotron-Nano-9B-v2 \ --tensor-parallel-size ${TP_SIZE} \ --max-num-seqs 64 \ --max-model-len 131072 \ --trust-remote-code \ --mamba_ssm_cache_dtype float32 ``` #### Using Budget Control with a vLLM Server The thinking budget allows developers to keep accuracy high and meet response-time targets, which is especially crucial for customer support, autonomous agent steps, and edge devices where every millisecond counts. With budget control, you can set a limit for internal reasoning: * `max_thinking_tokens`: a threshold at which the model attempts to end the reasoning trace at the next newline it encounters. If no newline is encountered within 500 tokens, the reasoning trace is cut off abruptly at `max_thinking_tokens + 500`.
Start a vLLM server: ```shell vllm serve nvidia/NVIDIA-Nemotron-Nano-9B-v2 \ --trust-remote-code \ --mamba_ssm_cache_dtype float32 ``` Client for supporting budget control: ```py from typing import Any, Dict, List import openai from transformers import AutoTokenizer class ThinkingBudgetClient: def __init__(self, base_url: str, api_key: str, tokenizer_name_or_path: str): self.base_url = base_url self.api_key = api_key self.tokenizer = AutoTokenizer.from_pretrained(tokenizer_name_or_path) self.client = openai.OpenAI(base_url=self.base_url, api_key=self.api_key) def chat_completion( self, model: str, messages: List[Dict[str, Any]], max_thinking_budget: int = 512, max_tokens: int = 1024, **kwargs, ) -> Dict[str, Any]: assert ( max_tokens > max_thinking_budget ), f"thinking budget must be smaller than maximum new tokens. Given {max_tokens=} and {max_thinking_budget=}" # 1. first call chat completion to get reasoning content response = self.client.chat.completions.create( model=model, messages=messages, max_tokens=max_thinking_budget, **kwargs ) content = response.choices[0].message.content reasoning_content = content if "</think>" not in reasoning_content: # reasoning was truncated by the budget; close it with a period and the end-think tag reasoning_content = f"{reasoning_content}.\n</think>\n\n" reasoning_tokens_len = len( self.tokenizer.encode(reasoning_content, add_special_tokens=False) ) remaining_tokens = max_tokens - reasoning_tokens_len assert ( remaining_tokens > 0 ), f"remaining tokens must be positive. Given {remaining_tokens=}. Increase the max_tokens or lower the max_thinking_budget." # 2. append reasoning content to messages and call completion messages.append({"role": "assistant", "content": reasoning_content}) prompt = self.tokenizer.apply_chat_template( messages, tokenize=False, continue_final_message=True, ) response = self.client.completions.create( model=model, prompt=prompt, max_tokens=remaining_tokens, **kwargs ) response_data = { "reasoning_content": reasoning_content.strip().removesuffix("</think>").strip(), "content": response.choices[0].text, "finish_reason": response.choices[0].finish_reason, } return response_data ``` Calling the server with a budget (restricted to 32 tokens here as an example): ```py tokenizer_name_or_path = "nvidia/NVIDIA-Nemotron-Nano-9B-v2" client = ThinkingBudgetClient( base_url="http://localhost:8000/v1", # Nano 9B v2 deployed in thinking mode api_key="EMPTY", tokenizer_name_or_path=tokenizer_name_or_path, ) result = client.chat_completion( model="nvidia/NVIDIA-Nemotron-Nano-9B-v2", messages=[ {"role": "system", "content": "You are a helpful assistant. /think"}, {"role": "user", "content": "What is 2+2?"}, ], max_thinking_budget=32, max_tokens=512, temperature=0.6, top_p=0.95, ) print(result) ``` You should see output similar to the following: ``` {'reasoning_content': "Okay, the user asked, What is 2+2? Let me think. Well, 2 plus 2 equals 4. That's a basic.", 'content': '2 + 2 equals **4**.\n', 'finish_reason': 'stop'} ``` #### Using Tool-Calling with a vLLM Server Start a vLLM server with native tool-calling: ```shell git clone https://huggingface.co/nvidia/NVIDIA-Nemotron-Nano-9B-v2 vllm serve nvidia/NVIDIA-Nemotron-Nano-9B-v2 \ --trust-remote-code \ --mamba_ssm_cache_dtype float32 \ --enable-auto-tool-choice \ --tool-parser-plugin "NVIDIA-Nemotron-Nano-9B-v2/nemotron_toolcall_parser_no_streaming.py" \ --tool-call-parser "nemotron_json" ``` After launching the vLLM server, you can call it with tool-call support using a Python script like the one below: ```py from openai import OpenAI client = OpenAI( base_url="http://0.0.0.0:8000/v1", api_key="dummy", ) completion = client.chat.completions.create( model="nvidia/NVIDIA-Nemotron-Nano-9B-v2", messages=[ {"role": "system", "content": ""}, {"role": "user", "content": "My bill is $100. What will be the amount for 18% tip?"} ], tools=[ { "type": "function", "function": { "name": "calculate_tip", "parameters": { "type": "object", "properties": { "bill_total": { "type": "integer", "description": "The total amount of the bill" }, "tip_percentage": { "type": "integer", "description": "The percentage of tip to be applied" } }, "required": ["bill_total", "tip_percentage"] } } }, { "type": "function", "function": { "name": "convert_currency", "parameters": { "type": "object", "properties": { "amount": { "type": "integer", "description": "The amount to be converted" }, "from_currency": { "type": "string", "description": "The currency code to convert from" }, "to_currency": { "type": "string", "description": "The currency code to convert to" } }, "required": ["from_currency", "amount", "to_currency"] } } } ], temperature=0.6, top_p=0.95, max_tokens=32768, stream=False ) print(completion.choices[0].message.content) print(completion.choices[0].message.tool_calls) ``` You should see output similar to the following: ``` <think> Okay, let's see. The user has a bill of $100 and wants to know the amount for an 18% tip. Hmm, I need to calculate the tip based on the bill total and the percentage. The tools provided include calculate_tip, which takes bill_total and tip_percentage as parameters. So the bill_total here is 100, and the tip_percentage is 18. I should call the calculate_tip function with these values. Wait, do I need to check if the parameters are integers? The bill is $100, which is an integer, and 18% is also an integer. So that fits the function's requirements. I don't need to convert any currency here because the user is asking about a tip in the same currency. So the correct tool to use is calculate_tip with those parameters. </think> [ChatCompletionMessageToolCall(id='chatcmpl-tool-e341c6954d2c48c2a0e9071c7bdefd8b', function=Function(arguments='{"bill_total": 100, "tip_percentage": 18}', name='calculate_tip'), type='function')] ``` ## Model Version - v1.0 ## Prompt Format We follow the jinja chat template provided below. This template conditionally adds `<think>\n` to the start of the Assistant response if `/think` is found in either the system prompt or any user message. If no reasoning signal is added, the model defaults to reasoning "on" mode. The chat template adds `<think></think>` to the start of the Assistant response if `/no_think` is found in the system prompt, thus enforcing reasoning on/off behavior.
``` {%- set ns = namespace(enable_thinking = true) %} {%- for message in messages -%} {%- set content = message['content'] -%} {%- if message['role'] == 'user' or message['role'] == 'system' -%} {%- if '/think' in content -%} {%- set ns.enable_thinking = true -%} {%- elif '/no_think' in content -%} {%- set ns.enable_thinking = false -%} {%- endif -%} {%- endif -%} {%- endfor -%} {%- if messages[0]['role'] != 'system' -%} {%- set ns.non_tool_system_content = '' -%} {{- '<SPECIAL_10>System\n' -}} {%- else -%} {%- set ns.non_tool_system_content = messages[0]['content'] .replace('/think', '') .replace('/no_think', '') .strip() -%} {{- '<SPECIAL_10>System\n' + ns.non_tool_system_content }} {%- endif -%} {%- if tools -%} {%- if ns.non_tool_system_content is defined and ns.non_tool_system_content != '' -%} {{- '\n\n' -}} {%- endif -%} {{- 'You can use the following tools to assist the user if required:' -}} {{- '\n<AVAILABLE_TOOLS>[' -}} {%- for tool in tools -%} {{- (tool.function if tool.function is defined else tool) | tojson -}} {{- ', ' if not loop.last else '' -}} {%- endfor -%} {{- ']</AVAILABLE_TOOLS>\n\n' -}} {{- 'If you decide to call any tool(s), use the following format:\n' -}} {{- '<TOOLCALL>[{{"name": "tool_name1", "arguments": "tool_args1"}}, ' -}} {{- '{{"name": "tool_name2", "arguments": "tool_args2"}}]</TOOLCALL>\n\n' -}} {{- 'The user will execute tool-calls and return responses from tool(s) in this format:\n' -}} {{- '<TOOL_RESPONSE>[{{"tool_response1"}}, {{"tool_response2"}}]</TOOL_RESPONSE>\n\n' -}} {{- 'Based on the tool responses, you can call additional tools if needed, correct tool calls if any errors are found, or just respond to the user.' -}} {%- endif -%} {{- '\n' -}} {%- set messages = messages[1:] if messages[0]['role'] == 'system' else messages -%} {%- if messages[-1]['role'] == 'assistant' -%} {%- set ns.last_turn_assistant_content = messages[-1]['content'].strip() -%} {%- set messages = messages[:-1] -%} {%- endif -%} {%- for message in messages -%} {%- set content = message['content'] -%} {%- if message['role'] == 'user' -%} {{- '<SPECIAL_11>User\n' + content.replace('/think', '').replace('/no_think', '').strip() + '\n' }} {%- elif message['role'] == 'tool' -%} {%- if loop.first or (messages[loop.index0 - 1].role != 'tool') -%} {{- '<SPECIAL_11>User\n' + '<TOOL_RESPONSE>[' }} {%- endif -%} {{- message['content'] -}} {{- ', ' if not loop.last and (messages[loop.index0 + 1].role == 'tool') else '' -}} {%- if loop.last or (messages[loop.index0 + 1].role != 'tool') -%} {{- ']</TOOL_RESPONSE>\n' -}} {%- endif -%} {%- elif message['role'] == 'assistant' -%} {%- if '</think>' in content -%} {%- set content = content.split('</think>')[1].strip() %} {%- endif -%} {{- '<SPECIAL_11>Assistant\n' + content.strip() }} {%- if message.tool_calls -%} {%- if content.strip() != '' -%} {{- '\n\n' -}} {%- endif -%} {{- '<TOOLCALL>[' -}} {%- for call in message.tool_calls -%} {%- set fn = call.function if call.function is defined else call -%} {{- '{"name": "' + fn.name + '", "arguments": ' -}} {%- if fn.arguments is string -%} {{- fn.arguments -}} {%- else -%} {{- fn.arguments | tojson -}} {%- endif -%} {{- '}' + (', ' if not loop.last else '') -}} {%- endfor -%} {{- ']</TOOLCALL>' -}} {%- endif -%} {{- '\n<SPECIAL_12>\n' -}} {%- endif -%} {%- endfor -%} {%- if add_generation_prompt -%} {{- '<SPECIAL_11>Assistant\n' -}} {%- if ns.enable_thinking is defined and ns.enable_thinking is false -%} {{- '<think></think>' -}} {%- else -%} {{- '<think>\n' -}} {%- endif -%} {%- if 
ns.last_turn_assistant_content is defined and ns.last_turn_assistant_content != '' -%} {{- ns.last_turn_assistant_content -}} {%- endif -%} {%- else -%} {%- if ns.last_turn_assistant_content is defined and ns.last_turn_assistant_content != '' -%} {{- '<SPECIAL_11>Assistant\n' -}} {%- if ns.enable_thinking is defined and ns.enable_thinking is false -%} {{- '<think></think>' -}} {%- else -%} {{- '<think>\n' -}} {%- endif -%} {{- ns.last_turn_assistant_content -}} {%- if continue_final_message is defined -%} {%- if continue_final_message is false -%} {{- '\n<SPECIAL_12>\n' -}} {%- endif -%} {%- else -%} {{- '\n<SPECIAL_12>\n' -}} {%- endif -%} {%- endif -%} {%- endif -%} ``` ## Training, Testing, and Evaluation Datasets ### Training datasets * Data Modality: Text * Text Training Data Size: More than 10 Trillion Tokens * Train/Test/Valid Split: We used 100% of the corpus for pre-training and relied on external benchmarks for testing. * Data Collection Method by dataset: Hybrid: Automated, Human, Synthetic * Labeling Method by dataset: Hybrid: Automated, Human, Synthetic **Properties:** The post-training corpus for NVIDIA-Nemotron-Nano-9B-v2 consists of English and multilingual text (German, Spanish, French, Italian, Korean, Portuguese, Russian, Japanese, Chinese and English). Our sources cover a variety of document types such as webpages, dialogue, articles, and other written materials. The corpus spans domains including code, legal, math, science, finance, and more. We also include a small portion of question-answering and alignment-style data to improve model accuracy. For several of the domains listed above we used synthetic data, specifically reasoning traces, from DeepSeek R1/R1-0528, Qwen3-235B-A22B, Nemotron 4 340B, Qwen2.5-32B-Instruct-AWQ, Qwen2.5-14B-Instruct, Qwen 2.5 72B. The pre-training corpus for NVIDIA-Nemotron-Nano-9B-v2 consists of high-quality curated and synthetically-generated data. It covers English as well as 15 additional natural languages and 43 programming languages. Our sources cover a variety of document types such as webpages, dialogue, articles, and other written materials. The corpus spans domains including legal, math, science, finance, and more. We also include a small portion of question-answering and alignment-style data to improve model accuracy. The model was pre-trained on approximately twenty trillion tokens. Alongside the model, we release our [final pretraining data](https://huggingface.co/collections/nvidia/nemotron-pre-training-dataset-689d9de36f84279d83786b35), as outlined in this section. For ease of analysis, there is a sample set that is ungated. For all remaining code, math and multilingual data, gating and approval are required, and the dataset is permissively licensed for model training purposes. More details on the datasets and synthetic data generation methods can be found in the technical report [NVIDIA Nemotron Nano 2: An Accurate and Efficient Hybrid Mamba-Transformer Reasoning Model](https://research.nvidia.com/labs/adlr/files/NVIDIA-Nemotron-Nano-2-Technical-Report.pdf).
## Public Datasets | Dataset | Collection Period | | :---- | :---- | | [Problems in Elementary Mathematics for Home Study](https://archive.org/details/AntonovVygodskyNikitinSankinProblemsInElementaryMathematicsForHomeStudyMir1982) | 4/23/2025 | | [GSM8K](https://github.com/openai/grade-school-math) | 4/23/2025 | | [PRM800K](https://github.com/openai/prm800k) | 4/23/2025 | | [CC-NEWS](https://commoncrawl.org/blog/news-dataset-available) | 4/23/2025 | | [Common Crawl](https://commoncrawl.org/) | 4/23/2025 | | [Wikimedia](https://dumps.wikimedia.org/) | 4/23/2025 | | [Bespoke-Stratos-17k](https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-17k) | 4/23/2025 | | [tigerbot-kaggle-leetcodesolutions-en-2k](https://huggingface.co/datasets/TigerResearch/tigerbot-kaggle-leetcodesolutions-en-2k) | 4/23/2025 | | [glaive-function-calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2) | 4/23/2025 | | [APIGen Function-Calling](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k) | 4/23/2025 | | [LMSYS-Chat-1M](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) | 4/23/2025 | | [Open Textbook Library \- CC BY-SA & GNU subset](https://open.umn.edu/opentextbooks/textbooks/) and [OpenStax \- CC BY-SA subset](https://openstax.org/) | 4/23/2025 | | [Advanced Reasoning Benchmark](https://github.com/TheDuckAI/arb), [tigerbot-kaggle-leetcodesolutions-en-2k](https://huggingface.co/datasets/TigerResearch/tigerbot-kaggle-leetcodesolutions-en-2k), [PRM800K](https://github.com/openai/prm800k), and [SciBench](https://github.com/mandyyyyii/scibench) | 4/23/2025 | | [FineWeb-2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2) | 4/23/2025 | | [Court Listener](https://www.courtlistener.com/help/api/bulk-data/) | Legacy Download | | [peS2o](https://huggingface.co/datasets/allenai/peS2o) | Legacy Download | | [OpenWebMath](https://huggingface.co/datasets/open-web-math/open-web-math) | Legacy Download | | [BioRxiv](https://www.biorxiv.org/tdm) | Legacy Download | | [PMC Open Access Subset](https://pmc.ncbi.nlm.nih.gov/tools/openftlist/) | Legacy Download | | [OpenWebText2](https://openwebtext2.readthedocs.io/en/latest/) | Legacy Download | | [Stack Exchange Data Dump](https://archive.org/details/stackexchange) | Legacy Download | | [PubMed Abstracts](https://github.com/thoppe/The-Pile-PubMed) | Legacy Download | | [NIH ExPorter](https://exporter.nih.gov/ExPORTER_Catalog.aspx) | Legacy Download | | [arXiv](https://info.arxiv.org/help/bulk_data/index.html) | Legacy Download | | [BigScience Workshop Datasets](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#datasets) | Legacy Download | | [Reddit Dataset](https://files.pushshift.io/reddit/) | Legacy Download | | [SEC's Electronic Data Gathering, Analysis, and Retrieval (EDGAR)](https://www.sec.gov/search-filings) | Legacy Download | | [Public Software Heritage S3](https://docs.softwareheritage.org/devel/swh-export/graph/dataset.html#summary-of-dataset-versions) | Legacy Download | | [The Stack](https://huggingface.co/datasets/bigcode/the-stack) | Legacy Download | | [mC4](https://huggingface.co/datasets/legacy-datasets/mc4) | Legacy Download | | [Advanced Mathematical Problem Solving](https://github.com/hendrycks/math?tab=readme-ov-file) | Legacy Download | | [MathPile](https://github.com/GAIR-NLP/MathPile/) | Legacy Download | | [NuminaMath CoT](https://huggingface.co/datasets/AI-MO/NuminaMath-CoT) | Legacy Download | | [PMC Article](https://pmc.ncbi.nlm.nih.gov/tools/textmining/) | 
Legacy Download | | [FLAN](https://github.com/google-research/FLAN) | Legacy Download | | [Advanced Reasoning Benchmark](https://github.com/TheDuckAI/arb) | Legacy Download | | [SciBench](https://github.com/mandyyyyii/scibench) | Legacy Download | | [WikiTableQuestions](https://huggingface.co/datasets/wikitablequestions) | Legacy Download | | [FinQA](https://finqasite.github.io/) | Legacy Download | | [Riddles](https://github.com/crawsome/riddles) | Legacy Download | | [Problems in Elementary Mathematics for Home Study](https://archive.org/details/AntonovVygodskyNikitinSankinProblemsInElementaryMathematicsForHomeStudyMir1982) | Legacy Download | | [MedMCQA](https://huggingface.co/datasets/openlifescienceai/medmcqa) | Legacy Download | | [Cosmos QA](https://huggingface.co/datasets/allenai/cosmos_qa) | Legacy Download | | [MCTest](https://huggingface.co/datasets/sagnikrayc/mctest) | Legacy Download | | [AI2's Reasoning Challenge](https://huggingface.co/datasets/ai2_arc) | Legacy Download | | [OpenBookQA](https://github.com/allenai/OpenBookQA) | Legacy Download | | [MMLU Auxiliary Train](https://huggingface.co/datasets/cais/mmlu/viewer/all/auxiliary_train) | Legacy Download | | [social-chemestry-101](https://huggingface.co/datasets/tasksource/social-chemestry-101) | Legacy Download | | [Moral Stories](https://huggingface.co/datasets/demelin/moral_stories) | Legacy Download | | [The Common Pile v0.1](https://huggingface.co/common-pile) | Legacy Download | | [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath) | Legacy Download | | [MegaMath](https://huggingface.co/datasets/LLM360/MegaMath) | Legacy Download | | [FastChat](https://github.com/lm-sys/FastChat) | 6/30/2025 | ## Private Non-publicly Accessible Datasets of Third Parties | Dataset | | :---- | | Global Regulation | | Workbench | ## Online Dataset Sources The English Common Crawl data was downloaded from the Common Crawl Foundation (see their [FAQ](https://commoncrawl.org/faq) for details on their crawling) and includes the snapshots CC-MAIN-2013-20 through CC-MAIN-2025-13. The data was subsequently deduplicated and filtered in various ways described in the [Nemotron-CC paper](https://arxiv.org/abs/2412.02595). Additionally, we extracted data for fifteen languages from the following three Common Crawl snapshots: CC-MAIN-2024-51, CC-MAIN-2025-08, CC-MAIN-2025-18. The fifteen languages included were Arabic, Chinese, Danish, Dutch, French, German, Italian, Japanese, Korean, Polish, Portuguese, Russian, Spanish, Swedish, and Thai. As we did not have reliable multilingual model-based quality classifiers available, we applied heuristic filtering instead, similar to what we did for lower-quality English data in the Nemotron-CC pipeline, selectively removing filters that did not work well for some languages. Deduplication was done in the same way as for Nemotron-CC. The GitHub Crawl was collected using the GitHub REST API and the Amazon S3 API. Each crawl was operated in accordance with the rate limits set by its respective source, either GitHub or S3. We collected raw source code and subsequently removed any code whose license is not in our permissive-license set (for additional details, refer to the technical report).
| Dataset | Modality | Dataset Size (Tokens) | Collection Period | | :---- | :---- | :---- | :---- | | English Common Crawl | Text | 3.360T | 4/8/2025 | | Multilingual Common Crawl | Text | 812.7B | 5/1/2025 | | GitHub Crawl | Text | 747.4B | 4/29/2025 | ## NVIDIA-Sourced Synthetic Datasets | Dataset | Modality | Dataset Size (Tokens) | Seed Dataset | Model(s) used for generation | | :---- | :---- | :---- | :---- | :---- | | Synthetic Art of Problem Solving from DeepSeek-R1 | Text | 25.5B | [Art of Problem Solving](https://artofproblemsolving.com/company); [American Mathematics Competitions 8](https://artofproblemsolving.com/wiki/index.php/AMC_8_Problems_and_Solutions); [American Mathematics Competitions 10](https://artofproblemsolving.com/wiki/index.php/AMC_10_Problems_and_Solutions); | [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1) | | Synthetic Moral Stories and Social Chemistry from Mixtral-8x22B-v0.1 | Text | 327M | [social-chemestry-101](https://huggingface.co/datasets/tasksource/social-chemestry-101); [Moral Stories](https://huggingface.co/datasets/demelin/moral_stories) | [Mixtral-8x22B-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1) | | Synthetic Social Sciences seeded with OpenStax from DeepSeek-V3, Mixtral-8x22B-v0.1, and Qwen2.5-72B | Text | 83.6M | [OpenStax \- CC BY-SA subset](https://openstax.org/) | [DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3); [Mixtral-8x22B-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1); [Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) | | Synthetic Health Sciences seeded with OpenStax from DeepSeek-V3, Mixtral-8x22B-v0.1, and Qwen2.5-72B | Text | 9.7M | [OpenStax \- CC BY-SA subset](https://openstax.org/) | [DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3); [Mixtral-8x22B-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1); [Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) | | Synthetic STEM seeded with OpenStax, Open Textbook Library, and GSM8K from DeepSeek-R1, DeepSeek-V3, DeepSeek-V3-0324, and Qwen2.5-72B | Text | 175M | [OpenStax \- CC BY-SA subset](https://openstax.org/); [GSM8K](https://github.com/openai/grade-school-math); [Open Textbook Library \- CC BY-SA & GNU subset](https://open.umn.edu/opentextbooks/textbooks/) | [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1), [DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3); [DeepSeek-V3-0324](https://huggingface.co/deepseek-ai/DeepSeek-V3-0324); [Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) | | [Nemotron-PrismMath](https://huggingface.co/datasets/nvidia/Nemotron-PrismMath) | Text | 4.6B | [Big-Math-RL-Verified](https://huggingface.co/datasets/SynthLabsAI/Big-Math-RL-Verified); [OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k) | [Qwen2.5-0.5B-instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct), [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct); [DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) | | Synthetic Question Answering Data from Papers and Permissible Books from Qwen2.5-72B-Instruct | Text | 350M | [arXiv](https://info.arxiv.org/help/bulk_data/index.html); [National Institutes of Health ExPorter](https://www.nih.gov/); [BioRxiv](https://www.biorxiv.org/tdm); [PMC Article](https://pmc.ncbi.nlm.nih.gov/tools/textmining/); [USPTO Backgrounds](https://data.uspto.gov/apis/transition-guide/bdss#pats); [peS2o](https://huggingface.co/datasets/allenai/peS2o); Global Regulation; 
[CORE](https://core.ac.uk/documentation/dataset); [PG-19](https://github.com/google-deepmind/pg19); [DOAB CC BY & CC BY-SA subset](https://www.doabooks.org/en); [NDLTD](https://ndltd.org/thesis-resources/global-etd-search/) | [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) | | Synthetic FineMath-4+ Reprocessed from DeepSeek-V3 | Text | 9.2B | [Common Crawl](https://commoncrawl.org/latest-crawl) | [DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3) | | Synthetic FineMath-3+ Reprocessed from phi-4 | Text | 27.6B | [Common Crawl](https://commoncrawl.org/latest-crawl) | [phi-4](https://huggingface.co/microsoft/phi-4) | | Synthetic Union-3+ Reprocessed from phi-4 | Text | 93.1B | [Common Crawl](https://commoncrawl.org/latest-crawl) | [phi-4](https://huggingface.co/microsoft/phi-4) | | Refreshed [Nemotron-MIND](https://huggingface.co/datasets/nvidia/Nemotron-MIND) from phi-4 | Text | 73B | [Common Crawl](https://commoncrawl.org/latest-crawl) | [phi-4](https://huggingface.co/microsoft/phi-4) | | Synthetic Union-4+ Reprocessed from phi-4 | Text | 14.12B | [Common Crawl](https://commoncrawl.org/latest-crawl) | [phi-4](https://huggingface.co/microsoft/phi-4) | | Synthetic Union-3+ minus 4+ Reprocessed from phi-4 | Text | 78.95B | [Common Crawl](https://commoncrawl.org/latest-crawl) | [phi-4](https://huggingface.co/microsoft/phi-4) | | Synthetic Union-3 Refreshed from phi-4 | Text | 80.94B | [Common Crawl](https://commoncrawl.org/latest-crawl) | [phi-4](https://huggingface.co/microsoft/phi-4) | | Synthetic Union-4+ Refreshed from phi-4 | Text | 52.32B | [Common Crawl](https://commoncrawl.org/latest-crawl) | [phi-4](https://huggingface.co/microsoft/phi-4) | | Synthetic AGIEval seeded with AQUA-RAT, LogiQA, and AR-LSAT from DeepSeek-V3 and DeepSeek-V3-0324 | Text | 4.0B | [AQUA-RAT](https://huggingface.co/datasets/deepmind/aqua_rat); [LogiQA](https://huggingface.co/datasets/lucasmccabe/logiqa); [AR-LSAT](https://github.com/zhongwanjun/AR-LSAT) | [DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3); [DeepSeek-V3-0324](https://huggingface.co/deepseek-ai/DeepSeek-V3-0324) | | Synthetic AGIEval seeded with AQUA-RAT, LogiQA, and AR-LSAT from Qwen3-30B-A3B | Text | 4.2B | [AQUA-RAT](https://huggingface.co/datasets/deepmind/aqua_rat); [LogiQA](https://huggingface.co/datasets/lucasmccabe/logiqa); [AR-LSAT](https://github.com/zhongwanjun/AR-LSAT) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) | | Synthetic Art of Problem Solving from Qwen2.5-32B-Instruct, Qwen2.5-Math-72B, Qwen2.5-Math-7B, and Qwen2.5-72B-Instruct | Text | 83.1B | [Art of Problem Solving](https://artofproblemsolving.com/company); [American Mathematics Competitions 8](https://artofproblemsolving.com/wiki/index.php/AMC_8_Problems_and_Solutions); [American Mathematics Competitions 10](https://artofproblemsolving.com/wiki/index.php/AMC_10_Problems_and_Solutions); [GSM8K](https://github.com/openai/grade-school-math); [PRM800K](https://github.com/openai/prm800k) | [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct); [Qwen2.5-Math-72B](https://huggingface.co/Qwen/Qwen2.5-Math-72B); [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B); [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) | | Synthetic MMLU Auxiliary Train from DeepSeek-R1 | Text | 0.5B | [MMLU Auxiliary Train](https://huggingface.co/datasets/cais/mmlu/viewer/all/auxiliary_train) | [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1) | | Synthetic Long Context Continued 
Post-Training Data from Papers and Permissible Books from Qwen2.5-72B-Instruct | Text | 5.4B | [arXiv](https://info.arxiv.org/help/bulk_data/index.html); [National Institutes of Health ExPorter](https://www.nih.gov/); [BioRxiv](https://www.biorxiv.org/tdm); [PMC Article](https://pmc.ncbi.nlm.nih.gov/tools/textmining/); [USPTO Backgrounds](https://data.uspto.gov/apis/transition-guide/bdss#pats); [peS2o](https://huggingface.co/datasets/allenai/peS2o); Global Regulation; [CORE](https://core.ac.uk/documentation/dataset); [PG-19](https://github.com/google-deepmind/pg19); [DOAB CC BY & CC BY-SA subset](https://www.doabooks.org/en); [NDLTD](https://ndltd.org/thesis-resources/global-etd-search/) | [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) | | Synthetic Common Crawl from Qwen3-30B-A3B and Mistral-Nemo-12B-Instruct | Text | 1.949T | [Common Crawl](https://commoncrawl.org/) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B); [Mistral-NeMo-12B-Instruct](https://huggingface.co/nvidia/Mistral-NeMo-12B-Instruct) | | Synthetic Multilingual Data from Common Crawl from Qwen3-30B-A3B | Text | 997.3B | [Common Crawl](https://commoncrawl.org/) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) | | Synthetic Multilingual Data from Wikimedia from Qwen3-30B-A3B | Text | 55.1B | [Wikimedia](https://dumps.wikimedia.org/) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) | | Synthetic OpenMathReasoning from DeepSeek-R1-0528 | Text | 1.5M | [OpenMathReasoning](https://huggingface.co/datasets/nvidia/OpenMathReasoning) | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) | | Synthetic OpenCodeReasoning from DeepSeek-R1-0528 | Text | 1.1M | [OpenCodeReasoning](https://huggingface.co/datasets/nvidia/OpenCodeReasoning) | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) | | Synthetic Science Data from DeepSeek-R1-0528 | Text | 1.5M | \- | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) | | Synthetic Humanity's Last Exam from DeepSeek-R1-0528 | Text | 460K | [Humanity's Last Exam](https://huggingface.co/datasets/cais/hle) | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) | | Synthetic ToolBench from Qwen3-235B-A22B | Text | 400K | [ToolBench](https://github.com/OpenBMB/ToolBench) | [Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B) | | Synthetic Nemotron Content Safety Dataset V2, eval-safety, Gretel Synthetic Safety Alignment, and RedTeam\_2K from DeepSeek-R1-0528 | Text | 52K | [Nemotron Content Safety Dataset V2](https://huggingface.co/datasets/nvidia/Aegis-AI-Content-Safety-Dataset-2.0); [eval-safety](https://github.com/CrystalEye42/eval-safety/blob/main/malicious_tasks_dataset.yaml); [Gretel Synthetic Safety Alignment](https://huggingface.co/datasets/gretelai/gretel-safety-alignment-en-v1); [RedTeam\_2K](https://huggingface.co/datasets/JailbreakV-28K/JailBreakV-28k/viewer/RedTeam_2K) | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) | | Synthetic HelpSteer from Qwen3-235B-A22B | Text | 120K | [HelpSteer3](https://huggingface.co/datasets/nvidia/HelpSteer3); [HelpSteer2](https://huggingface.co/datasets/nvidia/HelpSteer2) | [Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B) | | Synthetic Alignment data from Mixtral-8x22B-Instruct-v0.1, Mixtral-8x7B-Instruct-v0.1, and Nemotron-4 Family | Text | 400K | [HelpSteer2](https://huggingface.co/datasets/nvidia/HelpSteer2); [C4](https://huggingface.co/datasets/allenai/c4); 
[LMSYS-Chat-1M](https://huggingface.co/datasets/lmsys/lmsys-chat-1m); [ShareGPT52K](https://huggingface.co/datasets/RyokoAI/ShareGPT52K); [tigerbot-kaggle-leetcodesolutions-en-2k](https://huggingface.co/datasets/TigerResearch/tigerbot-kaggle-leetcodesolutions-en-2k); [GSM8K](https://github.com/openai/grade-school-math); [PRM800K](https://github.com/openai/prm800k); lm\_identity (NVIDIA internal); [FinQA](https://finqasite.github.io/); [WikiTableQuestions](https://huggingface.co/datasets/wikitablequestions); [Riddles](https://github.com/crawsome/riddles); ChatQA nvolve-multiturn (NVIDIA internal); [glaive-function-calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2); [SciBench](https://github.com/mandyyyyii/scibench); [OpenBookQA](https://github.com/allenai/OpenBookQA); [Advanced Reasoning Benchmark](https://github.com/TheDuckAI/arb); [Public Software Heritage S3](https://docs.softwareheritage.org/devel/swh-export/graph/dataset.html#summary-of-dataset-versions); [Khan Academy Math Keywords](https://www.khanacademy.org/math) | Nemotron-4-15B-Base (NVIDIA internal); Nemotron-4-15B-Instruct (NVIDIA internal); [Nemotron-4-340B-Base](https://huggingface.co/nvidia/Nemotron-4-340B-Base); [Nemotron-4-340B-Instruct](https://huggingface.co/nvidia/Nemotron-4-340B-Instruct); [Nemotron-4-340B-Reward](https://huggingface.co/nvidia/Nemotron-4-340B-Reward); [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1); [Mixtral-8x22B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1) | | Synthetic LMSYS-Chat-1M from Qwen3-235B-A22B | Text | 1M | [LMSYS-Chat-1M](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) | [Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B) | | Synthetic Multilingual Reasoning data from DeepSeek-R1-0528, Qwen2.5-32B-Instruct-AWQ, and Qwen2.5-14B-Instruct | Text | 25M | [OpenMathReasoning](https://huggingface.co/datasets/nvidia/OpenMathReasoning); [OpenCodeReasoning](https://huggingface.co/datasets/nvidia/OpenCodeReasoning) | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528); [Qwen2.5-32B-Instruct-AWQ](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct-AWQ) (translation); [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) (translation); | | Synthetic Multilingual Reasoning data from Qwen3-235B-A22B and Gemma 3 Post-Trained models | Text | 5M | [WildChat](https://huggingface.co/datasets/allenai/WildChat-1M) | [Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B); [Gemma 3 PT 12B](https://huggingface.co/google/gemma-3-12b-it); [Gemma 3 PT 27B](https://huggingface.co/google/gemma-3-27b-it) | ### Evaluation Dataset: * Data Collection Method by dataset: Hybrid: Human, Synthetic * Labeling Method by dataset: Hybrid: Automated, Human, Synthetic ## Inference - Engines: HF, vLLM, TRT-LLM - Test Hardware: NVIDIA A10G 24GB, H100 80GB ## Ethical Considerations NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our [Trustworthy AI terms of service](https://www.nvidia.com/en-us/agreements/trustworthy-ai/terms/), developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
For more detailed information on ethical considerations for this model, please see the Model Card++ [Bias](./bias.md), [Explainability](./explainability.md), [Safety & Security](./safety.md), and [Privacy](./privacy.md) Subcards. Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/). ## Citation ``` @misc{nvidia2025nvidianemotronnano2, title={NVIDIA Nemotron Nano 2: An Accurate and Efficient Hybrid Mamba-Transformer Reasoning Model}, author={NVIDIA}, year={2025}, eprint={2508.14444}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2508.14444}, } ```
carmaxsh/analog_thermometer_335
carmaxsh
2025-08-31T19:25:46Z
0
0
transformers
[ "transformers", "safetensors", "convnext", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-08-31T19:25:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Sav04/blockassist-bc-stocky_snorting_gecko_1756667809
Sav04
2025-08-31T19:18:38Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stocky snorting gecko", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T19:18:34Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stocky snorting gecko --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
thyYu2024/qwen2-vl-2b-person-30000-new
thyYu2024
2025-08-31T18:59:16Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2-VL-2B-Instruct", "base_model:finetune:Qwen/Qwen2-VL-2B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-08-31T17:10:04Z
--- base_model: Qwen/Qwen2-VL-2B-Instruct library_name: transformers model_name: qwen2-vl-2b-person-30000-new tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for qwen2-vl-2b-person-30000-new This model is a fine-tuned version of [Qwen/Qwen2-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="thyYu2024/qwen2-vl-2b-person-30000-new", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ruoxue2-stony-brook-university/qwen2-vl-2b-person-30000-new/runs/mauu8dn5) This model was trained with SFT. ### Framework versions - TRL: 0.20.0 - Transformers: 4.55.2 - Pytorch: 2.6.0+cu118 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
bah63843/blockassist-bc-plump_fast_antelope_1756666019
bah63843
2025-08-31T18:47:51Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T18:47:41Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
sweta9873/blockassist-bc-beaked_flexible_monkey_1756661717
sweta9873
2025-08-31T17:35:51Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "beaked flexible monkey", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T17:35:45Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- beaked flexible monkey
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
solery/ppo-LunarLander-v2
solery
2025-08-31T17:34:24Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-08-31T16:19:34Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 268.84 +/- 16.68
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code

```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub

...
```
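A minimal sketch of how the TODO stub above is typically filled in. Two assumptions: the checkpoint was uploaded under the conventional SB3 filename `ppo-LunarLander-v2.zip` (check the repo's files if loading fails), and the installed gymnasium version still registers `LunarLander-v2` (newer releases renamed it to `LunarLander-v3`; box2d extras are required either way):

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub
# (the filename is an assumption based on SB3 naming conventions)
checkpoint = load_from_hub(
    repo_id="solery/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Roll out one episode in the environment the agent was trained on
env = gym.make("LunarLander-v2")
obs, info = env.reset()
done = False
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```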
Mariacelest/my_policy_sujet
Mariacelest
2025-08-31T17:13:19Z
0
0
lerobot
[ "lerobot", "safetensors", "smolvla", "robotics", "dataset:Mariacelest/datset_sujet", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2025-08-31T17:12:29Z
---
base_model: lerobot/smolvla_base
datasets: Mariacelest/datset_sujet
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- smolvla
- robotics
- lerobot
---

# Model Card for smolvla

<!-- Provide a quick summary of what the model is/does. -->

[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.

This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).

---

## How to Get Started with the Model

For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:

### Train from scratch

```bash
lerobot-train \
  --dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=act \
  --output_dir=outputs/train/<desired_policy_repo_id> \
  --job_name=lerobot_training \
  --policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
  --wandb.enable=true
```

_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._

### Evaluate the policy/run inference

```bash
lerobot-record \
  --robot.type=so100_follower \
  --dataset.repo_id=<hf_user>/eval_<dataset> \
  --policy.path=<hf_user>/<desired_policy_repo_id> \
  --episodes=10
```

Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.

---

## Model Details

- **License:** apache-2.0
HidekiK/llama_covid_xray_pt_br
HidekiK
2025-08-31T16:57:28Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llava_next", "trl", "en", "base_model:unsloth/llava-v1.6-mistral-7b-hf-bnb-4bit", "base_model:finetune:unsloth/llava-v1.6-mistral-7b-hf-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-31T03:20:52Z
---
base_model: unsloth/llava-v1.6-mistral-7b-hf-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llava_next
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** HidekiK
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llava-v1.6-mistral-7b-hf-bnb-4bit

This llava_next model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
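The card ships no usage snippet. Below is a minimal inference sketch with plain transformers, under two assumptions: the repo holds LLaVA-NeXT weights loadable via `LlavaNextForConditionalGeneration` (the 4-bit base implies `bitsandbytes` must be installed), and the llava-v1.6-mistral prompt convention still applies after fine-tuning. The image path and Portuguese prompt are illustrative placeholders:

```python
import torch
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

model_id = "HidekiK/llama_covid_xray_pt_br"
processor = LlavaNextProcessor.from_pretrained(model_id)
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Placeholder image path; the model name suggests it targets chest X-rays
image = Image.open("chest_xray.png")

# llava-v1.6-mistral instruction format
prompt = "[INST] <image>\nDescreva os achados desta radiografia de tórax. [/INST]"

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(processor.decode(output[0], skip_special_tokens=True))
```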