Dataset schema (column name, type, and observed range or cardinality):

| Column | Type | Observed range / cardinality |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-03 06:27:42 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 535 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-03 06:27:02 |
| card | string | length 11 to 1.01M |
phonebot/qwen3-32b-30b2507-projection
phonebot
2025-09-03T00:51:41Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-31T22:25:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hartryseeverh/blockassist-bc-docile_miniature_bison_1756860545
hartryseeverh
2025-09-03T00:50:50Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "docile miniature bison", "arxiv:2504.07091", "region:us" ]
null
2025-09-03T00:50:22Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - docile miniature bison --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Hopelesslyhype/aw_v3_q4.gguf
Hopelesslyhype
2025-09-03T00:49:46Z
0
0
null
[ "safetensors", "license:apache-2.0", "8-bit", "region:us" ]
null
2025-09-03T00:02:30Z
--- license: apache-2.0 ---
omerbkts/blockassist-bc-keen_fast_giraffe_1756860542
omerbkts
2025-09-03T00:49:24Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-03T00:49:20Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
yuan571/phi-3.5-mini-0902-data2to64-128-128
yuan571
2025-09-03T00:47:57Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-09-03T00:42:27Z
--- base_model: unsloth/phi-3.5-mini-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama license: apache-2.0 language: - en --- # Uploaded fine-tuned model - **Developed by:** yuan571 - **License:** apache-2.0 - **Finetuned from model:** unsloth/phi-3.5-mini-instruct-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
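Since the card gives no usage snippet, here is a minimal hedged loading sketch (not part of the original card; it assumes the repo contains merged full weights loadable as a standard transformers causal LM):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: merged full weights, loadable as a standard causal LM.
model_id = "yuan571/phi-3.5-mini-0902-data2to64-128-128"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Summarize what fine-tuning is in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```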
xinnn32/blockassist-bc-meek_winged_caterpillar_1756860358
xinnn32
2025-09-03T00:47:22Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "meek winged caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-09-03T00:46:57Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - meek winged caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
yufeng1/OpenThinker-7B-reasoning-lora-merged-type-c2r1
yufeng1
2025-09-03T00:44:17Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-02T23:45:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
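The card's "How to Get Started" section is empty; below is a minimal hedged sketch, assuming the merged LoRA weights load as a standard Qwen2-architecture chat model (as the repo tags suggest):

```python
from transformers import pipeline

# Assumption: merged weights load as a standard text-generation checkpoint.
generator = pipeline(
    "text-generation",
    model="yufeng1/OpenThinker-7B-reasoning-lora-merged-type-c2r1",
    torch_dtype="auto",
    device_map="auto",
)
messages = [{"role": "user", "content": "What is 17 * 23? Reason step by step."}]
result = generator(messages, max_new_tokens=512)
print(result[0]["generated_text"][-1]["content"])
```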
omerbkts/blockassist-bc-keen_fast_giraffe_1756860131
omerbkts
2025-09-03T00:42:36Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-03T00:42:32Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
akirafudo/blockassist-bc-keen_fast_giraffe_1756860023
akirafudo
2025-09-03T00:40:42Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-03T00:40:38Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
GroomerG/blockassist-bc-vicious_pawing_badger_1756858505
GroomerG
2025-09-03T00:38:43Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "vicious pawing badger", "arxiv:2504.07091", "region:us" ]
null
2025-09-03T00:38:40Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - vicious pawing badger --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
uwcc/chinadoll
uwcc
2025-09-03T00:38:23Z
0
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "ai-toolkit", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-03T00:38:03Z
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - ai-toolkit widget: - text: A church in a field on a sunny day, [trigger] style. output: url: samples/1756859055553__000000500_0.jpg - text: A seal plays with a ball on the beach, [trigger] style. output: url: samples/1756859073712__000000500_1.jpg - text: A clown at the circus rides on a zebra, [trigger] style. output: url: samples/1756859091869__000000500_2.jpg - text: '[trigger]' output: url: samples/1756859110039__000000500_3.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: chinadoll license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # chinadoll Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) <Gallery /> ## Trigger words You should use `chinadoll` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors format. [Download](/uwcc/chinadoll/tree/main) them in the Files & versions tab. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda') pipeline.load_lora_weights('uwcc/chinadoll', weight_name='chinadoll.safetensors') image = pipeline('A church in a field on a sunny day, chinadoll style.').images[0] image.save("my_image.png") ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
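As a short hedged follow-up to the snippet above (current diffusers API conventions, not from the original card), the LoRA strength can be scaled at inference time or fused into the base weights:

```py
# Scale the LoRA at runtime; FLUX pipelines take the scale via joint_attention_kwargs.
image = pipeline(
    'A seal plays with a ball on the beach, chinadoll style.',
    joint_attention_kwargs={"scale": 0.8},  # 0.0 disables the LoRA, 1.0 is full strength
).images[0]

# Or fuse the adapter into the base model for faster repeated inference.
pipeline.fuse_lora(lora_scale=0.8)
```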
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1756858372
kojeklollipop
2025-09-03T00:38:10Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "spotted amphibious stork", "arxiv:2504.07091", "region:us" ]
null
2025-09-03T00:38:07Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - spotted amphibious stork --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
omerbkts/blockassist-bc-keen_fast_giraffe_1756859795
omerbkts
2025-09-03T00:36:55Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-03T00:36:51Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Tltka/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scampering_waddling_pigeon
Tltka
2025-09-03T00:36:37Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am scampering_waddling_pigeon", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-02T10:54:27Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am scampering_waddling_pigeon --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DavidAU/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium
DavidAU
2025-09-03T00:36:14Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "programming", "code generation", "code", "codeqwen", "moe", "coding", "coder", "qwen2", "chat", "qwen", "qwen-coder", "finetune", "brainstorm 20x", "brainstorm", "optional thinking", "creative", "all use cases", "QiMing", "QiMing-holos", "bagua", "decision-making", "strategic-analysis", "cognitive-architecture", "philosophy-driven-ai", "conversational", "en", "fr", "zh", "de", "arxiv:2309.00071", "arxiv:2401.02415", "base_model:aifeifei798/QiMing-v1.0-14B", "base_model:finetune:aifeifei798/QiMing-v1.0-14B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-02T05:35:20Z
--- license: apache-2.0 library_name: transformers language: - en - fr - zh - de tags: - programming - code generation - code - codeqwen - programming - code generation - code - codeqwen - moe - coding - coder - qwen2 - chat - qwen - qwen-coder - chat - qwen - qwen-coder - qwen3 - finetune - brainstorm 20x - brainstorm - optional thinking - creative - all use cases - QiMing - QiMing-holos - bagua - decision-making - strategic-analysis - cognitive-architecture - chat - philosophy-driven-ai base_model: - aifeifei798/QiMing-v1.0-14B pipeline_tag: text-generation --- <h2>Qwen3-17B-QiMing-V1.0-Total-Recall-Medium</h2> This repo contains the full-precision source code, in "safetensors" format, to generate GGUFs, GPTQ, EXL2, AWQ, HQQ and other formats. The source code can also be used directly. This model is for coding and GENERAL USAGE. This model is based on "aifeifei798/QiMing-v1.0-14B" (base of Qwen3 14B instruct), with Brainstorm 8X (by DavidAU) - details at bottom of this page. The Brainstorm adapter will improve general performance and "out of the box" thinking. This version has a NATIVE context of 40k (default; can be changed via RoPE). This is a reasoning/thinking block model. I have included an optional system prompt to invoke "thinking" in this model, if you want to activate it. Recommended settings - general: - Rep pen 1.05 to 1.1; however rep pen of 1 will work well (may need to raise it for lower quants/fewer activated experts) - Temp .3 to .6 (+- .2) - Topk of 20, 40 or 100 - Topp of .95 / min p of .05 - Suggest min context window 4k to 8k. - System prompt (optional) to focus the model better. For additional settings, tool use, and other model settings, see the summary of the root model below, followed by the FULL HELP SECTION, then info on Brainstorm 8x. OPTIONAL SYSTEM PROMPT - INVOKE "Thinking": ``` Enable deep thinking subroutine. You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside ###ponder### ###/ponder### tags, and then provide your solution or response to the problem. ``` Use this to INVOKE "thinking" block(s) in the model. These will generally be a lot shorter than the thousands of tokens produced by most "thinking" models. If you use this prompt, you may need to raise "rep pen" to 1.08 to 1.1 to prevent "loops" in the "thought block(s)", especially in lower quants. If you change "ponder" to a different word/phrase this will affect model "thinking" too. --- QUANTS --- GGUF? GGUF Imatrix? Other? Special thanks to Team Mradermacher, Team Nightmedia and other quanters! See under "model tree", upper right and click on "quantizations". New quants will automatically appear. --- # Qwen3-14B ## Qwen3 Highlights Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features: - **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios. 
- **Significant enhancement of its reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning. - **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience. - **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks. - **Support of 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**. ## Model Overview **Qwen3-14B** has the following features: - Type: Causal Language Models - Training Stage: Pretraining & Post-training - Number of Parameters: 14.8B - Number of Parameters (Non-Embedding): 13.2B - Number of Layers: 40 - Number of Attention Heads (GQA): 40 for Q and 8 for KV - Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts). For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/). ## Quickstart The code for Qwen3 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`. With `transformers<4.51.0`, you will encounter the following error: ``` KeyError: 'qwen3' ``` The following code snippet illustrates how to use the model to generate content based on given inputs. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Qwen/Qwen3-14B" # load the tokenizer and the model tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) # prepare the model input prompt = "Give me a short introduction to large language model." messages = [ {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=True # Switches between thinking and non-thinking modes. Default is True. ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) # conduct text completion generated_ids = model.generate( **model_inputs, max_new_tokens=32768 ) output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist() # parsing thinking content try: # rindex finding 151668 (</think>) index = len(output_ids) - output_ids[::-1].index(151668) except ValueError: index = 0 thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n") content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n") print("thinking content:", thinking_content) print("content:", content) ``` For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint: - SGLang: ```shell python -m sglang.launch_server --model-path Qwen/Qwen3-14B --reasoning-parser qwen3 ``` - vLLM: ```shell vllm serve Qwen/Qwen3-14B --enable-reasoning --reasoning-parser deepseek_r1 ``` For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3. 
## Switching Between Thinking and Non-Thinking Mode > [!TIP] > The `enable_thinking` switch is also available in APIs created by SGLang and vLLM. > Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users. ### `enable_thinking=True` By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode. ```python text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=True # True is the default value for enable_thinking ) ``` In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response. > [!NOTE] > For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section. ### `enable_thinking=False` We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency. ```python text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=False # Setting enable_thinking=False disables thinking mode ) ``` In this mode, the model will not generate any think content and will not include a `<think>...</think>` block. > [!NOTE] > For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section. ### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations. 
Here is an example of a multi-turn conversation: ```python from transformers import AutoModelForCausalLM, AutoTokenizer class QwenChatbot: def __init__(self, model_name="Qwen/Qwen3-14B"): self.tokenizer = AutoTokenizer.from_pretrained(model_name) self.model = AutoModelForCausalLM.from_pretrained(model_name) self.history = [] def generate_response(self, user_input): messages = self.history + [{"role": "user", "content": user_input}] text = self.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) inputs = self.tokenizer(text, return_tensors="pt") response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist() response = self.tokenizer.decode(response_ids, skip_special_tokens=True) # Update history self.history.append({"role": "user", "content": user_input}) self.history.append({"role": "assistant", "content": response}) return response # Example Usage if __name__ == "__main__": chatbot = QwenChatbot() # First input (without /think or /no_think tags, thinking mode is enabled by default) user_input_1 = "How many r's in strawberries?" print(f"User: {user_input_1}") response_1 = chatbot.generate_response(user_input_1) print(f"Bot: {response_1}") print("----------------------") # Second input with /no_think user_input_2 = "Then, how many r's in blueberries? /no_think" print(f"User: {user_input_2}") response_2 = chatbot.generate_response(user_input_2) print(f"Bot: {response_2}") print("----------------------") # Third input with /think user_input_3 = "Really? /think" print(f"User: {user_input_3}") response_3 = chatbot.generate_response(user_input_3) print(f"Bot: {response_3}") ``` > [!NOTE] > For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled. > When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block. ## Agentic Use Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity. To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself. ```python from qwen_agent.agents import Assistant # Define LLM llm_cfg = { 'model': 'Qwen3-14B', # Use the endpoint provided by Alibaba Model Studio: # 'model_type': 'qwen_dashscope', # 'api_key': os.getenv('DASHSCOPE_API_KEY'), # Use a custom endpoint compatible with OpenAI API: 'model_server': 'http://localhost:8000/v1', # api_base 'api_key': 'EMPTY', # Other parameters: # 'generate_cfg': { # # Add: When the response content is `<think>this is the thought</think>this is the answer; # # Do not add: When the response has been separated by reasoning_content and content. 
# 'thought_in_content': True, # }, } # Define Tools tools = [ {'mcpServers': { # You can specify the MCP configuration file 'time': { 'command': 'uvx', 'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai'] }, "fetch": { "command": "uvx", "args": ["mcp-server-fetch"] } } }, 'code_interpreter', # Built-in tools ] # Define Agent bot = Assistant(llm=llm_cfg, function_list=tools) # Streaming generation messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}] for responses in bot.run(messages=messages): pass print(responses) ``` ## Processing Long Texts Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method. YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks: - Modifying the model files: In the `config.json` file, add the `rope_scaling` fields: ```json { ..., "rope_scaling": { "rope_type": "yarn", "factor": 4.0, "original_max_position_embeddings": 32768 } } ``` For `llama.cpp`, you need to regenerate the GGUF file after the modification. - Passing command line arguments: For `vllm`, you can use ```shell vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072 ``` For `sglang`, you can use ```shell python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}' ``` For `llama-server` from `llama.cpp`, you can use ```shell llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768 ``` > [!IMPORTANT] > If you encounter the following warning > ``` > Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'} > ``` > please upgrade `transformers>=4.51.0`. > [!NOTE] > All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.** > We advise adding the `rope_scaling` configuration only when processing long contexts is required. > It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0. > [!NOTE] > The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance. > [!TIP] > The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed. ## Best Practices To achieve optimal performance, we recommend the following settings: 1. **Sampling Parameters**: - For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. 
**DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. - For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance. 2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance. 3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking. - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt. - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`." 4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed. --- <H2>Help, Adjustments, Samplers, Parameters and More</H2> --- <B>CHANGE THE NUMBER OF ACTIVE EXPERTS:</B> See this document: https://huggingface.co/DavidAU/How-To-Set-and-Manage-MOE-Mix-of-Experts-Model-Activation-of-Experts <B>Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:</B> In "KoboldCpp" or "oobabooga/text-generation-webui" or "Silly Tavern" ; Set the "Smoothing_factor" to 1.5 : in KoboldCpp -> Settings->Samplers->Advanced-> "Smooth_F" : in text-generation-webui -> parameters -> lower right. : In Silly Tavern this is called: "Smoothing" NOTE: For "text-generation-webui" -> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model) Source versions (and config files) of my models are here: https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be OTHER OPTIONS: - Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor") - If the interface/program you are using to run AI MODELS supports "Quadratic Sampling" ("smoothing") just make the adjustment as noted. 
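As a concrete illustration of the sampler values recommended above, here is a hedged sketch using llama-cpp-python against a GGUF quant (the quant file name is hypothetical; the parameter values come from "Recommended settings - general"):

```python
from llama_cpp import Llama

# Hypothetical quant file name; use any GGUF produced from this repo.
llm = Llama(model_path="Qwen3-17B-QiMing-V1.0-Total-Recall-Medium-Q4_K_M.gguf", n_ctx=8192)

out = llm(
    "Write a short plan for a command-line todo app.",
    temperature=0.5,      # card suggests .3 to .6 (+- .2)
    top_k=40,             # 20, 40 or 100
    top_p=0.95,
    min_p=0.05,
    repeat_penalty=1.06,  # 1.05 to 1.1; raise toward 1.1 for lower quants
    max_tokens=512,
)
print(out["choices"][0]["text"])
```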
<B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B> This is a "Class 1" model: For all settings used for this model (including specifics for its "class"), example generation(s), and the advanced settings guide (which often addresses model issue(s)), including methods to improve model performance for all use cases as well as chat, roleplay and other use cases, please see: [ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ] You can see all parameters used for generation, in addition to advanced parameters and samplers to get the most out of this model here: [ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ] --- <H2>What is Brainstorm?</H2> --- <B>Brainstorm 8x</B> The BRAINSTORM process was developed by David_AU. Some of the core principles behind this process are discussed in this <a href="https://arxiv.org/pdf/2401.02415"> scientific paper: Progressive LLaMA with Block Expansion </a>. However, I went in a completely different direction from what was outlined in this paper. What is "Brainstorm"? The reasoning center of an LLM is taken apart, reassembled, and expanded. In this case, for this model, 8 times. Then these centers are individually calibrated. These "centers" also interact with each other. This introduces subtle changes into the reasoning process. The calibrations further adjust - dial up or down - these "changes" further. The number of centers (5x, 10x, etc.) allows more "tuning points" to further customize how the model reasons, so to speak. The core aim of this process is to increase the model's detail, concept and connection to the "world", general concept connections, prose quality and prose length without affecting instruction following. This will also enhance any creative use case(s) of any kind, including "brainstorming", creative art form(s) and similar use cases. Here are some of the enhancements this process brings to the model's performance: - Prose generation seems more focused on the moment-to-moment. - Sometimes there will be "preamble" and/or foreshadowing present. - Fewer or no "cliches". - Better overall prose and/or more complex / nuanced prose. - A greater sense of nuance on all levels. - Coherence is stronger. - Description is more detailed, and connected closer to the content. - Similes and metaphors are stronger and better connected to the prose, story, and character. - Sense of "there" / in the moment is enhanced. - Details are more vivid, and there are more of them. - Prose generation length can be long to extreme. - Emotional engagement is stronger. - The model will take FEWER liberties vs. a normal model: it will follow directives more closely but will "guess" less. - The MORE instructions and/or details you provide, the more strongly the model will respond. - Depending on the model, the "voice" may be more "human" vs. the original model's "voice". Other "lab" observations: - This process does not, in my opinion, make the model 5x or 10x "smarter" - if only that were true! - However, a change in "IQ" was not an issue / a priority, and was not tested or calibrated for, so to speak. - From lab testing it seems to ponder, and consider more carefully, roughly speaking. - You could say this process sharpens the model's focus on its task(s) at a deeper level. The process to modify the model occurs at the root level - the source files level. The model can then be quantized as GGUF, EXL2, AWQ, etc. ---
xinnn32/blockassist-bc-meek_winged_caterpillar_1756859624
xinnn32
2025-09-03T00:35:08Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "meek winged caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-09-03T00:34:47Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - meek winged caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
akirafudo/blockassist-bc-keen_fast_giraffe_1756859661
akirafudo
2025-09-03T00:34:42Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-03T00:34:38Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
davidilag/wav2vec2-xls-r-300m-pt-1000h_faroese-cp12-faroese-100h-30-epochs_run7_2025-09-02
davidilag
2025-09-03T00:31:22Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-09-02T14:49:24Z
--- library_name: transformers tags: - generated_from_trainer metrics: - wer model-index: - name: wav2vec2-xls-r-300m-pt-1000h_faroese-cp12-faroese-100h-30-epochs_run7_2025-09-02 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-300m-pt-1000h_faroese-cp12-faroese-100h-30-epochs_run7_2025-09-02 This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0979 - Wer: 18.9276 - Cer: 4.0413 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-------:|:-----:|:---------------:|:-------:|:-------:| | 3.3239 | 0.4877 | 1000 | 3.2507 | 100.0 | 100.0 | | 0.7706 | 0.9754 | 2000 | 0.4437 | 39.6837 | 10.8411 | | 0.3957 | 1.4628 | 3000 | 0.2259 | 29.7176 | 7.6803 | | 0.3567 | 1.9505 | 4000 | 0.1970 | 27.7702 | 6.9781 | | 0.2861 | 2.4379 | 5000 | 0.1731 | 26.3515 | 6.4344 | | 0.2722 | 2.9256 | 6000 | 0.1523 | 25.1795 | 6.1370 | | 0.1925 | 3.4131 | 7000 | 0.1478 | 24.6685 | 5.9003 | | 0.2167 | 3.9008 | 8000 | 0.1411 | 23.9679 | 5.7835 | | 0.1615 | 4.3882 | 9000 | 0.1409 | 23.5317 | 5.6288 | | 0.1771 | 4.8759 | 10000 | 0.1380 | 23.0823 | 5.4624 | | 0.1445 | 5.3633 | 11000 | 0.1367 | 23.0779 | 5.4103 | | 0.1572 | 5.8510 | 12000 | 0.1277 | 23.1176 | 5.4387 | | 0.1335 | 6.3385 | 13000 | 0.1157 | 22.4082 | 5.1823 | | 0.1334 | 6.8261 | 14000 | 0.1218 | 21.9853 | 5.0647 | | 0.1176 | 7.3136 | 15000 | 0.1142 | 22.1351 | 5.1105 | | 0.1188 | 7.8013 | 16000 | 0.1126 | 21.6152 | 4.9787 | | 0.119 | 8.2887 | 17000 | 0.1217 | 21.5359 | 5.0071 | | 0.1147 | 8.7764 | 18000 | 0.1166 | 21.4257 | 4.9235 | | 0.0991 | 9.2638 | 19000 | 0.1188 | 21.2275 | 4.8824 | | 0.1068 | 9.7515 | 20000 | 0.1089 | 21.3597 | 4.8525 | | 0.092 | 10.2390 | 21000 | 0.1125 | 20.9543 | 4.7617 | | 0.0917 | 10.7267 | 22000 | 0.1109 | 20.9940 | 4.7909 | | 0.0723 | 11.2141 | 23000 | 0.1114 | 20.8926 | 4.7617 | | 0.0801 | 11.7018 | 24000 | 0.1091 | 20.8574 | 4.7152 | | 0.0765 | 12.1892 | 25000 | 0.1045 | 20.4564 | 4.5858 | | 0.0769 | 12.6769 | 26000 | 0.1094 | 20.7649 | 4.6741 | | 0.0717 | 13.1644 | 27000 | 0.1041 | 20.4168 | 4.5439 | | 0.0722 | 13.6520 | 28000 | 0.1109 | 20.5005 | 4.6102 | | 0.0654 | 14.1395 | 29000 | 0.1074 | 20.2890 | 4.5345 | | 0.0733 | 14.6272 | 30000 | 0.1035 | 20.0996 | 4.5132 | | 0.0621 | 15.1146 | 31000 | 0.1046 | 20.0731 | 4.4619 | | 0.0601 | 15.6023 | 32000 | 0.1015 | 20.0291 | 4.4295 | | 0.0671 | 16.0897 | 33000 | 0.1007 | 20.0291 | 4.4217 | | 0.0578 | 16.5774 | 34000 | 0.1037 | 19.7471 | 4.3735 | | 0.0513 | 17.0649 | 35000 | 0.1034 | 19.8440 | 4.3790 | | 0.0485 | 17.5525 | 36000 | 0.1000 | 19.6149 | 4.3167 | | 0.0524 | 18.0400 | 37000 | 0.1033 | 19.6634 | 4.3333 | | 0.0446 | 18.5277 | 38000 | 
0.1030 | 19.4563 | 4.2378 | | 0.0534 | 19.0151 | 39000 | 0.1021 | 19.5929 | 4.2915 | | 0.0449 | 19.5028 | 40000 | 0.1045 | 19.4828 | 4.2623 | | 0.0394 | 19.9905 | 41000 | 0.1006 | 19.4475 | 4.2473 | | 0.0431 | 20.4779 | 42000 | 0.1010 | 19.4872 | 4.2386 | | 0.0361 | 20.9656 | 43000 | 0.1007 | 19.4343 | 4.2299 | | 0.0348 | 21.4531 | 44000 | 0.0997 | 19.4079 | 4.2070 | | 0.0534 | 21.9407 | 45000 | 0.0985 | 19.3021 | 4.1723 | | 0.0406 | 22.4282 | 46000 | 0.0994 | 19.2404 | 4.1526 | | 0.0447 | 22.9159 | 47000 | 0.0997 | 19.0774 | 4.1195 | | 0.0486 | 23.4033 | 48000 | 0.0961 | 19.1567 | 4.1273 | | 0.0342 | 23.8910 | 49000 | 0.0980 | 19.0818 | 4.1187 | | 0.042 | 24.3784 | 50000 | 0.0984 | 19.0245 | 4.0942 | | 0.0351 | 24.8661 | 51000 | 0.0989 | 18.9496 | 4.0674 | | 0.0351 | 25.3536 | 52000 | 0.0998 | 18.9981 | 4.0792 | | 0.0352 | 25.8413 | 53000 | 0.0992 | 18.9893 | 4.0619 | | 0.0381 | 26.3287 | 54000 | 0.0981 | 18.9100 | 4.0540 | | 0.0306 | 26.8164 | 55000 | 0.0989 | 18.9496 | 4.0548 | | 0.0352 | 27.3038 | 56000 | 0.0991 | 18.9452 | 4.0540 | | 0.0392 | 27.7915 | 57000 | 0.0980 | 18.9320 | 4.0437 | | 0.0355 | 28.2790 | 58000 | 0.0988 | 18.9805 | 4.0508 | | 0.0324 | 28.7666 | 59000 | 0.0984 | 18.9364 | 4.0461 | | 0.0429 | 29.2541 | 60000 | 0.0979 | 18.9276 | 4.0413 | | 0.039 | 29.7418 | 61000 | 0.0979 | 18.9276 | 4.0413 | ### Framework versions - Transformers 4.55.4 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.21.4
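Not part of the original card: a minimal hedged inference sketch, assuming the checkpoint loads with the standard transformers ASR pipeline (the audio file name is hypothetical):

```python
from transformers import pipeline

# Assumption: a CTC wav2vec2 checkpoint with a bundled processor.
asr = pipeline(
    "automatic-speech-recognition",
    model="davidilag/wav2vec2-xls-r-300m-pt-1000h_faroese-cp12-faroese-100h-30-epochs_run7_2025-09-02",
)
print(asr("sample_faroese.wav")["text"])  # expects 16 kHz mono audio
```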
fakir22/blockassist-bc-flapping_peaceful_caterpillar_1756859419
fakir22
2025-09-03T00:31:00Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "flapping peaceful caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-09-03T00:30:56Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - flapping peaceful caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
win10/is-like-test
win10
2025-09-03T00:29:56Z
0
0
null
[ "safetensors", "llama", "region:us" ]
null
2025-09-02T06:26:27Z
--- --- ## Can only be used for continuous pre-training; please follow the original context template. An experimental Llama-architecture alignment model that attempts to merge Mistral into the Llama architecture.
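A hedged sketch of what continued pre-training might look like (the corpus file and hyperparameters are illustrative, not from the card; it assumes the checkpoint loads as a standard Llama-style causal LM):

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_id = "win10/is-like-test"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# Illustrative raw-text corpus; format it per the original context template.
dataset = load_dataset("text", data_files={"train": "corpus.txt"})["train"]
dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=2048),
                      remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="cpt-out", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, learning_rate=2e-5,
                           num_train_epochs=1, bf16=True),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```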
5hadytru/so101_grasp_1_GR00T-N1.5-3B_v3_1
5hadytru
2025-09-03T00:29:52Z
0
0
null
[ "safetensors", "gr00t_n1_5", "license:apache-2.0", "region:us" ]
null
2025-09-03T00:23:33Z
--- license: apache-2.0 ---
IAHispano/Applio
IAHispano
2025-09-03T00:28:26Z
45
133
transformers
[ "transformers", "onnx", "AI", "RVC", "VITS", "VC", "Voice Conversion", "Voice2Voice", "audio-to-audio", "dataset:CSTR-Edinburgh/vctk", "base_model:lj1995/VoiceConversionWebUI", "base_model:quantized:lj1995/VoiceConversionWebUI", "license:mit", "endpoints_compatible", "region:us" ]
audio-to-audio
2023-10-03T18:58:40Z
--- pipeline_tag: audio-to-audio tags: - AI - RVC - VITS - VC - Voice Conversion - Voice2Voice license: mit datasets: - CSTR-Edinburgh/vctk base_model: - lj1995/VoiceConversionWebUI --- <h1 align="center"> <a href="https://applio.org" target="_blank"><img src="https://github.com/IAHispano/Applio/assets/133521603/78e975d8-b07f-47ba-ab23-5a31592f322a" alt="Applio"></a> </h1> <p align="center">A simple, high-quality voice conversion tool, focused on ease of use and performance.</p> <p align="center"> <a href="https://applio.org" target="_blank">🌐 Website</a> • <a href="https://docs.applio.org" target="_blank">📚 Documentation</a> • <a href="https://discord.gg/urxFjYmYYh" target="_blank">☎️ Discord</a> </p> <p align="center"> <a href="https://github.com/IAHispano/Applio-Plugins" target="_blank">🛒 Plugins</a> • <a href="https://huggingface.co/IAHispano/Applio/tree/main/Compiled" target="_blank">📦 Compiled</a> • <a href="https://applio.org/playground" target="_blank">🎮 Playground</a> • <a href="https://colab.research.google.com/github/iahispano/applio/blob/master/assets/Applio.ipynb" target="_blank">🔎 Google Colab (UI)</a> • <a href="https://colab.research.google.com/github/iahispano/applio/blob/master/assets/Applio_NoUI.ipynb" target="_blank">🔎 Google Colab (No UI)</a> </p> ## Introduction Applio is a powerful voice conversion tool focused on simplicity, quality, and performance. Whether you're an artist, developer, or researcher, Applio offers a straightforward platform for high-quality voice transformations. Its flexible design allows for customization through plugins and configurations, catering to a wide range of projects. ## Terms of Use The use of Applio is entirely at your own discretion and responsibility. By using this tool, you agree to: 1. Respect all applicable copyrights, intellectual property rights, and privacy rights. Ensure that any audio or material processed through Applio is either owned by you or used with explicit permission from the rightful owner. 2. Avoid using Applio in ways that may harm, defame, or infringe upon the rights of others. This includes, but is not limited to, the creation or distribution of unauthorized content. 3. Comply with all relevant laws and regulations governing the use of AI and voice transformation tools in your jurisdiction. Applio and its contributors are not liable for any misuse of the tool. The responsibility for adhering to ethical practices and legal compliance lies solely with the user. Applio does not endorse or support any activities that result in harm to individuals, groups, or entities. All official models distributed by Applio have been trained under public use datasets such as VCTK. ## Getting Started ### 1. Installation Run the installation script based on your operating system: - **Windows:** Double-click `run-install.bat`. - **Linux/macOS:** Execute `run-install.sh`. ### 2. Running Applio Start Applio using: - **Windows:** Double-click `run-applio.bat`. - **Linux/macOS:** Run `run-applio.sh`. This launches the Gradio interface in your default browser. ### 3. Optional: TensorBoard Monitoring To monitor training or visualize data: - **Windows:** Run `run-tensorboard.bat`. - **Linux/macOS:** Run `run-tensorboard.sh`. For more detailed instructions, visit the [documentation](https://docs.applio.org). ## Commercial Usage For commercial use, follow the [MIT license](./LICENSE) and contact us at [email protected] to ensure ethical use. The use of Applio-generated audio files must comply with applicable copyrights. 
Consider supporting Applio’s development [through a donation](https://ko-fi.com/iahispano). ## References Applio is made possible thanks to these projects and their references: - [gradio-screen-recorder](https://huggingface.co/spaces/gstaff/gradio-screen-recorder) by gstaff - [rvc-cli](https://github.com/blaisewf/rvc-cli) by blaisewf ### Contributors <a href="https://github.com/IAHispano/Applio/graphs/contributors" target="_blank"> <img src="https://contrib.rocks/image?repo=IAHispano/Applio" /> </a>
DavidAU/Qwen3-42B-A3B-2507-YOYO2-TOTAL-RECALL-Instruct
DavidAU
2025-09-03T00:28:15Z
27
0
transformers
[ "transformers", "safetensors", "qwen3_moe", "text-generation", "programming", "code generation", "code", "codeqwen", "moe", "coding", "coder", "qwen2", "chat", "qwen", "qwen-coder", "Qwen3-Coder-30B-A3B-Instruct", "Qwen3-30B-A3B", "mixture of experts", "128 experts", "8 active experts", "1 million context", "qwen3", "finetune", "brainstorm 20x", "brainstorm", "optional thinking", "conversational", "en", "fr", "zh", "de", "arxiv:2401.02415", "base_model:YOYO-AI/Qwen3-30B-A3B-YOYO-V2", "base_model:finetune:YOYO-AI/Qwen3-30B-A3B-YOYO-V2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-29T01:47:59Z
---
license: apache-2.0
library_name: transformers
language:
- en
- fr
- zh
- de
tags:
- programming
- code generation
- code
- codeqwen
- moe
- coding
- coder
- qwen2
- chat
- qwen
- qwen-coder
- Qwen3-Coder-30B-A3B-Instruct
- Qwen3-30B-A3B
- mixture of experts
- 128 experts
- 8 active experts
- 1 million context
- qwen3
- finetune
- brainstorm 20x
- brainstorm
- optional thinking
- qwen3_moe
base_model:
- YOYO-AI/Qwen3-30B-A3B-YOYO-V2
pipeline_tag: text-generation
---

<h2>Qwen3-42B-A3B-2507-YOYO2-TOTAL-RECALL-Instruct [1 million context]</h2>

<img src="qwen3-total-recall.gif" style="float:right; width:300px; height:300px; padding:10px;">

This repo contains the full-precision source code, in "safetensors" format, to generate GGUF, GPTQ, EXL2, AWQ, HQQ and other formats. The source code can also be used directly.

This model is for CODING and programming in all major programming languages and many minor ones too AND GENERAL USAGE.

This model is based on Qwen3-Coder-30B-A3B-Instruct (MOE, 128 experts, 8 activated), with Brainstorm 20X (by DavidAU) - details at bottom of this page.

This model is the result of a three-step merge of three models, based on:

https://huggingface.co/YOYO-AI/Qwen3-30B-A3B-YOYO-V2

(you may want to visit this repo for settings/info too)

The Brainstorm adapter improves general performance and "out of the box" thinking.

This creates a model of 42B parameters, 67 layers and 807 tensors.

This version has a NATIVE context of 1 million tokens.

This is a non-reasoning/non-thinking block model. I have included an optional system prompt to invoke "thinking" in this model, if you want to activate it.

SETTINGS:

For coding and programming, set the number of experts to:
- 6-8 for general work.
- 10 for moderate work.
- 12-16 for complex work, long projects, complex coding.
- Suggested minimum context window: 4k to 8k.
- For longer context and/or multi-turn use, increase experts by 1-2 to help with longer-context/multi-turn understanding.

Recommended settings - general:
- Rep pen 1.05 to 1.1; however, a rep pen of 1 will work well (you may need to raise it for lower quants/fewer activated experts)
- Temp .3 to .6 (+- .2)
- Topk of 20, 40 or 100
- Topp of .95 / min p of .05
- Suggested minimum context window: 4k to 8k.
- System prompt (optional) to focus the model better.

This is the refined version -V1.4- from this project (see this repo for all settings, details, system prompts, example generations, etc.):

https://huggingface.co/DavidAU/Qwen3-55B-A3B-TOTAL-RECALL-Deep-40X-GGUF/

This version 2 is slightly smaller, with further refinements to the Brainstorm adapter, and uses the new "Qwen3-30B-A3B-Instruct-2507".

Review and Specialized Settings for this model (V 1.4):

https://www.linkedin.com/posts/gchesler_davidauqwen3-53b-a3b-total-recall-v14-128k-activity-7344938636141858816-ILCM/

https://www.linkedin.com/posts/gchesler_haskell-postgres-agentic-activity-7347103276141596672-_zbo/

You may also want to see (root model of Total Recall series - Version 1):

https://huggingface.co/Qwen/Qwen3-30B-A3B

AND the Version 2 root model:

https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct

for additional settings, tool use, and other model settings.

Summary of the root model below, followed by a FULL HELP SECTION, then info on Brainstorm 20x.

OPTIONAL SYSTEM PROMPT - INVOKE "Thinking":

```
Enable deep thinking subroutine.
You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside ###ponder### ###/ponder### tags, and then provide your solution or response to the problem.
```

Use this to INVOKE "thinking" block(s) in the model. These will generally be much shorter than the thousands of tokens produced by most "thinking" models.

If you use this prompt, you may need to raise "rep pen" to 1.08 to 1.1 to prevent "loops" in the "thought block(s)", especially in lower quants.

If you change "ponder" to a different word/phrase, this will affect the model's "thinking" too.

---

QUANTS

---

GGUF? GGUF Imatrix? Other?

Special thanks to Team Mradermacher, Team Nightmedia and other quanters!

See under "model tree" (upper right) and click on "quantizations". New quants will automatically appear.

---

# Qwen3-Coder-30B-A3B-Instruct

## Highlights

**Qwen3-Coder** is available in multiple sizes. Today, we're excited to introduce **Qwen3-Coder-30B-A3B-Instruct**. This streamlined model maintains impressive performance and efficiency, featuring the following key enhancements:

- **Significant Performance** among open models on **Agentic Coding**, **Agentic Browser-Use**, and other foundational coding tasks.
- **Long-context Capabilities** with native support for **256K** tokens, extendable up to **1M** tokens using YaRN, optimized for repository-scale understanding.
- **Agentic Coding** support for most platforms, such as **Qwen Code** and **CLINE**, featuring a specially designed function call format.

![image/jpeg](https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-Coder/qwen3-coder-30a3-main.jpg)

## Model Overview

**Qwen3-Coder-30B-A3B-Instruct** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 30.5B in total and 3.3B activated
- Number of Layers: 48
- Number of Attention Heads (GQA): 32 for Q and 4 for KV
- Number of Experts: 128
- Number of Activated Experts: 8
- Context Length: **262,144 natively**.

**NOTE: This model supports only non-thinking mode and does not generate ``<think></think>`` blocks in its output. Meanwhile, specifying `enable_thinking=False` is no longer required.**

For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3-coder/), [GitHub](https://github.com/QwenLM/Qwen3-Coder), and [Documentation](https://qwen.readthedocs.io/en/latest/).

## Quickstart

We advise you to use the latest version of `transformers`. With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3_moe'
```

The following code snippet illustrates how to use the model to generate content based on given inputs.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-Coder-30B-A3B-Instruct"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Write a quick sort algorithm."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=65536
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

content = tokenizer.decode(output_ids, skip_special_tokens=True)

print("content:", content)
```

**Note: If you encounter out-of-memory (OOM) issues, consider reducing the context length to a shorter value, such as `32,768`.**

For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.

## Agentic Coding

Qwen3-Coder excels in tool-calling capabilities. You can simply define or use any tools as in the following example.
```python
# Your tool implementation
def square_the_number(num: float) -> float:
    return num ** 2

# Define Tools
tools = [
    {
        "type": "function",
        "function": {
            "name": "square_the_number",
            "description": "output the square of the number.",
            "parameters": {
                "type": "object",
                "required": ["input_num"],
                "properties": {
                    "input_num": {
                        "type": "number",
                        "description": "input_num is a number that will be squared"
                    }
                },
            }
        }
    }
]

from openai import OpenAI

# Define LLM
client = OpenAI(
    # Use a custom endpoint compatible with the OpenAI API
    base_url='http://localhost:8000/v1',  # api_base
    api_key="EMPTY"
)

messages = [{'role': 'user', 'content': 'square the number 1024'}]

completion = client.chat.completions.create(
    messages=messages,
    model="Qwen3-Coder-30B-A3B-Instruct",
    max_tokens=65536,
    tools=tools,
)

print(completion.choices[0])
```

## Best Practices

To achieve optimal performance, we recommend the following settings:

1. **Sampling Parameters**:
   - We suggest using `temperature=0.7`, `top_p=0.8`, `top_k=20`, `repetition_penalty=1.05`.

2. **Adequate Output Length**: We recommend using an output length of 65,536 tokens for most queries, which is adequate for instruct models.

---

<H2>Help, Adjustments, Samplers, Parameters and More</H2>

---

<B>CHANGE THE NUMBER OF ACTIVE EXPERTS:</B>

See this document:

https://huggingface.co/DavidAU/How-To-Set-and-Manage-MOE-Mix-of-Experts-Model-Activation-of-Experts

(A minimal loading sketch also appears at the end of this section.)

<B>Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:</B>

In "KoboldCpp", "oobabooga/text-generation-webui", or "Silly Tavern", set the "Smoothing_factor" to 1.5:

: in KoboldCpp -> Settings -> Samplers -> Advanced -> "Smooth_F"
: in text-generation-webui -> parameters -> lower right.
: in Silly Tavern this is called: "Smoothing"

NOTE: For "text-generation-webui" -> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model).

Source versions (and config files) of my models are here:

https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be

OTHER OPTIONS:

- Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor").
- If the interface/program you are using to run AI models supports "Quadratic Sampling" ("smoothing"), just make the adjustment as noted.
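As an unofficial illustration of the settings above, here is a minimal `transformers` sketch that loads this model with a different number of active experts, applies the optional "thinking" system prompt, and generates with the recommended general-use samplers. It assumes the active-expert count is exposed through the Qwen3-MoE config key `num_experts_per_tok` and that your `transformers` release supports `min_p`; treat it as a starting point, not a definitive recipe.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "DavidAU/Qwen3-42B-A3B-2507-YOYO2-TOTAL-RECALL-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
    num_experts_per_tok=10,  # assumption: config key controlling active experts (default 8)
)

# Optional "thinking" system prompt -- abridged; use the full prompt from this card
system_prompt = "Enable deep thinking subroutine. You are a deep thinking AI ..."

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Write a quick sort algorithm."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Recommended general-use samplers from this card
generated = model.generate(
    **model_inputs,
    max_new_tokens=2048,
    do_sample=True,
    temperature=0.6,
    top_k=20,
    top_p=0.95,
    min_p=0.05,              # requires a recent transformers release
    repetition_penalty=1.05,
)
print(tokenizer.decode(generated[0][model_inputs.input_ids.shape[1]:], skip_special_tokens=True))
```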
<B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>

This is a "Class 1" model:

For all settings used for this model (including specifics for its "class"), including example generation(s) and the advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s), please see:

[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]

You can see all parameters used for generation, in addition to advanced parameters and samplers to get the most out of this model, here:

[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]

---

<H2>What is Brainstorm?</H2>

---

<B>Brainstorm 20x</B>

The BRAINSTORM process was developed by David_AU.

Some of the core principles behind this process are discussed in this <a href="https://arxiv.org/pdf/2401.02415"> scientific paper : Progressive LLaMA with Block Expansion </a>. However, I went in a completely different direction from what was outlined in this paper.

What is "Brainstorm"?

The reasoning center of an LLM is taken apart, reassembled, and expanded.

In this case, for this model: 20 times.

Then these centers are individually calibrated. These "centers" also interact with each other. This introduces subtle changes into the reasoning process. The calibrations further adjust - dial up or down - these "changes" further. The number of centers (5x, 10x, etc.) allows more "tuning points" to further customize how the model reasons, so to speak.

The core aim of this process is to increase the model's detail, concept and connection to the "world", general concept connections, prose quality and prose length without affecting instruction following.

This will also enhance any creative use case(s) of any kind, including "brainstorming", creative art form(s) and similar use cases.

Here are some of the enhancements this process brings to the model's performance:

- Prose generation seems more focused on the moment to moment.
- Sometimes there will be "preamble" and/or foreshadowing present.
- Fewer or no "cliches".
- Better overall prose and/or more complex / nuanced prose.
- A greater sense of nuance on all levels.
- Coherence is stronger.
- Description is more detailed, and connected closer to the content.
- Similes and metaphors are stronger and better connected to the prose, story, and character.
- Sense of "there" / in the moment is enhanced.
- Details are more vivid, and there are more of them.
- Prose generation length can be long to extreme.
- Emotional engagement is stronger.
- The model will take FEWER liberties vs a normal model: it will follow directives more closely but will "guess" less.
- The MORE instructions and/or details you provide, the more strongly the model will respond.
- Depending on the model, the "voice" may be more "human" vs the original model's "voice".

Other "lab" observations:

- This process does not, in my opinion, make the model 5x or 10x "smarter" - if only that were true!
- However, a change in "IQ" was not an issue / a priority, and was not tested or calibrated for, so to speak.
- From lab testing, it seems to ponder and consider more carefully, roughly speaking.
- You could say this process sharpens the model's focus on its task(s) at a deeper level.

The process to modify the model occurs at the root level - the source-files level. The model can then be quantized as GGUF, EXL2, AWQ, etc.

---
omerbektass/blockassist-bc-keen_fast_giraffe_1756859100
omerbektass
2025-09-03T00:25:58Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-03T00:25:18Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
omerbkts/blockassist-bc-keen_fast_giraffe_1756858968
omerbkts
2025-09-03T00:23:11Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-03T00:23:07Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
xinnn32/blockassist-bc-meek_winged_caterpillar_1756858851
xinnn32
2025-09-03T00:22:10Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "meek winged caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-09-03T00:21:45Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - meek winged caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
fakir22/blockassist-bc-flapping_peaceful_caterpillar_1756858823
fakir22
2025-09-03T00:21:04Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "flapping peaceful caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-09-03T00:21:00Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - flapping peaceful caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
BootesVoid/cmehoj7jg0po0rts8qmozc09a_cmf36svff0b1psr53t1j4ha38
BootesVoid
2025-09-03T00:20:17Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-03T00:20:15Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: TENDER --- # Cmehoj7Jg0Po0Rts8Qmozc09A_Cmf36Svff0B1Psr53T1J4Ha38 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `TENDER` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "TENDER", "lora_weights": "https://huggingface.co/BootesVoid/cmehoj7jg0po0rts8qmozc09a_cmf36svff0b1psr53t1j4ha38/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmehoj7jg0po0rts8qmozc09a_cmf36svff0b1psr53t1j4ha38', weight_name='lora.safetensors') image = pipeline('TENDER').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2500 - Learning rate: 9e-05 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmehoj7jg0po0rts8qmozc09a_cmf36svff0b1psr53t1j4ha38/discussions) to add images that show off what you’ve made with this LoRA.
MelissaJ/koelectra-emotion-6-emotion-base
MelissaJ
2025-09-03T00:19:10Z
0
0
transformers
[ "transformers", "safetensors", "electra", "text-classification", "ko", "base_model:monologg/koelectra-base-v3-discriminator", "base_model:finetune:monologg/koelectra-base-v3-discriminator", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-09-03T00:16:40Z
---
library_name: transformers
license: apache-2.0
language:
- ko
base_model:
- monologg/koelectra-base-v3-discriminator
---

# KoELECTRA Emotion Classification (Joy/Anger/Hurt/Anxiety/Embarrassment/Sadness)

## Model Overview

This is a Korean emotion classification model fine-tuned from **KoELECTRA-base-v3**. It classifies an input sentence into one of six emotion categories:

- 기쁨 (Joy)
- 분노 (Anger)
- 상처 (Hurt)
- 불안 (Anxiety)
- 당황 (Embarrassment)
- 슬픔 (Sadness)

## Training Data

- **감성 대화 말뭉치 (Korean Emotional Conversation Corpus, AI Hub)**
  A dialogue-based, emotion-labeled dataset built by the National Information Society Agency (NIA) of Korea
- Dataset source: [AI Hub Korean Emotional Conversation Corpus](https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=86)
- Class distribution:
  - 불안 (Anxiety): 9,320
  - 분노 (Anger): 9,160
  - 상처 (Hurt): 9,143
  - 슬픔 (Sadness): 9,125
  - 당황 (Embarrassment): 8,756
  - 기쁨 (Joy): 6,126

## Training Setup

- Epochs: 100 (best performance observed around epochs 2-3)
- Optimizer: AdamW
- Learning rate: 5e-5
- Batch size: train 32 / eval 64
- Weight decay: 0.01
- Mixed precision: FP16 (GPU)

## Performance

On the validation set:

| Epoch | Training Loss | Validation Loss | Accuracy | F1 Macro |
|------:|--------------:|----------------:|---------:|---------:|
| 1 | 1.1211 | 1.1623 | 0.5613 | 0.5733 |
| 2 | 1.0241 | 1.1424 | 0.5789 | 0.5928 |
| 3 | 0.9386 | 1.1948 | 0.5789 | 0.5968 |
| ... | ... | ... | ... | ... |
| 90 | 0.0077 | 5.2280 | 0.5727 | 0.5866 |
| 100 | 0.0008 | 5.3633 | 0.5654 | 0.5796 |

- **Best performance (F1 Macro ≈ 0.5968)**: epoch 3
- In later epochs, validation loss rises sharply → a clear **overfitting** trend

## Usage

```python
from transformers import pipeline

pipe = pipeline("text-classification", model="MelissaJ/koelectra-emotion-6-emotion-base")

text = "우리 집이 잘 못산다는 것을 친구들이 알게 되었을 때 정말 억장이 무너지는 것 같았어."
print(pipe(text))
# [{'label': '상처', 'score': 0.65}]  # 상처 = Hurt
```
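For reproducibility, here is a minimal sketch of the reported fine-tuning setup using the Hugging Face `Trainer`. Dataset loading and preprocessing are elided (the AI Hub corpus requires registration), and the epoch count follows the best-performing checkpoint rather than the full 100-epoch run; treat this as an approximation of the recipe, not the exact training script.

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "monologg/koelectra-base-v3-discriminator"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=6)

args = TrainingArguments(
    output_dir="koelectra-emotion",
    num_train_epochs=3,               # best F1 macro was observed around epoch 3
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    weight_decay=0.01,
    fp16=True,                        # mixed precision on GPU
)

# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)  # datasets elided
# trainer.train()
```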
mradermacher/ThoughtSwitch-V1-1.7b-Instruct-GGUF
mradermacher
2025-09-03T00:17:01Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:BrainWave-ML/ThoughtSwitch-V1-1.7b-Instruct", "base_model:quantized:BrainWave-ML/ThoughtSwitch-V1-1.7b-Instruct", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-03T00:08:31Z
--- base_model: BrainWave-ML/ThoughtSwitch-V1-1.7b-Instruct language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/BrainWave-ML/ThoughtSwitch-V1-1.7b-Instruct <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#ThoughtSwitch-V1-1.7b-Instruct-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/ThoughtSwitch-V1-1.7b-Instruct-GGUF/resolve/main/ThoughtSwitch-V1-1.7b-Instruct.Q2_K.gguf) | Q2_K | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/ThoughtSwitch-V1-1.7b-Instruct-GGUF/resolve/main/ThoughtSwitch-V1-1.7b-Instruct.Q3_K_S.gguf) | Q3_K_S | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/ThoughtSwitch-V1-1.7b-Instruct-GGUF/resolve/main/ThoughtSwitch-V1-1.7b-Instruct.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/ThoughtSwitch-V1-1.7b-Instruct-GGUF/resolve/main/ThoughtSwitch-V1-1.7b-Instruct.Q3_K_L.gguf) | Q3_K_L | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/ThoughtSwitch-V1-1.7b-Instruct-GGUF/resolve/main/ThoughtSwitch-V1-1.7b-Instruct.IQ4_XS.gguf) | IQ4_XS | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/ThoughtSwitch-V1-1.7b-Instruct-GGUF/resolve/main/ThoughtSwitch-V1-1.7b-Instruct.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ThoughtSwitch-V1-1.7b-Instruct-GGUF/resolve/main/ThoughtSwitch-V1-1.7b-Instruct.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ThoughtSwitch-V1-1.7b-Instruct-GGUF/resolve/main/ThoughtSwitch-V1-1.7b-Instruct.Q5_K_S.gguf) | Q5_K_S | 1.3 | | | [GGUF](https://huggingface.co/mradermacher/ThoughtSwitch-V1-1.7b-Instruct-GGUF/resolve/main/ThoughtSwitch-V1-1.7b-Instruct.Q5_K_M.gguf) | Q5_K_M | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/ThoughtSwitch-V1-1.7b-Instruct-GGUF/resolve/main/ThoughtSwitch-V1-1.7b-Instruct.Q6_K.gguf) | Q6_K | 1.5 | very good quality | | [GGUF](https://huggingface.co/mradermacher/ThoughtSwitch-V1-1.7b-Instruct-GGUF/resolve/main/ThoughtSwitch-V1-1.7b-Instruct.Q8_0.gguf) | Q8_0 | 1.9 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/ThoughtSwitch-V1-1.7b-Instruct-GGUF/resolve/main/ThoughtSwitch-V1-1.7b-Instruct.f16.gguf) | f16 | 3.5 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the 
matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/interview-assistant-model-GGUF
mradermacher
2025-09-03T00:15:25Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:ashanwijebandara/interview-assistant-model", "base_model:quantized:ashanwijebandara/interview-assistant-model", "endpoints_compatible", "region:us" ]
null
2025-09-03T00:13:42Z
--- base_model: ashanwijebandara/interview-assistant-model language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/ashanwijebandara/interview-assistant-model <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#interview-assistant-model-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/interview-assistant-model-GGUF/resolve/main/interview-assistant-model.Q2_K.gguf) | Q2_K | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/interview-assistant-model-GGUF/resolve/main/interview-assistant-model.Q3_K_S.gguf) | Q3_K_S | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/interview-assistant-model-GGUF/resolve/main/interview-assistant-model.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/interview-assistant-model-GGUF/resolve/main/interview-assistant-model.IQ4_XS.gguf) | IQ4_XS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/interview-assistant-model-GGUF/resolve/main/interview-assistant-model.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/interview-assistant-model-GGUF/resolve/main/interview-assistant-model.Q3_K_L.gguf) | Q3_K_L | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/interview-assistant-model-GGUF/resolve/main/interview-assistant-model.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/interview-assistant-model-GGUF/resolve/main/interview-assistant-model.Q5_K_S.gguf) | Q5_K_S | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/interview-assistant-model-GGUF/resolve/main/interview-assistant-model.Q5_K_M.gguf) | Q5_K_M | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/interview-assistant-model-GGUF/resolve/main/interview-assistant-model.Q6_K.gguf) | Q6_K | 0.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/interview-assistant-model-GGUF/resolve/main/interview-assistant-model.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/interview-assistant-model-GGUF/resolve/main/interview-assistant-model.f16.gguf) | f16 | 0.4 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
akirafudo/blockassist-bc-keen_fast_giraffe_1756858488
akirafudo
2025-09-03T00:15:08Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-03T00:15:03Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
allura-org/MS3.2-24b-Angel
allura-org
2025-09-03T00:13:52Z
2,839
10
transformers
[ "transformers", "safetensors", "mistral3", "image-to-text", "axolotl", "unsloth", "roleplay", "conversational", "dataset:PygmalionAI/PIPPA", "dataset:Alfitaria/nemotron-ultra-reasoning-synthkink", "dataset:PocketDoc/Dans-Prosemaxx-Gutenberg", "dataset:FreedomIntelligence/Medical-R1-Distill-Data", "dataset:cognitivecomputations/SystemChat-2.0", "dataset:allenai/tulu-3-sft-personas-instruction-following", "dataset:kalomaze/Opus_Instruct_25k", "dataset:simplescaling/s1K-claude-3-7-sonnet", "dataset:ai2-adapt-dev/flan_v2_converted", "dataset:grimulkan/theory-of-mind", "dataset:grimulkan/physical-reasoning", "dataset:nvidia/HelpSteer3", "dataset:nbeerbower/gutenberg2-dpo", "dataset:nbeerbower/gutenberg-moderne-dpo", "dataset:nbeerbower/Purpura-DPO", "dataset:antiven0m/physical-reasoning-dpo", "dataset:allenai/tulu-3-IF-augmented-on-policy-70b", "dataset:allenai/href", "base_model:mistralai/Mistral-Small-3.2-24B-Instruct-2506", "base_model:finetune:mistralai/Mistral-Small-3.2-24B-Instruct-2506", "endpoints_compatible", "region:us" ]
image-to-text
2025-07-08T03:39:56Z
---
base_model:
- mistralai/Mistral-Small-3.2-24B-Instruct-2506
library_name: transformers
tags:
- axolotl
- unsloth
- roleplay
- conversational
datasets:
- PygmalionAI/PIPPA
- Alfitaria/nemotron-ultra-reasoning-synthkink
- PocketDoc/Dans-Prosemaxx-Gutenberg
- FreedomIntelligence/Medical-R1-Distill-Data
- cognitivecomputations/SystemChat-2.0
- allenai/tulu-3-sft-personas-instruction-following
- kalomaze/Opus_Instruct_25k
- simplescaling/s1K-claude-3-7-sonnet
- ai2-adapt-dev/flan_v2_converted
- grimulkan/theory-of-mind
- grimulkan/physical-reasoning
- nvidia/HelpSteer3
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
- nbeerbower/Purpura-DPO
- antiven0m/physical-reasoning-dpo
- allenai/tulu-3-IF-augmented-on-policy-70b
- allenai/href
---

# Angel 24b

![Very cursed generation of Angel Dust laying on a bed, staring at the camera smirking. Generated with Kiwimix-XL v3 + an Angel Dust lora, refined with GPT-image-1](https://cdn-uploads.huggingface.co/production/uploads/634262af8d8089ebaefd410e/g6hHxcdrD8r-HSUAz9b89.png)

***Better to reign in Hell than serve in Heaven.***

# Overview

MS3.2-24b-Angel is a model finetuned from Mistral Small 3.2 for roleplaying, storywriting, and differently-flavored general instruct use cases.

Testing revealed strong prose and character portrayal for its class, rivalling the preferred 72B models of some testers.

# Quantizations

EXL3:
- [Official EXL3 quants](https://huggingface.co/allura-quants/allura-org_MS3.2-24b-Angel-EXL3) (thanks artus <3)

GGUF:
- [Official GGUF imatrix quants w/ mmproj](https://hf.co/allura-quants/allura-org_MS3.2-24b-Angel-GGUF) (thanks artus, again <3)

MLX:
- [bf16](https://huggingface.co/soundTeam/MS3.2-24b-Angel_mlx-bf16), [q8](https://huggingface.co/soundTeam/MS3.2-24b-Angel_mlx-q8), [q4](https://huggingface.co/soundTeam/MS3.2-24b-Angel_mlx-q4) (thanks heni and co <3)

# Usage

- Use Mistral v7 Tekken.
- It is **highly recommended** (if your framework supports it) to use the official Mistral tokenization code instead of Huggingface's. This is possible in vLLM with `--tokenizer-mode mistral` (see the serving sketch after the Training Process section).
- Recommended samplers (from CURSE and corroborated by me, Fizz) are 1.2 temperature, 0.1 min_p, and 1.05 repetition penalty.
- We recommend *a* system prompt, but its contents only faintly matter (I accidentally had an assistant system prompt during the entire time I was testing)

# Training Process

1. [The original model had its vision adapter removed](https://huggingface.co/anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only) for better optimization and easier usage in training frameworks
2. The model was then put through an SFT process (using Axolotl) on various sources of general instruct, storytelling, and RP data, which resulted in [allura-forge/ms32-sft-merged](https://hf.co/allura-forge/ms32-sft-merged).
3. Afterwards, the model was put through a KTO process (using Unsloth) on more focused storywriting and anti-slop data, as well as general instruction following and human preference, which resulted in the final checkpoints at [allura-forge/ms32-final-TEXTONLY](https://hf.co/allura-forge/ms32-final-TEXTONLY).
4. Finally, the vision tower was manually added back to the weights to continue to support multimodality.
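To make the usage notes above concrete, here is a minimal serving sketch. It assumes the model is served with vLLM (for example `vllm serve allura-org/MS3.2-24b-Angel --tokenizer-mode mistral`) and that your vLLM build accepts `min_p` and `repetition_penalty` through `extra_body`; the prompts below are placeholders, not part of the card's recommendations.

```python
from openai import OpenAI

# Assumes a vLLM OpenAI-compatible server started with:
#   vllm serve allura-org/MS3.2-24b-Angel --tokenizer-mode mistral
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.chat.completions.create(
    model="allura-org/MS3.2-24b-Angel",
    messages=[
        # placeholder system prompt -- the card notes its exact contents barely matter
        {"role": "system", "content": "You are a vivid, in-character roleplay partner."},
        {"role": "user", "content": "Introduce yourself in character."},
    ],
    temperature=1.2,                   # recommended by this card
    extra_body={
        "min_p": 0.1,                  # vLLM-specific sampler, passed through extra_body
        "repetition_penalty": 1.05,
    },
)
print(completion.choices[0].message.content)
```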
# Credits - Fizz - training and data wrangling - Artus (by proxy) & Bot - help with funding - CURSE - testing - Mango - testing, data, help with KTO configs - DoctorShotgun - making the original text-only model - Axolotl & Unsloth - creating the training frameworks used for parts of this finetune - Everyone in Allura - moral support, being cool - Vivziepop and co - Angel Dust <3 love you all
mradermacher/infinitetalk-fine-tuned-GGUF
mradermacher
2025-09-03T00:12:39Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:4PFDom/infinitetalk-fine-tuned", "base_model:quantized:4PFDom/infinitetalk-fine-tuned", "endpoints_compatible", "region:us" ]
null
2025-09-03T00:08:03Z
--- base_model: 4PFDom/infinitetalk-fine-tuned language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/4PFDom/infinitetalk-fine-tuned <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#infinitetalk-fine-tuned-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/infinitetalk-fine-tuned-GGUF/resolve/main/infinitetalk-fine-tuned.Q2_K.gguf) | Q2_K | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/infinitetalk-fine-tuned-GGUF/resolve/main/infinitetalk-fine-tuned.Q3_K_S.gguf) | Q3_K_S | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/infinitetalk-fine-tuned-GGUF/resolve/main/infinitetalk-fine-tuned.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/infinitetalk-fine-tuned-GGUF/resolve/main/infinitetalk-fine-tuned.IQ4_XS.gguf) | IQ4_XS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/infinitetalk-fine-tuned-GGUF/resolve/main/infinitetalk-fine-tuned.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/infinitetalk-fine-tuned-GGUF/resolve/main/infinitetalk-fine-tuned.Q3_K_L.gguf) | Q3_K_L | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/infinitetalk-fine-tuned-GGUF/resolve/main/infinitetalk-fine-tuned.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/infinitetalk-fine-tuned-GGUF/resolve/main/infinitetalk-fine-tuned.Q5_K_S.gguf) | Q5_K_S | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/infinitetalk-fine-tuned-GGUF/resolve/main/infinitetalk-fine-tuned.Q5_K_M.gguf) | Q5_K_M | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/infinitetalk-fine-tuned-GGUF/resolve/main/infinitetalk-fine-tuned.Q6_K.gguf) | Q6_K | 0.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/infinitetalk-fine-tuned-GGUF/resolve/main/infinitetalk-fine-tuned.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/infinitetalk-fine-tuned-GGUF/resolve/main/infinitetalk-fine-tuned.f16.gguf) | f16 | 0.4 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if 
you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
sivakrishna123/my-jarvis-4bit
sivakrishna123
2025-09-03T00:12:36Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-09-03T00:05:27Z
--- base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** sivakrishna123 - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-7b-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
xinnn32/blockassist-bc-meek_winged_caterpillar_1756858209
xinnn32
2025-09-03T00:11:18Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "meek winged caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-09-03T00:10:57Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - meek winged caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
GroomerG/blockassist-bc-vicious_pawing_badger_1756856622
GroomerG
2025-09-03T00:11:10Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "vicious pawing badger", "arxiv:2504.07091", "region:us" ]
null
2025-09-03T00:11:06Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - vicious pawing badger --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
fakir22/blockassist-bc-flapping_peaceful_caterpillar_1756858167
fakir22
2025-09-03T00:10:07Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "flapping peaceful caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-09-03T00:10:04Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - flapping peaceful caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
genies-llm/text2sql-sft-v4
genies-llm
2025-09-03T00:09:04Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "sft", "conversational", "dataset:Genies/sft-data-kumar-v4", "base_model:Qwen/Qwen2.5-Coder-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-Coder-7B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-02T17:49:35Z
--- base_model: Qwen/Qwen2.5-Coder-7B-Instruct datasets: Genies/sft-data-kumar-v4 library_name: transformers model_name: text2sql-sft-v4 tags: - generated_from_trainer - open-r1 - trl - sft licence: license --- # Model Card for text2sql-sft-v4 This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) on the [Genies/sft-data-kumar-v4](https://huggingface.co/datasets/Genies/sft-data-kumar-v4) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="genies-llm/text2sql-sft-v4", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/genies-rnd/text2sql-sft/runs/ej0xfw3k) This model was trained with SFT. ### Framework versions - TRL: 0.18.0 - Transformers: 4.52.3 - Pytorch: 2.6.0 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mradermacher/NuExtract-2.0-2B-causalLM-GGUF
mradermacher
2025-09-03T00:08:18Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:dataesr/NuExtract-2.0-2B-causalLM", "base_model:quantized:dataesr/NuExtract-2.0-2B-causalLM", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-03T00:00:43Z
--- base_model: dataesr/NuExtract-2.0-2B-causalLM language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/dataesr/NuExtract-2.0-2B-causalLM <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#NuExtract-2.0-2B-causalLM-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/NuExtract-2.0-2B-causalLM-GGUF/resolve/main/NuExtract-2.0-2B-causalLM.Q2_K.gguf) | Q2_K | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/NuExtract-2.0-2B-causalLM-GGUF/resolve/main/NuExtract-2.0-2B-causalLM.Q3_K_S.gguf) | Q3_K_S | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/NuExtract-2.0-2B-causalLM-GGUF/resolve/main/NuExtract-2.0-2B-causalLM.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/NuExtract-2.0-2B-causalLM-GGUF/resolve/main/NuExtract-2.0-2B-causalLM.Q3_K_L.gguf) | Q3_K_L | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/NuExtract-2.0-2B-causalLM-GGUF/resolve/main/NuExtract-2.0-2B-causalLM.IQ4_XS.gguf) | IQ4_XS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/NuExtract-2.0-2B-causalLM-GGUF/resolve/main/NuExtract-2.0-2B-causalLM.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NuExtract-2.0-2B-causalLM-GGUF/resolve/main/NuExtract-2.0-2B-causalLM.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NuExtract-2.0-2B-causalLM-GGUF/resolve/main/NuExtract-2.0-2B-causalLM.Q5_K_S.gguf) | Q5_K_S | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/NuExtract-2.0-2B-causalLM-GGUF/resolve/main/NuExtract-2.0-2B-causalLM.Q5_K_M.gguf) | Q5_K_M | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/NuExtract-2.0-2B-causalLM-GGUF/resolve/main/NuExtract-2.0-2B-causalLM.Q6_K.gguf) | Q6_K | 1.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/NuExtract-2.0-2B-causalLM-GGUF/resolve/main/NuExtract-2.0-2B-causalLM.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/NuExtract-2.0-2B-causalLM-GGUF/resolve/main/NuExtract-2.0-2B-causalLM.f16.gguf) | f16 | 3.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests 
for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/YanoljaNEXT-Rosetta-20B-GGUF
mradermacher
2025-09-03T00:07:59Z
0
0
transformers
[ "transformers", "gguf", "translation", "en", "es", "fr", "de", "pt", "ja", "ko", "zh", "ar", "ru", "hi", "base_model:yanolja/YanoljaNEXT-Rosetta-20B", "base_model:quantized:yanolja/YanoljaNEXT-Rosetta-20B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
translation
2025-09-02T22:04:10Z
--- base_model: yanolja/YanoljaNEXT-Rosetta-20B language: - en - es - fr - de - pt - ja - ko - zh - ar - ru - hi library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - translation --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/yanolja/YanoljaNEXT-Rosetta-20B <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#YanoljaNEXT-Rosetta-20B-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/YanoljaNEXT-Rosetta-20B-GGUF/resolve/main/YanoljaNEXT-Rosetta-20B.Q3_K_S.gguf) | Q3_K_S | 12.2 | | | [GGUF](https://huggingface.co/mradermacher/YanoljaNEXT-Rosetta-20B-GGUF/resolve/main/YanoljaNEXT-Rosetta-20B.Q2_K.gguf) | Q2_K | 12.2 | | | [GGUF](https://huggingface.co/mradermacher/YanoljaNEXT-Rosetta-20B-GGUF/resolve/main/YanoljaNEXT-Rosetta-20B.IQ4_XS.gguf) | IQ4_XS | 12.3 | | | [GGUF](https://huggingface.co/mradermacher/YanoljaNEXT-Rosetta-20B-GGUF/resolve/main/YanoljaNEXT-Rosetta-20B.Q3_K_M.gguf) | Q3_K_M | 13.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/YanoljaNEXT-Rosetta-20B-GGUF/resolve/main/YanoljaNEXT-Rosetta-20B.Q3_K_L.gguf) | Q3_K_L | 13.4 | | | [GGUF](https://huggingface.co/mradermacher/YanoljaNEXT-Rosetta-20B-GGUF/resolve/main/YanoljaNEXT-Rosetta-20B.Q4_K_S.gguf) | Q4_K_S | 14.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/YanoljaNEXT-Rosetta-20B-GGUF/resolve/main/YanoljaNEXT-Rosetta-20B.Q4_K_M.gguf) | Q4_K_M | 15.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/YanoljaNEXT-Rosetta-20B-GGUF/resolve/main/YanoljaNEXT-Rosetta-20B.Q5_K_S.gguf) | Q5_K_S | 16.0 | | | [GGUF](https://huggingface.co/mradermacher/YanoljaNEXT-Rosetta-20B-GGUF/resolve/main/YanoljaNEXT-Rosetta-20B.Q5_K_M.gguf) | Q5_K_M | 17.0 | | | [GGUF](https://huggingface.co/mradermacher/YanoljaNEXT-Rosetta-20B-GGUF/resolve/main/YanoljaNEXT-Rosetta-20B.Q6_K.gguf) | Q6_K | 22.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/YanoljaNEXT-Rosetta-20B-GGUF/resolve/main/YanoljaNEXT-Rosetta-20B.Q8_0.gguf) | Q8_0 | 22.4 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
arianaazarbal/discourage_lies_tpr_0.65-grpo_recontextualized_1_20250902_221813-policy-adapter
arianaazarbal
2025-09-03T00:07:09Z
0
0
null
[ "safetensors", "region:us" ]
null
2025-09-03T00:05:57Z
# Policy Model LoRA Adapter (GRPO/DPO) Experiment: discourage_lies_tpr_0.65 Timestamp: grpo_recontextualized_1_20250902_221813 This model was trained as part of the deception-evasion-honesty experiments. ## Model Details - **Type**: Policy Model LoRA Adapter (GRPO/DPO) - **Experiment Name**: discourage_lies_tpr_0.65 - **Training Timestamp**: grpo_recontextualized_1_20250902_221813
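The card ships no usage snippet, so here is a hedged loading sketch with PEFT. It assumes the adapter is a causal-LM LoRA whose `adapter_config.json` still records its base checkpoint (which `AutoPeftModelForCausalLM` resolves automatically); if the adapter repo carries no tokenizer files, load the tokenizer from the base model instead.

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = (
    "arianaazarbal/discourage_lies_tpr_0.65-"
    "grpo_recontextualized_1_20250902_221813-policy-adapter"
)

# The base checkpoint is resolved from the adapter's own config.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(adapter_id)  # fall back to the base model's tokenizer if absent
```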
omerbektass/blockassist-bc-keen_fast_giraffe_1756858001
omerbektass
2025-09-03T00:07:05Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-03T00:07:01Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AIgotahole/gemma-2-9b-it-MoAA-DPO-Q5_K_S-GGUF
AIgotahole
2025-09-03T00:06:20Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "base_model:togethercomputer/gemma-2-9b-it-MoAA-DPO", "base_model:quantized:togethercomputer/gemma-2-9b-it-MoAA-DPO", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-03T00:05:52Z
--- library_name: transformers tags: - llama-cpp - gguf-my-repo base_model: togethercomputer/gemma-2-9b-it-MoAA-DPO --- # AIgotahole/gemma-2-9b-it-MoAA-DPO-Q5_K_S-GGUF This model was converted to GGUF format from [`togethercomputer/gemma-2-9b-it-MoAA-DPO`](https://huggingface.co/togethercomputer/gemma-2-9b-it-MoAA-DPO) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/togethercomputer/gemma-2-9b-it-MoAA-DPO) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo AIgotahole/gemma-2-9b-it-MoAA-DPO-Q5_K_S-GGUF --hf-file gemma-2-9b-it-moaa-dpo-q5_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo AIgotahole/gemma-2-9b-it-MoAA-DPO-Q5_K_S-GGUF --hf-file gemma-2-9b-it-moaa-dpo-q5_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo AIgotahole/gemma-2-9b-it-MoAA-DPO-Q5_K_S-GGUF --hf-file gemma-2-9b-it-moaa-dpo-q5_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo AIgotahole/gemma-2-9b-it-MoAA-DPO-Q5_K_S-GGUF --hf-file gemma-2-9b-it-moaa-dpo-q5_k_s.gguf -c 2048 ```
rayonlabs/tournament-tourn_bcbe2c057c905676_20250902-dc320e4e-4f53-45f4-885f-0d812b8fab7d-5EpnAMpX
rayonlabs
2025-09-03T00:03:58Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:tiiuae/falcon-7b", "base_model:adapter:tiiuae/falcon-7b", "region:us" ]
null
2025-09-03T00:03:33Z
--- base_model: tiiuae/falcon-7b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
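A minimal usage sketch for this adapter, assuming it targets the `tiiuae/falcon-7b` base declared in the metadata; requires `peft`, `transformers`, and `accelerate`, and the prompt and generation settings below are illustrative.

```python
# Load the Falcon-7B base model and apply this PEFT adapter on top of it.
# Assumes `pip install peft transformers accelerate torch`.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b", device_map="auto")
model = PeftModel.from_pretrained(
    base,
    "rayonlabs/tournament-tourn_bcbe2c057c905676_20250902-dc320e4e-4f53-45f4-885f-0d812b8fab7d-5EpnAMpX",
)
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```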
rayonlabs/tournament-tourn_bcbe2c057c905676_20250902-dc320e4e-4f53-45f4-885f-0d812b8fab7d-5DLkfAsZ
rayonlabs
2025-09-03T00:03:12Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:tiiuae/falcon-7b", "base_model:adapter:tiiuae/falcon-7b", "region:us" ]
null
2025-09-03T00:02:49Z
--- base_model: tiiuae/falcon-7b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
Hopelesslyhype/aw_v3_q4
Hopelesslyhype
2025-09-03T00:02:30Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-03T00:02:30Z
--- license: apache-2.0 ---
NikolayKozloff/WEBGEN-4B-Preview-Q8_0-GGUF
NikolayKozloff
2025-09-03T00:00:50Z
0
1
transformers
[ "transformers", "gguf", "web-generation", "html", "css", "tailwind-css", "ui-generation", "web-design", "small-model", "qwen3", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:Tesslate/WEBGEN-4B-Preview", "base_model:quantized:Tesslate/WEBGEN-4B-Preview", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-09-03T00:00:31Z
--- language: - en library_name: transformers pipeline_tag: text-generation license: apache-2.0 base_model: Tesslate/WEBGEN-4B-Preview tags: - web-generation - html - css - tailwind-css - ui-generation - web-design - small-model - qwen3 - transformers - llama-cpp - gguf-my-repo --- # NikolayKozloff/WEBGEN-4B-Preview-Q8_0-GGUF This model was converted to GGUF format from [`Tesslate/WEBGEN-4B-Preview`](https://huggingface.co/Tesslate/WEBGEN-4B-Preview) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Tesslate/WEBGEN-4B-Preview) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo NikolayKozloff/WEBGEN-4B-Preview-Q8_0-GGUF --hf-file webgen-4b-preview-q8_0.gguf -p "The meaning of life and the universe is" ``` ### Server: ```bash llama-server --hf-repo NikolayKozloff/WEBGEN-4B-Preview-Q8_0-GGUF --hf-file webgen-4b-preview-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ```bash git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux). ```bash cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ```bash ./llama-cli --hf-repo NikolayKozloff/WEBGEN-4B-Preview-Q8_0-GGUF --hf-file webgen-4b-preview-q8_0.gguf -p "The meaning of life and the universe is" ``` or ```bash ./llama-server --hf-repo NikolayKozloff/WEBGEN-4B-Preview-Q8_0-GGUF --hf-file webgen-4b-preview-q8_0.gguf -c 2048 ```
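The same quant can also be driven from Python through the llama-cpp-python bindings; a minimal sketch, assuming `pip install llama-cpp-python` (the prompt and token limit are illustrative):

```python
# Run this GGUF file from Python via llama-cpp-python; from_pretrained
# fetches the file from the Hub on first use.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="NikolayKozloff/WEBGEN-4B-Preview-Q8_0-GGUF",
    filename="webgen-4b-preview-q8_0.gguf",
    n_ctx=2048,
)
out = llm("Write a minimal HTML landing page for a coffee shop.", max_tokens=256)
print(out["choices"][0]["text"])
```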
chakra-labs/pango-7b-sft-checkpoints
chakra-labs
2025-09-03T00:00:47Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_vl", "image-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-to-text
2025-09-02T20:39:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
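A minimal image-to-text sketch, assuming these checkpoints load as a standard Qwen2-VL model in `transformers` (>= 4.45); `screenshot.png` is a placeholder path and the question is illustrative.

```python
# Describe an image with a Qwen2-VL checkpoint.
# Assumes `pip install transformers accelerate pillow torch`.
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model_id = "chakra-labs/pango-7b-sft-checkpoints"
processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2VLForConditionalGeneration.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe what is happening on this screen."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(
    text=[prompt], images=[Image.open("screenshot.png")], return_tensors="pt"
).to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
```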
fakir22/blockassist-bc-flapping_peaceful_caterpillar_1756857572
fakir22
2025-09-03T00:00:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "flapping peaceful caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-09-03T00:00:09Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - flapping peaceful caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
omerbkts/blockassist-bc-keen_fast_giraffe_1756857482
omerbkts
2025-09-02T23:58:22Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T23:58:17Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
davidilag/wav2vec2-xls-r-300m-pt-1000h_faroese-cp10-faroese-100h-30-epochs_run7_2025-09-02
davidilag
2025-09-02T23:57:01Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-09-02T14:08:44Z
--- library_name: transformers tags: - generated_from_trainer metrics: - wer model-index: - name: wav2vec2-xls-r-300m-pt-1000h_faroese-cp10-faroese-100h-30-epochs_run7_2025-09-02 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-300m-pt-1000h_faroese-cp10-faroese-100h-30-epochs_run7_2025-09-02 This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1039 - Wer: 18.8880 - Cer: 4.0690 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-------:|:-----:|:---------------:|:-------:|:-------:| | 3.4031 | 0.4877 | 1000 | 3.3166 | 100.0 | 99.2560 | | 0.841 | 0.9754 | 2000 | 0.4918 | 44.2261 | 12.1777 | | 0.4075 | 1.4628 | 3000 | 0.2292 | 30.5988 | 7.8207 | | 0.3696 | 1.9505 | 4000 | 0.1940 | 28.3121 | 7.0909 | | 0.2937 | 2.4379 | 5000 | 0.1744 | 26.8934 | 6.6001 | | 0.2724 | 2.9256 | 6000 | 0.1554 | 25.8316 | 6.2577 | | 0.2018 | 3.4131 | 7000 | 0.1500 | 24.8491 | 5.9902 | | 0.221 | 3.9008 | 8000 | 0.1398 | 24.2455 | 5.7646 | | 0.1636 | 4.3882 | 9000 | 0.1468 | 24.2631 | 5.7480 | | 0.1819 | 4.8759 | 10000 | 0.1354 | 23.4833 | 5.5492 | | 0.1526 | 5.3633 | 11000 | 0.1384 | 23.1484 | 5.4923 | | 0.1502 | 5.8510 | 12000 | 0.1291 | 23.1793 | 5.4647 | | 0.1269 | 6.3385 | 13000 | 0.1248 | 22.4964 | 5.3203 | | 0.137 | 6.8261 | 14000 | 0.1287 | 22.2408 | 5.1894 | | 0.1181 | 7.3136 | 15000 | 0.1166 | 22.1880 | 5.1965 | | 0.1275 | 7.8013 | 16000 | 0.1181 | 22.1439 | 5.1428 | | 0.1224 | 8.2887 | 17000 | 0.1210 | 21.7738 | 5.0142 | | 0.1175 | 8.7764 | 18000 | 0.1175 | 21.8399 | 4.9755 | | 0.1035 | 9.2638 | 19000 | 0.1142 | 21.6284 | 4.9897 | | 0.1033 | 9.7515 | 20000 | 0.1205 | 21.5447 | 4.9755 | | 0.0908 | 10.2390 | 21000 | 0.1187 | 21.4434 | 4.8951 | | 0.0916 | 10.7267 | 22000 | 0.1207 | 21.5095 | 4.9290 | | 0.0747 | 11.2141 | 23000 | 0.1191 | 21.2539 | 4.8020 | | 0.0809 | 11.7018 | 24000 | 0.1132 | 20.9940 | 4.7593 | | 0.0812 | 12.1892 | 25000 | 0.1138 | 21.0512 | 4.7885 | | 0.0745 | 12.6769 | 26000 | 0.1189 | 21.1041 | 4.7215 | | 0.0739 | 13.1644 | 27000 | 0.1161 | 20.7164 | 4.6544 | | 0.0706 | 13.6520 | 28000 | 0.1085 | 20.6856 | 4.6505 | | 0.0712 | 14.1395 | 29000 | 0.1110 | 20.4961 | 4.5724 | | 0.0788 | 14.6272 | 30000 | 0.1087 | 20.4300 | 4.5329 | | 0.0607 | 15.1146 | 31000 | 0.1096 | 20.3815 | 4.5526 | | 0.0596 | 15.6023 | 32000 | 0.1111 | 20.4520 | 4.5700 | | 0.0686 | 16.0897 | 33000 | 0.1076 | 20.1745 | 4.5132 | | 0.0669 | 16.5774 | 34000 | 0.1027 | 20.0555 | 4.4382 | | 0.0521 | 17.0649 | 35000 | 0.1029 | 20.0731 | 4.4319 | | 0.05 | 17.5525 | 36000 | 0.1025 | 19.8969 | 4.4177 | | 0.0546 | 18.0400 | 37000 | 0.1021 | 19.7779 | 4.3648 | | 0.0472 | 18.5277 | 38000 | 0.1080 | 19.8396 | 4.3932 | | 0.0517 | 19.0151 | 39000 | 0.1023 | 19.7383 | 4.3412 | | 0.0472 | 19.5028 | 40000 | 0.1064 | 19.7295 | 4.3459 | | 0.0382 | 19.9905 | 41000 | 0.1113 | 19.4475 | 4.2970 | | 0.046 | 20.4779 | 42000 | 0.1045 | 19.5312 | 4.3128 | | 0.0365 | 20.9656 | 43000 | 0.1063 | 19.4607 | 4.2686 | | 0.0345 | 21.4531 | 44000 | 0.1037 | 19.4783 | 4.2567 | | 0.0584 | 21.9407 | 45000 | 0.0992 | 19.2801 | 4.2078 | | 0.045 | 22.4282 | 46000 | 0.1021 | 19.1876 | 4.1786 | | 0.0456 | 22.9159 | 47000 | 0.1043 | 19.1479 | 4.1518 | | 0.0485 | 23.4033 | 48000 | 0.1018 | 19.0245 | 4.1124 | | 0.0351 | 23.8910 | 49000 | 0.1041 | 19.1303 | 4.1755 | | 0.0421 | 24.3784 | 50000 | 0.1016 | 18.9893 | 4.1234 | | 0.0358 | 24.8661 | 51000 | 0.1056 | 18.9452 | 4.1013 | | 0.0344 | 25.3536 | 52000 | 0.1048 | 18.9717 | 4.1124 | | 0.0363 | 25.8413 | 53000 | 0.1034 | 18.9496 | 4.1060 | | 0.0383 | 26.3287 | 54000 | 0.1047 | 18.8968 | 4.0792 | | 0.035 | 26.8164 | 55000 | 0.1035 | 18.8880 | 4.0753 | | 0.0387 | 27.3038 | 56000 | 0.1046 | 18.8615 | 4.0642 | | 0.0423 | 27.7915 | 57000 | 0.1040 | 18.9012 | 4.0697 | | 0.0393 | 28.2790 | 58000 | 0.1046 | 18.8615 | 4.0697 | | 0.03 | 28.7666 | 59000 | 0.1041 | 18.8703 | 4.0713 | | 0.0438 | 29.2541 | 60000 | 0.1040 | 18.8880 | 4.0690 | | 0.0398 | 29.7418 | 61000 | 0.1039 | 18.8880 | 4.0690 | ### Framework versions - Transformers 4.55.4 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.21.4
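A minimal transcription sketch for this Faroese ASR checkpoint, assuming `transformers`, `torch`, and ffmpeg are installed; `sample.wav` is a placeholder path.

```python
# Transcribe Faroese audio with the ASR pipeline (wav2vec2 CTC decoding;
# 16 kHz mono input works best).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="davidilag/wav2vec2-xls-r-300m-pt-1000h_faroese-cp10-faroese-100h-30-epochs_run7_2025-09-02",
)
print(asr("sample.wav")["text"])
```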
neutrino12/tensorstax-32b-22000-lora-32-3e-4-plan-2262
neutrino12
2025-09-02T23:56:41Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-02T23:38:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
akirafudo/blockassist-bc-keen_fast_giraffe_1756857348
akirafudo
2025-09-02T23:56:08Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T23:56:04Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
omerbektass/blockassist-bc-keen_fast_giraffe_1756857213
omerbektass
2025-09-02T23:53:58Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T23:53:52Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
omerbkts/blockassist-bc-keen_fast_giraffe_1756857079
omerbkts
2025-09-02T23:51:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T23:51:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
fakir22/blockassist-bc-flapping_peaceful_caterpillar_1756857036
fakir22
2025-09-02T23:51:16Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "flapping peaceful caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T23:51:13Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - flapping peaceful caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
enacimie/WebSailor-7B-Q4_0-GGUF
enacimie
2025-09-02T23:50:41Z
0
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:Alibaba-NLP/WebSailor-7B", "base_model:quantized:Alibaba-NLP/WebSailor-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-02T23:50:20Z
--- license: apache-2.0 tags: - llama-cpp - gguf-my-repo base_model: Alibaba-NLP/WebSailor-7B --- # enacimie/WebSailor-7B-Q4_0-GGUF This model was converted to GGUF format from [`Alibaba-NLP/WebSailor-7B`](https://huggingface.co/Alibaba-NLP/WebSailor-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Alibaba-NLP/WebSailor-7B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo enacimie/WebSailor-7B-Q4_0-GGUF --hf-file websailor-7b-q4_0.gguf -p "The meaning of life and the universe is" ``` ### Server: ```bash llama-server --hf-repo enacimie/WebSailor-7B-Q4_0-GGUF --hf-file websailor-7b-q4_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ```bash git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux). ```bash cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ```bash ./llama-cli --hf-repo enacimie/WebSailor-7B-Q4_0-GGUF --hf-file websailor-7b-q4_0.gguf -p "The meaning of life and the universe is" ``` or ```bash ./llama-server --hf-repo enacimie/WebSailor-7B-Q4_0-GGUF --hf-file websailor-7b-q4_0.gguf -c 2048 ```
xinnn32/blockassist-bc-meek_winged_caterpillar_1756856947
xinnn32
2025-09-02T23:50:20Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "meek winged caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T23:49:59Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - meek winged caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
GabrielDasilva/entrepreneur-readiness-baseline
GabrielDasilva
2025-09-02T23:48:19Z
0
0
null
[ "region:us" ]
null
2025-09-02T23:43:02Z
# Baseline Model **Task:** classification **Target:** entrepreneur_readiness ## Metrics { "accuracy": 0.19047619047619047, "f1_weighted": 0.18511118511118513 } ## Train command ```bash python colab_all_in_one.py --csv augmented_dataset.csv --target entrepreneur_readiness --task classification ```
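For reference, the reported scores correspond to scikit-learn's accuracy and weighted F1; a minimal sketch of how they are computed, with placeholder labels (assumes `pip install scikit-learn`):

```python
# Compute the two metrics reported above; y_true / y_pred are placeholder
# labels, not the project's real predictions.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 2, 1, 0, 2, 1]
y_pred = [0, 2, 1, 1, 0, 0, 1]
print("accuracy:", accuracy_score(y_true, y_pred))
print("f1_weighted:", f1_score(y_true, y_pred, average="weighted"))
```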
Diogo2303/whisper-medium-F5-Children-100h-1epoch
Diogo2303
2025-09-02T23:47:26Z
0
0
null
[ "tensorboard", "safetensors", "whisper", "generated_from_trainer", "pt", "base_model:openai/whisper-medium", "base_model:finetune:openai/whisper-medium", "license:apache-2.0", "region:us" ]
null
2025-09-02T16:32:09Z
--- language: - pt license: apache-2.0 base_model: openai/whisper-medium tags: - generated_from_trainer model-index: - name: Whisper MEDIUM FINAL results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper MEDIUM FINAL This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the 800 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.0 - Pytorch 2.8.0+cu128 - Datasets 3.6.0 - Tokenizers 0.14.0
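A minimal Portuguese transcription sketch for this fine-tuned Whisper checkpoint, assuming `transformers`, `torch`, and ffmpeg are installed; `child_speech.wav` is a placeholder path.

```python
# Transcribe Portuguese child speech with the fine-tuned Whisper model.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Diogo2303/whisper-medium-F5-Children-100h-1epoch",
)
result = asr("child_speech.wav", generate_kwargs={"language": "portuguese"})
print(result["text"])
```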
omerbkts/blockassist-bc-keen_fast_giraffe_1756856700
omerbkts
2025-09-02T23:45:27Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T23:45:22Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Fiscus/caffeine-4b-thinking-sft-16bit
Fiscus
2025-09-02T23:44:27Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-02T23:44:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
GroomerG/blockassist-bc-vicious_pawing_badger_1756854518
GroomerG
2025-09-02T23:37:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "vicious pawing badger", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T23:37:37Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - vicious pawing badger --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
akirafudo/blockassist-bc-keen_fast_giraffe_1756856176
akirafudo
2025-09-02T23:37:13Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T23:36:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/Xgen-DPE-GGUF
mradermacher
2025-09-02T23:35:16Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:NewEden/Xgen-DPE", "base_model:quantized:NewEden/Xgen-DPE", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-02T21:56:00Z
--- base_model: NewEden/Xgen-DPE language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/NewEden/Xgen-DPE <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Xgen-DPE-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Xgen-DPE-GGUF/resolve/main/Xgen-DPE.Q2_K.gguf) | Q2_K | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Xgen-DPE-GGUF/resolve/main/Xgen-DPE.Q3_K_S.gguf) | Q3_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Xgen-DPE-GGUF/resolve/main/Xgen-DPE.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Xgen-DPE-GGUF/resolve/main/Xgen-DPE.Q3_K_L.gguf) | Q3_K_L | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Xgen-DPE-GGUF/resolve/main/Xgen-DPE.IQ4_XS.gguf) | IQ4_XS | 6.0 | | | [GGUF](https://huggingface.co/mradermacher/Xgen-DPE-GGUF/resolve/main/Xgen-DPE.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Xgen-DPE-GGUF/resolve/main/Xgen-DPE.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Xgen-DPE-GGUF/resolve/main/Xgen-DPE.Q5_K_S.gguf) | Q5_K_S | 7.5 | | | [GGUF](https://huggingface.co/mradermacher/Xgen-DPE-GGUF/resolve/main/Xgen-DPE.Q5_K_M.gguf) | Q5_K_M | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/Xgen-DPE-GGUF/resolve/main/Xgen-DPE.Q6_K.gguf) | Q6_K | 8.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Xgen-DPE-GGUF/resolve/main/Xgen-DPE.Q8_0.gguf) | Q8_0 | 11.4 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Xgen-DPE-GGUF/resolve/main/Xgen-DPE.f16.gguf) | f16 | 21.4 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
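To script the download of one of the quants listed above, a minimal sketch with `huggingface_hub`; the Q4_K_S pick simply follows the table's "fast, recommended" note.

```python
# Fetch a single GGUF file from this repo into the local HF cache.
# Assumes `pip install huggingface_hub`.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Xgen-DPE-GGUF",
    filename="Xgen-DPE.Q4_K_S.gguf",
)
print("GGUF saved to:", path)  # point llama.cpp or another GGUF runtime here
```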
omerbkts/blockassist-bc-keen_fast_giraffe_1756855929
omerbkts
2025-09-02T23:32:32Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T23:32:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
cebaezc/distilgpt2-quijote
cebaezc
2025-09-02T23:31:10Z
12
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:distilbert/distilgpt2", "base_model:finetune:distilbert/distilgpt2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-31T01:32:54Z
--- library_name: transformers license: apache-2.0 base_model: distilgpt2 tags: - generated_from_trainer model-index: - name: distilgpt2-quijote results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-quijote This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.55.4 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.21.4
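A minimal generation sketch, assuming `transformers` and `torch` are installed; the prompt (the opening of Don Quijote) and sampling settings are illustrative.

```python
# Sample Quijote-flavoured text from this distilgpt2 fine-tune.
from transformers import pipeline

generator = pipeline("text-generation", model="cebaezc/distilgpt2-quijote")
out = generator("En un lugar de la Mancha,", max_new_tokens=60, do_sample=True)
print(out[0]["generated_text"])
```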
akirafudo/blockassist-bc-keen_fast_giraffe_1756855824
akirafudo
2025-09-02T23:30:45Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T23:30:40Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
xinnn32/blockassist-bc-meek_winged_caterpillar_1756855729
xinnn32
2025-09-02T23:30:09Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "meek winged caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T23:29:47Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - meek winged caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kihansong/MOV_merged_v14-Q4_K_M-GGUF
kihansong
2025-09-02T23:28:47Z
0
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:kihansong/MOV_merged_v14", "base_model:quantized:kihansong/MOV_merged_v14", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-02T23:28:23Z
--- base_model: kihansong/MOV_merged_v14 tags: - llama-cpp - gguf-my-repo --- # kihansong/MOV_merged_v14-Q4_K_M-GGUF This model was converted to GGUF format from [`kihansong/MOV_merged_v14`](https://huggingface.co/kihansong/MOV_merged_v14) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/kihansong/MOV_merged_v14) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo kihansong/MOV_merged_v14-Q4_K_M-GGUF --hf-file mov_merged_v14-q4_k_m.gguf -p "The meaning of life and the universe is" ``` ### Server: ```bash llama-server --hf-repo kihansong/MOV_merged_v14-Q4_K_M-GGUF --hf-file mov_merged_v14-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ```bash git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux). ```bash cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ```bash ./llama-cli --hf-repo kihansong/MOV_merged_v14-Q4_K_M-GGUF --hf-file mov_merged_v14-q4_k_m.gguf -p "The meaning of life and the universe is" ``` or ```bash ./llama-server --hf-repo kihansong/MOV_merged_v14-Q4_K_M-GGUF --hf-file mov_merged_v14-q4_k_m.gguf -c 2048 ```
qinuoitu/blockassist-bc-scurrying_opaque_mandrill_1756855698
qinuoitu
2025-09-02T23:28:36Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scurrying opaque mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T23:28:18Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - scurrying opaque mandrill --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
fakir22/blockassist-bc-flapping_peaceful_caterpillar_1756855492
fakir22
2025-09-02T23:25:33Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "flapping peaceful caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T23:25:29Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - flapping peaceful caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
NahedDom/blockassist-bc-flapping_stocky_leopard_1756853182
NahedDom
2025-09-02T23:23:32Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "flapping stocky leopard", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T23:23:29Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - flapping stocky leopard --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/Llama-3.1-Non-filter-Lafeak64-8B-GGUF
mradermacher
2025-09-02T23:22:31Z
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:Yuichi1218/Llama-3.1-Non-filter-Lafeak64-8B", "base_model:quantized:Yuichi1218/Llama-3.1-Non-filter-Lafeak64-8B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-02T21:52:45Z
--- base_model: Yuichi1218/Llama-3.1-Non-filter-Lafeak64-8B language: - en library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - llama - trl --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/Yuichi1218/Llama-3.1-Non-filter-Lafeak64-8B <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Llama-3.1-Non-filter-Lafeak64-8B-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Non-filter-Lafeak64-8B-GGUF/resolve/main/Llama-3.1-Non-filter-Lafeak64-8B.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Non-filter-Lafeak64-8B-GGUF/resolve/main/Llama-3.1-Non-filter-Lafeak64-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Non-filter-Lafeak64-8B-GGUF/resolve/main/Llama-3.1-Non-filter-Lafeak64-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Non-filter-Lafeak64-8B-GGUF/resolve/main/Llama-3.1-Non-filter-Lafeak64-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Non-filter-Lafeak64-8B-GGUF/resolve/main/Llama-3.1-Non-filter-Lafeak64-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Non-filter-Lafeak64-8B-GGUF/resolve/main/Llama-3.1-Non-filter-Lafeak64-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Non-filter-Lafeak64-8B-GGUF/resolve/main/Llama-3.1-Non-filter-Lafeak64-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Non-filter-Lafeak64-8B-GGUF/resolve/main/Llama-3.1-Non-filter-Lafeak64-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Non-filter-Lafeak64-8B-GGUF/resolve/main/Llama-3.1-Non-filter-Lafeak64-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Non-filter-Lafeak64-8B-GGUF/resolve/main/Llama-3.1-Non-filter-Lafeak64-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Non-filter-Lafeak64-8B-GGUF/resolve/main/Llama-3.1-Non-filter-Lafeak64-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Non-filter-Lafeak64-8B-GGUF/resolve/main/Llama-3.1-Non-filter-Lafeak64-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
sivakrishna123/my-jarvis-adapters
sivakrishna123
2025-09-02T23:22:03Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-02T23:21:50Z
--- base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** sivakrishna123 - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-7b-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
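A minimal loading sketch, assuming this repo holds LoRA adapter weights for the `unsloth/qwen2.5-7b-unsloth-bnb-4bit` base named above; requires `peft`, `transformers`, `bitsandbytes`, and `accelerate`.

```python
# Attach these adapter weights to the 4-bit Qwen2.5 base they were trained from.
# Assumes `pip install peft transformers bitsandbytes accelerate torch`.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/qwen2.5-7b-unsloth-bnb-4bit", device_map="auto"
)
model = PeftModel.from_pretrained(base, "sivakrishna123/my-jarvis-adapters")
tokenizer = AutoTokenizer.from_pretrained("unsloth/qwen2.5-7b-unsloth-bnb-4bit")
```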
omerbkts/blockassist-bc-keen_fast_giraffe_1756855188
omerbkts
2025-09-02T23:20:10Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T23:20:06Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1756853623
lisaozill03
2025-09-02T23:20:02Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rugged prickly alpaca", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T23:19:59Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rugged prickly alpaca --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
anik1115/Merged_FineTuned_LOR_1B_Model_final
anik1115
2025-09-02T23:18:42Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
text-generation
2025-09-02T22:20:47Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MercuryNex/aCiD
MercuryNex
2025-09-02T23:18:41Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2025-09-02T23:17:57Z
--- license: other language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image --- Converted from [https://civitai.com/api/download/models/290640?type=Model&format=SafeTensor&size=pruned&fp=fp16](https://civitai.com/api/download/models/290640?type=Model&format=SafeTensor&size=pruned&fp=fp16).
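Since the card above gives no usage snippet, here is a minimal diffusers loading sketch. It assumes the conversion produced a standard diffusers-format SDXL checkpoint (suggested by the repo's `diffusers:StableDiffusionXLPipeline` tag); the prompt and output filename are placeholders.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the converted checkpoint (assumed to be in standard diffusers layout).
pipe = StableDiffusionXLPipeline.from_pretrained(
    "MercuryNex/aCiD", torch_dtype=torch.float16
).to("cuda")

# Generate one image from a placeholder prompt.
image = pipe("a detailed landscape painting").images[0]
image.save("output.png")
```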
AnerYubo/blockassist-bc-hairy_crested_fox_1756854722
AnerYubo
2025-09-02T23:12:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "hairy crested fox", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T23:12:03Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - hairy crested fox --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
xinnn32/blockassist-bc-meek_winged_caterpillar_1756854465
xinnn32
2025-09-02T23:09:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "meek winged caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T23:08:49Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - meek winged caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hakan35/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bold_gregarious_squirrel
hakan35
2025-09-02T23:09:05Z
38
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am bold_gregarious_squirrel", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-07-13T07:26:04Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am bold_gregarious_squirrel --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
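The quick-start section above is left as a template stub; a minimal sketch follows, assuming the checkpoint loads with the standard transformers text-generation pipeline (consistent with the repo's `text-generation` tag):

```python
from transformers import pipeline

# Load this checkpoint with the generic text-generation pipeline.
generator = pipeline(
    "text-generation",
    model="hakan35/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bold_gregarious_squirrel",
)
print(generator("Hello!", max_new_tokens=32)[0]["generated_text"])
```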
lambertxiao/Vision-Language-Vision-Captioner-Qwen2.5-3B
lambertxiao
2025-09-02T23:07:35Z
43
1
transformers
[ "transformers", "safetensors", "VLV_decoder", "feature-extraction", "image-captioning", "multimodal", "vision-language", "diffusion", "pytorch", "image-to-text", "custom_code", "dataset:conceptual_captions", "dataset:coco", "license:apache-2.0", "region:us" ]
image-to-text
2025-07-10T08:31:22Z
--- license: apache-2.0 tags: - image-captioning - multimodal - vision-language - diffusion - pytorch - transformers library_name: transformers pipeline_tag: image-to-text datasets: - conceptual_captions - coco model_type: VLV_decoder --- # VLV Captioner Model This is a VLV (Vision-Language-Vision) model for image captioning. The model combines a Stable Diffusion-based image encoder with the Qwen language model to generate descriptive captions from images. ## Model Description The VLV Captioner is a multimodal model that: - Uses a diffusion-based vision encoder to extract image features - Employs the Qwen2.5-3B language model for text generation - Generates natural language descriptions of input images ## Model Architecture - **Vision Encoder**: Stable Diffusion-based image encoder with Florence2 components - **Language Model**: Qwen2.5-3B transformer model - **Image Size**: 384x384 pixels - **Max Caption Length**: 300 tokens - **Precision**: Mixed precision (bfloat16/float32) ## Usage ### Method 1: Load from Hugging Face Hub ```python from transformers import AutoModel, AutoConfig from PIL import Image import torch import os # Optional: Set a custom cache directory if needed cache_dir = "/path/to/your/cache" # Use a directory with sufficient space os.makedirs(cache_dir, exist_ok=True) # Load the model with an authentication token (if required) token = os.getenv('HUGGINGFACE_TOKEN') # or your token string print("Loading config...") config = AutoConfig.from_pretrained( "your-username/vlv-captioner", trust_remote_code=True, token=token, cache_dir=cache_dir ) print("Loading model...") try: model = AutoModel.from_pretrained( "your-username/vlv-captioner", trust_remote_code=True, token=token, cache_dir=cache_dir, torch_dtype=torch.float32, # Specify dtype explicitly low_cpu_mem_usage=True # Note: Avoid device_map="auto" to prevent meta tensor issues ) print("Model loaded successfully!") # Load and process an image image = Image.open("path/to/your/image.jpg") # Move model to GPU if available if torch.cuda.is_available(): model = model.to('cuda') print("Model moved to GPU!") # Generate caption print("Generating caption...") with torch.no_grad(): captions = model([image], max_length=300) # Handle different possible output formats if hasattr(captions, 'generated_text'): print("Generated caption:", captions.generated_text[0]) elif isinstance(captions, list): print("Generated caption:", captions[0]) else: print("Generated caption:", captions) except Exception as e: print(f"Error during model loading or inference: {e}") # If cached files are corrupted, try clearing the cache and redownloading import shutil cache_path = f"{cache_dir}/modules/transformers_modules/your-username/vlv-captioner" if os.path.exists(cache_path): print(f"Clearing cache at {cache_path}") shutil.rmtree(cache_path) # Retry with force download model = AutoModel.from_pretrained( "your-username/vlv-captioner", trust_remote_code=True, token=token, cache_dir=cache_dir, force_download=True, torch_dtype=torch.float32 ) ``` ### Method 2: Load from original checkpoint ```python from PIL import Image import torch from VLV_stage2 import VLV_MODEL # Load from the original .pt checkpoint file model = VLV_MODEL.from_checkpoint("path/to/model.pt") # Load and process an image image = Image.open("path/to/your/image.jpg") # Generate caption with torch.no_grad(): captions = model([image], max_length=300) print(captions.generated_text[0]) # Generated caption ``` ## Model Details - **Model Type**: Vision-Language Model - **Architecture**: VLV_decoder - **Language Backbone**: Qwen/Qwen2.5-3B - **Vision 
Backbone**: Stable Diffusion + Florence2 - **Training Data**: Various image-caption datasets - **Framework**: PyTorch, Transformers ## Training Configuration - **Batch Size**: 1 (inference) - **Learnable Token Length**: 77 - **Guidance Scale**: 7.5 - **Inference Steps**: 50 - **Beam Search**: 4 beams ## Requirements ```bash pip install torch transformers safetensors torchvision pillow diffusers ``` ## Troubleshooting ### Common Issues and Solutions #### 1. Meta Tensor Issues If you encounter meta tensor errors, avoid using `device_map="auto"` when loading the model: ```python # ❌ Don't use this - can cause meta tensor issues model = AutoModel.from_pretrained("model-name", device_map="auto") # ✅ Use this instead model = AutoModel.from_pretrained("model-name", torch_dtype=torch.float32, low_cpu_mem_usage=True) if torch.cuda.is_available(): model = model.to('cuda') ``` #### 2. Cache Issues If you experience corrupted cache files, clear the cache and redownload: ```python import shutil import os cache_dir = "/your/cache/directory" cache_path = f"{cache_dir}/modules/transformers_modules/your-username/model-name" if os.path.exists(cache_path): shutil.rmtree(cache_path) # Then reload with force_download=True model = AutoModel.from_pretrained("model-name", force_download=True) ``` #### 3. Authentication Issues Make sure your Hugging Face token is properly set: ```bash # Option 1: Environment variable export HUGGINGFACE_TOKEN="your_token_here" # Option 2: Hugging Face CLI login huggingface-cli login ``` #### 4. Memory Issues For large models, use a custom cache directory with sufficient space: ```python cache_dir = "/path/to/large/storage" os.makedirs(cache_dir, exist_ok=True) model = AutoModel.from_pretrained("model-name", cache_dir=cache_dir, low_cpu_mem_usage=True) ``` ## Advanced Usage ### Batch Processing with Original Inference Script For large-scale inference, you can use the original training inference script: ```bash python Caption_inference.py \ --input_path /path/to/images \ --output_path captions.json \ --clip_decoder_checkpoint /path/to/model.pt \ --qwen_model Qwen/Qwen2.5-3B \ --stable_diffusion_model_path stabilityai/stable-diffusion-2-1-base \ --florence2_model_path microsoft/Florence-2-large \ --batch_size 4 \ --max_length 300 \ --num_beams 4 \ --image_size 384 \ --guidance_scale 7.5 \ --use_text_encoder \ --distributed # For multi-GPU inference ``` ### Configuration Parameters - `image_size`: Input image resolution (default: 384) - `guidance_scale`: Diffusion guidance scale (default: 7.5) - `learnable_token_length`: Number of vision tokens (default: 77) - `max_length`: Maximum caption length (default: 300) - `num_beams`: Beam search width (default: 4) - `use_text_encoder`: Enable CLIP text encoder (recommended: True) ## Citation ```bibtex @article{vlv_autoencoder, title={Vision-Language-Vision Auto-Encoder: Scalable Knowledge Distillation from Diffusion Models}, author={Zhang, Tiezheng and Li, Yitong and Chou, Yu-Cheng and Chen, Jieneng and Yuille, Alan L. and Wei, Chen and Xiao, Junfei}, journal={arXiv preprint}, year={2024} } ``` ## License This model is released under the Apache 2.0 license.
johngreendr1/5ad820cc-ad35-476e-ba29-3f0f456d9a9c
johngreendr1
2025-09-02T23:05:18Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:jingyeom/seal3.1.6n_7b", "base_model:adapter:jingyeom/seal3.1.6n_7b", "region:us" ]
null
2025-09-02T20:00:43Z
--- base_model: jingyeom/seal3.1.6n_7b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
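The quick-start section above is a template stub; here is a minimal PEFT loading sketch. It assumes this repo is a standard LoRA-style adapter for the base model named in the card; the adapter layout is not confirmed by the card.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model listed in the card, then attach this adapter (assumed PEFT/LoRA layout).
base = AutoModelForCausalLM.from_pretrained("jingyeom/seal3.1.6n_7b")
model = PeftModel.from_pretrained(base, "johngreendr1/5ad820cc-ad35-476e-ba29-3f0f456d9a9c")
tokenizer = AutoTokenizer.from_pretrained("jingyeom/seal3.1.6n_7b")
```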
kafa22/blockassist-bc-regal_leggy_hummingbird_1756854196
kafa22
2025-09-02T23:03:57Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal leggy hummingbird", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T23:03:54Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal leggy hummingbird --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Dania19862017/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-unseen_nocturnal_zebra
Dania19862017
2025-09-02T23:03:18Z
98
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am unseen_nocturnal_zebra", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-31T15:36:16Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am unseen_nocturnal_zebra --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
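The quick-start section above is left as a template stub; a minimal sketch follows, assuming the checkpoint loads with the standard transformers text-generation pipeline (consistent with the repo's `text-generation` tag):

```python
from transformers import pipeline

# Load this checkpoint with the generic text-generation pipeline.
generator = pipeline(
    "text-generation",
    model="Dania19862017/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-unseen_nocturnal_zebra",
)
print(generator("Hello!", max_new_tokens=32)[0]["generated_text"])
```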
rayonlabs/tournament-tourn_bcbe2c057c905676_20250902-b92652c5-979a-45d0-9123-8cdab2b688c2-5GqABuCy
rayonlabs
2025-09-02T23:03:13Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen2.5-Coder-7B-Instruct", "base_model:adapter:Qwen/Qwen2.5-Coder-7B-Instruct", "region:us" ]
null
2025-09-02T23:02:46Z
--- base_model: Qwen/Qwen2.5-Coder-7B-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
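The quick-start section above is a template stub; here is a minimal PEFT loading sketch. It assumes this repo is a standard LoRA-style adapter for the base model named in the card; the adapter layout is not confirmed by the card.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model listed in the card, then attach this adapter (assumed PEFT/LoRA layout).
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-Coder-7B-Instruct")
model = PeftModel.from_pretrained(
    base,
    "rayonlabs/tournament-tourn_bcbe2c057c905676_20250902-b92652c5-979a-45d0-9123-8cdab2b688c2-5GqABuCy",
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Coder-7B-Instruct")
```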
pavannagula/ppo-LunarLander-v2
pavannagula
2025-09-02T23:02:28Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-09-02T23:02:07Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 209.74 +/- 94.71 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch; the checkpoint filename below is an assumption, so substitute the `.zip` file actually present in this repo: ```python from huggingface_sb3 import load_from_hub from stable_baselines3 import PPO # Download the checkpoint from the Hub (the filename is a hypothetical placeholder). checkpoint = load_from_hub(repo_id="pavannagula/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) ```
RaghavM12/Diet-Coach-Update
RaghavM12
2025-09-02T22:58:33Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-02T22:58:31Z
--- base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit tags: - text-generation-inference - transformers - unsloth - mistral - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** RaghavM12 - **License:** apache-2.0 - **Finetuned from model:** unsloth/mistral-7b-instruct-v0.3-bnb-4bit This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
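A minimal inference sketch for the upload above, assuming the weights load through Unsloth's `FastLanguageModel` in 4-bit (mirroring the bnb-4bit base model); this is not confirmed by the card:

```python
from unsloth import FastLanguageModel

# Load the fine-tuned weights (4-bit load is an assumption, matching the base model).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="RaghavM12/Diet-Coach-Update",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's faster inference path
```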
neutrino12/tensorstax-32b-22000-lora-32-5e-5-plan-2262
neutrino12
2025-09-02T22:57:40Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-02T22:39:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
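The quick-start section above is a template stub; a minimal sketch follows, assuming standard transformers loading (the checkpoint appears to be ~32B from its name, so `device_map="auto"` and an auto dtype are used as sensible, unconfirmed defaults):

```python
from transformers import pipeline

# Load this checkpoint with the generic text-generation pipeline; shard across devices for a large model.
generator = pipeline(
    "text-generation",
    model="neutrino12/tensorstax-32b-22000-lora-32-5e-5-plan-2262",
    torch_dtype="auto",
    device_map="auto",
)
print(generator("Hello!", max_new_tokens=32)[0]["generated_text"])
```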
omerbektass/blockassist-bc-keen_fast_giraffe_1756853808
omerbektass
2025-09-02T22:57:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T22:57:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
omerbkts/blockassist-bc-keen_fast_giraffe_1756853656
omerbkts
2025-09-02T22:55:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T22:54:33Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kmpartner/k5pcmlra-test
kmpartner
2025-09-02T22:51:09Z
100
0
peft
[ "peft", "tensorboard", "diffusers", "safetensors", "arxiv:1910.09700", "base_model:segmind/Segmind-Vega", "base_model:adapter:segmind/Segmind-Vega", "region:us" ]
null
2025-04-27T12:32:18Z
--- library_name: peft base_model: segmind/Segmind-Vega --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.9.0
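The quick-start section above is a template stub; a heavily hedged sketch follows. It assumes this repo holds a diffusers-loadable LoRA for the Segmind-Vega base named in the card; neither the file layout nor the weight filename is confirmed.

```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the base pipeline named in the card, then attach the (assumed) LoRA weights.
pipe = AutoPipelineForText2Image.from_pretrained(
    "segmind/Segmind-Vega", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("kmpartner/k5pcmlra-test")  # assumes diffusers-compatible LoRA files
image = pipe("a placeholder prompt").images[0]
```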
omerbektass/blockassist-bc-keen_fast_giraffe_1756853393
omerbektass
2025-09-02T22:50:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T22:50:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ellenpro/EllenLin-Replicate
ellenpro
2025-09-02T22:49:58Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-02T20:46:00Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: EllenLin --- # Ellenlin Replicate <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `EllenLin` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "EllenLin", "lora_weights": "https://huggingface.co/ellenpro/EllenLin-Replicate/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('ellenpro/EllenLin-Replicate', weight_name='lora.safetensors') image = pipeline('EllenLin').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/ellenpro/EllenLin-Replicate/discussions) to add images that show off what you’ve made with this LoRA.
OddTheGreat/Circuitry_24B_V.2-Q4_K_S-GGUF
OddTheGreat
2025-09-02T22:49:20Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "roleplay", "creative", "llama-cpp", "gguf-my-repo", "en", "ru", "base_model:OddTheGreat/Circuitry_24B_V.2", "base_model:quantized:OddTheGreat/Circuitry_24B_V.2", "endpoints_compatible", "region:us" ]
null
2025-09-02T22:48:24Z
--- base_model: OddTheGreat/Circuitry_24B_V.2 library_name: transformers tags: - mergekit - merge - roleplay - creative - llama-cpp - gguf-my-repo language: - en - ru --- # OddTheGreat/Circuitry_24B_V.2-Q4_K_S-GGUF This model was converted to GGUF format from [`OddTheGreat/Circuitry_24B_V.2`](https://huggingface.co/OddTheGreat/Circuitry_24B_V.2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/OddTheGreat/Circuitry_24B_V.2) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo OddTheGreat/Circuitry_24B_V.2-Q4_K_S-GGUF --hf-file circuitry_24b_v.2-q4_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo OddTheGreat/Circuitry_24B_V.2-Q4_K_S-GGUF --hf-file circuitry_24b_v.2-q4_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo OddTheGreat/Circuitry_24B_V.2-Q4_K_S-GGUF --hf-file circuitry_24b_v.2-q4_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo OddTheGreat/Circuitry_24B_V.2-Q4_K_S-GGUF --hf-file circuitry_24b_v.2-q4_k_s.gguf -c 2048 ```
chainway9/blockassist-bc-untamed_quick_eel_1756851788
chainway9
2025-09-02T22:49:10Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "untamed quick eel", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T22:49:06Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - untamed quick eel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ChenWu98/numina_qwen_2.5_sft_combine_v3_source_split_1
ChenWu98
2025-09-02T22:48:44Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:Qwen/Qwen2.5-1.5B", "base_model:finetune:Qwen/Qwen2.5-1.5B", "endpoints_compatible", "region:us" ]
null
2025-09-02T22:47:47Z
--- base_model: Qwen/Qwen2.5-1.5B library_name: transformers model_name: numina_qwen_2.5_sft_combine_v3_source_split_1 tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for numina_qwen_2.5_sft_combine_v3_source_split_1 This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_sft_combine_v3_source_split_1", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/irp6x8on) This model was trained with SFT. ### Framework versions - TRL: 0.19.1 - Transformers: 4.51.1 - Pytorch: 2.7.0 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
omerbkts/blockassist-bc-keen_fast_giraffe_1756853258
omerbkts
2025-09-02T22:47:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-09-02T22:47:55Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).