Dataset schema (column types and observed min/max):

| column | type | min | max |
|:--------------|:------------------------|:--------------------|:--------------------|
| modelId | string | length 5 | length 139 |
| author | string | length 2 | length 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-02 06:30:45 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (533 classes) | | |
| tags | list | length 1 | length 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-02 06:30:39 |
| card | string | length 11 | length 1.01M |
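The `downloads` and `likes` maxima above are shown with human-readable suffixes ("223M", "11.7k") rather than raw `int64` values. A small helper (an illustrative sketch, not part of any dataset tooling) can normalize such strings back to integers:

```python
def parse_count(text: str) -> int:
    """Parse a human-readable count like '223M' or '11.7k' into an integer."""
    suffixes = {"k": 1_000, "M": 1_000_000, "B": 1_000_000_000}
    text = text.strip()
    if text and text[-1] in suffixes:
        # round() guards against float artifacts, e.g. 11.7 * 1000 -> 11699.999...
        return round(float(text[:-1]) * suffixes[text[-1]])
    return int(text)

print(parse_count("223M"))   # 223000000
print(parse_count("11.7k"))  # 11700
print(parse_count("0"))      # 0
```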
### eusuf01/blockassist-bc-smooth_humming_butterfly_1756671349
- **author:** eusuf01
- **last_modified:** 2025-08-31T20:16:40Z
- **downloads:** 0
- **likes:** 0
- **library_name:** null
- **tags:** ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "smooth humming butterfly", "arxiv:2504.07091", "region:us"]
- **pipeline_tag:** null
- **createdAt:** 2025-08-31T20:16:19Z
- **card:**
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- smooth humming butterfly
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
### mradermacher/Qwen3-4B-Hermes-Axion-Pro-i1-GGUF
- **author:** mradermacher
- **last_modified:** 2025-08-31T20:15:46Z
- **downloads:** 0
- **likes:** 0
- **library_name:** transformers
- **tags:** ["transformers", "gguf", "en", "base_model:ZeroXClem/Qwen3-4B-Hermes-Axion-Pro", "base_model:quantized:ZeroXClem/Qwen3-4B-Hermes-Axion-Pro", "endpoints_compatible", "region:us", "imatrix", "conversational"]
- **pipeline_tag:** null
- **createdAt:** 2025-08-31T19:43:55Z
- **card:**
---
base_model: ZeroXClem/Qwen3-4B-Hermes-Axion-Pro
language:
- en
library_name: transformers
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->

weighted/imatrix quants of https://huggingface.co/ZeroXClem/Qwen3-4B-Hermes-Axion-Pro

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen3-4B-Hermes-Axion-Pro-i1-GGUF).***

Static quants are available at https://huggingface.co/mradermacher/Qwen3-4B-Hermes-Axion-Pro-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Hermes-Axion-Pro-i1-GGUF/resolve/main/Qwen3-4B-Hermes-Axion-Pro.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Hermes-Axion-Pro-i1-GGUF/resolve/main/Qwen3-4B-Hermes-Axion-Pro.i1-IQ1_S.gguf) | i1-IQ1_S | 1.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Hermes-Axion-Pro-i1-GGUF/resolve/main/Qwen3-4B-Hermes-Axion-Pro.i1-IQ1_M.gguf) | i1-IQ1_M | 1.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Hermes-Axion-Pro-i1-GGUF/resolve/main/Qwen3-4B-Hermes-Axion-Pro.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Hermes-Axion-Pro-i1-GGUF/resolve/main/Qwen3-4B-Hermes-Axion-Pro.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Hermes-Axion-Pro-i1-GGUF/resolve/main/Qwen3-4B-Hermes-Axion-Pro.i1-IQ2_S.gguf) | i1-IQ2_S | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Hermes-Axion-Pro-i1-GGUF/resolve/main/Qwen3-4B-Hermes-Axion-Pro.i1-IQ2_M.gguf) | i1-IQ2_M | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Hermes-Axion-Pro-i1-GGUF/resolve/main/Qwen3-4B-Hermes-Axion-Pro.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Hermes-Axion-Pro-i1-GGUF/resolve/main/Qwen3-4B-Hermes-Axion-Pro.i1-Q2_K.gguf) | i1-Q2_K | 1.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Hermes-Axion-Pro-i1-GGUF/resolve/main/Qwen3-4B-Hermes-Axion-Pro.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Hermes-Axion-Pro-i1-GGUF/resolve/main/Qwen3-4B-Hermes-Axion-Pro.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Hermes-Axion-Pro-i1-GGUF/resolve/main/Qwen3-4B-Hermes-Axion-Pro.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Hermes-Axion-Pro-i1-GGUF/resolve/main/Qwen3-4B-Hermes-Axion-Pro.i1-IQ3_S.gguf) | i1-IQ3_S | 2.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Hermes-Axion-Pro-i1-GGUF/resolve/main/Qwen3-4B-Hermes-Axion-Pro.i1-IQ3_M.gguf) | i1-IQ3_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Hermes-Axion-Pro-i1-GGUF/resolve/main/Qwen3-4B-Hermes-Axion-Pro.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Hermes-Axion-Pro-i1-GGUF/resolve/main/Qwen3-4B-Hermes-Axion-Pro.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Hermes-Axion-Pro-i1-GGUF/resolve/main/Qwen3-4B-Hermes-Axion-Pro.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Hermes-Axion-Pro-i1-GGUF/resolve/main/Qwen3-4B-Hermes-Axion-Pro.i1-Q4_0.gguf) | i1-Q4_0 | 2.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Hermes-Axion-Pro-i1-GGUF/resolve/main/Qwen3-4B-Hermes-Axion-Pro.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Hermes-Axion-Pro-i1-GGUF/resolve/main/Qwen3-4B-Hermes-Axion-Pro.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Hermes-Axion-Pro-i1-GGUF/resolve/main/Qwen3-4B-Hermes-Axion-Pro.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Hermes-Axion-Pro-i1-GGUF/resolve/main/Qwen3-4B-Hermes-Axion-Pro.i1-Q4_1.gguf) | i1-Q4_1 | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Hermes-Axion-Pro-i1-GGUF/resolve/main/Qwen3-4B-Hermes-Axion-Pro.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Hermes-Axion-Pro-i1-GGUF/resolve/main/Qwen3-4B-Hermes-Axion-Pro.i1-Q5_K_M.gguf) | i1-Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Hermes-Axion-Pro-i1-GGUF/resolve/main/Qwen3-4B-Hermes-Axion-Pro.i1-Q6_K.gguf) | i1-Q6_K | 3.4 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
### akirafudo/blockassist-bc-keen_fast_giraffe_1756671097
- **author:** akirafudo
- **last_modified:** 2025-08-31T20:12:09Z
- **downloads:** 0
- **likes:** 0
- **library_name:** null
- **tags:** ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us"]
- **pipeline_tag:** null
- **createdAt:** 2025-08-31T20:12:02Z
- **card:**
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
### eusuf01/blockassist-bc-smooth_humming_butterfly_1756670810
- **author:** eusuf01
- **last_modified:** 2025-08-31T20:07:30Z
- **downloads:** 0
- **likes:** 0
- **library_name:** null
- **tags:** ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "smooth humming butterfly", "arxiv:2504.07091", "region:us"]
- **pipeline_tag:** null
- **createdAt:** 2025-08-31T20:07:18Z
- **card:**
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- smooth humming butterfly
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
### malikka/blockassist-bc-dense_toothy_baboon_1756670401
- **author:** malikka
- **last_modified:** 2025-08-31T20:00:46Z
- **downloads:** 0
- **likes:** 0
- **library_name:** null
- **tags:** ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "dense toothy baboon", "arxiv:2504.07091", "region:us"]
- **pipeline_tag:** null
- **createdAt:** 2025-08-31T20:00:30Z
- **card:**
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dense toothy baboon
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
### eusuf01/blockassist-bc-smooth_humming_butterfly_1756670168
- **author:** eusuf01
- **last_modified:** 2025-08-31T19:56:54Z
- **downloads:** 0
- **likes:** 0
- **library_name:** null
- **tags:** ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "smooth humming butterfly", "arxiv:2504.07091", "region:us"]
- **pipeline_tag:** null
- **createdAt:** 2025-08-31T19:56:34Z
- **card:**
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- smooth humming butterfly
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
### VIDEO-DE-FILTRADO-ABIGAIL-LALAMA-Y-SNAYDER/VER.VIDEO.DE.ABIGAIL.LALAMA.Y.SNAYDER.FILTRADO.VIRAL
- **author:** VIDEO-DE-FILTRADO-ABIGAIL-LALAMA-Y-SNAYDER
- **last_modified:** 2025-08-31T19:51:41Z
- **downloads:** 0
- **likes:** 0
- **library_name:** null
- **tags:** ["region:us"]
- **pipeline_tag:** null
- **createdAt:** 2025-08-31T19:51:17Z
- **card:**
<animated-image data-catalyst=""><a href="https://tinyurl.com/5xr5mb3e?leaked-videos/" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
### eusuf01/blockassist-bc-smooth_humming_butterfly_1756669812
- **author:** eusuf01
- **last_modified:** 2025-08-31T19:50:55Z
- **downloads:** 0
- **likes:** 0
- **library_name:** null
- **tags:** ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "smooth humming butterfly", "arxiv:2504.07091", "region:us"]
- **pipeline_tag:** null
- **createdAt:** 2025-08-31T19:50:50Z
- **card:**
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- smooth humming butterfly
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
### GaborMadarasz/AstroQA_mamba_epoch1_V10
- **author:** GaborMadarasz
- **last_modified:** 2025-08-31T19:49:19Z
- **downloads:** 0
- **likes:** 0
- **library_name:** transformers
- **tags:** ["transformers", "safetensors", "mamba", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"]
- **pipeline_tag:** text-generation
- **createdAt:** 2025-08-31T19:48:58Z
- **card:**
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
### giovannidemuri/llama3b-llama8b-er-v508-seed2-seed2-hx-alpaca-fpt
- **author:** giovannidemuri
- **last_modified:** 2025-08-31T19:47:25Z
- **downloads:** 0
- **likes:** 0
- **library_name:** transformers
- **tags:** ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"]
- **pipeline_tag:** text-generation
- **createdAt:** 2025-08-31T17:59:23Z
- **card:**
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
### bboppp/blockassist-bc-alert_melodic_swan_1756669451
- **author:** bboppp
- **last_modified:** 2025-08-31T19:44:54Z
- **downloads:** 0
- **likes:** 0
- **library_name:** null
- **tags:** ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "alert melodic swan", "arxiv:2504.07091", "region:us"]
- **pipeline_tag:** null
- **createdAt:** 2025-08-31T19:44:12Z
- **card:**
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- alert melodic swan
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
### akirafudo/blockassist-bc-keen_fast_giraffe_1756669166
- **author:** akirafudo
- **last_modified:** 2025-08-31T19:40:01Z
- **downloads:** 0
- **likes:** 0
- **library_name:** null
- **tags:** ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us"]
- **pipeline_tag:** null
- **createdAt:** 2025-08-31T19:39:45Z
- **card:**
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
### mradermacher/DARS-1.5B-HW-GGUF
- **author:** mradermacher
- **last_modified:** 2025-08-31T19:38:29Z
- **downloads:** 0
- **likes:** 0
- **library_name:** transformers
- **tags:** ["transformers", "gguf", "en", "base_model:yangzhch6/DARS-1.5B-HW", "base_model:quantized:yangzhch6/DARS-1.5B-HW", "endpoints_compatible", "region:us", "conversational"]
- **pipeline_tag:** null
- **createdAt:** 2025-08-31T19:27:48Z
- **card:**
---
base_model: yangzhch6/DARS-1.5B-HW
language:
- en
library_name: transformers
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->

static quants of https://huggingface.co/yangzhch6/DARS-1.5B-HW

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#DARS-1.5B-HW-GGUF).***

weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DARS-1.5B-HW-GGUF/resolve/main/DARS-1.5B-HW.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/DARS-1.5B-HW-GGUF/resolve/main/DARS-1.5B-HW.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/DARS-1.5B-HW-GGUF/resolve/main/DARS-1.5B-HW.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DARS-1.5B-HW-GGUF/resolve/main/DARS-1.5B-HW.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/DARS-1.5B-HW-GGUF/resolve/main/DARS-1.5B-HW.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/DARS-1.5B-HW-GGUF/resolve/main/DARS-1.5B-HW.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DARS-1.5B-HW-GGUF/resolve/main/DARS-1.5B-HW.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DARS-1.5B-HW-GGUF/resolve/main/DARS-1.5B-HW.Q5_K_S.gguf) | Q5_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/DARS-1.5B-HW-GGUF/resolve/main/DARS-1.5B-HW.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/DARS-1.5B-HW-GGUF/resolve/main/DARS-1.5B-HW.Q6_K.gguf) | Q6_K | 1.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DARS-1.5B-HW-GGUF/resolve/main/DARS-1.5B-HW.Q8_0.gguf) | Q8_0 | 2.0 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DARS-1.5B-HW-GGUF/resolve/main/DARS-1.5B-HW.f16.gguf) | f16 | 3.7 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
### the-usan/urdu-crime-adapter-sucide-v1
- **author:** the-usan
- **last_modified:** 2025-08-31T19:38:21Z
- **downloads:** 0
- **likes:** 0
- **library_name:** transformers
- **tags:** ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"]
- **pipeline_tag:** null
- **createdAt:** 2025-08-31T19:38:11Z
- **card:**
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
### bah63843/blockassist-bc-plump_fast_antelope_1756668966
- **author:** bah63843
- **last_modified:** 2025-08-31T19:36:57Z
- **downloads:** 0
- **likes:** 0
- **library_name:** null
- **tags:** ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us"]
- **pipeline_tag:** null
- **createdAt:** 2025-08-31T19:36:47Z
- **card:**
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
beart881/blockassist-bc-sly_sturdy_mosquito_1756668634
beart881
2025-08-31T19:32:52Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sly sturdy mosquito", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T19:32:42Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sly sturdy mosquito --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
vendi11/blockassist-bc-placid_placid_llama_1756668178
vendi11
2025-08-31T19:23:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "placid placid llama", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T19:23:37Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - placid placid llama --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
giovannidemuri/llama3b-llama8b-er-v505-seed2-seed2-hx-alpaca-fpt
giovannidemuri
2025-08-31T19:18:49Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-31T17:46:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
leonzc/llama400m-climblab-function_calling-5k-bm25s-dora-merged
leonzc
2025-08-31T19:15:51Z
0
0
peft
[ "peft", "safetensors", "llama", "dora", "lora", "en", "base_model:data4elm/Llama-400M-12L", "base_model:adapter:data4elm/Llama-400M-12L", "license:apache-2.0", "region:us" ]
null
2025-08-31T19:15:39Z
--- language: - en tags: - llama - peft - dora - lora license: apache-2.0 base_model: data4elm/Llama-400M-12L --- # llama400m-climblab-function_calling-5k-bm25s-dora-merged DoRA fine-tuned LLaMA 400M model on bm25s_filtered 5k data from functioncalling_eval dataset using LMFlow ## Model Details This model is a DoRA-finetuned version of [data4elm/Llama-400M-12L](https://huggingface.co/data4elm/Llama-400M-12L). The standalone adapter is available at [leonzc/llama400m-climblab-function_calling-5k-bm25s-dora-adapter](https://huggingface.co/leonzc/llama400m-climblab-function_calling-5k-bm25s-dora-adapter). ## Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer from peft import PeftModel # Option 1: Load the complete model directly model = AutoModelForCausalLM.from_pretrained("leonzc/llama400m-climblab-function_calling-5k-bm25s-dora-merged") tokenizer = AutoTokenizer.from_pretrained("leonzc/llama400m-climblab-function_calling-5k-bm25s-dora-merged") # Option 2: Load just the adapter with the base model base_model = AutoModelForCausalLM.from_pretrained("data4elm/Llama-400M-12L") tokenizer = AutoTokenizer.from_pretrained("data4elm/Llama-400M-12L") model = PeftModel.from_pretrained(base_model, "leonzc/llama400m-climblab-function_calling-5k-bm25s-dora-adapter") # Example usage input_text = "What is the capital of France?" inputs = tokenizer(input_text, return_tensors="pt") outputs = model.generate(inputs.input_ids, max_length=50) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ```
Sayemahsjn/blockassist-bc-playful_feline_octopus_1756666232
Sayemahsjn
2025-08-31T19:08:16Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T19:08:12Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful feline octopus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
FAHAB/blockassist-bc-bipedal_powerful_magpie_1756667216
FAHAB
2025-08-31T19:07:47Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "bipedal powerful magpie", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T19:07:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - bipedal powerful magpie --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
akirafudo/blockassist-bc-keen_fast_giraffe_1756667179
akirafudo
2025-08-31T19:06:43Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T19:06:38Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
CHRISPI09/blockassist-bc-galloping_thick_tuna_1756666765
CHRISPI09
2025-08-31T18:59:51Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "galloping thick tuna", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T18:59:48Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - galloping thick tuna --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ttkairamkonda/whisper-large-v3-faa-atc-80k-LoRA64
ttkairamkonda
2025-08-31T18:56:05Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-31T18:55:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bah63843/blockassist-bc-plump_fast_antelope_1756665755
bah63843
2025-08-31T18:43:28Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T18:43:19Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
bah63843/blockassist-bc-plump_fast_antelope_1756665341
bah63843
2025-08-31T18:36:31Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T18:36:22Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/Lacaille-MoT-4B-Supreme2-GGUF
mradermacher
2025-08-31T18:30:03Z
4206
1
transformers
[ "transformers", "gguf", "trl", "moe", "thinking=1", "mot", "code", "science", "math", "mixture-of-thoughts", "text-generation-inference", "reasoning", "en", "dataset:open-r1/Mixture-of-Thoughts", "dataset:nvidia/OpenCodeReasoning", "base_model:prithivMLmods/Lacaille-MoT-4B-Supreme2", "base_model:quantized:prithivMLmods/Lacaille-MoT-4B-Supreme2", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-06-02T09:02:10Z
--- base_model: prithivMLmods/Lacaille-MoT-4B-Supreme2 datasets: - open-r1/Mixture-of-Thoughts - nvidia/OpenCodeReasoning language: - en library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - trl - moe - thinking=1 - mot - code - science - math - mixture-of-thoughts - text-generation-inference - reasoning --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/prithivMLmods/Lacaille-MoT-4B-Supreme2 <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Lacaille-MoT-4B-Supreme2-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q2_K.gguf) | Q2_K | 1.8 | | | [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q3_K_S.gguf) | Q3_K_S | 2.0 | | | [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q3_K_L.gguf) | Q3_K_L | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.IQ4_XS.gguf) | IQ4_XS | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q5_K_S.gguf) | Q5_K_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q5_K_M.gguf) | Q5_K_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q6_K.gguf) | Q6_K | 3.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.f16.gguf) | f16 | 8.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow 
comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
liukevin666/blockassist-bc-yawning_striped_cassowary_1756664939
liukevin666
2025-08-31T18:30:03Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T18:29:57Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yawning striped cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF
mradermacher
2025-08-31T18:29:38Z
3750
1
transformers
[ "transformers", "gguf", "trl", "moe", "thinking=1", "mot", "code", "science", "math", "mixture-of-thoughts", "text-generation-inference", "reasoning", "en", "dataset:open-r1/Mixture-of-Thoughts", "dataset:nvidia/OpenCodeReasoning", "base_model:prithivMLmods/Lacaille-MoT-4B-Supreme2", "base_model:quantized:prithivMLmods/Lacaille-MoT-4B-Supreme2", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-06-02T14:09:48Z
--- base_model: prithivMLmods/Lacaille-MoT-4B-Supreme2 datasets: - open-r1/Mixture-of-Thoughts - nvidia/OpenCodeReasoning language: - en library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - trl - moe - thinking=1 - mot - code - science - math - mixture-of-thoughts - text-generation-inference - reasoning --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/prithivMLmods/Lacaille-MoT-4B-Supreme2 <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Lacaille-MoT-4B-Supreme2-i1-GGUF).*** static quants are available at https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-IQ1_S.gguf) | i1-IQ1_S | 1.2 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-IQ1_M.gguf) | i1-IQ1_M | 1.2 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.3 | | | [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.5 | | | [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-IQ2_S.gguf) | i1-IQ2_S | 1.5 | | | [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-IQ2_M.gguf) | i1-IQ2_M | 1.6 | | | [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.7 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-Q2_K.gguf) | i1-Q2_K | 1.8 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.9 | | | [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.0 | IQ3_XS probably better | | 
[GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-IQ3_S.gguf) | i1-IQ3_S | 2.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-IQ3_M.gguf) | i1-IQ3_M | 2.1 | | | [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.2 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.3 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-Q4_0.gguf) | i1-Q4_0 | 2.5 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.5 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.5 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-Q4_1.gguf) | i1-Q4_1 | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 3.0 | | | 
[GGUF](https://huggingface.co/mradermacher/Lacaille-MoT-4B-Supreme2-i1-GGUF/resolve/main/Lacaille-MoT-4B-Supreme2.i1-Q6_K.gguf) | i1-Q6_K | 3.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
loopping/blockassist-bc-peaceful_crested_raven_1756664809
loopping
2025-08-31T18:27:08Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peaceful crested raven", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T18:26:49Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - peaceful crested raven --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
vuitton/dsc_111
vuitton
2025-08-31T18:21:10Z
0
0
null
[ "safetensors", "llama", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-31T18:17:30Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
liukevin666/blockassist-bc-yawning_striped_cassowary_1756663619
liukevin666
2025-08-31T18:08:04Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T18:07:58Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yawning striped cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Team-Atom/act_blueclick0830_32_40000
Team-Atom
2025-08-31T18:01:20Z
0
0
lerobot
[ "lerobot", "safetensors", "act", "robotics", "dataset:Team-Atom/blue_click_250830_ep100", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2025-08-31T18:01:07Z
--- datasets: Team-Atom/blue_click_250830_ep100 library_name: lerobot license: apache-2.0 model_name: act pipeline_tag: robotics tags: - act - robotics - lerobot --- # Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates. This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval: ### Train from scratch ```bash python -m lerobot.scripts.train \ --dataset.repo_id=${HF_USER}/<dataset> \ --policy.type=act \ --output_dir=outputs/train/<desired_policy_repo_id> \ --job_name=lerobot_training \ --policy.device=cuda \ --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true ``` _Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._ ### Evaluate the policy/run inference ```bash python -m lerobot.record \ --robot.type=so100_follower \ --dataset.repo_id=<hf_user>/eval_<dataset> \ --policy.path=<hf_user>/<desired_policy_repo_id> \ --episodes=10 ``` Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint. --- ## Model Details - **License:** apache-2.0
thejaminator/grpo-feature-vector-30aug-step-300
thejaminator
2025-08-31T17:59:37Z
0
0
peft
[ "peft", "safetensors", "lora", "text-generation", "base_model:thejaminator/qwen-hook-layer-9-step-1000-merged", "base_model:adapter:thejaminator/qwen-hook-layer-9-step-1000-merged", "region:us" ]
text-generation
2025-08-31T17:59:20Z
--- base_model: thejaminator/qwen-hook-layer-9-step-1000-merged library_name: peft tags: - lora - peft pipeline_tag: text-generation ---
radish05/huggingface_deep_rl_assn1
radish05
2025-08-31T17:54:24Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-08-31T17:54:05Z
--- library_name: stable-baselines3 tags: - LunarLander-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v3 type: LunarLander-v3 metrics: - type: mean_reward value: 262.33 +/- 22.10 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v3** This is a trained model of a **PPO** agent playing **LunarLander-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
happyensworld/blockassist-bc-sleek_scavenging_ram_1756662404
happyensworld
2025-08-31T17:48:15Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sleek scavenging ram", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T17:48:02Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sleek scavenging ram --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
od420do420/svndrx
od420do420
2025-08-31T17:42:40Z
0
0
null
[ "license:other", "region:us" ]
null
2025-08-31T16:56:01Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md ---
mradermacher/GTA1-7B-i1-GGUF
mradermacher
2025-08-31T17:31:15Z
30
1
transformers
[ "transformers", "gguf", "en", "base_model:HelloKKMe/GTA1-7B", "base_model:quantized:HelloKKMe/GTA1-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-07-09T03:49:56Z
--- base_model: HelloKKMe/GTA1-7B language: - en library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/HelloKKMe/GTA1-7B <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#GTA1-7B-i1-GGUF).*** static quants are available at https://huggingface.co/mradermacher/GTA1-7B-GGUF **This is a vision model - mmproj files (if any) will be in the [static repository](https://huggingface.co/mradermacher/GTA1-7B-GGUF).** ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/GTA1-7B-i1-GGUF/resolve/main/GTA1-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/GTA1-7B-i1-GGUF/resolve/main/GTA1-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/GTA1-7B-i1-GGUF/resolve/main/GTA1-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/GTA1-7B-i1-GGUF/resolve/main/GTA1-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/GTA1-7B-i1-GGUF/resolve/main/GTA1-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/GTA1-7B-i1-GGUF/resolve/main/GTA1-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/GTA1-7B-i1-GGUF/resolve/main/GTA1-7B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality | | [GGUF](https://huggingface.co/mradermacher/GTA1-7B-i1-GGUF/resolve/main/GTA1-7B.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/GTA1-7B-i1-GGUF/resolve/main/GTA1-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/GTA1-7B-i1-GGUF/resolve/main/GTA1-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/GTA1-7B-i1-GGUF/resolve/main/GTA1-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/GTA1-7B-i1-GGUF/resolve/main/GTA1-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/GTA1-7B-i1-GGUF/resolve/main/GTA1-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/GTA1-7B-i1-GGUF/resolve/main/GTA1-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better | | 
[GGUF](https://huggingface.co/mradermacher/GTA1-7B-i1-GGUF/resolve/main/GTA1-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/GTA1-7B-i1-GGUF/resolve/main/GTA1-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/GTA1-7B-i1-GGUF/resolve/main/GTA1-7B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/GTA1-7B-i1-GGUF/resolve/main/GTA1-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/GTA1-7B-i1-GGUF/resolve/main/GTA1-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/GTA1-7B-i1-GGUF/resolve/main/GTA1-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/GTA1-7B-i1-GGUF/resolve/main/GTA1-7B.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/GTA1-7B-i1-GGUF/resolve/main/GTA1-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/GTA1-7B-i1-GGUF/resolve/main/GTA1-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/GTA1-7B-i1-GGUF/resolve/main/GTA1-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. 
Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
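The files in the quant table above follow a regular naming pattern: `<model>.i1-<TYPE>.gguf` under the repo's `resolve/main/` path. A tiny stdlib-only helper (hypothetical, not part of this repo) can build the direct download URL for a chosen quant type:

```python
# Illustrative helper (not shipped with the repo): builds the direct
# download URL for an imatrix quant file, following the naming pattern
# visible in the table above: <model>.i1-<TYPE>.gguf under resolve/main/.

def gguf_url(repo: str, model: str, quant_type: str) -> str:
    return f"https://huggingface.co/{repo}/resolve/main/{model}.i1-{quant_type}.gguf"

url = gguf_url("mradermacher/GTA1-7B-i1-GGUF", "GTA1-7B", "Q4_K_M")
print(url)
```

The resulting URL matches the `i1-Q4_K_M` link in the table; swap in any other quant type from the `Type` column.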
acidjp/blockassist-bc-pesty_extinct_prawn_1756658946
acidjp
2025-08-31T17:27:32Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pesty extinct prawn", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T17:27:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - pesty extinct prawn --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
efsaefs/blockassist-bc-cunning_diving_grouse_1756658450
efsaefs
2025-08-31T17:26:26Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "cunning diving grouse", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T17:26:01Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - cunning diving grouse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
amanuelcm/Wan2.1-T2V-1.3B-OldIllustration
amanuelcm
2025-08-31T17:22:14Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-video", "lora", "template:diffusion-lora", "dataset:amanuelcm/Wan2.1-T2V-1.3B-OldIllustration", "base_model:Wan-AI/Wan2.1-T2V-1.3B", "base_model:adapter:Wan-AI/Wan2.1-T2V-1.3B", "license:mit", "region:us" ]
text-to-video
2025-08-31T10:10:10Z
--- tags: - text-to-video - lora - diffusers - template:diffusion-lora widget: - text: >- An old illustration of a waves continually crashing on a rocky shore, clouds pass overhead parameters: negative_prompt: >- 色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走 output: url: results/output1.mp4 - text: >- An old illustration of the Industrial Age, showing towering steam engines, massive steel bridges, busy factories with smokestacks, workers in 19th-century attire operating machinery, early locomotives on railways, intricate gears and pulleys, cobblestone streets, vintage street lamps, detailed line engraving style, cross-hatched shading, antique paper texture, black and white etching parameters: negative_prompt: >- 色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走 output: url: results/example4.webp - text: >- An old illustration of an early printing press in a dimly lit workshop, ink and paper scattered around, artisan working carefully, detailed vintage line art parameters: negative_prompt: >- 色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走 output: url: results/output3.mp4 - text: >- An old illustration of ancient Egyptian workers hauling giant stone blocks to build a pyramid, ropes pulled taut, dust clouds rising, muscles straining, desert sun blazing overhead, intricate engraving details parameters: negative_prompt: >- 色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走 output: url: results/output6.mp4 - text: >- An old illustration of sailors navigating a 15th-century wooden ship on rough seas, waves crashing against the hull, sails billowing in the wind, crew pulling ropes in unison, stormy clouds swirling above, 
fine engraving style parameters: negative_prompt: >- 色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走 output: url: results/output5.mp4 - text: An old illustration of a mysterious clockmaker's workshop, filled with tiny gears, antique tools, and intricate machinery, drawn in the style of 19th-century engravings, extremely detailed linework, cross-hatching, high contrast ink, vintage texture, aged paper background, meticulous craftsmanship, historical accuracy, black and white etching style parameters: negative_prompt: >- 色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走 output: url: results/ComfyUI_00005_.webp base_model: - Wan-AI/Wan2.1-T2V-1.3B instance_prompt: An old illustration of license: mit datasets: - amanuelcm/Wan2.1-T2V-1.3B-OldIllustration --- # Wan2.1-T2V-1.3B Old Illustrations LoRA <Gallery /> ## Model Description Lora adapter for [Wan-AI/Wan2.1-T2V-1.3B](https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B) text-2-video model trained on a subset of images from [amanuelcm/OldIllustration-dataset](https://huggingface.co/datasets/amanuelcm/OldIllustration-dataset). ## Trigger words You should use `An old illustration of ` to trigger the image generation. 
## Using with Diffusers ```bash pip install diffusers ``` ```py import torch from diffusers.utils import export_to_video from diffusers import AutoencoderKLWan, WanPipeline from diffusers.schedulers.scheduling_unipc_multistep import UniPCMultistepScheduler model_id = "Wan-AI/Wan2.1-T2V-1.3B-Diffusers" vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32) pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16) pipe.scheduler = UniPCMultistepScheduler.from_config( pipe.scheduler.config, flow_shift=5.0 ) pipe.to("cuda") pipe.load_lora_weights("amanuelcm/Wan2.1-T2V-1.3B-OldIllustration") pipe.enable_model_cpu_offload() # for low-vram environments prompt = "An old illustration of a mysterious clockmaker's workshop, filled with tiny gears, antique tools, and intricate machinery, drawn in the style of 19th-century engravings, extremely detailed linework, cross-hatching, high contrast ink, vintage texture, aged paper background, meticulous craftsmanship, historical accuracy, black and white etching style " negative_prompt = "色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走" output = pipe( prompt=prompt, negative_prompt=negative_prompt, height=480, width=640, num_frames=49, guidance_scale=5.0, num_inference_steps=32 ).frames[0] export_to_video(output, "output.mp4", fps=16) ``` ## Using with ComfyUI Use the provided ComfyUI [comfy.json](https://huggingface.co/amanuelcm/Wan2.1-T2V-1.3B-OldIllustration/blob/main/comfy.json). 
To quickly download the recommended text encoder, VAE, and Wan2.1 files, run: ```bash wget https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors wget https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/vae/wan_2.1_vae.safetensors wget https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/diffusion_models/wan2.1_t2v_1.3B_fp16.safetensors ``` ## Download model Weights for this model are available in Safetensors format. [Download](https://huggingface.co/amanuelcm/Wan2.1-T2V-1.3B-OldIllustration/tree/main) them in the Files & versions tab.
liukevin666/blockassist-bc-yawning_striped_cassowary_1756660306
liukevin666
2025-08-31T17:12:52Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T17:12:45Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yawning striped cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AnerYubo/blockassist-bc-pawing_downy_anaconda_1756660107
AnerYubo
2025-08-31T17:08:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pawing downy anaconda", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T17:08:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - pawing downy anaconda --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AnerYubo/blockassist-bc-elusive_mammalian_termite_1756660102
AnerYubo
2025-08-31T17:08:26Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "elusive mammalian termite", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T17:08:23Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - elusive mammalian termite --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
klmdr22/blockassist-bc-wild_loud_newt_1756659230
klmdr22
2025-08-31T16:54:33Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wild loud newt", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T16:54:30Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wild loud newt --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
akirafudo/blockassist-bc-keen_fast_giraffe_1756657851
akirafudo
2025-08-31T16:31:48Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T16:31:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
erfgwerg/blockassist-bc-pawing_silent_pigeon_1756654777
erfgwerg
2025-08-31T16:26:04Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pawing silent pigeon", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T16:25:38Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - pawing silent pigeon --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
sakthi54321/power_ai
sakthi54321
2025-08-31T16:25:37Z
0
0
null
[ "safetensors", "phi", "license:apache-2.0", "region:us" ]
null
2025-08-31T14:58:16Z
--- license: apache-2.0 ---
ThomasTheMaker/tiny-Dolma205M
ThomasTheMaker
2025-08-31T16:14:28Z
0
0
null
[ "safetensors", "pico_decoder", "custom_code", "license:apache-2.0", "region:us" ]
null
2025-08-31T16:06:00Z
--- license: apache-2.0 ---
Vishva007/Qwen2.5-3B-Instruct-RBI-QA-Adoptor
Vishva007
2025-08-31T15:57:27Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-31T15:57:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1756654265
vwzyrraz7l
2025-08-31T15:55:58Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tall hunting vulture", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T15:55:54Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tall hunting vulture --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
tongww/omnidemo
tongww
2025-08-31T15:50:17Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-31T15:50:17Z
--- license: apache-2.0 ---
bonapart1190/blockassist-bc-barky_whiskered_elk_1756654816
bonapart1190
2025-08-31T15:41:22Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "barky whiskered elk", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T15:41:14Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - barky whiskered elk --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
akirafudo/blockassist-bc-keen_fast_giraffe_1756654683
akirafudo
2025-08-31T15:38:26Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T15:38:22Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Vira21/Llama-3.2-3B-Instruct-Khmer-vocab-expanded
Vira21
2025-08-31T15:22:20Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "km", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-3B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-31T15:05:04Z
--- language: - km base_model: - meta-llama/Llama-3.2-3B-Instruct pipeline_tag: text-generation library_name: transformers --- # Vira21/Llama-3.2-3B-Instruct-Khmer-vocab-expanded This is **LLaMA with Khmer vocab expansion**, built by merging Khmer tokens from NLLB-200 into LLaMA’s tokenizer and resizing embeddings. Suitable for fine-tuning on Khmer QA tasks.
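The vocabulary expansion described above — merging donor-tokenizer tokens into the base tokenizer and resizing the embedding matrix — can be sketched with toy vocabularies. This is a stdlib-only illustration with made-up tokens and ids; the real workflow uses `tokenizer.add_tokens(...)` followed by `model.resize_token_embeddings(len(tokenizer))` in transformers.

```python
# Toy sketch of tokenizer vocabulary expansion (illustrative only).
# Real workflow (transformers):
#   num_added = llama_tokenizer.add_tokens(new_tokens)
#   model.resize_token_embeddings(len(llama_tokenizer))
# Here vocabularies are plain dicts, to show the merge itself.

base_vocab = {"<s>": 0, "hello": 1, "world": 2}           # stand-in for LLaMA's vocab
donor_vocab = {"hello": 7, "សួស្តី": 8, "ពិភពលោក": 9}     # stand-in for NLLB-200 Khmer tokens

# Tokens present in the donor tokenizer but missing from the base one
new_tokens = [t for t in donor_vocab if t not in base_vocab]

# Append them, assigning fresh ids after the existing vocabulary
merged_vocab = dict(base_vocab)
for tok in new_tokens:
    merged_vocab[tok] = len(merged_vocab)

# The embedding matrix must be resized to the new vocabulary size
new_embedding_rows = len(merged_vocab)
print(len(base_vocab), "->", new_embedding_rows)
```

The newly added rows of the embedding matrix are typically initialized (e.g. from the mean of existing embeddings) and then trained during fine-tuning.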
seyidyildiz/retina_disease_risk
seyidyildiz
2025-08-31T15:04:20Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-31T12:32:15Z
--- license: apache-2.0 --- # Retinal Disease Risk Detection ## Model Description This deep learning model automatically detects the risk of retinal disease from fundus images. It classifies a patient as either **"No Disease Risk"** or **"Disease Risk Present"**. The goal is to assist doctors with early diagnosis as a preliminary screening tool. --- ## Intended Use - Diagnostic support in a clinical setting. - Helps prioritize cases and streamline evaluation of at-risk patients. - **Not a standalone medical diagnostic tool**. --- ## Model Architecture and Training - **Model Name:** Retinal Disease Risk Detection Model - **Architecture:** Convolutional Neural Network (CNN) - **Training Dataset:** RFMiD (Retinal Fundus Image Multidisease) - **Data Preparation:** Images resized to 224x224 and normalized (0-1) - **Optimizer:** Adam, learning rate = 0.0001 - **Techniques:** Data augmentation, class weights, early stopping to prevent overfitting --- ## Model Performance | Class | Precision | Recall | F1-Score | Support | |------------|-----------|--------|----------|--------| | No Risk | 0.63 | 0.61 | 0.62 | 134 | | Risk Present | 0.90 | 0.90 | 0.90 | 506 | | **Weighted Avg** | 0.84 | 0.84 | 0.84 | 640 | - **Overall Accuracy:** 84.22% - The model performs well on "Disease Risk Present" but has lower recall for "No Risk". --- ## Limitations and Ethical Considerations - **Not a diagnostic tool.** The final diagnosis must be made by a qualified healthcare professional. - Lower recall for "No Risk" means healthy individuals may be misclassified as at-risk. - Model accuracy depends on the quality and diversity of the training data; performance may vary across demographics and imaging conditions. 
## Contact - **Name:** Seyid Yıldız - **Email:** [email protected] - **LinkedIn:** https://www.linkedin.com/in/seyid-yıldız-310091349 ## Installation ```bash pip install tensorflow opencv-python numpy ``` ## Usage ```python import numpy as np import cv2 from tensorflow.keras.models import load_model # Load the model model = load_model("retina_disease_risk.h5") # Load and preprocess an image img_path = 'new_image.png' img = cv2.imread(img_path) img = cv2.resize(img, (224, 224)) img = np.expand_dims(img, axis=0) / 255.0 # Prediction prediction = model.predict(img) if prediction[0][0] > 0.5: print("Disease Risk Present") else: print("No Disease Risk") ```
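The weighted averages in the performance table above are support-weighted means of the per-class scores; a quick sanity check using the numbers from the table:

```python
# Recompute the weighted-average row of the metrics table from the
# per-class values (support-weighted mean).

support = {"no_risk": 134, "risk": 506}
precision = {"no_risk": 0.63, "risk": 0.90}
recall = {"no_risk": 0.61, "risk": 0.90}

total = sum(support.values())  # 640 test images
w_precision = sum(precision[c] * support[c] for c in support) / total
w_recall = sum(recall[c] * support[c] for c in support) / total

print(round(w_precision, 2), round(w_recall, 2))  # 0.84 0.84, matching the table
```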
alexyamin/blockassist-bc-alert_tiny_chicken_1756650776
alexyamin
2025-08-31T14:50:42Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "alert tiny chicken", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T14:50:29Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - alert tiny chicken --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ThomasTheMaker/tiny-dolma10M
ThomasTheMaker
2025-08-31T14:50:31Z
0
0
null
[ "safetensors", "pico_decoder", "custom_code", "en", "dataset:ThomasTheMaker/pretokenized-dolma-10M", "dataset:allenai/dolma", "license:apache-2.0", "region:us" ]
null
2025-08-31T14:25:43Z
--- license: apache-2.0 datasets: - ThomasTheMaker/pretokenized-dolma-10M - allenai/dolma language: - en --- An 11M-parameter model, pre-trained on 10M rows of the Dolma dataset.
pidbu/blockassist-bc-whistling_alert_shrew_1756650958
pidbu
2025-08-31T14:37:25Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "whistling alert shrew", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T14:36:47Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - whistling alert shrew --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
lemonhat/Qwen2.5-7B-Instruct-t1_100k_v3_tag5_filtered
lemonhat
2025-08-31T14:34:49Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-31T14:23:51Z
--- library_name: transformers license: other base_model: Qwen/Qwen2.5-7B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: t1_100k_v3_tag5_filtered results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t1_100k_v3_tag5_filtered This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the t1_100k_v3_tag5_filtered dataset. It achieves the following results on the evaluation set: - Loss: 0.2149 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - total_train_batch_size: 4 - total_eval_batch_size: 4 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.2504 | 0.0184 | 100 | 0.3076 | | 0.2919 | 0.0369 | 200 | 0.2872 | | 0.2571 | 0.0553 | 300 | 0.2787 | | 0.2581 | 0.0738 | 400 | 0.2741 | | 0.2559 | 0.0922 | 500 | 0.2662 | | 0.27 | 0.1106 | 600 | 0.2605 | | 0.2434 | 0.1291 | 700 | 0.2620 | | 0.2706 | 0.1475 | 800 | 0.2556 | | 0.2361 | 0.1660 | 900 | 0.2540 | | 0.3626 | 0.1844 | 1000 | 0.2537 | | 0.2322 | 0.2028 | 1100 | 0.2499 | | 0.2154 | 0.2213 | 1200 | 0.2485 | | 0.2328 | 0.2397 | 1300 | 0.2488 | | 0.2567 | 0.2582 | 1400 | 0.2468 | | 0.2683 | 0.2766 | 1500 | 0.2424 | | 0.1867 | 0.2950 | 1600 | 0.2402 | | 0.2316 | 0.3135 | 1700 | 0.2398 | | 0.3717 | 0.3319 | 1800 
| 0.2400 | | 0.3125 | 0.3504 | 1900 | 0.2387 | | 0.2123 | 0.3688 | 2000 | 0.2369 | | 0.2644 | 0.3872 | 2100 | 0.2346 | | 0.2608 | 0.4057 | 2200 | 0.2336 | | 0.2633 | 0.4241 | 2300 | 0.2319 | | 0.1912 | 0.4426 | 2400 | 0.2307 | | 0.2486 | 0.4610 | 2500 | 0.2304 | | 0.2339 | 0.4794 | 2600 | 0.2314 | | 0.2858 | 0.4979 | 2700 | 0.2301 | | 0.2729 | 0.5163 | 2800 | 0.2296 | | 0.2127 | 0.5348 | 2900 | 0.2278 | | 0.2451 | 0.5532 | 3000 | 0.2258 | | 0.2518 | 0.5716 | 3100 | 0.2244 | | 0.1837 | 0.5901 | 3200 | 0.2237 | | 0.222 | 0.6085 | 3300 | 0.2235 | | 0.2168 | 0.6270 | 3400 | 0.2242 | | 0.2443 | 0.6454 | 3500 | 0.2218 | | 0.2625 | 0.6638 | 3600 | 0.2209 | | 0.1991 | 0.6823 | 3700 | 0.2199 | | 0.222 | 0.7007 | 3800 | 0.2193 | | 0.177 | 0.7192 | 3900 | 0.2187 | | 0.2066 | 0.7376 | 4000 | 0.2186 | | 0.2483 | 0.7560 | 4100 | 0.2186 | | 0.2441 | 0.7745 | 4200 | 0.2176 | | 0.221 | 0.7929 | 4300 | 0.2164 | | 0.1903 | 0.8114 | 4400 | 0.2165 | | 0.2155 | 0.8298 | 4500 | 0.2161 | | 0.187 | 0.8482 | 4600 | 0.2156 | | 0.2058 | 0.8667 | 4700 | 0.2156 | | 0.2647 | 0.8851 | 4800 | 0.2153 | | 0.2514 | 0.9036 | 4900 | 0.2152 | | 0.2303 | 0.9220 | 5000 | 0.2152 | | 0.2325 | 0.9404 | 5100 | 0.2149 | | 0.2892 | 0.9589 | 5200 | 0.2147 | | 0.1886 | 0.9773 | 5300 | 0.2149 | | 0.2047 | 0.9958 | 5400 | 0.2149 | ### Framework versions - Transformers 4.46.1 - Pytorch 2.6.0+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
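The `cosine` scheduler listed in the hyperparameters is, in its standard form, a half-cosine decay from the initial learning rate to zero over training. As a sketch (assuming no warmup; the exact llama-factory implementation may differ):

```python
import math

# Standard cosine learning-rate decay (sketch; assumes no warmup):
#   lr(t) = 0.5 * lr0 * (1 + cos(pi * t / T))

def cosine_lr(step: int, total_steps: int, lr0: float = 5e-6) -> float:
    return 0.5 * lr0 * (1 + math.cos(math.pi * step / total_steps))

total = 5400  # roughly the number of optimizer steps in the table above
print(cosine_lr(0, total))      # starts at the configured 5e-06
print(cosine_lr(total, total))  # decays to ~0 by the end of the single epoch
```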
akirafudo/blockassist-bc-keen_fast_giraffe_1756650823
akirafudo
2025-08-31T14:34:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T14:34:02Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - keen fast giraffe --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
rafitesnet00/blockassist-bc-scruffy_mighty_wasp_1756649729
rafitesnet00
2025-08-31T14:21:29Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scruffy mighty wasp", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T14:17:16Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - scruffy mighty wasp --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Sonic-man/blockassist-bc-poisonous_graceful_cow_1756647770
Sonic-man
2025-08-31T14:20:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "poisonous graceful cow", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T14:20:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - poisonous graceful cow --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
NahedDom/blockassist-bc-flapping_stocky_leopard_1756647846
NahedDom
2025-08-31T14:20:16Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "flapping stocky leopard", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T14:20:13Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - flapping stocky leopard --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
arif696/blockassist-bc-regal_spotted_pelican_1756649758
arif696
2025-08-31T14:18:20Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal spotted pelican", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T14:17:04Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal spotted pelican --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mainwalletbd/Qwen3-0.6B-Gensyn-Swarm-pudgy_jagged_ape
mainwalletbd
2025-08-31T14:17:09Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am pudgy_jagged_ape", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-31T14:16:53Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am pudgy_jagged_ape --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. 
--> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
VoilaRaj/81_g_Qgz3MM
VoilaRaj
2025-08-31T14:05:20Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-31T14:04:52Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
gbatubara/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-masked_vigilant_boar
gbatubara
2025-08-31T14:03:05Z
124
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am masked_vigilant_boar", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-29T07:15:46Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am masked_vigilant_boar --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. 
--> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
arif696/blockassist-bc-regal_spotted_pelican_1756648853
arif696
2025-08-31T14:02:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal spotted pelican", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T14:01:54Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal spotted pelican --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
vendi11/blockassist-bc-placid_placid_llama_1756648871
vendi11
2025-08-31T14:01:54Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "placid placid llama", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T14:01:50Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - placid placid llama --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
arif696/blockassist-bc-regal_spotted_pelican_1756648244
arif696
2025-08-31T13:51:57Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal spotted pelican", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T13:51:49Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal spotted pelican --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
2hpsatt/blockassist-bc-huge_deft_eagle_1756648103
2hpsatt
2025-08-31T13:49:18Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "huge deft eagle", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T13:49:07Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - huge deft eagle --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
l74xx/tiny-chatbot-model-dpo
l74xx
2025-08-31T13:49:12Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "dpo", "trl", "arxiv:2305.18290", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "endpoints_compatible", "region:us" ]
null
2025-08-31T13:46:48Z
---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: transformers
model_name: tiny-chatbot-model-dpo
tags:
- generated_from_trainer
- dpo
- trl
licence: license
---

# Model Card for tiny-chatbot-model-dpo

This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="l74xx/tiny-chatbot-model-dpo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).

### Framework versions

- TRL: 0.22.1
- Transformers: 4.55.4
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4

## Citations

Cite DPO as:

```bibtex
@inproceedings{rafailov2023direct,
    title        = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
    author       = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
    year         = 2023,
    booktitle    = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
    url          = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
    editor       = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
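The DPO objective this card references can be sketched in a few lines. Given per-sequence log-probabilities from the trained policy and the frozen reference model, the sigmoid DPO loss is -log σ(β[(π_chosen − ref_chosen) − (π_rejected − ref_rejected)]). The sketch below is illustrative only: the log-probability values are made up, and β = 0.1 is TRL's default, not necessarily what this run used.

```python
import math

def dpo_loss(policy_chosen_logp: float, policy_rejected_logp: float,
             ref_chosen_logp: float, ref_rejected_logp: float,
             beta: float = 0.1) -> float:
    """Sigmoid DPO loss for one preference pair (Rafailov et al., 2023).

    beta scales the implicit reward derived from the policy/reference
    log-probability ratios."""
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    margin = chosen_reward - rejected_reward
    # -log(sigmoid(margin)), written stably as log(1 + exp(-margin))
    return math.log1p(math.exp(-margin))

# When the policy matches the reference, the margin is 0 and the loss is log 2.
print(dpo_loss(-10.0, -12.0, -10.0, -12.0))  # ≈ 0.6931
# When the policy favours the chosen answer more than the reference does,
# the margin is positive and the loss falls below log 2.
print(dpo_loss(-8.0, -14.0, -10.0, -12.0))
```

In TRL the same computation runs batched over token-summed log-probabilities inside `DPOTrainer`; this scalar version only shows the shape of the objective.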
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1756646658
vwzyrraz7l
2025-08-31T13:48:54Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tall hunting vulture", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T13:48:50Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tall hunting vulture --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Mildbutterchicken/VAPOV
Mildbutterchicken
2025-08-31T13:29:12Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:Qwen/Qwen-Image", "base_model:adapter:Qwen/Qwen-Image", "license:apache-2.0", "region:us" ]
text-to-image
2025-08-31T13:27:47Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - output: url: images/Screen Shot 2025-08-31 at 8.47.21 pm.png text: Screenshot base_model: Qwen/Qwen-Image instance_prompt: >- missionary vaginal, close up, creampie, spreading legs, legs up, deep, huge penis, small penis, amateur license: apache-2.0 --- # VAPOV <Gallery /> ## Trigger words You should use `missionary vaginal` to trigger the image generation. You should use `close up` to trigger the image generation. You should use `creampie` to trigger the image generation. You should use `spreading legs` to trigger the image generation. You should use `legs up` to trigger the image generation. You should use `deep` to trigger the image generation. You should use `huge penis` to trigger the image generation. You should use `small penis` to trigger the image generation. You should use `amateur` to trigger the image generation. ## Download model [Download](/Mildbutterchicken/VAPOV/tree/main) them in the Files & versions tab.
nick1880/blockassist-bc-barky_powerful_falcon_1756645295
nick1880
2025-08-31T13:02:29Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "barky powerful falcon", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T13:02:22Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - barky powerful falcon --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
akunode/blockassist-bc-long_prickly_eel_1756645181
akunode
2025-08-31T13:00:25Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "long prickly eel", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T13:00:17Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - long prickly eel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
philipperen55/Qwen2.5-7B-Instruct-D31E3LA16R64MSL512PDTBS32GAS1LR2e-4_epoch3
philipperen55
2025-08-31T12:49:28Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-31T12:49:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Tyzhn1997/blockassist-bc-wiry_long_squid_1756641767
Tyzhn1997
2025-08-31T12:24:23Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry long squid", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T12:24:16Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wiry long squid --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
letgoofthepizza/kc-clip-en
letgoofthepizza
2025-08-31T12:13:24Z
0
0
null
[ "safetensors", "clip", "region:us" ]
null
2025-08-31T12:11:57Z
title: KC-CLIP EN - Korean Cultural CLIP Model
khangnguyen1287/blockassist-bc-gliding_sneaky_cougar_1756641159
khangnguyen1287
2025-08-31T11:56:36Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gliding sneaky cougar", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T11:56:14Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gliding sneaky cougar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
bbooaz35/leumi3
bbooaz35
2025-08-31T11:53:08Z
0
0
diffusers
[ "diffusers", "flux", "text-to-image", "lora", "fal", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-31T11:52:58Z
---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt:
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---

# leumi3

<Gallery />

## Model description

## Trigger words

You should use `` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](/bbooaz35/leumi3/tree/main) them in the Files & versions tab.

## Training at fal.ai

Training was done using [fal.ai/models/fal-ai/flux-lora-general-training](https://fal.ai/models/fal-ai/flux-lora-general-training).
cryptbyz/blockassist-bc-coiled_trotting_python_1756640772
cryptbyz
2025-08-31T11:47:16Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "coiled trotting python", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T11:46:54Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - coiled trotting python --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ak14146788/blockassist-bc-tiny_scruffy_scorpion_1756639395
ak14146788
2025-08-31T11:41:26Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tiny scruffy scorpion", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T11:41:02Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tiny scruffy scorpion --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
li1212/twitter_complaints_bigscience_bloomz-560m_PROMPT_TUNING_CAUSAL_LM
li1212
2025-08-31T11:29:42Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-31T11:29:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Nihal2000/gemma-3-finetune
Nihal2000
2025-08-31T11:20:50Z
7
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/gemma-3-270m-it", "base_model:finetune:unsloth/gemma-3-270m-it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-30T16:27:07Z
--- base_model: unsloth/gemma-3-270m-it tags: - text-generation-inference - transformers - unsloth - gemma3_text license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** Nihal2000 - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-270m-it This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
nonAIcoderz/rare-puppers
nonAIcoderz
2025-08-31T11:10:51Z
0
0
null
[ "tensorboard", "safetensors", "vit", "image-classification", "pytorch", "huggingpics", "model-index", "region:us" ]
image-classification
2025-08-31T11:10:33Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: rare-puppers results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9944444298744202 --- # rare-puppers Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### drive ![drive](images/drive.png) #### pullshot ![pullshot](images/pullshot.png) #### sweep ![sweep](images/sweep.png)
AnerYubo/blockassist-bc-fanged_camouflaged_cassowary_1756638448
AnerYubo
2025-08-31T11:07:31Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "fanged camouflaged cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T11:07:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - fanged camouflaged cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
bilgin/turkish-sustainable-travel-qwen2.5-7b-fixed
bilgin
2025-08-31T10:53:43Z
0
0
null
[ "safetensors", "qwen2", "turkish", "sustainable-travel", "qwen2.5", "text-generation", "conversational", "tr", "en", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "region:us" ]
text-generation
2025-08-31T10:50:27Z
--- language: - tr - en license: apache-2.0 base_model: Qwen/Qwen2.5-7B-Instruct tags: - turkish - sustainable-travel - qwen2.5 - text-generation - conversational widget: - text: "İstanbul'da sürdürülebilir turizm için ne önerirsiniz?" - text: "Türkiye'de çevre dostu konaklama seçenekleri nelerdir?" --- # Turkish Sustainable Travel Assistant Fine-tuned Qwen2.5-7B model for sustainable travel assistance in Turkey. ## Model Details - **Base Model**: Qwen/Qwen2.5-7B-Instruct - **Fine-tuning Method**: QLoRA (4-bit quantization) - **Language**: Turkish & English - **Domain**: Sustainable Tourism in Turkey ## Usage ### With Transformers Library ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("bilgin/turkish-sustainable-travel-qwen2.5-7b-fixed") tokenizer = AutoTokenizer.from_pretrained("bilgin/turkish-sustainable-travel-qwen2.5-7b-fixed") # Example usage messages = [ {"role": "system", "content": "Sen sürdürülebilir seyahat asistanısın. Türkçe ve net yanıt ver."}, {"role": "user", "content": "İstanbul'da sürdürülebilir turizm için ne önerirsiniz?"} ] # Apply chat template text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) inputs = tokenizer(text, return_tensors="pt") # Generate outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.7) response = tokenizer.decode(outputs[0], skip_special_tokens=True) print(response) ``` ### Inference with 4-bit Quantization ```python from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig import torch # Setup 4-bit quantization bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_use_double_quant=True, bnb_4bit_compute_dtype=torch.float16 ) model = AutoModelForCausalLM.from_pretrained( "bilgin/turkish-sustainable-travel-qwen2.5-7b-fixed", quantization_config=bnb_config, device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained("bilgin/turkish-sustainable-travel-qwen2.5-7b-fixed") ``` ## Training Details - **Infrastructure**: TRUBA HPC (Turkish National Academic Network and Information Center) - **Training Framework**: Transformers + PEFT + BitsAndBytes - **Optimization**: LoRA rank 16, alpha 32 - **Precision**: Mixed precision with bf16 compute ## Intended Use This model is designed to assist with sustainable tourism queries in Turkey, providing information about: - Eco-friendly travel destinations - Sustainable accommodation options - Environmental conservation practices - Local cultural experiences - Green transportation alternatives ## Limitations - The model may occasionally mix Turkish and English - Response quality depends on the specificity of the query - Not intended for critical decision-making without human review ## Citation If you use this model, please cite: ``` @misc{turkish-sustainable-travel-qwen, title={Turkish Sustainable Travel Assistant based on Qwen2.5-7B}, author={Your Name}, year={2024}, publisher={HuggingFace} } ```
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1756635649
helmutsukocok
2025-08-31T10:45:03Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "loud scavenging kangaroo", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T10:44:59Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - loud scavenging kangaroo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
arshils/blockassist-bc-powerful_lazy_wallaby_1756636976
arshils
2025-08-31T10:44:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "powerful lazy wallaby", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T10:43:42Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - powerful lazy wallaby --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
critical12/tourism-purchase-predictor-rf
critical12
2025-08-31T10:44:52Z
0
0
sklearn
[ "sklearn", "joblib", "random-forest", "tabular-classification", "dataset:critical12/tourism-dataset", "license:apache-2.0", "region:us" ]
tabular-classification
2025-08-31T06:37:04Z
--- tags: - sklearn - random-forest - tabular-classification pipeline_tag: tabular-classification license: apache-2.0 datasets: - critical12/tourism-dataset --- # Tourism Purchase Predictor (RandomForest) This repository contains a tuned RandomForestClassifier for predicting `ProdTaken` (purchase of the tourism package). - Dataset: https://huggingface.co/datasets/critical12/tourism-dataset - Selection metric: ROC AUC (5-fold CV) - Best CV ROC AUC: 0.9513 ## Inference (Python) ```python import joblib from huggingface_hub import hf_hub_download model_path = hf_hub_download(repo_id="critical12/tourism-purchase-predictor-rf", filename="best_model.joblib") model = joblib.load(model_path) # model is a sklearn Pipeline: model.predict(X) or model.predict_proba(X) ```
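The card above only exposes the artifact as a scikit-learn Pipeline driven through `model.predict(X)` / `model.predict_proba(X)`. A self-contained sketch of that calling pattern on synthetic data — the column names (`Age`, `MonthlyIncome`) and the toy target rule are illustrative stand-ins, not the tourism dataset's actual schema:

```python
# Sketch of the Pipeline predict/predict_proba pattern the card describes,
# using synthetic data; the hosted model's real features and preprocessing differ.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "Age": rng.integers(18, 70, size=200),            # hypothetical feature
    "MonthlyIncome": rng.normal(25000.0, 5000.0, 200) # hypothetical feature
})
y = (X["Age"] < 40).astype(int)  # toy stand-in for the ProdTaken label

# A Pipeline, like the downloaded joblib object: preprocessing + classifier.
model = Pipeline([
    ("scale", StandardScaler()),
    ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
])
model.fit(X, y)

# Score a single prospective customer (one-row DataFrame, same columns).
new_customer = pd.DataFrame({"Age": [28], "MonthlyIncome": [27000.0]})
print(model.predict(new_customer))        # hard class label
print(model.predict_proba(new_customer))  # [P(class 0), P(class 1)]
```

Since model selection used ROC AUC, ranking customers by `predict_proba(...)[:, 1]` rather than the hard label is usually the more faithful way to consume this kind of model.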
yashh7778/blockassist-bc-alert_prehistoric_parrot_1756635790
yashh7778
2025-08-31T10:24:11Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "alert prehistoric parrot", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T10:24:06Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - alert prehistoric parrot --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
allstax/editorial-qwen3-8b-v2-adpaters
allstax
2025-08-31T10:08:10Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "unsloth", "trl", "endpoints_compatible", "region:us" ]
null
2025-08-31T10:06:03Z
--- base_model: unsloth/qwen3-8b-unsloth-bnb-4bit library_name: transformers model_name: outputs tags: - generated_from_trainer - sft - unsloth - trl licence: license --- # Model Card for outputs This model is a fine-tuned version of [unsloth/qwen3-8b-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen3-8b-unsloth-bnb-4bit). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="allstax/editorial-qwen3-8b-v2-adpaters", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/shubham-mehrota/huggingface/runs/unmwa2z3) This model was trained with SFT. ### Framework versions - TRL: 0.22.1 - Transformers: 4.55.4 - Pytorch: 2.8.0 - Datasets: 3.6.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
dgambettaphd/M_llm2_run2_gen6_X_doc1000_synt64_lr1e-04_acm_SYNLAST
dgambettaphd
2025-08-31T10:03:33Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-31T10:03:18Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
liukevin666/blockassist-bc-yawning_striped_cassowary_1756634500
liukevin666
2025-08-31T10:02:38Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T10:02:32Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yawning striped cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1756632534
coelacanthxyz
2025-08-31T09:54:55Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "finicky thriving grouse", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T09:54:46Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - finicky thriving grouse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
yucongzh/echo-small-0824
yucongzh
2025-08-31T09:36:22Z
36
0
null
[ "safetensors", "vit", "en", "arxiv:2508.14689", "license:mit", "region:us" ]
null
2025-08-24T07:52:15Z
--- language: en license: mit --- # ECHO [![arXiv](https://img.shields.io/badge/arXiv-2508.14689-b31b1b.svg)](https://arxiv.org/abs/2508.14689) [![Hugging Face Paper](https://img.shields.io/badge/🤗%20HuggingFace-Paper-FFD21E)](https://huggingface.co/papers/2508.14689) [![GitHub](https://img.shields.io/badge/GitHub-ECHO-181717.svg?logo=github)](https://github.com/yucongzh/ECHO) ECHO (fr**E**quen**C**y-aware **H**ierarchical enc**O**ding for variable-length signal) is a general machine signal representation learning model based on Masked Autoencoders (MAE) with band-splitting and frequency positional encoding that handles variable-length inputs. ## Performance on SIREN Overall performance summary (DCASE anomaly detection + Fault classification): ![Performance Summary](performance.png) ## Model Details - **Model Type**: AudioMAEWithBand (MAE-based Audio Encoder) - **Hidden Size**: 384 - **Number of Layers**: 12 - **Number of Attention Heads**: 6 - **Intermediate Size**: 1536 (mlp_ratio=4.0) - **Band Width**: 32 - **Shift Size**: 16 (half of patch_size) - **Total Parameters**: ~21.5M ## Key Features - **Band-splitting architecture**: Processes audio in frequency bands for better local and global representation learning - **Frequency position encoding**: Incorporates frequency information into the model for better audio understanding - **Efficient patch embedding**: Uses sliding window patches for temporal modeling, enabling varying time lengths ## Download ```python from huggingface_hub import snapshot_download # Download the model to local directory model_path = snapshot_download( repo_id="yucongzh/echo-small-0824", local_dir="./echo-small", local_dir_use_symlinks=False ) print(f"Model downloaded to: {model_path}") ``` ## Usage ```python import torch import torchaudio import sys # Add the model path to Python path sys.path.append('./echo-small') # Import the model architecture from audioMAE_band_upgrade import AudioMAEWithBand # Create model instance with your configuration model = AudioMAEWithBand( spec_len=2000, band_width=32, shift_size=16, in_chans=1, embed_dim=384, encoder_depth=12, num_heads=6, mlp_ratio=4.0, freq_pos_emb_dim=384 ) # Load pre-trained weights from safetensors.torch import load_file state_dict = load_file('model.safetensors') model.load_state_dict(state_dict, strict=False) # Set to evaluation mode model.eval() # Example usage audio_signal = torch.randn(1, 240000) # 5 seconds at 48kHz sample_rate = 48000 # Method 1: Extract features directly from audio (Recommended) with torch.inference_mode(): utterance_level_features, segment_level_features = model.extract_features_from_audio(audio_signal, sample_rate=sample_rate) print(f"Utterance-level Feature shape: {utterance_level_features.shape}") print(f"Segment-level Feature shape: {segment_level_features.shape}") # Method 2: Use preprocessing separately, then extract features spec = model.preprocess_audio_to_spectrogram(audio_signal, sample_rate=sample_rate) print(f"Spectrogram shape: {spec.shape}") # Extract features from preprocessed spectrogram with torch.inference_mode(): utterance_level_features, segment_level_features = model.extract_features(spec, sample_rate=sample_rate) print(f"Utterance-level Feature shape: {utterance_level_features.shape}") print(f"Segment-level Feature shape: {segment_level_features.shape}") ``` ## Feature Types The ECHO model outputs two types of features: ### 1. Utterance-level Features - **Shape**: `[NxD, ]` (concatenated CLS tokens from all frequency bands) - **Usage**: Audio classification, emotion recognition, music genre classification, speaker identification - **Characteristics**: Global representation of the entire audio segment ### 2. Segment-level Features - **Shape**: `[T, NxD]` (temporal features for each patch, concatenated across bands) - **Usage**: Audio segmentation, event detection, temporal localization, sequence modeling - **Characteristics**: Fine-grained temporal representation with frequency band information ## Citation If you find ECHO helpful, please consider citing our paper: ```bibtex @article{echo2025, title={ECHO: Frequency-aware Hierarchical Encoding for Variable-length Signal}, author={Yucong Zhang and Juan Liu and Ming Li}, journal={arXiv preprint arXiv:2508.14689}, year={2025}, } ```
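The feature shapes above depend on the number of bands N. Assuming the usual half-overlapping sliding-window split implied by `band_width=32` and `shift_size=16` (the exact slicing lives in `audioMAE_band_upgrade`, so treat this as an illustration, not ECHO's authoritative logic), N follows directly from the spectrogram's frequency-bin count:

```python
# Illustrative band-count arithmetic for a sliding-window frequency split.
# The actual AudioMAEWithBand implementation may pad or clamp differently.
def num_bands(n_freq_bins: int, band_width: int = 32, shift_size: int = 16) -> int:
    """Bands obtained by sliding a band_width window in steps of shift_size."""
    if n_freq_bins < band_width:
        return 0
    return (n_freq_bins - band_width) // shift_size + 1

print(num_bands(128))  # a 128-bin spectrogram -> 7 half-overlapping bands
```

With hidden size D=384, concatenating the per-band CLS tokens would then give an utterance-level vector of N×384 dimensions (e.g. 7×384 = 2688 for 128 bins), matching the `[NxD, ]` shape described above.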
lemonhat/Llama-3.1-8B-Instruct-t1_100k_v3_tag5_filtered
lemonhat
2025-08-31T08:45:51Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-31T08:34:33Z
--- library_name: transformers license: other base_model: meta-llama/Llama-3.1-8B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: t1_100k_v3_tag5_filtered results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t1_100k_v3_tag5_filtered This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the t1_100k_v3_tag5_filtered dataset. It achieves the following results on the evaluation set: - Loss: 0.2208 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - total_train_batch_size: 4 - total_eval_batch_size: 4 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.3304 | 0.0188 | 100 | 0.3178 | | 0.3231 | 0.0375 | 200 | 0.2973 | | 0.2628 | 0.0563 | 300 | 0.2927 | | 0.3621 | 0.0750 | 400 | 0.2853 | | 0.315 | 0.0938 | 500 | 0.2789 | | 0.3055 | 0.1125 | 600 | 0.2763 | | 0.3066 | 0.1313 | 700 | 0.2732 | | 0.3501 | 0.1500 | 800 | 0.2676 | | 0.2931 | 0.1688 | 900 | 0.2635 | | 0.3241 | 0.1875 | 1000 | 0.2656 | | 0.2838 | 0.2063 | 1100 | 0.2604 | | 0.2666 | 0.2250 | 1200 | 0.2580 | | 0.2578 | 0.2438 | 1300 | 0.2532 | | 0.3149 | 0.2625 | 1400 | 0.2533 | | 0.2795 | 0.2813 | 1500 | 0.2525 | | 0.2693 | 0.3000 | 1600 | 0.2490 | | 0.2445 | 0.3188 | 1700 | 0.2519 | | 0.2696 | 0.3375 | 1800 | 0.2459 | | 0.3311 | 0.3563 | 1900 | 0.2455 | | 0.3346 | 0.3750 | 2000 | 0.2440 | | 0.2591 | 0.3938 | 2100 | 0.2455 | | 0.2573 | 0.4125 | 2200 | 0.2439 | | 0.2587 | 0.4313 | 2300 | 0.2430 | | 0.2642 | 0.4500 | 2400 | 0.2427 | | 0.2429 | 0.4688 | 2500 | 0.2382 | | 0.2401 | 0.4875 | 2600 | 0.2377 | | 0.2274 | 0.5063 | 2700 | 0.2384 | | 0.2599 | 0.5250 | 2800 | 0.2372 | | 0.2514 | 0.5438 | 2900 | 0.2341 | | 0.2572 | 0.5625 | 3000 | 0.2338 | | 0.2827 | 0.5813 | 3100 | 0.2331 | | 0.2662 | 0.6000 | 3200 | 0.2311 | | 0.2541 | 0.6188 | 3300 | 0.2312 | | 0.2272 | 0.6375 | 3400 | 0.2290 | | 0.2541 | 0.6563 | 3500 | 0.2292 | | 0.2571 | 0.6750 | 3600 | 0.2277 | | 0.2252 | 0.6938 | 3700 | 0.2270 | | 0.2229 | 0.7125 | 3800 | 0.2268 | | 0.2863 | 0.7313 | 3900 | 0.2266 | | 0.2818 | 0.7500 | 4000 | 0.2246 | | 0.2398 | 0.7688 | 4100 | 0.2243 | | 0.255 | 0.7875 | 4200 | 0.2241 | | 0.2497 | 0.8063 | 4300 | 0.2240 | | 0.2649 | 0.8251 | 4400 | 0.2227 | | 0.215 | 0.8438 | 4500 | 0.2220 | | 0.2747 | 0.8626 | 4600 | 0.2217 | | 0.2321 | 0.8813 | 4700 | 0.2214 | | 0.2508 | 0.9001 | 4800 | 0.2212 | | 0.2333 | 0.9188 | 4900 | 0.2213 | | 0.2688 | 0.9376 | 5000 | 0.2210 | | 0.2402 | 0.9563 | 5100 | 0.2209 | | 0.2465 | 0.9751 | 5200 | 0.2208 | | 0.2855 | 0.9938 | 5300 | 0.2207 | ### Framework versions - Transformers 4.46.1 - Pytorch 2.6.0+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
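The hyperparameters above imply an effective batch of train_batch_size × num_devices = 1 × 4 = 4 sequences per optimizer step, and the log's ~5,300 steps cover the single epoch. A sketch of the `lr_scheduler_type: cosine` decay those settings name (a common formulation; the actual trainer may add warmup or a nonzero floor, and the step count here is estimated from the log):

```python
# Common cosine learning-rate decay: base_lr at step 0, ~0 at the final step.
import math

base_lr = 5e-6       # learning_rate from the card
total_steps = 5333   # ~1 epoch, estimated from the log (step 5300 at epoch 0.9938)

def cosine_lr(step: int) -> float:
    """Follow half a cosine from base_lr down to 0 over total_steps."""
    progress = min(step / total_steps, 1.0)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(cosine_lr(0))            # starts at base_lr
print(cosine_lr(total_steps))  # decays to ~0
```

The slow tail of this schedule is consistent with the table above, where validation loss barely moves over the last thousand steps (0.2220 → 0.2207).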
GroomerG/blockassist-bc-vicious_pawing_badger_1756627133
GroomerG
2025-08-31T08:26:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "vicious pawing badger", "arxiv:2504.07091", "region:us" ]
null
2025-08-31T08:26:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - vicious pawing badger --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).