Dataset schema (column dtypes and observed value ranges):

| column | dtype | min | max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-22 06:33:19 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (570 classes) | n/a | n/a |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | n/a | n/a |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-22 06:33:04 |
| card | string (length) | 11 | 1.01M |
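The records below follow this schema, one field per line, with each `card` value flattened to a single line. A minimal loading sketch, assuming the dump was exported from a Hub dataset with these columns; the repo id used here is an assumption, not something this dump states:

```python
# Minimal sketch, assuming a Hub dataset with the columns above.
# The repo id "librarian-bots/model_cards_with_metadata" is an assumption.
from datasets import load_dataset

ds = load_dataset("librarian-bots/model_cards_with_metadata", split="train", streaming=True)
for row in ds.take(3):  # stream, since card bodies can exceed 1 MB
    print(row["modelId"], row["downloads"], row["pipeline_tag"])
```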
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1756043665
helmutsukocok
2025-08-24T14:19:28Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "loud scavenging kangaroo", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T14:19:25Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - loud scavenging kangaroo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
frankli202/Phi-3.5-mini-instruct_lora_sft_train_2025-08-24-lr-1.0e-4-lora-32-e-callm-lite-for-sima-1k
frankli202
2025-08-24T14:18:47Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "llama-factory", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-24T14:16:51Z
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
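The card above leaves its quick-start as [More Information Needed]. A minimal sketch inferred from the row's metadata (library `transformers`, pipeline `text-generation`, and the `custom_code` tag, hence `trust_remote_code=True`); treat it as an assumption rather than the author's documented usage:

```python
# Hedged sketch inferred from the row's tags, not from the card itself.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="frankli202/Phi-3.5-mini-instruct_lora_sft_train_2025-08-24-lr-1.0e-4-lora-32-e-callm-lite-for-sima-1k",
    trust_remote_code=True,  # the repo carries the custom_code tag
)
messages = [{"role": "user", "content": "Summarize what a LoRA adapter is."}]
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```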
indoempatnol/blockassist-bc-fishy_wary_swan_1756043450
indoempatnol
2025-08-24T14:18:22Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "fishy wary swan", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T14:18:19Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - fishy wary swan --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ramazanbaris/blockassist-bc-snorting_fluffy_goat_1756045044
ramazanbaris
2025-08-24T14:18:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "snorting fluffy goat", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T14:17:53Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - snorting fluffy goat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
canoplos112/blockassist-bc-yapping_sleek_squirrel_1756044895
canoplos112
2025-08-24T14:16:52Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yapping sleek squirrel", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T14:15:32Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yapping sleek squirrel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Stasonelison/blockassist-bc-howling_powerful_aardvark_1756044911
Stasonelison
2025-08-24T14:16:00Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "howling powerful aardvark", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T14:15:50Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - howling powerful aardvark --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hangbai304/blockassist-bc-freckled_exotic_barracuda_1756044309
hangbai304
2025-08-24T14:15:29Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "freckled exotic barracuda", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T14:15:23Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - freckled exotic barracuda --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
thanhtrangh88/blockassist-bc-reclusive_grassy_panda_1756043943
thanhtrangh88
2025-08-24T14:12:53Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "reclusive grassy panda", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T14:12:50Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - reclusive grassy panda --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
alok0777/blockassist-bc-masked_pensive_lemur_1756044601
alok0777
2025-08-24T14:12:19Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "masked pensive lemur", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T14:10:56Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - masked pensive lemur --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756044694
Ferdi3425
2025-08-24T14:12:16Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T14:12:06Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
raniero/test-start-vali-5
raniero
2025-08-24T14:11:58Z
0
0
null
[ "safetensors", "region:us" ]
null
2025-08-24T14:11:54Z
# Submission test-start-vali-5 - Base model: mistralai/Mistral-7B-Instruct-v0.2 - Repo: raniero/test-start-vali-5 - SHA256: `3e47120ca475a0eba13cf1e29468c2c995ca896d99fbc633d6496d7a2f9ade9b` - Task: test-start-vali-5
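The card publishes a SHA256 without saying which file it covers. A minimal verification sketch; the filename `adapter_model.safetensors` is a guess:

```python
# Hedged sketch: verify the published SHA256. The filename is an assumption;
# the card does not state which artifact the hash covers.
import hashlib
from huggingface_hub import hf_hub_download

EXPECTED = "3e47120ca475a0eba13cf1e29468c2c995ca896d99fbc633d6496d7a2f9ade9b"
path = hf_hub_download("raniero/test-start-vali-5", "adapter_model.safetensors")
with open(path, "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()
assert digest == EXPECTED, f"checksum mismatch: {digest}"
```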
bitcoincg81/blockassist-bc-sniffing_fanged_iguana_1756044642
bitcoincg81
2025-08-24T14:11:44Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sniffing fanged iguana", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T14:11:25Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sniffing fanged iguana --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mang3dd/blockassist-bc-tangled_slithering_alligator_1756042971
mang3dd
2025-08-24T14:09:38Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tangled slithering alligator", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T14:09:35Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tangled slithering alligator --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
te4bag/GRIT-llama-3.2-3B-alpaca-0.99L
te4bag
2025-08-24T14:09:34Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:meta-llama/Llama-3.2-3B", "lora", "transformers", "text-generation", "arxiv:1910.09700", "base_model:meta-llama/Llama-3.2-3B", "region:us" ]
text-generation
2025-08-24T14:07:54Z
--- base_model: meta-llama/Llama-3.2-3B library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:meta-llama/Llama-3.2-3B - lora - transformers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.1
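This card's quick-start is also empty; a minimal sketch following the standard PEFT pattern implied by the row's metadata (a LoRA adapter for `meta-llama/Llama-3.2-3B`, PEFT 0.17.1), offered as an assumption rather than the author's documented usage:

```python
# Hedged sketch of the generic PEFT LoRA-loading pattern, inferred from the
# row's metadata rather than from the card.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-3B")
model = PeftModel.from_pretrained(base, "te4bag/GRIT-llama-3.2-3B-alpaca-0.99L")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B")
```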
caddiegensyn/blockassist-bc-swift_hunting_butterfly_1756044474
caddiegensyn
2025-08-24T14:09:10Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "swift hunting butterfly", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T14:09:03Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - swift hunting butterfly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
sanyar247/gemma3-4b-it-gsm8k-sft
sanyar247
2025-08-24T14:08:52Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gemma3", "image-text-to-text", "generated_from_trainer", "trl", "sft", "conversational", "base_model:google/gemma-3-4b-it", "base_model:finetune:google/gemma-3-4b-it", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-08-21T09:37:03Z
--- base_model: google/gemma-3-4b-it library_name: transformers model_name: gemma3-4b-it-gsm8k-sft tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for gemma3-4b-it-gsm8k-sft This model is a fine-tuned version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="sanyar247/gemma3-4b-it-gsm8k-sft", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.4 - Pytorch: 2.7.1+cu128 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
pidbu/blockassist-bc-whistling_alert_shrew_1756044283
pidbu
2025-08-24T14:08:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "whistling alert shrew", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T14:05:34Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - whistling alert shrew --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
felixZzz/student_sft_len16k_sub1k_overlap_multiZ_c100_mixw8
felixZzz
2025-08-24T14:06:55Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-24T13:49:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Ale902/poca-SoccerTwos
Ale902
2025-08-24T14:06:23Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2025-08-24T14:05:42Z
--- library_name: ml-agents tags: - SoccerTwos - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: Ale902/poca-SoccerTwos 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
angiecely8538/blockassist-bc-striped_invisible_jackal_1756042190
angiecely8538
2025-08-24T14:05:53Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "striped invisible jackal", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T14:05:50Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - striped invisible jackal --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hangoclinh536/blockassist-bc-pudgy_long_elk_1756043747
hangoclinh536
2025-08-24T14:04:32Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pudgy long elk", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T14:04:25Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - pudgy long elk --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
liukevin666/blockassist-bc-yawning_striped_cassowary_1756044157
liukevin666
2025-08-24T14:04:31Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T14:03:37Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yawning striped cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
beingamechon/gemma-text-to-sql
beingamechon
2025-08-24T14:03:04Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-3-1b-pt", "base_model:finetune:google/gemma-3-1b-pt", "endpoints_compatible", "region:us" ]
null
2025-08-24T13:12:12Z
--- base_model: google/gemma-3-1b-pt library_name: transformers model_name: gemma-text-to-sql tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for gemma-text-to-sql This model is a fine-tuned version of [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="beingamechon/gemma-text-to-sql", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.4 - Pytorch: 2.5.1+cu121 - Datasets: 3.3.2 - Tokenizers: 0.21.2 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756044141
Ferdi3425
2025-08-24T14:02:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T14:02:49Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
dgambettaphd/M_mis_run1_gen8_WXS_doc1000_synt64_lr1e-04_acm_FRESH
dgambettaphd
2025-08-24T14:02:39Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-24T14:02:25Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
flymy-ai/qwen-image-edit-inscene-lora
flymy-ai
2025-08-24T14:02:00Z
0
41
diffusers
[ "diffusers", "lora", "qwen", "qwen-image", "qwen-image-edit", "image-editing", "inscene", "spatial-understanding", "scene-coherence", "computer-vision", "InScene", "image-to-image", "en", "base_model:Qwen/Qwen-Image-Edit", "base_model:adapter:Qwen/Qwen-Image-Edit", "license:apache-2.0", "region:us" ]
image-to-image
2025-08-20T19:32:32Z
--- license: apache-2.0 language: - en base_model: - Qwen/Qwen-Image-Edit pipeline_tag: image-to-image tags: - lora - qwen - qwen-image - qwen-image-edit - image-editing - inscene - spatial-understanding - scene-coherence - computer-vision - InScene library_name: diffusers --- # Qwen Image Edit Inscene LoRA An open-source LoRA (Low-Rank Adaptation) model for Qwen-Image-Edit that specializes in in-scene image editing by [FlyMy.AI](https://flymy.ai). ## 🌟 About FlyMy.AI Agentic Infra for GenAI. FlyMy.AI is a B2B infrastructure for building and running GenAI Media agents. **🔗 Useful Links:** - 🌐 [Official Website](https://flymy.ai) - 📚 [Documentation](https://docs.flymy.ai/intro) - 💬 [Discord Community](https://discord.com/invite/t6hPBpSebw) - 🤗 [LoRA Training Repository](https://github.com/FlyMyAI/flymyai-lora-trainer) - 🐦 [X (Twitter)](https://x.com/flymyai) - 💼 [LinkedIn](https://linkedin.com/company/flymyai) - 📺 [YouTube](https://youtube.com/@flymyai) - 📸 [Instagram](https://www.instagram.com/flymy_ai) --- ## 🚀 Features - LoRA-based fine-tuning for efficient in-scene image editing - Specialized for Qwen-Image-Edit model - Enhanced control over scene composition and object positioning - Optimized for maintaining scene coherence during edits - Compatible with Hugging Face `diffusers` - Control-based image editing with improved spatial understanding --- ## 📦 Installation 1. Install required packages: ```bash pip install torch torchvision diffusers transformers accelerate ``` 2. Install the latest `diffusers` from GitHub: ```bash pip install git+https://github.com/huggingface/diffusers ``` --- ## 🧪 Usage ### 🔧 Qwen-Image-Edit Initialization ```python from diffusers import QwenImageEditPipeline import torch from PIL import Image # Load the pipeline pipeline = QwenImageEditPipeline.from_pretrained("Qwen/Qwen-Image-Edit") pipeline.to(torch.bfloat16) pipeline.to("cuda") ``` ### 🔌 Load LoRA Weights ```python # Load trained LoRA weights for in-scene editing pipeline.load_lora_weights("flymy-ai/qwen-image-edit-inscene-lora",weight_name="flymy_qwen_image_edit_inscene_lora.safetensors") ``` ### 🎨 Edit Image with Qwen-Image-Edit Inscene LoRA ```python # Load input image image = Image.open("./assets/qie2_input.jpg").convert("RGB") # Define in-scene editing prompt prompt = "Make a shot in the same scene of the left hand securing the edge of the cutting board while the right hand tilts it, causing the chopped tomatoes to slide off into the pan, camera angle shifts slightly to the left to center more on the pan." # Generate edited image with enhanced scene understanding inputs = { "image": image, "prompt": prompt, "generator": torch.manual_seed(0), "true_cfg_scale": 4.0, "negative_prompt": " ", "num_inference_steps": 50, } with torch.inference_mode(): output = pipeline(**inputs) output_image = output.images[0] output_image.save("edited_image.png") ``` ### 🖼️ Sample Output - Qwen-Image-Edit Inscene **Input Image:** ![Input Image](./assets/qie2_input.jpg) **Prompt:** "Make a shot in the same scene of the left hand securing the edge of the cutting board while the right hand tilts it, causing the chopped tomatoes to slide off into the pan, camera angle shifts slightly to the left to center more on the pan." 
**Output without LoRA:** ![Output without LoRA](./assets/qie2_orig.jpg) **Output with Inscene LoRA:** ![Output with LoRA](./assets/qie2_lora.jpg) --- ### Workflow Features - ✅ Pre-configured for Qwen-Image-Edit + Inscene LoRA inference - ✅ Optimized settings for in-scene editing quality - ✅ Enhanced spatial understanding and scene coherence - ✅ Easy prompt and parameter adjustment - ✅ Compatible with various input image types --- ## 🎯 What is Inscene LoRA? This LoRA model is specifically trained to enhance Qwen-Image-Edit's ability to perform **in-scene image editing**. It focuses on: - **Scene Coherence**: Maintaining logical spatial relationships within the scene - **Object Positioning**: Better understanding of object placement and movement - **Camera Perspective**: Improved handling of viewpoint changes and camera movements - **Action Sequences**: Enhanced ability to depict sequential actions within the same scene - **Contextual Editing**: Preserving scene context while making targeted modifications --- ## 🔧 Training Information This LoRA model was trained using the [FlyMy.AI LoRA Trainer](https://github.com/FlyMyAI/flymyai-lora-trainer) with: - **Base Model**: Qwen/Qwen-Image-Edit - **Training Focus**: In-scene image editing and spatial understanding - **Dataset**: Curated collection of scene-based editing examples (InScene dataset) - **Optimization**: Low-rank adaptation for efficient fine-tuning --- ## 📊 Model Specifications - **Model Type**: LoRA (Low-Rank Adaptation) - **Base Model**: Qwen/Qwen-Image-Edit - **File Format**: SafeTensors (.safetensors) - **Specialization**: In-scene image editing - **Training Framework**: Diffusers + Accelerate - **Memory Efficient**: Optimized for consumer GPUs --- ## 🤝 Support If you have questions or suggestions, join our community: - 🌐 [FlyMy.AI](https://flymy.ai) - 💬 [Discord Community](https://discord.com/invite/t6hPBpSebw) - 🐦 [Follow us on X](https://x.com/flymyai) - 💼 [Connect on LinkedIn](https://linkedin.com/company/flymyai) - 📧 [Support](mailto:[email protected]) **⭐ Don't forget to star the repository if you like it!** --- ## 📄 License This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
motza0025/blockassist-bc-horned_energetic_mallard_1756042535
motza0025
2025-08-24T14:01:44Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "horned energetic mallard", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T14:01:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - horned energetic mallard --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1756042409
sampingkaca72
2025-08-24T14:01:15Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "armored stealthy elephant", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T14:01:12Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - armored stealthy elephant --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
nmanca67/test2
nmanca67
2025-08-24T14:01:05Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:stabilityai/sdxl-turbo", "base_model:adapter:stabilityai/sdxl-turbo", "region:us" ]
text-to-image
2025-08-24T13:36:49Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - output: url: images/Tak berjudul87 (4).jpg text: '-' base_model: stabilityai/sdxl-turbo instance_prompt: null --- # Npxl <Gallery /> ## Model description Test ## Download model [Download](/nmanca67/test2/tree/main) them in the Files & versions tab.
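The card above documents no usage. A minimal sketch following the generic diffusers LoRA pattern implied by the tags (SDXL-Turbo base plus a LoRA adapter); the prompt and the assumption that `load_lora_weights` resolves the repo's weight file automatically are guesses:

```python
# Hedged sketch from the row's tags (diffusers + lora + sdxl-turbo base);
# the card itself gives no usage code.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("nmanca67/test2")  # assumes a single LoRA weight file in the repo
image = pipe("a sample image", num_inference_steps=4, guidance_scale=0.0).images[0]
image.save("out.png")
```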
alok0777/blockassist-bc-masked_pensive_lemur_1756043906
alok0777
2025-08-24T14:00:37Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "masked pensive lemur", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:59:25Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - masked pensive lemur --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
unitova/blockassist-bc-zealous_sneaky_raven_1756042380
unitova
2025-08-24T13:59:44Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "zealous sneaky raven", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:59:41Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - zealous sneaky raven --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Vishand03/Vishand_lunarlander
Vishand03
2025-08-24T13:59:36Z
10
0
stable-baselines3
[ "stable-baselines3", "reinforcement-learning", "ppo", "lunarlander", "license:mit", "model-index", "region:us" ]
reinforcement-learning
2025-08-23T18:10:18Z
--- license: mit metrics: - name: Average Reward type: reward value: 275+ pipeline_tag: reinforcement-learning tags: - reinforcement-learning - ppo - lunarlander - stable-baselines3 model-index: - name: PPO LunarLander Agent results: - task: type: reinforcement-learning name: LunarLander-v2 dataset: name: OpenAI Gym LunarLander-v2 type: simulation metrics: - name: Average Reward type: reward value: 275+ --- # PPO Reinforcement Learning Agent for LunarLander 🚀🌕 This model is a **Proximal Policy Optimization (PPO)** agent trained on the **LunarLander-v2** environment from OpenAI Gym. The agent learns to land a spacecraft safely between two flags without crashing. --- ## 📌 Model Details - **Developer:** Vishand S ([@Vishand03](https://huggingface.co/Vishand03)) - **Model type:** Reinforcement Learning (PPO with Stable-Baselines3) - **Frameworks:** Stable-Baselines3, PyTorch - **Environment:** LunarLander-v2 (OpenAI Gym) - **License:** MIT --- ## 📂 Model Sources - **Repository:** [Vishand03/Vishand_lunarlander](https://huggingface.co/Vishand03/Vishand_lunarlander) - **Environment Docs:** [OpenAI Gym LunarLander-v2](https://www.gymlibrary.dev/environments/box2d/lunar_lander/) --- ## 🛠 Training Procedure - **Algorithm:** PPO (Stable-Baselines3) - **Timesteps:** 3,000,000 - **Reward Threshold:** ~275 average reward - **Optimizer:** Adam - **Discount factor (γ):** 0.99 - **Learning rate:** 3e-4 --- ## 🎯 Intended Uses ### Direct Use - Evaluate performance on **LunarLander-v2**. - Study PPO in a discrete action space. ### Downstream Use - Fine-tune on other Box2D tasks (e.g., BipedalWalker). - Use as a teaching/research example for RL. ### Out-of-Scope Use - 🚫 Not for real-world rocket/space landing. - 🚫 Not for safety-critical systems. --- ## ⚠️ Risks & Limitations - Trained only in simulation. - Performance depends on random seeds & hyperparameters. - Not guaranteed to generalize outside LunarLander-v2. --- ## 🚀 How to Use the Model ```python import gym from stable_baselines3 import PPO from huggingface_hub import hf_hub_download # Load environment env = gym.make("LunarLander-v2") # Download and load the model from HF Hub model_path = hf_hub_download("Vishand03/Vishand_lunarlander", "ppo_lunarlander.zip") model = PPO.load(model_path) # Run evaluation obs, _ = env.reset() for _ in range(1000): action, _ = model.predict(obs) obs, reward, done, _, _ = env.step(action) env.render() if done: obs, _ = env.reset() ```
Pardisbrl/dqn-SpaceInvadersNoFrameskip-v4
Pardisbrl
2025-08-24T13:59:13Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-08-24T13:58:27Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 809.00 +/- 213.42 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib SBX (SB3 + Jax): https://github.com/araffin/sbx Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Pardisbrl -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Pardisbrl -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Pardisbrl ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1756042250
lisaozill03
2025-08-24T13:57:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rugged prickly alpaca", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:57:56Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rugged prickly alpaca --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
felixZzz/student_sft_len16k_sub1k_overlap_reject_mix
felixZzz
2025-08-24T13:57:47Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-24T13:48:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
felixZzz/student_sft_len16k_sub1k_overlap_multiZ_c100
felixZzz
2025-08-24T13:57:30Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-24T13:48:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
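The quick-start section of the card above is empty; the following is a minimal, hedged sketch of how a conversational Qwen2-family text-generation checkpoint like this one is typically loaded with transformers. The repository id is a placeholder, not the actual model id.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- substitute the actual model id from this card.
model_id = "your-username/your-qwen2-model"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Build a chat prompt and generate a short reply.
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```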
edimaosom1/blockassist-bc-padded_crested_gull_1756042179
edimaosom1
2025-08-24T13:56:47Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "padded crested gull", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:56:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - padded crested gull --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ryan-aegis/aegis_gemma3_12b_20250822_peft
ryan-aegis
2025-08-24T13:55:54Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-3-12b-pt", "base_model:finetune:google/gemma-3-12b-pt", "endpoints_compatible", "region:us" ]
null
2025-08-22T12:50:54Z
--- base_model: google/gemma-3-12b-pt library_name: transformers model_name: aegis_gemma3_12b_20250822_peft tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for aegis_gemma3_12b_20250822_peft This model is a fine-tuned version of [google/gemma-3-12b-pt](https://huggingface.co/google/gemma-3-12b-pt). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ryan-aegis/aegis_gemma3_12b_20250822_peft", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.15.2 - Transformers: 4.55.2 - Pytorch: 2.8.0+cu126 - Datasets: 3.3.2 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
aleebaster/blockassist-bc-sly_eager_boar_1756042108
aleebaster
2025-08-24T13:54:41Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sly eager boar", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:54:32Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sly eager boar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Sayemahsjn/blockassist-bc-playful_feline_octopus_1756042532
Sayemahsjn
2025-08-24T13:54:03Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:53:58Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful feline octopus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
elmenbillion/blockassist-bc-beaked_sharp_otter_1756041948
elmenbillion
2025-08-24T13:53:11Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "beaked sharp otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:53:07Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - beaked sharp otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
tulas/gemma-3-270m-medical
tulas
2025-08-24T13:51:44Z
0
0
null
[ "safetensors", "gemma3_text", "medical", "lora", "fine-tuned", "merged", "text-generation", "conversational", "en", "dataset:ericrisco/medrescue", "base_model:google/gemma-3-270m-it", "base_model:adapter:google/gemma-3-270m-it", "license:apache-2.0", "region:us" ]
text-generation
2025-08-24T13:11:27Z
--- language: - en license: apache-2.0 base_model: - google/gemma-3-270m-it tags: - medical - lora - fine-tuned - merged pipeline_tag: text-generation datasets: - ericrisco/medrescue --- # Medical Fine-tuned Model This model is a fine-tuned version of [gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m) using LoRA (Low-Rank Adaptation) on medical data, for **testing purposes** only. ## Model Details - **Base Model**: google/gemma-3-270m - **Fine-tuning Method**: LoRA (Low-Rank Adaptation) - **Domain**: Medical/Healthcare - **Merged**: Yes, the LoRA adapters have been merged with the base model ## Training Information - **Training Steps**: 813 - **Learning Rate**: 3e-4 - **LoRA Rank**: 64 - **LoRA Alpha**: 16 - **Target Modules**: q_proj, k_proj, v_proj, o_proj ## Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("tulas/gemma-3-270m-medical") tokenizer = AutoTokenizer.from_pretrained("tulas/gemma-3-270m-medical") # Generate text inputs = tokenizer("Patient presents with chest pain and", return_tensors="pt") outputs = model.generate(**inputs, max_new_tokens=100) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Intended Use This model is NOT intended for real medical text generation; it is for testing purposes only. ## Limitations - This model should not be used for actual medical diagnosis - Always consult healthcare professionals for medical decisions - Model outputs should be verified by medical experts ## License This model is released under the Apache 2.0 license.
crie123/yolov3s-finetuned-kyrgyz-plates
crie123
2025-08-24T13:51:25Z
0
0
null
[ "license:gpl-3.0", "region:us" ]
null
2025-08-24T13:09:49Z
--- license: gpl-3.0 --- # YOLOv3s Fine-Tuned on Kyrgyz License Plates This repository provides a fine-tuned version of **YOLOv3s** trained on a small custom dataset of Kyrgyz vehicle license plates. The model is intended as a **demonstration of fine-tuning YOLOv3** rather than a production-ready solution. ## Model description - Base model: [YOLOv3 (Darknet)](https://pjreddie.com/darknet/yolo/) - Fine-tuned on: [Kyrgyz Car License Plates dataset](https://www.kaggle.com/datasets/pteacher/kyrgyz-car-license-plates) (~478 images, CC0 license) - Framework: Darknet / PyTorch export ## Intended use - Educational purposes (transfer learning, YOLO fine-tuning workflow) - Experimentation with small regional datasets ⚠️ **Note**: The dataset is small (~478 images), so the model may not generalize well outside the training conditions. For robust license plate detection in production, a larger and more diverse dataset is recommended. ## Training Below is an example training script used to fine-tune **YOLOv8n** on the Kyrgyz License Plates dataset. It extracts the dataset, performs an 80/20 train/validation split, generates the dataset YAML, and launches training. ```python import os import zipfile import random import glob import shutil from ultralytics import YOLO # === 1. Extract dataset === extract_path = "./datasets/kyrgyz-plates" zip_path = "./datasets/kyrgyz-car-license-plates.zip" if os.path.exists(zip_path) and not os.path.exists(extract_path): with zipfile.ZipFile(zip_path, "r") as z: z.extractall(extract_path) # === 2. Split into train/val (80/20) === images_src = os.path.join(extract_path, "images") train_images = os.path.join(extract_path, "train", "images") train_labels = os.path.join(extract_path, "train", "labels") val_images = os.path.join(extract_path, "valid", "images") val_labels = os.path.join(extract_path, "valid", "labels") for p in (train_images, train_labels, val_images, val_labels): os.makedirs(p, exist_ok=True) img_exts = (".jpg", ".jpeg", ".png", ".bmp") images = [p for p in glob.glob(os.path.join(images_src, "*")) if os.path.splitext(p)[1].lower() in img_exts] random.seed(42) random.shuffle(images) split_idx = int(len(images) * 0.8) train_list = images[:split_idx] val_list = images[split_idx:] def copy_items(lst, dest_img_dir, dest_lbl_dir): for img_path in lst: base = os.path.basename(img_path) shutil.copy2(img_path, os.path.join(dest_img_dir, base)) lbl_src = os.path.splitext(img_path)[0] + ".txt" if os.path.exists(lbl_src): shutil.copy2(lbl_src, os.path.join(dest_lbl_dir, os.path.basename(lbl_src))) copy_items(train_list, train_images, train_labels) copy_items(val_list, val_images, val_labels) # === 3. Write data.yaml === yaml_path = os.path.join(extract_path, "data.yaml") with open(yaml_path, "w") as f: f.write(f""" path: {extract_path} train: train/images val: valid/images names: 0: plate """) # === 4. Train YOLOv8n === model = YOLO("yolov8n.pt") # automatically downloads if missing model.train( data=yaml_path, epochs=50, imgsz=640, batch=16, name="yolo-plates-kg" ) # Locate best weights best_weights = glob.glob("runs/detect/yolo-plates-kg*/weights/best.pt")[-1] print("Best weights:", best_weights) ``` ## Training Results Training metrics and figures (loss curves, mAP, PR/F1 curves) are available in the repository: - `results.png` – combined training loss and mAP over epochs You can view or download these images directly from the repository files.
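A hedged inference sketch to complement the training script above, assuming the Ultralytics-format weights produced by that run (the weights path follows the run name used above; the image path is a placeholder):

```python
from ultralytics import YOLO

# Weights path assumes the run name "yolo-plates-kg" from the training script above.
model = YOLO("runs/detect/yolo-plates-kg/weights/best.pt")

# Placeholder image path -- substitute a real vehicle photo.
results = model.predict("car.jpg", imgsz=640, conf=0.25)

# Print each detected plate's bounding box and confidence.
for r in results:
    for box in r.boxes:
        print("plate bbox:", box.xyxy[0].tolist(), "confidence:", float(box.conf[0]))
```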
AymenKhomsi/mistral-7b-iam-sms-v1
AymenKhomsi
2025-08-24T13:50:10Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-24T13:50:01Z
--- base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit tags: - text-generation-inference - transformers - unsloth - mistral - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** AymenKhomsi - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Jackmahhug/blockassist-bc-enormous_docile_woodpecker_1756040567
Jackmahhug
2025-08-24T13:49:51Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "enormous docile woodpecker", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:49:45Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - enormous docile woodpecker --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kayacrypto/blockassist-bc-thriving_barky_wolf_1756043218
kayacrypto
2025-08-24T13:48:43Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thriving barky wolf", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:48:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thriving barky wolf --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
alok0777/blockassist-bc-masked_pensive_lemur_1756043177
alok0777
2025-08-24T13:48:35Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "masked pensive lemur", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:47:23Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - masked pensive lemur --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Elizavr/blockassist-bc-reclusive_shaggy_bee_1756043224
Elizavr
2025-08-24T13:48:18Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "reclusive shaggy bee", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:48:04Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - reclusive shaggy bee --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
digitclone/blockassist-bc-restless_patterned_wallaby_1756043171
digitclone
2025-08-24T13:47:33Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "restless patterned wallaby", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:47:21Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - restless patterned wallaby --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Stasonelison/blockassist-bc-howling_powerful_aardvark_1756043086
Stasonelison
2025-08-24T13:45:41Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "howling powerful aardvark", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:45:31Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - howling powerful aardvark --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
riwan1/blockassist-bc-fleecy_gilded_condor_1756041798
riwan1
2025-08-24T13:45:38Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "fleecy gilded condor", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:45:25Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - fleecy gilded condor --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Yokinamo/blockassist-bc-swift_savage_opossum_1756040542
Yokinamo
2025-08-24T13:45:10Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "swift savage opossum", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:45:04Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - swift savage opossum --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Narunat/ppo-SnowballTarget
Narunat
2025-08-24T13:44:19Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2025-08-24T13:44:12Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on training your first agent using ML-Agents and publishing it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: Narunat/ppo-SnowballTarget 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
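To share a newly trained run back to the Hub, the ML-Agents Hugging Face integration also ships a push command; a hedged example follows, where the run id and local results directory are assumptions based on a default SnowballTarget training run:

```bash
mlagents-push-to-hf \
  --run-id="SnowballTarget1" \
  --local-dir="./results/SnowballTarget1" \
  --repo-id="Narunat/ppo-SnowballTarget" \
  --commit-message="Trained SnowballTarget agent"
```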
Junheakun/blockassist-bc-scented_sturdy_rhino_1756040569
Junheakun
2025-08-24T13:44:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scented sturdy rhino", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:44:09Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - scented sturdy rhino --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Elizavr/blockassist-bc-reclusive_shaggy_bee_1756042982
Elizavr
2025-08-24T13:43:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "reclusive shaggy bee", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:43:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - reclusive shaggy bee --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Aistaro/JENN337
Aistaro
2025-08-24T13:42:53Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-24T12:52:30Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: J3NN33 --- # Jenn337 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `J3NN33` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "J3NN33", "lora_weights": "https://huggingface.co/Aistaro/JENN337/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('Aistaro/JENN337', weight_name='lora.safetensors') image = pipeline('J3NN33').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2500 - Learning rate: 0.0004 - LoRA rank: 25 ## Contribute your own examples You can use the [community tab](https://huggingface.co/Aistaro/JENN337/discussions) to add images that show off what you’ve made with this LoRA.
Luissdual/blockassist-bc-iridescent_coiled_macaw_1756040552
Luissdual
2025-08-24T13:42:19Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "iridescent coiled macaw", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:42:12Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - iridescent coiled macaw --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
nguyenthientho/block
nguyenthientho
2025-08-24T13:41:34Z
0
0
null
[ "text-generation", "vi", "en", "license:apache-2.0", "region:us" ]
text-generation
2025-08-24T13:40:37Z
--- license: apache-2.0 language: - vi - en pipeline_tag: text-generation ---
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756042820
Ferdi3425
2025-08-24T13:40:55Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:40:44Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
nvsngurram/cai-group123-assignment
nvsngurram
2025-08-24T13:39:45Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-24T10:34:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- This model is designed as part of Conversational AI Assignment 2; we used IRFC annual reports for 2023-24 and 2024-25, extracted the data, and used it for model training and evaluation. --> Comparative Financial QA System: RAG vs Fine-Tuning ## Model Details Objective: Develop and compare two systems for answering questions based on company financial statements (last two years): Retrieval-Augmented Generation (RAG) Chatbot: Combines document retrieval and generative response. Fine-Tuned Language Model (FT) Chatbot: Directly fine-tunes a small open-source language model on financial Q&A. ### Model Description This model is designed as part of Conversational AI Assignment 2; we used IRFC annual reports for 2023-24 and 2024-25, extracted the data, and used it for model training and evaluation. - **Developed by:** [Assignment Group 123] - **Funded by [optional]:** [Group 123] - **Shared by [optional]:** [Group 123] - **Model type:** [RAG, Fine-Tune] - **Language(s) (NLP):** [NLP] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [gpt2] ### Model Sources [optional] - **Repository:** [https://huggingface.co/nvsngurram/cai-group123-assignment] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- --> A question-and-answer system built to learn the implementation of RAG and model fine-tuning. ### Direct Use <!-- --> Used to analyse PDF documents; for example, a company's financial reports can be analysed and summarized in response to a user query in a short, compact form. [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> responsive text generation ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- Using this model we can read any PDF file, ask questions about it, and get generated answers --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the steps below to get started with the model (a consolidated command sketch follows this card). Step 1: clone the git repo locally Step 2: install the required libraries by running pip install -r requirement.txt Step 3: check that the annual reports are under data/annual_reports and that QandA.txt is at data/raw/QandA.txt Step 4: run data_extraction.py (python src/data_extraction.py) to convert the .pdf files to .txt Step 5: run rag_ft_qa.py (python src/rag_ft_qa.py) to pre-process and segment the data, tokenize it, create and pre-train the model, implement the RAG techniques (cross-encoder), fine-tune the model on the prepared dataset, re-rank the results and extract the best of them, and compare the results in tabular format Step 6: run streamlit_cli.py (python streamlit_cli.py) to launch the application on Streamlit [More Information Needed] ## Training Details The training goal is to train the model on a sample of 10 questions and evaluate its performance afterwards.
### Training Data Used the Annual Report 2023-24.pdf and Annual Report 2024-25.pdf annual reports as raw data, converted them into .txt files, then segmented them and trained on a dataset of 400 chunks. [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results ![image/png](https://cdn-uploads.huggingface.co/production/uploads/68a2a39f1e6dcf030e9adfa6/V5jYNvk7vkzNi7Q5Ustu0.png) #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
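As referenced in the card above, here is a consolidated, hedged sketch of the six getting-started steps as shell commands; the repository URL is not given in the card, so it stays a placeholder:

```bash
git clone <repo-url> && cd <repo-dir>         # Step 1: repo URL/name not given in the card
pip install -r requirement.txt                # Step 2: install dependencies
ls data/annual_reports data/raw/QandA.txt     # Step 3: verify the input data is in place
python src/data_extraction.py                 # Step 4: convert the PDFs to .txt
python src/rag_ft_qa.py                       # Step 5: preprocess, train, apply RAG, fine-tune, compare
python streamlit_cli.py                       # Step 6: launch the Streamlit application
```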
rafsya427/blockassist-bc-monstrous_bristly_chimpanzee_1756041123
rafsya427
2025-08-24T13:37:51Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "monstrous bristly chimpanzee", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:37:48Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - monstrous bristly chimpanzee --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
liukevin666/blockassist-bc-yawning_striped_cassowary_1756042551
liukevin666
2025-08-24T13:37:37Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:37:02Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yawning striped cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
pidbu/blockassist-bc-whistling_alert_shrew_1756042457
pidbu
2025-08-24T13:37:35Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "whistling alert shrew", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:35:18Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - whistling alert shrew --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
thanobidex/blockassist-bc-colorful_shiny_hare_1756041016
thanobidex
2025-08-24T13:35:57Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "colorful shiny hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:35:54Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - colorful shiny hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mang3dd/blockassist-bc-tangled_slithering_alligator_1756041016
mang3dd
2025-08-24T13:35:43Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tangled slithering alligator", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:35:39Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tangled slithering alligator --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
esi777/blockassist-bc-camouflaged_trotting_eel_1756042411
esi777
2025-08-24T13:34:48Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "camouflaged trotting eel", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:34:01Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - camouflaged trotting eel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
tamewild/4b_v64_merged_e2
tamewild
2025-08-24T13:34:42Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-24T13:32:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Elizavr/blockassist-bc-reclusive_shaggy_bee_1756042348
Elizavr
2025-08-24T13:33:07Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "reclusive shaggy bee", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:32:53Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - reclusive shaggy bee --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756042277
Ferdi3425
2025-08-24T13:31:54Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:31:44Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Juashaseb/blockassist-bc-fluffy_secretive_panda_1756040256
Juashaseb
2025-08-24T13:30:10Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "fluffy secretive panda", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:30:04Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - fluffy secretive panda --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
nightmedia/QiMing-Holos-Plus-Qwen3-14B-qx6-hi-mlx
nightmedia
2025-08-24T13:30:01Z
0
0
mlx
[ "mlx", "safetensors", "qwen3", "qwen", "unsloth", "qiming", "qiming-holos", "bagua", "decision-making", "strategic-analysis", "cognitive-architecture", "chat", "lora", "philosophy-driven-ai", "text-generation", "conversational", "zh", "en", "base_model:aifeifei798/QiMing-Holos-Plus-Qwen3-14B", "base_model:adapter:aifeifei798/QiMing-Holos-Plus-Qwen3-14B", "license:apache-2.0", "6-bit", "region:us" ]
text-generation
2025-08-24T12:36:54Z
--- license: apache-2.0 language: - zh - en tags: - qwen - qwen3 - unsloth - qiming - qiming-holos - bagua - decision-making - strategic-analysis - cognitive-architecture - chat - lora - philosophy-driven-ai - mlx pipeline_tag: text-generation library_name: mlx base_model: aifeifei798/QiMing-Holos-Plus-Qwen3-14B --- # QiMing-Holos-Plus-Qwen3-14B-qx6-hi-mlx This model [QiMing-Holos-Plus-Qwen3-14B-qx6-hi-mlx](https://huggingface.co/nightmedia/QiMing-Holos-Plus-Qwen3-14B-qx6-hi-mlx) was converted to MLX format from [aifeifei798/QiMing-Holos-Plus-Qwen3-14B](https://huggingface.co/aifeifei798/QiMing-Holos-Plus-Qwen3-14B) using mlx-lm version **0.26.3**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("nightmedia/QiMing-Holos-Plus-Qwen3-14B-qx6-hi-mlx") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756042115
Ferdi3425
2025-08-24T13:29:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:29:02Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Elizavr/blockassist-bc-reclusive_shaggy_bee_1756042105
Elizavr
2025-08-24T13:29:02Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "reclusive shaggy bee", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:28:48Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - reclusive shaggy bee --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
unitova/blockassist-bc-zealous_sneaky_raven_1756040450
unitova
2025-08-24T13:27:23Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "zealous sneaky raven", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:27:19Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - zealous sneaky raven --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
chainway9/blockassist-bc-untamed_quick_eel_1756040266
chainway9
2025-08-24T13:25:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "untamed quick eel", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:25:51Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - untamed quick eel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kavpro/blockassist-bc-tall_lively_caribou_1756041852
kavpro
2025-08-24T13:25:33Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tall lively caribou", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:25:00Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tall lively caribou --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AmpereComputing/rakutenai-7b-chat-gguf
AmpereComputing
2025-08-24T13:24:49Z
0
0
null
[ "gguf", "base_model:Rakuten/RakutenAI-7B-chat", "base_model:quantized:Rakuten/RakutenAI-7B-chat", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-24T13:20:55Z
--- base_model: - Rakuten/RakutenAI-7B-chat --- ![llama.cpp](https://user-images.githubusercontent.com/1991296/230134379-7181e485-c521-4d23-a0d6-f7b3b61ba524.png "llama.cpp") # Ampere® optimized llama.cpp ![llama.cpp pull count](https://img.shields.io/docker/pulls/amperecomputingai/llama.cpp?logo=meta&logoColor=black&label=llama.cpp&labelColor=violet&color=purple) Ampere® optimized build of [llama.cpp](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#llamacpp) with full support for the rich collection of GGUF models available on Hugging Face: [GGUF models](https://huggingface.co/models?search=gguf) **For best results we recommend using models in our custom quantization formats, available here: [AmpereComputing HF](https://huggingface.co/AmpereComputing)** This Docker image can be run on bare-metal Ampere® CPUs and Ampere®-based VMs available in the cloud. Release notes and binary executables are available on our [GitHub](https://github.com/AmpereComputingAI/llama.cpp/releases) ## Starting container The default entrypoint runs the llama.cpp server binary, mimicking the behavior of the original llama.cpp server image: [docker image](https://github.com/ggerganov/llama.cpp/blob/master/.devops/llama-server.Dockerfile) To launch a shell instead, do this: ```bash sudo docker run --privileged=true --name llama --entrypoint /bin/bash -it amperecomputingai/llama.cpp:latest ``` A quick-start example is presented at Docker container launch: ![quick start](https://ampereaimodelzoo.s3.eu-central-1.amazonaws.com/pictures/Screenshot+2024-04-30+at+22.37.13.png "quick start") Make sure to visit us at [Ampere Solutions Portal](https://solutions.amperecomputing.com/solutions/ampere-ai)! ## Quantization The Ampere® optimized build of llama.cpp provides support for two new quantization methods, Q4_K_4 and Q8R16, offering model size and perplexity similar to Q4_K and Q8_0, respectively, while performing up to 1.5-2x faster on inference. First, you'll need to convert the model to the GGUF format using [this script](https://github.com/ggerganov/llama.cpp/blob/master/convert_hf_to_gguf.py): ```bash python3 convert_hf_to_gguf.py [path to the original model] --outtype [f32, f16, bf16 or q8_0] --outfile [output path] ``` For example: ```bash python3 convert_hf_to_gguf.py path/to/llama2 --outtype f16 --outfile llama-2-7b-f16.gguf ``` Next, you can quantize the model using the following command: ```bash ./llama-quantize [input file] [output file] [quantization method] ``` For example: ```bash ./llama-quantize llama-2-7b-f16.gguf llama-2-7b-Q8R16.gguf Q8R16 ``` ## Support Please contact us at <[email protected]> ## LEGAL NOTICE By accessing, downloading or using this software and any required dependent software (the “Ampere AI Software”), you agree to the terms and conditions of the software license agreements for the Ampere AI Software, which may also include notices, disclaimers, or license terms for third party software included with the Ampere AI Software. Please refer to the [Ampere AI Software EULA v1.6](https://ampereaidevelop.s3.eu-central-1.amazonaws.com/Ampere+AI+Software+EULA+-+v1.6.pdf) or other similarly-named text file for additional details.
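A hedged example of serving a local GGUF file with the default server entrypoint. This assumes the image forwards its arguments to the llama.cpp server binary; the mount path and model filename are placeholders, and -m/-c/--host/--port are standard llama.cpp server flags:

```bash
sudo docker run --privileged=true -p 8080:8080 -v /path/to/models:/models \
  amperecomputingai/llama.cpp:latest \
  -m /models/rakutenai-7b-chat-Q8R16.gguf -c 4096 --host 0.0.0.0 --port 8080
```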
Elizavr/blockassist-bc-reclusive_shaggy_bee_1756041841
Elizavr
2025-08-24T13:24:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "reclusive shaggy bee", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:24:24Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - reclusive shaggy bee --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ypszn/blockassist-bc-yapping_pawing_worm_1756041761
ypszn
2025-08-24T13:23:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yapping pawing worm", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:23:24Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yapping pawing worm --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kayacrypto/blockassist-bc-thriving_barky_wolf_1756041691
kayacrypto
2025-08-24T13:23:25Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thriving barky wolf", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:23:17Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thriving barky wolf --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Mostefa-Terbeche/diabetic-retinopathy-paraguay-vit_b_16-original-20250718-193838
Mostefa-Terbeche
2025-08-24T13:23:08Z
0
0
null
[ "diabetic-retinopathy", "medical-imaging", "pytorch", "computer-vision", "retinal-imaging", "dataset:paraguay", "license:apache-2.0", "model-index", "region:us" ]
null
2025-08-24T10:21:05Z
--- license: apache-2.0 tags: - diabetic-retinopathy - medical-imaging - pytorch - computer-vision - retinal-imaging datasets: - paraguay metrics: - accuracy - quadratic-kappa - auc model-index: - name: paraguay_vit_b_16_original results: - task: type: image-classification name: Diabetic Retinopathy Classification dataset: type: paraguay name: PARAGUAY metrics: - type: accuracy value: 0.2631578947368421 - type: quadratic-kappa value: 0.3678916827852997 --- # Diabetic Retinopathy Classification Model ## Model Description This model is trained for diabetic retinopathy classification using the vit_b_16 architecture on the paraguay dataset with original preprocessing. ## Model Details - **Architecture**: vit_b_16 - **Dataset**: paraguay - **Preprocessing**: original - **Training Date**: 20250718-193838 - **Task**: 5-class diabetic retinopathy grading (0-4) - **Directory**: paraguay_vit_b_16_20250718-193838_new ## Performance - **Test Accuracy**: 0.2631578947368421 - **Test Quadratic Kappa**: 0.3678916827852997 - **Validation Kappa**: 0.3678916827852997 ## Usage ```python import torch from huggingface_hub import hf_hub_download # Download model model_path = hf_hub_download( repo_id="your-username/diabetic-retinopathy-paraguay-vit_b_16-original", filename="model_best.pt" ) # Load model model = torch.load(model_path, map_location='cpu') ``` ## Classes - 0: No DR (No diabetic retinopathy) - 1: Mild DR (Mild non-proliferative diabetic retinopathy) - 2: Moderate DR (Moderate non-proliferative diabetic retinopathy) - 3: Severe DR (Severe non-proliferative diabetic retinopathy) - 4: Proliferative DR (Proliferative diabetic retinopathy) ## Citation If you use this model, please cite your research paper/thesis.
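A hedged continuation of the usage snippet above, assuming the loaded checkpoint is a callable vit_b_16 module expecting ImageNet-style 224x224 preprocessing; the preprocessing details and image path are assumptions, not documented by the card:

```python
import torch
from PIL import Image
from torchvision import transforms
from huggingface_hub import hf_hub_download

# Download and load the checkpoint as in the card's snippet.
model_path = hf_hub_download(
    repo_id="Mostefa-Terbeche/diabetic-retinopathy-paraguay-vit_b_16-original-20250718-193838",
    filename="model_best.pt",
)
model = torch.load(model_path, map_location="cpu")
model.eval()

# Assumed ImageNet-style preprocessing for a vit_b_16 backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

classes = ["No DR", "Mild DR", "Moderate DR", "Severe DR", "Proliferative DR"]

x = preprocess(Image.open("fundus.jpg").convert("RGB")).unsqueeze(0)  # placeholder image path
with torch.no_grad():
    logits = model(x)
print(classes[int(logits.argmax(dim=1))])
```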
hirundo-io/hallucinations-reduced-gpt-oss-120b
hirundo-io
2025-08-24T13:22:36Z
0
0
transformers
[ "transformers", "safetensors", "gpt_oss", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-24T12:57:33Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
prithivMLmods/Qwen-Image-Fragmented-Portraiture
prithivMLmods
2025-08-24T13:22:34Z
0
1
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:Qwen/Qwen-Image", "base_model:adapter:Qwen/Qwen-Image", "license:apache-2.0", "region:us" ]
text-to-image
2025-08-24T12:58:26Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - output: url: images/1.png text: 'Fragmented Portraiture, a close-up shot of a young Asian girls face is seen through a transparent window. The girls head is tilted slightly to the left, and his eyes are wide open. Her hair is a vibrant shade of black, and he is wearing a white collared shirt with a white collar. Her lips are painted a bright pink, adding a pop of color to the scene. The backdrop is a stark white, creating a stark contrast to the boys body. The window is made up of thin, light-colored wooden blinds, adding depth to the image.' - output: url: images/2.png text: 'Fragmented Portraiture, Captured in a black and white collage, a womans face is featured prominently in the center of the collage. The womans eyes are wide open, and her lips are pursed. Her hair is long and cascades over her shoulders. The background is a stark white, and the womans hair is a vibrant shade of brown, adding a pop of color to the composition.' - output: url: images/3.png text: 'Fragmented Portraiture, Captured in a black and white monochrome, a close-up shot of a womans face is visible through a series of white vertical blinds. The womans eyes are wide open, and her lips are pursed. Her hair is long and cascades down to her shoulders, framing her face. The blinds are pulled up, adding a touch of depth to the scene. The background is a stark white, creating a stark contrast to the womans features.' base_model: Qwen/Qwen-Image instance_prompt: Fragmented Portraiture license: apache-2.0 --- ![1.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/HdzFTs2XQujMFOQWBZ0Mw.png) # Qwen-Image-Fragmented-Portraiture <Gallery /> --- # Model description for Qwen-Image-Fragmented-Portraiture Image Processing Parameters | Parameter | Value | Parameter | Value | |---------------------------|--------|---------------------------|--------| | LR Scheduler | constant | Noise Offset | 0.03 | | Optimizer | AdamW | Multires Noise Discount | 0.1 | | Network Dim | 64 | Multires Noise Iterations | 10 | | Network Alpha | 32 | Repeat & Steps | 27 & 3050 | | Epoch | 20 | Save Every N Epochs | 2 | Labeling: florence2-en(natural language & English) Total Images Used for Training : 17 [HQ Images] ## Data Sources | Source | Link | |--------------|-------------------------------------| | Playground | [playground.com](https://playground.com/) | | ArtStation | [artstation.com](https://www.artstation.com/) | | 4K Wallpapers| [4kwallpapers.com](https://4kwallpapers.com/) | ## Best Dimensions & Inference | **Dimensions** | **Aspect Ratio** | **Recommendation** | |-----------------|------------------|---------------------------| | 1472 x 1140 | 4:3 (approx.) | Best | | 1024 x 1024 | 1:1 | Default | ### Inference Range - **Recommended Inference Steps:** 35-50 ## Setting Up ```python import torch from diffusers import DiffusionPipeline base_model = "Qwen/Qwen-Image" pipe = DiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.bfloat16) lora_repo = "prithivMLmods/Qwen-Image-Fragmented-Portraiture" trigger_word = "Fragmented Portraiture" pipe.load_lora_weights(lora_repo) device = torch.device("cuda") pipe.to(device) ``` ## Trigger words You should use `Fragmented Portraiture` to trigger the image generation. ## Download model [Download](/prithivMLmods/Qwen-Image-Fragmented-Portraiture/tree/main) them in the Files & versions tab.
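A hedged generation call continuing the setup code in the card above, using the trigger word and the card's recommended inference-step range; the prompt text and output size are illustrative:

```python
# Continues from the card's setup: `pipe` is loaded with the LoRA and moved to CUDA.
prompt = "Fragmented Portraiture, a close-up black and white portrait seen through vertical blinds"
image = pipe(prompt, num_inference_steps=40, width=1024, height=1024).images[0]
image.save("fragmented_portrait.png")
```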
Sayemahsjn/blockassist-bc-playful_feline_octopus_1756040645
Sayemahsjn
2025-08-24T13:22:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:21:57Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
prithivMLmods/Qwen-Image-Synthetic-Face
prithivMLmods
2025-08-24T13:21:59Z
0
1
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:Qwen/Qwen-Image", "base_model:adapter:Qwen/Qwen-Image", "license:apache-2.0", "region:us" ]
text-to-image
2025-08-24T10:07:19Z
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
    url: images/1.png
  text: 'Synthetic Face, a close-up shot of a young man''s face featuring a maroon baseball cap adorned with a leather band. The man''s hair is cut short and neatly trimmed. His eyes are a piercing blue, and his eyebrows are a darker shade of brown. He is wearing a gray tank top with a silver chain around his neck, adding a pop of color to his chest. The backdrop is a textured gray wall.'
- output:
    url: images/2.png
  text: 'Synthetic Face, a beautiful blonde woman with long, wavy blonde hair stands in front of a dark gray backdrop. She is dressed in a red strapless dress, adorned with silver earrings. Her lips are painted a vibrant red, adding a pop of color to her face. Her eyes are a piercing blue, and her eyebrows are a darker shade of brown. Her hair cascades down her shoulders, framing her entire face.'
- output:
    url: images/3.png
  text: 'Synthetic Face, a medium-sized man stands in front of a stark white backdrop. He is dressed in a black tuxedo with a white collared shirt and a black bow tie. His eyes are a deep blue, and his hair is a rich black, adding a pop of color to the scene. His lips are a lighter shade of pink, and he has a slight smile on his face. His eyebrows are a darker shade of blue, adding depth to the composition.'
base_model: Qwen/Qwen-Image
instance_prompt: Synthetic Face
license: apache-2.0
---

![1.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/BkJ7u56OLwhhKYI1VIPxg.png)

# Qwen-Image-Synthetic-Face

<Gallery />

---

# Model description for Qwen-Image-Synthetic-Face

Image Processing Parameters

| Parameter     | Value    | Parameter                 | Value     |
|---------------|----------|---------------------------|-----------|
| LR Scheduler  | constant | Noise Offset              | 0.03      |
| Optimizer     | AdamW    | Multires Noise Discount   | 0.1       |
| Network Dim   | 64       | Multires Noise Iterations | 10        |
| Network Alpha | 32       | Repeat & Steps            | 22 & 2650 |
| Epoch         | 20       | Save Every N Epochs       | 2         |

Labeling: florence2-en (natural language, English)

Total Images Used for Training: 26 [HQ images]

## Data Sources

| Source        | Link                                          |
|---------------|-----------------------------------------------|
| Playground    | [playground.com](https://playground.com/)     |
| ArtStation    | [artstation.com](https://www.artstation.com/) |
| 4K Wallpapers | [4kwallpapers.com](https://4kwallpapers.com/) |

## Best Dimensions & Inference

| **Dimensions** | **Aspect Ratio** | **Recommendation** |
|----------------|------------------|--------------------|
| 1472 x 1140    | 4:3 (approx.)    | Best               |
| 1024 x 1024    | 1:1              | Default            |

### Inference Range

- **Recommended Inference Steps:** 35-50

## Setting Up

```python
import torch
from diffusers import DiffusionPipeline

# Load the Qwen-Image base model in bfloat16
base_model = "Qwen/Qwen-Image"
pipe = DiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.bfloat16)

# Attach the LoRA adapter; the trigger word must appear in your prompt
lora_repo = "prithivMLmods/Qwen-Image-Synthetic-Face"
trigger_word = "Synthetic Face"
pipe.load_lora_weights(lora_repo)

device = torch.device("cuda")
pipe.to(device)
```

A seeded-generation sketch follows at the end of this card.

## Trigger words

You should use `Synthetic Face` to trigger the image generation.

## Download model

[Download](/prithivMLmods/Qwen-Image-Synthetic-Face/tree/main) the weights in the Files & versions tab.
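As a follow-up to the Setting Up section, here is a seeded-generation sketch for reproducible outputs. It assumes the pipeline accepts the standard diffusers `generator` and `num_inference_steps` arguments; the seed, prompt, and file name are illustrative:

```python
# Seeded generation sketch (standard diffusers pattern; seed, prompt, and
# output path are illustrative).
generator = torch.Generator(device="cuda").manual_seed(42)
image = pipe(
    f"{trigger_word}, a studio portrait of a woman in a red strapless dress",
    num_inference_steps=35,   # lower end of the recommended 35-50 range
    generator=generator,
).images[0]
image.save("synthetic_face_seed42.png")
```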
WernL/whisper-afrikaans-whisper_training_1756041540
WernL
2025-08-24T13:21:36Z
0
0
peft
[ "peft", "safetensors", "whisper", "automatic-speech-recognition", "afrikaans", "audio", "speech", "lora", "af", "dataset:common_voice_af_v1", "base_model:openai/whisper-large-v3", "base_model:adapter:openai/whisper-large-v3", "license:apache-2.0", "model-index", "region:us" ]
automatic-speech-recognition
2025-08-24T13:21:32Z
---
language:
- af
license: apache-2.0
tags:
- whisper
- automatic-speech-recognition
- afrikaans
- audio
- speech
- peft
- lora
library_name: peft
base_model: openai/whisper-large-v3
datasets:
- common_voice_af_v1
model-index:
- name: whisper-afrikaans-whisper_training_1756041540
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: common_voice_af_v1
      type: speech
    metrics:
    - name: WER
      type: wer
      value: 0.089
---

# whisper-afrikaans-whisper_training_1756041540

This is a LoRA (Low-Rank Adaptation) adapter for [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) fine-tuned on Afrikaans speech data.

## Model Details

- **Language**: Afrikaans (af)
- **Base Model**: openai/whisper-large-v3
- **Training Method**: LoRA (Low-Rank Adaptation)
- **Training Steps**: 1000
- **Hardware**: gpu-t4
- **Training Time**: N/A hours
- **LoRA Rank**: 8
- **LoRA Alpha**: 32

## Usage

This model requires the `peft` library to load the LoRA adapter weights:

```python
from transformers import WhisperProcessor, WhisperForConditionalGeneration
from peft import PeftModel
import torch

# Load the base model and processor
base_model_name = "openai/whisper-large-v3"
processor = WhisperProcessor.from_pretrained(base_model_name)
base_model = WhisperForConditionalGeneration.from_pretrained(base_model_name)

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "WernL/whisper-afrikaans-whisper_training_1756041540")

# Load audio at Whisper's expected 16 kHz sampling rate
import librosa
audio, sr = librosa.load("path_to_audio.wav", sr=16000)

# Extract features and transcribe
input_features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features
predicted_ids = model.generate(input_features)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
print(transcription[0])
```

### Alternative: Direct Loading (if supported)

```python
from transformers import pipeline

# This may work if the adapter is properly configured
pipe = pipeline("automatic-speech-recognition", model="WernL/whisper-afrikaans-whisper_training_1756041540")
result = pipe("path_to_audio.wav")
print(result["text"])
```

## Training Configuration

- **Dataset**: common_voice_af_v1
- **Batch Size**: 16
- **Learning Rate**: 1e-05
- **Max Steps**: 1000

## Performance

Final training metrics:

- **WER**: 0.089
- **Loss**: 0.177

This model was trained using the Whisper Training App.
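Since this adapter targets Afrikaans, it may help to pin the decoding language explicitly rather than rely on Whisper's auto-detection. This sketch assumes a recent transformers release in which Whisper's `generate` accepts `language` and `task` keyword arguments:

```python
# Force Afrikaans transcription (assumes transformers' Whisper generate()
# supports the language/task kwargs, as in recent releases).
predicted_ids = model.generate(
    input_features,
    language="af",      # pin decoding to Afrikaans
    task="transcribe",  # transcribe rather than translate to English
)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```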
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756041657
Ferdi3425
2025-08-24T13:21:28Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:21:19Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1756040072
lisaozill03
2025-08-24T13:20:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rugged prickly alpaca", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:20:09Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Rajat1327/lora_model_qwen2.5_coder_LoRA
Rajat1327
2025-08-24T13:19:49Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-24T13:19:45Z
---
base_model: unsloth/qwen2.5-coder-0.5b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** Rajat1327
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-coder-0.5b-instruct-bnb-4bit

This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
AmpereComputing/rakutenai-7b-instruct-gguf
AmpereComputing
2025-08-24T13:19:34Z
0
0
null
[ "gguf", "base_model:Rakuten/RakutenAI-7B-instruct", "base_model:quantized:Rakuten/RakutenAI-7B-instruct", "endpoints_compatible", "region:us" ]
null
2025-08-24T13:17:24Z
---
base_model:
- Rakuten/RakutenAI-7B-instruct
---

![llama.cpp](https://user-images.githubusercontent.com/1991296/230134379-7181e485-c521-4d23-a0d6-f7b3b61ba524.png "llama.cpp")

# Ampere® optimized llama.cpp

![llama.cpp pull count](https://img.shields.io/docker/pulls/amperecomputingai/llama.cpp?logo=meta&logoColor=black&label=llama.cpp&labelColor=violet&color=purple)

Ampere® optimized build of [llama.cpp](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#llamacpp) with full support for the rich collection of GGUF models available on Hugging Face: [GGUF models](https://huggingface.co/models?search=gguf)

**For best results we recommend using models in our custom quantization formats, available here: [AmpereComputing HF](https://huggingface.co/AmpereComputing)**

This Docker image can be run on bare-metal Ampere® CPUs and Ampere®-based VMs available in the cloud.

Release notes and binary executables are available on our [GitHub](https://github.com/AmpereComputingAI/llama.cpp/releases).

## Starting container

The default entrypoint runs the server binary of llama.cpp, mimicking the behavior of the original llama.cpp server image: [docker image](https://github.com/ggerganov/llama.cpp/blob/master/.devops/llama-server.Dockerfile)

To launch a shell instead, run:

```bash
sudo docker run --privileged=true --name llama --entrypoint /bin/bash -it amperecomputingai/llama.cpp:latest
```

A quick-start example is presented at docker container launch:

![quick start](https://ampereaimodelzoo.s3.eu-central-1.amazonaws.com/pictures/Screenshot+2024-04-30+at+22.37.13.png "quick start")

Make sure to visit us at the [Ampere Solutions Portal](https://solutions.amperecomputing.com/solutions/ampere-ai)!

## Quantization

The Ampere® optimized build of llama.cpp supports two new quantization methods, Q4_K_4 and Q8R16, offering model size and perplexity similar to Q4_K and Q8_0, respectively, while performing up to 1.5-2x faster at inference.

First, convert the model to the GGUF format using [this script](https://github.com/ggerganov/llama.cpp/blob/master/convert_hf_to_gguf.py):

```bash
python3 convert_hf_to_gguf.py [path to the original model] --outtype [f32, f16, bf16 or q8_0] --outfile [output path]
```

For example:

```bash
python3 convert_hf_to_gguf.py path/to/llama2 --outtype f16 --outfile llama-2-7b-f16.gguf
```

Next, quantize the model using the following command:

```bash
./llama-quantize [input file] [output file] [quantization method]
```

For example:

```bash
./llama-quantize llama-2-7b-f16.gguf llama-2-7b-Q8R16.gguf Q8R16
```

An end-to-end sketch for this repository's model appears at the end of this card.

## Support

Please contact us at <[email protected]>.

## LEGAL NOTICE

By accessing, downloading or using this software and any required dependent software (the “Ampere AI Software”), you agree to the terms and conditions of the software license agreements for the Ampere AI Software, which may also include notices, disclaimers, or license terms for third party software included with the Ampere AI Software. Please refer to the [Ampere AI Software EULA v1.6](https://ampereaidevelop.s3.eu-central-1.amazonaws.com/Ampere+AI+Software+EULA+-+v1.6.pdf) or other similarly-named text file for additional details.
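Applied to this repository's model, the two quantization steps above might look as follows. The paths and file names here are assumptions for illustration, not files shipped in this repo:

```bash
# Illustrative end-to-end conversion and Q8R16 quantization of
# RakutenAI-7B-instruct (paths and file names are assumptions).
python3 convert_hf_to_gguf.py path/to/RakutenAI-7B-instruct \
  --outtype f16 --outfile rakutenai-7b-instruct-f16.gguf
./llama-quantize rakutenai-7b-instruct-f16.gguf \
  rakutenai-7b-instruct-Q8R16.gguf Q8R16
```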
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756041520
Ferdi3425
2025-08-24T13:19:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:19:05Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
elmenbillion/blockassist-bc-beaked_sharp_otter_1756039845
elmenbillion
2025-08-24T13:18:45Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "beaked sharp otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:18:42Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- beaked sharp otter
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Tahamufaddal/Samina2
Tahamufaddal
2025-08-24T13:18:07Z
0
0
null
[ "license:other", "region:us" ]
null
2025-08-24T12:39:44Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
esi777/blockassist-bc-camouflaged_trotting_eel_1756041377
esi777
2025-08-24T13:17:23Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "camouflaged trotting eel", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:16:43Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- camouflaged trotting eel
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756041347
Ferdi3425
2025-08-24T13:16:16Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:16:12Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
BootesVoid/cmepk1ksc0ajrtlqb2lpgjx6r_cmepkdxg40ak3tlqbp8j3etqu
BootesVoid
2025-08-24T13:16:05Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-24T13:16:04Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
#     prompt
#   output:
#     url: https://...
instance_prompt: NIGHTEYE
---

# Cmepk1Ksc0Ajrtlqb2Lpgjx6R_Cmepkdxg40Ak3Tlqbp8J3Etqu

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.

It was trained on [Replicate](https://replicate.com/) using the AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `NIGHTEYE` to trigger the image generation.

## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "NIGHTEYE",
    "lora_weights": "https://huggingface.co/BootesVoid/cmepk1ksc0ajrtlqb2lpgjx6r_cmepkdxg40ak3tlqbp8j3etqu/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmepk1ksc0ajrtlqb2lpgjx6r_cmepkdxg40ak3tlqbp8j3etqu', weight_name='lora.safetensors')
image = pipeline('NIGHTEYE').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters), or see the adapter-scaling sketch at the end of this card.

## Training details

- Steps: 2500
- Learning rate: 9e-05
- LoRA rank: 16

## Contribute your own examples

You can use the [community tab](https://huggingface.co/BootesVoid/cmepk1ksc0ajrtlqb2lpgjx6r_cmepkdxg40ak3tlqbp8j3etqu/discussions) to add images that show off what you’ve made with this LoRA.
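Following up on the diffusers usage above, the adapter's influence can be weighted. This sketch assumes diffusers' standard `set_adapters` API; the adapter name, weight, and output path are illustrative:

```py
# Optional: control LoRA strength (assumes diffusers' set_adapters API;
# adapter name, weight, and output path are illustrative).
pipeline.load_lora_weights(
    'BootesVoid/cmepk1ksc0ajrtlqb2lpgjx6r_cmepkdxg40ak3tlqbp8j3etqu',
    weight_name='lora.safetensors',
    adapter_name='nighteye',
)
pipeline.set_adapters(['nighteye'], adapter_weights=[0.8])
image = pipeline('NIGHTEYE').images[0]
image.save('nighteye_scaled.png')
```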
WernL/whisper-afrikaans-whisper_training_1756041291
WernL
2025-08-24T13:15:38Z
0
0
peft
[ "peft", "safetensors", "whisper", "automatic-speech-recognition", "afrikaans", "audio", "speech", "lora", "af", "dataset:common_voice_af_v1", "base_model:openai/whisper-large-v3", "base_model:adapter:openai/whisper-large-v3", "license:apache-2.0", "model-index", "region:us" ]
automatic-speech-recognition
2025-08-24T13:15:33Z
---
language:
- af
license: apache-2.0
tags:
- whisper
- automatic-speech-recognition
- afrikaans
- audio
- speech
- peft
- lora
library_name: peft
base_model: openai/whisper-large-v3
datasets:
- common_voice_af_v1
model-index:
- name: whisper-afrikaans-whisper_training_1756041291
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: common_voice_af_v1
      type: speech
    metrics:
    - name: WER
      type: wer
      value: 0.115
---

# whisper-afrikaans-whisper_training_1756041291

This is a LoRA (Low-Rank Adaptation) adapter for [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) fine-tuned on Afrikaans speech data.

## Model Details

- **Language**: Afrikaans (af)
- **Base Model**: openai/whisper-large-v3
- **Training Method**: LoRA (Low-Rank Adaptation)
- **Training Steps**: 1000
- **Hardware**: gpu-t4
- **Training Time**: N/A hours
- **LoRA Rank**: 8
- **LoRA Alpha**: 32

## Usage

This model requires the `peft` library to load the LoRA adapter weights:

```python
from transformers import WhisperProcessor, WhisperForConditionalGeneration
from peft import PeftModel
import torch

# Load the base model and processor
base_model_name = "openai/whisper-large-v3"
processor = WhisperProcessor.from_pretrained(base_model_name)
base_model = WhisperForConditionalGeneration.from_pretrained(base_model_name)

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "WernL/whisper-afrikaans-whisper_training_1756041291")

# Load audio at Whisper's expected 16 kHz sampling rate
import librosa
audio, sr = librosa.load("path_to_audio.wav", sr=16000)

# Extract features and transcribe
input_features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features
predicted_ids = model.generate(input_features)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
print(transcription[0])
```

### Alternative: Direct Loading (if supported)

```python
from transformers import pipeline

# This may work if the adapter is properly configured
pipe = pipeline("automatic-speech-recognition", model="WernL/whisper-afrikaans-whisper_training_1756041291")
result = pipe("path_to_audio.wav")
print(result["text"])
```

## Training Configuration

- **Dataset**: common_voice_af_v1
- **Batch Size**: 16
- **Learning Rate**: 1e-05
- **Max Steps**: 1000

## Performance

Final training metrics:

- **WER**: 0.115
- **Loss**: 0.214

This model was trained using the Whisper Training App.
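For deployment without a runtime `peft` dependency, the adapter can be folded into the base weights. This is a sketch assuming peft's standard `merge_and_unload` API; the output directory name is illustrative:

```python
# Merge the LoRA weights into the base model and save a standalone checkpoint
# (assumes peft's PeftModel.merge_and_unload(); output path is illustrative).
merged_model = model.merge_and_unload()
merged_model.save_pretrained("whisper-large-v3-af-merged")
processor.save_pretrained("whisper-large-v3-af-merged")
```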
Elizavr/blockassist-bc-reclusive_shaggy_bee_1756041241
Elizavr
2025-08-24T13:14:36Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "reclusive shaggy bee", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:14:24Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- reclusive shaggy bee
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AmpereComputing/rakutenai-7b-gguf
AmpereComputing
2025-08-24T13:14:12Z
0
0
null
[ "gguf", "base_model:Rakuten/RakutenAI-7B", "base_model:quantized:Rakuten/RakutenAI-7B", "endpoints_compatible", "region:us" ]
null
2025-08-24T13:12:07Z
---
base_model:
- Rakuten/RakutenAI-7B
---

![llama.cpp](https://user-images.githubusercontent.com/1991296/230134379-7181e485-c521-4d23-a0d6-f7b3b61ba524.png "llama.cpp")

# Ampere® optimized llama.cpp

![llama.cpp pull count](https://img.shields.io/docker/pulls/amperecomputingai/llama.cpp?logo=meta&logoColor=black&label=llama.cpp&labelColor=violet&color=purple)

Ampere® optimized build of [llama.cpp](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#llamacpp) with full support for the rich collection of GGUF models available on Hugging Face: [GGUF models](https://huggingface.co/models?search=gguf)

**For best results we recommend using models in our custom quantization formats, available here: [AmpereComputing HF](https://huggingface.co/AmpereComputing)**

This Docker image can be run on bare-metal Ampere® CPUs and Ampere®-based VMs available in the cloud.

Release notes and binary executables are available on our [GitHub](https://github.com/AmpereComputingAI/llama.cpp/releases).

## Starting container

The default entrypoint runs the server binary of llama.cpp, mimicking the behavior of the original llama.cpp server image: [docker image](https://github.com/ggerganov/llama.cpp/blob/master/.devops/llama-server.Dockerfile)

A sketch of serving a GGUF from this repository appears at the end of this card.

To launch a shell instead, run:

```bash
sudo docker run --privileged=true --name llama --entrypoint /bin/bash -it amperecomputingai/llama.cpp:latest
```

A quick-start example is presented at docker container launch:

![quick start](https://ampereaimodelzoo.s3.eu-central-1.amazonaws.com/pictures/Screenshot+2024-04-30+at+22.37.13.png "quick start")

Make sure to visit us at the [Ampere Solutions Portal](https://solutions.amperecomputing.com/solutions/ampere-ai)!

## Quantization

The Ampere® optimized build of llama.cpp supports two new quantization methods, Q4_K_4 and Q8R16, offering model size and perplexity similar to Q4_K and Q8_0, respectively, while performing up to 1.5-2x faster at inference.

First, convert the model to the GGUF format using [this script](https://github.com/ggerganov/llama.cpp/blob/master/convert_hf_to_gguf.py):

```bash
python3 convert_hf_to_gguf.py [path to the original model] --outtype [f32, f16, bf16 or q8_0] --outfile [output path]
```

For example:

```bash
python3 convert_hf_to_gguf.py path/to/llama2 --outtype f16 --outfile llama-2-7b-f16.gguf
```

Next, quantize the model using the following command:

```bash
./llama-quantize [input file] [output file] [quantization method]
```

For example:

```bash
./llama-quantize llama-2-7b-f16.gguf llama-2-7b-Q8R16.gguf Q8R16
```

## Support

Please contact us at <[email protected]>.

## LEGAL NOTICE

By accessing, downloading or using this software and any required dependent software (the “Ampere AI Software”), you agree to the terms and conditions of the software license agreements for the Ampere AI Software, which may also include notices, disclaimers, or license terms for third party software included with the Ampere AI Software. Please refer to the [Ampere AI Software EULA v1.6](https://ampereaidevelop.s3.eu-central-1.amazonaws.com/Ampere+AI+Software+EULA+-+v1.6.pdf) or other similarly-named text file for additional details.
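Since the image's default entrypoint runs the llama.cpp server, a GGUF from this repository can be served along these lines. The model file name and exact server flags are assumptions and may vary by release:

```bash
# Illustrative: mount a local model directory and expose the llama.cpp server
# (file name and flags are assumptions; check the release notes for specifics).
sudo docker run --privileged=true -p 8080:8080 \
  -v "$PWD/models:/models" \
  amperecomputingai/llama.cpp:latest \
  -m /models/rakutenai-7b-Q8R16.gguf --host 0.0.0.0 --port 8080
```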
kidsop/blockassist-bc-nasty_secretive_fly_1756039518
kidsop
2025-08-24T13:13:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "nasty secretive fly", "arxiv:2504.07091", "region:us" ]
null
2025-08-24T13:13:01Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- nasty secretive fly
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).