| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-14 18:27:57) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 502 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-14 18:26:21) | card (string, 11 chars to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
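Each row below prints the nine metadata columns on one line, followed by the raw `card` text (YAML front matter plus markdown). For working with the table programmatically, here is a minimal sketch using the `datasets` library; the repo id is a hypothetical placeholder, since this dump does not name the dataset that hosts it.

```python
from datasets import load_dataset

# NOTE: "user/model-cards-dump" is a hypothetical placeholder repo id;
# substitute the actual dataset that hosts this table.
ds = load_dataset("user/model-cards-dump", split="train")

# Columns mirror the header above: modelId, author, last_modified, downloads,
# likes, library_name, tags, pipeline_tag, createdAt, card.
for row in ds.select(range(3)):
    print(row["modelId"], row["pipeline_tag"], len(row["card"]))
```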
| maxibillion1975/blockassist-bc-iridescent_squeaky_sandpiper_1755182673 | maxibillion1975 | 2025-08-14T15:13:24Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "iridescent squeaky sandpiper", "arxiv:2504.07091", "region:us"] | null | 2025-08-14T15:13:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- iridescent squeaky sandpiper
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| drozbay/Wan2.2_A14B_lora_extract | drozbay | 2025-08-14T14:06:15Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-08-14T13:52:07Z |
---
license: apache-2.0
---
| mradermacher/SparkNV-Voice-GGUF | mradermacher | 2025-08-14T13:59:13Z | 0 | 0 | transformers | ["transformers", "gguf", "spark-tts", "text-to-speech", "nonverbal", "emotional", "audio", "speech-synthesis", "huggingface", "en", "dataset:deepvk/NonverbalTTS", "base_model:yasserrmd/SparkNV-Voice", "base_model:quantized:yasserrmd/SparkNV-Voice", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | text-to-speech | 2025-08-14T13:56:44Z |
---
base_model: yasserrmd/SparkNV-Voice
datasets:
- deepvk/NonverbalTTS
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- spark-tts
- text-to-speech
- nonverbal
- emotional
- audio
- speech-synthesis
- huggingface
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/yasserrmd/SparkNV-Voice
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#SparkNV-Voice-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
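As a concrete starting point, here is a minimal sketch using the `llama-cpp-python` bindings to fetch and load one of the quants listed below; it is an illustration under the assumption that a plain completion call makes sense for this checkpoint (SparkNV-Voice is a speech model, so real use likely requires the surrounding Spark-TTS pipeline).

```python
from llama_cpp import Llama

# Downloads the requested quant from the Hub on first use
# (requires `pip install llama-cpp-python huggingface_hub`).
llm = Llama.from_pretrained(
    repo_id="mradermacher/SparkNV-Voice-GGUF",
    filename="SparkNV-Voice.Q4_K_S.gguf",  # any quant from the table below
    n_ctx=2048,
)

out = llm("Hello", max_tokens=32)
print(out["choices"][0]["text"])
```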
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SparkNV-Voice-GGUF/resolve/main/SparkNV-Voice.Q3_K_S.gguf) | Q3_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/SparkNV-Voice-GGUF/resolve/main/SparkNV-Voice.Q2_K.gguf) | Q2_K | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/SparkNV-Voice-GGUF/resolve/main/SparkNV-Voice.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/SparkNV-Voice-GGUF/resolve/main/SparkNV-Voice.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SparkNV-Voice-GGUF/resolve/main/SparkNV-Voice.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/SparkNV-Voice-GGUF/resolve/main/SparkNV-Voice.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SparkNV-Voice-GGUF/resolve/main/SparkNV-Voice.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SparkNV-Voice-GGUF/resolve/main/SparkNV-Voice.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/SparkNV-Voice-GGUF/resolve/main/SparkNV-Voice.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/SparkNV-Voice-GGUF/resolve/main/SparkNV-Voice.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SparkNV-Voice-GGUF/resolve/main/SparkNV-Voice.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/SparkNV-Voice-GGUF/resolve/main/SparkNV-Voice.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| livmonne/gpt-oss-20b-finetune-with-empty-thinking-trainset | livmonne | 2025-08-14T13:47:33Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "gpt_oss", "trl", "en", "base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit", "base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-08-14T13:47:19Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** livmonne
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gpt-oss-20b-unsloth-bnb-4bit
This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
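As an illustration of the Unsloth path mentioned above, the sketch below reloads the checkpoint with `FastLanguageModel`; the sequence length and 4-bit flag are assumptions chosen for the example, not values taken from this card.

```python
from unsloth import FastLanguageModel

# A minimal sketch, assuming the repo loads directly with Unsloth;
# max_seq_length and load_in_4bit are illustrative choices.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="livmonne/gpt-oss-20b-finetune-with-empty-thinking-trainset",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to the faster inference mode
```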
| lsilvei2/llama-3.3-70B-instruct-edu-adapted | lsilvei2 | 2025-08-14T13:38:37Z | 0 | 0 | transformers | ["transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:meta-llama/Llama-3.3-70B-Instruct", "base_model:finetune:meta-llama/Llama-3.3-70B-Instruct", "endpoints_compatible", "region:us"] | null | 2025-08-13T10:03:41Z |
---
base_model: meta-llama/Llama-3.3-70B-Instruct
library_name: transformers
model_name: llama-3.3-70B-instruct-edu-adapted
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for llama-3.3-70B-instruct-edu-adapted
This model is a fine-tuned version of [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="lsilvei2/llama-3.3-70B-instruct-edu-adapted", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
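The card does not include the training script, but a typical TRL SFT run looks roughly like the sketch below; the dataset id and hyperparameters are placeholders, not the values used for this model (and the 70B base model needs multi-GPU hardware in practice).

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset: the actual training data is not disclosed in this card.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="meta-llama/Llama-3.3-70B-Instruct",
    train_dataset=dataset,
    args=SFTConfig(output_dir="llama-3.3-70B-instruct-edu-adapted"),
)
trainer.train()
```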
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0.dev20250319+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
| mrkswe/phi4-model04 | mrkswe | 2025-08-14T11:22:00Z | 0 | 0 | transformers | ["transformers", "safetensors", "gguf", "llama", "unsloth", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us", "conversational"] | null | 2025-08-14T06:05:19Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| Muapi/overgrowth-style-flux-sdxl-1.5 | Muapi | 2025-08-14T10:57:30Z | 0 | 0 | null | ["lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us"] | null | 2025-08-14T10:57:16Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Overgrowth Style [FLUX+SDXL+1.5]

**Base model**: Flux.1 D
**Trained words**: ral-ovrgrwth
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:202128@890830", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
| indoempatnol/blockassist-bc-fishy_wary_swan_1755167491 | indoempatnol | 2025-08-14T10:56:48Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "fishy wary swan", "arxiv:2504.07091", "region:us"] | null | 2025-08-14T10:56:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| AliaeAI/dialogact_classification_bert_multilabel_v5 | AliaeAI | 2025-08-14T10:54:49Z | 0 | 0 | transformers | ["transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-08-14T10:54:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
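The template leaves this section blank. Based on the repo's `text-classification` pipeline tag and BERT backbone, a minimal sketch (assumed, not taken from the authors) would be:

```python
from transformers import pipeline

# top_k=None returns a score for every dialog-act label, which is the
# natural way to read a multilabel classifier like this one.
classifier = pipeline(
    "text-classification",
    model="AliaeAI/dialogact_classification_bert_multilabel_v5",
    top_k=None,
)
print(classifier("Could you tell me more about that?"))
```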
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| Muapi/1950-s-technicolor-style-xl-f1d-illu-pony | Muapi | 2025-08-14T10:51:37Z | 0 | 0 | null | ["lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us"] | null | 2025-08-14T10:51:25Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# 1950's (Technicolor) style XL + F1D + Illu + Pony

**Base model**: Flux.1 D
**Trained words**: Technicolor style, 1950's, 1950's technicolor style
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:365275@820451", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
| Muapi/wizard-s-vintage-romance-novel | Muapi | 2025-08-14T10:48:29Z | 0 | 0 | null | ["lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us"] | null | 2025-08-14T10:48:10Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Wizard's Vintage Romance Novel

**Base model**: Flux.1 D
**Trained words**: Harlequin Romance Book Cover
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:248587@891148", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
| channeldifors/blockassist-bc-lethal_timid_chinchilla_1755168321 | channeldifors | 2025-08-14T10:46:41Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "lethal timid chinchilla", "arxiv:2504.07091", "region:us"] | null | 2025-08-14T10:46:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lethal timid chinchilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| Muapi/eyes | Muapi | 2025-08-14T10:35:07Z | 0 | 0 | null | ["lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us"] | null | 2025-08-14T10:34:55Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Eyes

**Base model**: Flux.1 D
**Trained words**:
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:850406@951461", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
| Muapi/skin-tone-glamour-photography-style-human-skin-color-xl-f1d-sd1.5-pony-illu | Muapi | 2025-08-14T10:32:52Z | 0 | 0 | null | ["lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us"] | null | 2025-08-14T10:32:32Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Skin Tone (Glamour Photography) Style (Human skin color) XL + F1D + SD1.5 + Pony + Illu

**Base model**: Flux.1 D
**Trained words**: skin tone style
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:562884@893799", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
| koloni/blockassist-bc-deadly_graceful_stingray_1755165909 | koloni | 2025-08-14T10:31:12Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us"] | null | 2025-08-14T10:31:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| dabboud/Commgpt-3B | dabboud | 2025-08-14T10:05:01Z | 9 | 0 | null | ["safetensors", "qwen2", "text-generation", "conversational", "en", "base_model:unsloth/Qwen2.5-3B-Instruct", "base_model:finetune:unsloth/Qwen2.5-3B-Instruct", "license:mit", "region:us"] | text-generation | 2025-07-19T15:58:49Z |
---
license: mit
language: en
pipeline_tag: text-generation
base_model: unsloth/Qwen2.5-3B-Instruct
---
# Commgpt‑3B
**Commgpt‑3B** is a conversational language model fine-tuned from [unsloth/Qwen2.5-3B-Instruct](https://huggingface.co/unsloth/Qwen2.5-3B-Instruct), adapted specifically for **Advanced Communication Systems (EECE 442)** at the American University of Beirut.
The model was trained using a curriculum of domain-specific Q&A pairs and evaluated on a custom benchmark of 450 communication systems questions. It was originally deployed with a retrieval-augmented generation (RAG) pipeline and Gradio interface.
This repo contains **only the model weights** and a **detailed implementation report**.
📄 Full report and documentation:
[https://github.com/DavidA00/Commgpt-final-year-project](https://github.com/DavidA00/Commgpt-final-year-project)
This work was part of a final-year project conducted from September 2024 to May 2025.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model = AutoModelForCausalLM.from_pretrained("dabboud/Commgpt-3B")
tokenizer = AutoTokenizer.from_pretrained("dabboud/Commgpt-3B")
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
output = generator("What is Nyquist rate?", max_new_tokens=100)
print(output[0]["generated_text"])
```
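The RAG pipeline and Gradio interface from the original deployment are not included in this repo, but a bare Gradio wrapper around the generator might look like the sketch below (no retrieval, purely illustrative):

```python
import gradio as gr
from transformers import pipeline

# Rebuild the generator from the snippet above.
generator = pipeline("text-generation", model="dabboud/Commgpt-3B")

def answer(question: str) -> str:
    return generator(question, max_new_tokens=200)[0]["generated_text"]

# The original deployment also layered a RAG pipeline on top of this.
gr.Interface(fn=answer, inputs="text", outputs="text", title="Commgpt-3B").launch()
```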
---
## Citation
```bibtex
@misc{CommGPT2025,
    title  = {CommGPT: A Domain-Tuned Qwen 2.5-3B Model for Advanced Communication Systems},
    author = {Abboud, David and Eid, Alex and Menassa, Alexander and Abou Faycal, Ibrahim and Fahs, Jihad and Zaraket, Fadi and Chokr, Sally},
    note   = {Model and report available at \url{https://huggingface.co/dabboud/Commgpt-3B}},
    year   = {2025}
```
| obadx/muaalem-model-v2_0 | obadx | 2025-08-14T10:04:13Z | 0 | 0 | transformers | ["transformers", "safetensors", "multi_level_ctc", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-08-14T09:35:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| Muapi/flux-sideboob-sdxl | Muapi | 2025-08-14T09:54:09Z | 0 | 0 | null | ["lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us"] | null | 2025-08-14T09:53:55Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Flux Sideboob (+SDXL)

**Base model**: Flux.1 D
**Trained words**: sideboob
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:454099@737532", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
| Sayemahsjn/blockassist-bc-playful_feline_octopus_1755163254 | Sayemahsjn | 2025-08-14T09:39:39Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us"] | null | 2025-08-14T09:39:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| Muapi/dark-side-of-light | Muapi | 2025-08-14T09:30:33Z | 0 | 0 | null | ["lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us"] | null | 2025-08-14T09:30:12Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Dark Side of Light

**Base model**: Flux.1 D
**Trained words**: AuroLight style.
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:1032948@2005057", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
| Flo0620/Qwen2_5_7B_r32_a32_d0_2_CombinedOhneTestSplits | Flo0620 | 2025-08-14T09:16:51Z | 0 | 0 | transformers | ["transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2.5-VL-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct", "endpoints_compatible", "region:us"] | null | 2025-08-12T07:04:54Z |
---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
model_name: Qwen2_5_7B_r32_a32_d0_2_CombinedOhneTestSplits
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2_5_7B_r32_a32_d0_2_CombinedOhneTestSplits
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Flo0620/Qwen2_5_7B_r32_a32_d0_2_CombinedOhneTestSplits", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
| abhi6007/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-striped_gliding_antelope | abhi6007 | 2025-08-14T09:11:36Z | 42 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am striped_gliding_antelope", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-07-06T14:22:19Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am striped_gliding_antelope
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| Muapi/dark-fantasy-flux.1d | Muapi | 2025-08-14T09:06:16Z | 0 | 0 | null | ["lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us"] | null | 2025-08-14T09:06:02Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Dark Fantasy (Flux.1D)

**Base model**: Flux.1 D
**Trained words**:
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:660112@738658", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
| Geek241/Website | Geek241 | 2025-08-14T09:04:34Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-08-14T09:04:34Z |
---
license: apache-2.0
---
| Prekade/yoruba-pos-model | Prekade | 2025-08-14T09:02:56Z | 0 | 0 | transformers | ["transformers", "safetensors", "xlm-roberta", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2025-08-14T09:01:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
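The template leaves this section blank. Given the repo's `token-classification` pipeline tag and XLM-RoBERTa backbone, a minimal sketch (assumed, not taken from the authors) would be:

```python
from transformers import pipeline

# The example sentence is Yoruba for roughly "The child went to school",
# chosen only for illustration.
tagger = pipeline("token-classification", model="Prekade/yoruba-pos-model")
for tok in tagger("Ọmọ náà lọ sí ilé ìwé"):
    print(tok["word"], tok["entity"])
```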
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| HowardChan/gpt-oss-20b-multilingual-reasoner | HowardChan | 2025-08-14T08:44:54Z | 0 | 0 | transformers | ["transformers", "safetensors", "generated_from_trainer", "sft", "trl", "endpoints_compatible", "region:us"] | null | 2025-08-14T08:21:46Z |
---
library_name: transformers
model_name: gpt-oss-20b-multilingual-reasoner
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gpt-oss-20b-multilingual-reasoner
This model is a fine-tuned version of an unspecified base model (the card's base-model field was left unset).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="HowardChan/gpt-oss-20b-multilingual-reasoner", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.56.0.dev0
- Pytorch: 2.7.1+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
| MaIlz/sft_mega_molopt_tasks_finall | MaIlz | 2025-08-14T08:42:46Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "generated_from_trainer", "unsloth", "trl", "sft", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit", "endpoints_compatible", "region:us"] | null | 2025-08-14T08:42:35Z |
---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
library_name: transformers
model_name: sft_mega_molopt_tasks_finall
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for sft_mega_molopt_tasks_finall
This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct-bnb-4bit](https://huggingface.co/unsloth/llama-3-8b-Instruct-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="MaIlz/sft_mega_molopt_tasks_finall", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
| VoilaRaj/toilpxe_cwZOSz | VoilaRaj | 2025-08-14T08:31:20Z | 0 | 0 | null | ["safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us"] | any-to-any | 2025-08-14T08:29:23Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
| wen973/bert-trad-classifier | wen973 | 2025-08-14T08:11:24Z | 0 | 0 | null | ["safetensors", "text-classification", "zh", "base_model:ckiplab/bert-base-chinese", "base_model:finetune:ckiplab/bert-base-chinese", "license:apache-2.0", "region:us"] | text-classification | 2025-08-13T07:37:10Z |
---
license: apache-2.0
language:
- zh
base_model:
- ckiplab/bert-base-chinese
pipeline_tag: text-classification
---
| omaryo/WanLoRA | omaryo | 2025-08-14T08:05:14Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-08-01T11:21:15Z |
---
license: apache-2.0
---
| Nofing/Qwen3-4B-Thinking-2507-sft-grpo | Nofing | 2025-08-14T08:05:12Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:Nofing/Qwen3-4B-Thinking-2507-sft", "base_model:finetune:Nofing/Qwen3-4B-Thinking-2507-sft", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2025-08-14T08:03:21Z |
---
base_model: Nofing/Qwen3-4B-Thinking-2507-sft
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Nofing
- **License:** apache-2.0
- **Finetuned from model:** Nofing/Qwen3-4B-Thinking-2507-sft
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| netcat420/DeepSeek-R1-0528-Qwen3-8B-KAYLA-Q4_K_S-GGUF | netcat420 | 2025-08-14T07:51:05Z | 0 | 0 | transformers | ["transformers", "gguf", "llama-cpp", "gguf-my-repo", "en", "dataset:netcat420/Kayla", "base_model:netcat420/DeepSeek-R1-0528-Qwen3-8B-KAYLA", "base_model:quantized:netcat420/DeepSeek-R1-0528-Qwen3-8B-KAYLA", "license:mit", "endpoints_compatible", "region:us", "conversational"] | null | 2025-08-14T07:50:40Z |
---
library_name: transformers
license: mit
datasets:
- netcat420/Kayla
language:
- en
base_model: netcat420/DeepSeek-R1-0528-Qwen3-8B-KAYLA
tags:
- llama-cpp
- gguf-my-repo
---
# netcat420/DeepSeek-R1-0528-Qwen3-8B-KAYLA-Q4_K_S-GGUF
This model was converted to GGUF format from [`netcat420/DeepSeek-R1-0528-Qwen3-8B-KAYLA`](https://huggingface.co/netcat420/DeepSeek-R1-0528-Qwen3-8B-KAYLA) using llama.cpp, via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/netcat420/DeepSeek-R1-0528-Qwen3-8B-KAYLA) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux).
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo netcat420/DeepSeek-R1-0528-Qwen3-8B-KAYLA-Q4_K_S-GGUF --hf-file deepseek-r1-0528-qwen3-8b-kayla-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo netcat420/DeepSeek-R1-0528-Qwen3-8B-KAYLA-Q4_K_S-GGUF --hf-file deepseek-r1-0528-qwen3-8b-kayla-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo netcat420/DeepSeek-R1-0528-Qwen3-8B-KAYLA-Q4_K_S-GGUF --hf-file deepseek-r1-0528-qwen3-8b-kayla-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo netcat420/DeepSeek-R1-0528-Qwen3-8B-KAYLA-Q4_K_S-GGUF --hf-file deepseek-r1-0528-qwen3-8b-kayla-q4_k_s.gguf -c 2048
```
| hyenn/my-gemma-2-finetuned-model | hyenn | 2025-08-14T07:49:24Z | 0 | 0 | transformers | ["transformers", "safetensors", "gemma", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us"] | text-generation | 2025-08-14T07:48:40Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
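Pending author-provided instructions, here is a minimal sketch (the repo id is assumed from this card's name; loading as a causal LM with the checkpoint's embedded 4-bit bitsandbytes config is an assumption based on the model tags):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hyenn/my-gemma-2-finetuned-model"  # assumed from the repo name
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The repo tags indicate a 4-bit bitsandbytes checkpoint; its embedded quantization config is used as-is.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```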
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
murakamia/blockassist-bc-flexible_slender_seahorse_1755156541
|
murakamia
| 2025-08-14T07:30:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flexible slender seahorse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T07:30:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flexible slender seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
abhinayadutta/flan-t5-large-counter-speech-gen_QLORA
|
abhinayadutta
| 2025-08-14T07:28:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T07:23:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
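Pending author-provided instructions, here is a minimal sketch (the repo id is assumed from this card's name; the expected prompt format is undocumented, so the plain instruction below is an assumption):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "abhinayadutta/flan-t5-large-counter-speech-gen_QLORA"  # assumed from the repo name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id, device_map="auto")

# Hypothetical prompt framing; adjust to match the actual training format.
text = "Generate a counter speech for: <offensive input text>"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```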
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
elmenbillion/blockassist-bc-beaked_sharp_otter_1755153508
|
elmenbillion
| 2025-08-14T07:05:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"beaked sharp otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T07:05:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- beaked sharp otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
KSStree/model_output
|
KSStree
| 2025-08-14T07:00:19Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"10_class",
"multi_labels",
"generated_from_trainer",
"base_model:beomi/kcbert-base",
"base_model:finetune:beomi/kcbert-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-14T07:00:02Z |
---
library_name: transformers
license: apache-2.0
base_model: beomi/kcbert-base
tags:
- 10_class
- multi_labels
- generated_from_trainer
model-index:
- name: model_output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_output
This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the unsmile_data dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1304
- Lrap: 0.8796
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Lrap |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 235 | 0.1490 | 0.8569 |
| No log | 2.0 | 470 | 0.1295 | 0.8716 |
| 0.1743 | 3.0 | 705 | 0.1239 | 0.8827 |
| 0.1743 | 4.0 | 940 | 0.1278 | 0.8807 |
| 0.0792 | 5.0 | 1175 | 0.1304 | 0.8796 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
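As a usage reference, a minimal multi-label inference sketch (assuming the checkpoint is published under this repo id and exposes `id2label`; the 0.5 threshold is an arbitrary choice):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "KSStree/model_output"  # assumed from the model name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("분류할 문장을 입력하세요.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label setup: an independent sigmoid per class, thresholded at 0.5.
probs = torch.sigmoid(logits)[0]
print([model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5])
```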
|
TMLR-Group-HF/GT-Qwen2.5-3B
|
TMLR-Group-HF
| 2025-08-14T06:58:43Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"arxiv:2508.00410",
"license:mit",
"region:us"
] | null | 2025-08-08T13:38:38Z |
---
license: mit
---
This is the Qwen2.5-3B model trained with the GRPO ground-truth method on the MATH training set.
If you are interested in Co-Reward, you can find more details in our GitHub repo: [tmlr-group/Co-Reward](https://github.com/tmlr-group/Co-Reward).
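A minimal inference sketch with 🤗 Transformers (assuming the standard Qwen2.5 chat template ships with the checkpoint):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TMLR-Group-HF/GT-Qwen2.5-3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "If 3x + 5 = 20, what is x? Show your reasoning."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=512)[0], skip_special_tokens=True))
```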
## Citation
```
@article{zhang2025coreward,
title={Co-Reward: Self-supervised Reinforcement Learning for Large Language Model Reasoning via Contrastive Agreement},
author={Zizhuo Zhang and Jianing Zhu and Xinmu Ge and Zihua Zhao and Zhanke Zhou and Xuan Li and Xiao Feng and Jiangchao Yao and Bo Han},
  journal={arXiv preprint arXiv:2508.00410},
  year={2025}
}
```
|
thenameisdeba/bangla_gpt2_qa
|
thenameisdeba
| 2025-08-14T06:57:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:000jd/banglaGPT-v3",
"base_model:finetune:000jd/banglaGPT-v3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T06:56:33Z |
---
library_name: transformers
base_model: 000jd/banglaGPT-v3
tags:
- generated_from_trainer
model-index:
- name: bangla_gpt2_qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bangla_gpt2_qa
This model is a fine-tuned version of [000jd/banglaGPT-v3](https://huggingface.co/000jd/banglaGPT-v3) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2403
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1032 | 2.6042 | 500 | 1.2403 |
### Framework versions
- Transformers 4.55.2
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
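As a usage reference, a minimal generation sketch (the training prompt format is undocumented, so the question/answer framing below is an assumption):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "thenameisdeba/bangla_gpt2_qa"  # assumed from the repo name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Hypothetical QA framing ("Question: ... Answer:"); adjust to the actual training format.
prompt = "প্রশ্ন: বাংলাদেশের রাজধানীর নাম কী?\nউত্তর:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```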
|
kmpartner/k5pcmlra2-test
|
kmpartner
| 2025-08-14T06:57:01Z | 68 | 0 |
peft
|
[
"peft",
"tensorboard",
"diffusers",
"safetensors",
"arxiv:1910.09700",
"base_model:segmind/Segmind-Vega",
"base_model:adapter:segmind/Segmind-Vega",
"region:us"
] | null | 2025-08-09T06:08:24Z |
---
library_name: peft
base_model: segmind/Segmind-Vega
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.9.0
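Pending author-provided instructions, a minimal loading sketch (Segmind-Vega is an SDXL-architecture model; treating this repo as diffusers-format LoRA weights is an assumption based on the library tags):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "segmind/Segmind-Vega", torch_dtype=torch.float16
).to("cuda")
# Assumes the repo contains diffusers-format LoRA weights for this base model.
pipe.load_lora_weights("kmpartner/k5pcmlra2-test")

image = pipe("a photo of an astronaut riding a horse", num_inference_steps=8).images[0]
image.save("output.png")
```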
|
daiwangk/Llama-3.2-3B-Instruct-dolly-colab-adapters
|
daiwangk
| 2025-08-14T06:53:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T06:53:22Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** daiwangk
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
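The repo name suggests these are LoRA adapters; a minimal PEFT loading sketch under that assumption:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit"
adapter_id = "daiwangk/Llama-3.2-3B-Instruct-dolly-colab-adapters"  # assumed from the repo name

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the fine-tuned adapters

inputs = tokenizer("What is the Dolly dataset?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```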
|
MaIlz/sft_mega_moledit_tasks_finall
|
MaIlz
| 2025-08-14T06:50:23Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T06:50:14Z |
---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
library_name: transformers
model_name: sft_mega_moledit_tasks_finall
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for sft_mega_moledit_tasks_finall
This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct-bnb-4bit](https://huggingface.co/unsloth/llama-3-8b-Instruct-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="MaIlz/sft_mega_moledit_tasks_finall", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
AngelSlim/Qwen3-14B_eagle3
|
AngelSlim
| 2025-08-14T06:49:26Z | 2,204 | 1 | null |
[
"pytorch",
"qwen3",
"eagle3",
"eagle",
"region:us"
] | null | 2025-07-11T07:02:52Z |
---
tags:
- qwen3
- eagle3
- eagle
---
<p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://github.com/Tencent/AngelSlim/blob/main/docs/source/assets/logos/angelslim_logo_light.png?raw=true">
<img alt="AngelSlim" src="https://github.com/Tencent/AngelSlim/blob/main/docs/source/assets/logos/angelslim_logo.png?raw=true" width=55%>
</picture>
</p>
<h3 align="center">
Dedicated to building a more intuitive, comprehensive, and efficient LLMs compression toolkit.
</h3>
<p align="center">
📖 <a href="https://angelslim.readthedocs.io/">Documentation</a>   |   🤗 <a href="https://huggingface.co/AngelSlim">Hugging Face</a>   |   🤖 <a href="https://modelscope.cn/organization/AngelSlim">ModelScope</a>   |   💬 <a href="./docs/source/assets/angel_slim_wechat.png">WeChat</a>
<br>
</p>
## Table of Contents
- [Latest Updates](#latest-updates)
- [Key Features](#key-features)
- [Supported Models](#supported-models)
- [How to Use](#how-to-use)
- [Install AngelSlim](#install-angelslim)
- [Quick Start](#quick-start)
- [Deployment & Evaluation](#deployment)
- [Benchmark](#benchmark)
- [License](#license)
- [Citation](#citation)
- [Technical Discussion](#technical-discussion)
## 📣Latest Updates
- [25/07/04] We now support quantization for Hunyuan/Qwen2.5/Qwen3/DeepSeek-R1-Distill-Qwen and other models, including INT8/FP8/INT4 algorithms.
We have also open-sourced the Eagle3 model weights for Qwen3-8B.
Coming soon:
- [ ] Support W4A8 quantization for DeepSeek-R1.
- [ ] Support quantization for multimodal models like Qwen-VL.
- [ ] Release of a new algorithm for speculative sampling.
## 🌟Key Features
- **Highly Integrated**: This toolkit integrates mainstream compression algorithms into a unified framework, offering developers one-click access with exceptional ease of use.
- **Continuous Innovation**: Beyond integrating widely-used industry algorithms, we are continuously researching better compression algorithms, which will be gradually open-sourced in the future.
- **Performance-Driven**: We continuously optimize end-to-end performance in model compression workflows and algorithm deployment, such as enabling quantization of models like Qwen3-235B and DeepSeek-R1 on a single GPU.
## 💼Supported Models
### Quantization
Currently supports the following LLMs, including Hunyuan-Dense, Hunyuan-MoE, Qwen3-Dense, Qwen3-MoE, Qwen2.5, DeepSeek-R1 distilled Qwen models, and QwQ:
| Model | FP8-Dynamic | FP8-Static | INT8-Dynamic | INT4-GPTQ | INT4-AWQ |
| --------------------------------------------------------------------------------------------------------------------------- | ----------- | ---------- | ------------ | --------- | -------- |
| [Hunyuan-Dense](https://huggingface.co/tencent/Hunyuan-7B-Instruct) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Hunyuan-MoE](https://huggingface.co/collections/tencent/hunyuan-a13b-685ec38e5b46321e3ea7c4be) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Qwen3-Dense](https://huggingface.co/collections/AngelSlim/qwen3-quant-68652e26da31740739d154f8) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Qwen3-MoE](https://huggingface.co/collections/AngelSlim/qwen3-quant-68652e26da31740739d154f8) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [Qwen2.5](https://huggingface.co/collections/AngelSlim/qwen2-25-quant-68652d6cbdf5c0d4b1c4499a) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [DeepSeek-R1-Distill-Qwen](https://huggingface.co/collections/AngelSlim/deepseek-r1-distill-quant-68652f16a9c206b030b05f7f) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [QwQ](https://huggingface.co/collections/AngelSlim/qwen3-quant-68652e26da31740739d154f8) | ✅ | ✅ | ✅ | ✅ | ✅ |
### Speculative Decoding
The Eagle3 weights for the Qwen3 and Hunyuan series models are now available.
| Qwen3 Models | Hunyuan Models |
| ----------|----------|
| ✅ [Qwen3-1.7B](https://huggingface.co/AngelSlim/Qwen3-1.7B_eagle3) |✅ [Hunyuan-1.8B-Instruct](https://huggingface.co/AngelSlim/Hunyuan-1.8B-Instruct_eagle3) |
| ✅ [Qwen3-4B](https://huggingface.co/AngelSlim/Qwen3-4B_eagle3) |✅ [Hunyuan-4B-Instruct](https://huggingface.co/AngelSlim/Hunyuan-4B-Instruct_eagle3) |
| ✅ [Qwen3-8B](https://huggingface.co/AngelSlim/Qwen3-8B_eagle3) |✅ [Hunyuan-7B-Instruct](https://huggingface.co/AngelSlim/Hunyuan-7B-Instruct_eagle3) |
| ✅ [Qwen3-14B](https://huggingface.co/AngelSlim/Qwen3-14B_eagle3) |
| ✅ [Qwen3-32B](https://huggingface.co/AngelSlim/Qwen3-32B_eagle3) |
| ✅ [Qwen3-30B-A3B](https://huggingface.co/AngelSlim/Qwen3-a3B_eagle3) |
## 🛎️How to Use
### Install AngelSlim
We recommend using `pip` to install the latest stable version of `AngelSlim`:
```shell
pip install angelslim
```
Alternatively, you can clone the repository and install from source:
```shell
cd AngelSlim && python setup.py install
```
For more detailed installation instructions, please refer to the [Installation Documentation](https://angelslim.readthedocs.io/zh-cn/latest/getting_started/installation.html).
### Quick Start
After installing `AngelSlim`, you can quickly start by running the following script to perform static `FP8` quantization on the `Qwen3-1.7B` model:
* One-click Start
```shell
python3 tools/run.py -c configs/qwen3/fp8_static/qwen3-1_7b_fp8_static.yaml
```
This example will load the HuggingFace model and perform activation value calibration using the `dataset` specified in the config file, saving the quantized model weights.
* Code-based Start
To perform dynamic `FP8` quantization on `Qwen3-1.7B`:
```python
from angelslim.engine import Engine
slim_engine = Engine()
# Prepare model
slim_engine.prepare_model(model_name="Qwen", model_path="Qwen/Qwen3-1.7B",)
# Initialize compressor
slim_engine.prepare_compressor("PTQ", default_method="fp8_dynamic")
# Compress model
slim_engine.run()
# Save compressed model
slim_engine.save("./output")
```
For more details, please refer to the [Quick Start Documentation](https://angelslim.readthedocs.io/zh-cn/latest/getting_started/quickstrat.html).
### 🖥️ Deployment and Testing
#### 1. API Service Deployment
After specifying the quantized model path `MODEL_PATH`, you can deploy an OpenAI-compatible API service using the following LLM inference frameworks:
**vLLM**
Use the following script to launch a [vLLM](https://github.com/vllm-project/vllm) server, recommended version `vllm>=0.8.5.post1`. For MoE INT8 quantized models, `vllm>=0.9.0` is required.
```shell
bash deploy/run_vllm.sh $MODEL_PATH
```
**SGLang**
Use the following script to launch a [SGLang](https://github.com/sgl-project/sglang) server, recommended version `sglang>=0.4.6.post1`.
```shell
bash deploy/run_sglang.sh $MODEL_PATH
```
#### 2. Service Invocation
Invoke requests via [OpenAI's API format](https://platform.openai.com/docs/api-reference/introduction):
```shell
bash deploy/openai.sh $MODEL_PATH
```
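For reference, the same request can be issued from Python with the OpenAI client (a minimal sketch assuming the server above listens on `localhost:8000`; the served model name depends on your deployment):
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="Qwen3-1.7B",  # replace with the name your server registers
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```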
#### 3. Performance Evaluation
Evaluate the performance of the quantized model using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness), recommended version `lm-eval>=0.4.8`:
```shell
bash deploy/lm_eval.sh $MODEL_PATH
```
For more details, please refer to the [Deployment Documentation](https://angelslim.readthedocs.io/zh-cn/latest/deployment/deploy.html).
## 📈 Benchmark
### (1) Quantization
The performance test results for selected models are shown below. For the complete benchmark, refer to the [Benchmark documentation](https://angelslim.readthedocs.io/zh-cn/latest/performance/quantization/benchmarks.html)
#### Hunyuan Series Models
Benchmark results for the `Hunyuan-A13B-Instruct` model with `FP8` and `INT4-GPTQ` quantization algorithms on datasets including `AIME 2024`, `GSM8K`, `BBH`, and `DROP`:
| Bench | Hunyuan-A13B-Instruct | Hunyuan-A13B-Instruct-FP8 | Hunyuan-A13B-Instruct-Int4-GPTQ |
|:---------:|:---------------------:|:-------------------------:|:-------------------------------:|
| AIME 2024 | 87.3 | 86.7 | 86.7 |
| GSM8K | 94.39 | 94.01 | 94.24 |
| BBH | 89.1 | 88.34 | 87.91 |
| DROP | 91.1 | 91.1 | 91.05 |
#### Qwen3 Series Models
Benchmark results for Qwen3 series models with `FP8-Static`, `FP8-Dynamic`, `INT4-GPTQ`, and `INT4-AWQ` quantization algorithms on datasets including `CEVAL`, `MMLU`, `GSM8K`, and `HUMANEVAL`:
<table>
<thead>
<tr><th>Model</th><th>Quantization</th><th>CEVAL</th><th>MMLU</th><th>GSM8K</th><th>HUMANEVAL</th></tr>
</thead>
<tbody>
<tr><td rowspan="4">Qwen3-0.6B</td><td>BF16</td><td>45.84</td><td>47.21</td><td>42.99</td><td>19.51</td></tr>
<tr><td>FP8-Static</td><td>45.99</td><td>46.87</td><td>38.06</td><td>18.90</td></tr>
<tr><td>FP8-Dynamic</td><td>45.99</td><td>46.93</td><td>38.29</td><td>20.73</td></tr>
<tr><td>INT8-Dynamic</td><td>45.17</td><td>46.95</td><td>41.17</td><td>21.34</td></tr>
<tr><td rowspan="6">Qwen3-8B</td><td>BF16</td><td>79.27</td><td>74.78</td><td>87.79</td><td>63.41</td></tr>
<tr><td>FP8-Static</td><td>78.23</td><td>74.79</td><td>86.96</td><td>62.20</td></tr>
<tr><td>FP8-Dynamic</td><td>78.45</td><td>74.75</td><td>87.64</td><td>62.80</td></tr>
<tr><td>INT8-Dynamic</td><td>78.01</td><td>74.84</td><td>86.96</td><td>67.07</td></tr>
<tr><td>INT4-GPTQ</td><td>77.19</td><td>73.26</td><td>86.43</td><td>62.20</td></tr>
<tr><td>INT4-AWQ</td><td>76.15</td><td>73.59</td><td>86.96</td><td>63.41</td></tr>
<tr><td rowspan="6">Qwen3-14B</td><td>BF16</td><td>83.06</td><td>78.90</td><td>88.40</td><td>55.49</td></tr>
<tr><td>FP8-Static</td><td>82.62</td><td>78.57</td><td>89.46</td><td>57.32</td></tr>
<tr><td>FP8-Dynamic</td><td>82.24</td><td>78.92</td><td>88.32</td><td>52.44</td></tr>
<tr><td>INT8-Dynamic</td><td>81.87</td><td>78.13</td><td>86.28</td><td>56.10</td></tr>
<tr><td>INT4-GPTQ</td><td>81.05</td><td>78.02</td><td>87.34</td><td>57.93</td></tr>
<tr><td>INT4-AWQ</td><td>82.02</td><td>77.68</td><td>84.23</td><td>61.59</td></tr>
<tr><td rowspan="5">Qwen3-32B</td><td>BF16</td><td>86.55</td><td>82.00</td><td>74.53</td><td>37.80</td></tr>
<tr><td>FP8-Static</td><td>86.92</td><td>81.78</td><td>70.20</td><td>39.63</td></tr>
<tr><td>FP8-Dynamic</td><td>86.55</td><td>81.89</td><td>70.43</td><td>38.41</td></tr>
<tr><td>INT4-GPTQ</td><td>86.18</td><td>81.01</td><td>-</td><td>43.29</td></tr>
<tr><td>INT4-AWQ</td><td>86.18</td><td>81.54</td><td>-</td><td>36.59</td></tr>
<tr><td rowspan="4">Qwen3-30B-A3B</td><td>BF16</td><td>83.66</td><td>79.36</td><td>89.99</td><td>31.71</td></tr>
<tr><td>FP8-Static</td><td>83.95</td><td>79.47</td><td>89.01</td><td>31.10</td></tr>
<tr><td>FP8-Dynamic</td><td>84.10</td><td>79.40</td><td>89.16</td><td>32.93</td></tr>
<tr><td>INT8-Dynamic</td><td>83.36</td><td>79.48</td><td>89.16</td><td>34.15</td></tr>
<tr><td rowspan="4">Qwen3-235B-A22B</td><td>BF16</td><td>89.60</td><td>86.28</td><td>85.29</td><td>27.44</td></tr>
<tr><td>FP8-Static</td><td>89.67</td><td>86.19</td><td>86.96</td><td>27.44</td></tr>
<tr><td>FP8-Dynamic</td><td>89.67</td><td>86.18</td><td>85.22</td><td>28.05</td></tr>
<tr><td>INT8-Dynamic</td><td>88.93</td><td>86.20</td><td>86.20</td><td>23.78</td></tr>
<tr><td rowspan="5">QwQ-32B</td><td>BF16</td><td>85.74</td><td>82.03</td><td>73.31</td><td>42.68</td></tr>
<tr><td>FP8-Static</td><td>85.44</td><td>81.91</td><td>75.36</td><td>42.68</td></tr>
<tr><td>FP8-Dynamic</td><td>85.07</td><td>81.93</td><td>75.66</td><td>42.07</td></tr>
<tr><td>INT4-GPTQ</td><td>84.03</td><td>81.26</td><td>68.23</td><td>45.73</td></tr>
<tr><td>INT4-AWQ</td><td>83.58</td><td>81.01</td><td>68.69</td><td>43.29</td></tr>
</tbody>
</table>
#### Other Models
Benchmark results for other models with `FP8-Static`, `FP8-Dynamic`, `INT4-GPTQ`, and `INT4-AWQ` quantization algorithms on datasets including `CEVAL`, `MMLU` and `GSM8K`:
<table>
<thead>
<tr><th>Model</th><th>Quantization</th><th>CEVAL</th><th>MMLU</th><th>GSM8K</th></tr>
</thead>
<tbody>
<tr><td rowspan="3">Qwen2.5-1.5B-Instruct</td><td>BF16</td><td>67.01</td><td>60.05</td><td>54.28</td></tr>
<tr><td>FP8-Static</td><td>66.27</td><td>60.23</td><td>-</td></tr>
<tr><td>FP8-Dynamic</td><td>66.79</td><td>60.08</td><td>51.71</td></tr>
<tr><td rowspan="5">Qwen2.5-7B-Instruct</td><td>BF16</td><td>81.20</td><td>74.55</td><td>79.98</td></tr>
<tr><td>FP8-Static</td><td>81.13</td><td>74.03</td><td>79.30</td></tr>
<tr><td>FP8-Dynamic</td><td>80.31</td><td>74.07</td><td>79.00</td></tr>
<tr><td>INT4-GPTQ</td><td>79.05</td><td>73.05</td><td>74.75</td></tr>
<tr><td>INT4-AWQ</td><td>79.35</td><td>73.22</td><td>79.38</td></tr>
<tr><td rowspan="5">Qwen2.5-32B-Instruct</td><td>BF16</td><td>87.30</td><td>83.21</td><td>81.73</td></tr>
<tr><td>FP8-Static</td><td>87.59</td><td>83.08</td><td>81.58</td></tr>
<tr><td>FP8-Dynamic</td><td>87.30</td><td>83.04</td><td>81.58</td></tr>
<tr><td>INT4-GPTQ</td><td>86.70</td><td>82.45</td><td>82.03</td></tr>
<tr><td>INT4-AWQ</td><td>87.00</td><td>82.64</td><td>-</td></tr>
<tr><td rowspan="5">DeepSeek-R1-Distill-Qwen-7B</td><td>BF16</td><td>53.49</td><td>53.80</td><td>75.74</td></tr>
<tr><td>FP8-Static</td><td>53.57</td><td>54.17</td><td>76.19</td></tr>
<tr><td>FP8-Dynamic</td><td>52.97</td><td>54.13</td><td>74.15</td></tr>
<tr><td>INT4-GPTQ</td><td>51.86</td><td>52.44</td><td>75.89</td></tr>
<tr><td>INT4-AWQ</td><td>53.49</td><td>53.70</td><td>-</td></tr>
<tr><td rowspan="5">DeepSeek-R1-Distill-Qwen-14B</td><td>BF16</td><td>77.71</td><td>74.28</td><td>85.67</td></tr>
<tr><td>FP8-Static</td><td>77.56</td><td>74.66</td><td>86.73</td></tr>
<tr><td>FP8-Dynamic</td><td>76.82</td><td>74.63</td><td>87.11</td></tr>
<tr><td>INT4-GPTQ</td><td>74.29</td><td>72.37</td><td>84.61</td></tr>
<tr><td>INT4-AWQ</td><td>74.81</td><td>73.00</td><td>86.05</td></tr>
<tr><td rowspan="5">DeepSeek-R1-Distill-Qwen-32B</td><td>BF16</td><td>84.18</td><td>80.89</td><td>87.41</td></tr>
<tr><td>FP8-Static</td><td>83.43</td><td>80.90</td><td>87.57</td></tr>
<tr><td>FP8-Dynamic</td><td>83.73</td><td>81.10</td><td>86.43</td></tr>
<tr><td>INT4-GPTQ</td><td>84.10</td><td>79.80</td><td>86.73</td></tr>
<tr><td>INT4-AWQ</td><td>82.84</td><td>80.15</td><td>87.19</td></tr>
</tbody>
</table>
### (2) Speculative Decoding
#### Qwen3 Series Models
Benchmark results for Qwen3 series models with `Eagle3` speculative decoding algorithm on datasets including `MT-bench`, `HumanEval`, `GSM8K`, and `Alpaca`:
<table>
<thead>
<tr>
<th> </th><th> </th>
<th colspan="2" style="text-align: center; vertical-align: middle;">MT-bench</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">HumanEval</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">GSM8K</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">Alpaca</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">Mean</th></tr>
<tr><th>Temperature</th><th>Model</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th></tr>
</thead>
<tbody>
<!-- <tr><td colspan="12" style="text-align: center; vertical-align: middle;"><strong>Temperature=0</strong></td></tr> -->
<tr><td rowspan="6"><strong>T=0</strong></td>
<td>Qwen3-1.7B</td><td>2.05x</td><td>2.81</td><td>2.07x</td><td>2.93</td><td>2.11x</td><td>2.98</td><td>1.93x</td><td>2.69</td><td>2.04x</td><td>2.85</td></tr>
<tr> <td>Qwen3-4B</td><td>2.21x</td><td>3.01</td><td>2.36x</td><td>3.24</td><td>2.42x</td><td>3.13</td><td>2.32x</td><td>2.75</td><td>2.33x</td><td>3.03</td></tr>
<tr><td>Qwen3-8B</td><td>2.65x</td><td>3.87</td><td>2.64x</td><td>3.82</td><td>2.86x</td><td>4.10</td><td>2.58x</td><td>3.55</td><td>2.68x</td><td>3.83</td></tr>
<tr><td>Qwen3-14B</td><td>2.42x</td><td>3.38</td><td>2.57x</td><td>3.58</td><td>2.75x</td><td>3.77</td><td>2.27x</td><td>3.11</td><td>2.50x</td><td>3.46</td></tr>
<tr><td>Qwen3-32B</td><td>2.39x</td><td>2.78</td><td>2.37x</td><td>2.81</td><td>2.47x</td><td>2.92</td><td>2.42x</td><td>2.53</td><td>2.41x</td><td>2.76</td></tr>
<tr><td>Qwen3-30B-A3B</td><td>2.84x</td><td>3.63</td><td>2.27x</td><td>3.09</td><td>2.64x</td><td>3.42</td><td>2.83x</td><td>3.56</td><td>2.64x</td><td>3.42</td></tr>
<!-- <tr><td colspan="12" style="text-align: center; vertical-align: middle;"><strong>Temperature=1</strong></td></tr> -->
<tr><td rowspan="6"><strong>T=1</strong></td>
<td>Qwen3-1.7B</td><td>1.74x</td><td>2.53</td><td>1.86x</td><td>2.70</td><td>1.82x</td><td>2.69</td><td>1.72x</td><td>2.46</td><td>1.93x</td><td>2.60</td></tr>
<tr><td>Qwen3-4B</td><td>1.93x</td><td>2.60</td><td>2.00x</td><td>2.84</td><td>2.11x</td><td>2.82</td><td>2.34x</td><td>2.50</td><td>1.75x</td><td>2.69</td></tr>
<tr><td>Qwen3-8B</td><td>1.91x</td><td>2.84</td><td>2.07x</td><td>3.05</td><td>2.34x</td><td>3.26</td><td>2.09x</td><td>2.92</td><td>2.10x</td><td>3.02</td></tr>
<tr><td>Qwen3-14B</td><td>1.81x</td><td>2.58</td><td>1.96x</td><td>2.81</td><td>2.16x</td><td>3.09</td><td>1.76x</td><td>2.49</td><td>1.92x</td><td>2.74</td></tr>
<tr><td>Qwen3-32B</td><td>1.62x</td><td>1.91</td><td>1.71x</td><td>2.05</td><td>1.78x</td><td>2.10</td><td>1.80x</td><td>1.95</td><td>1.62x</td><td>2.00</td></tr>
<tr><td>Qwen3-30B-A3B</td><td>1.91x</td><td>2.46</td><td>2.00x</td><td>2.64</td><td>1.90x</td><td>2.53</td><td>1.80x</td><td>2.32</td><td>1.90x</td><td>2.48</td></tr>
</tbody>
</table>
#### Hunyuan Series Models
Benchmark results for Hunyuan series models with `Eagle3` speculative decoding algorithm on datasets including `MT-bench`, `HumanEval`, `GSM8K`, and `Alpaca`:
<table>
<thead>
<tr>
<th> </th><th> </th>
<th colspan="2" style="text-align: center; vertical-align: middle;">MT-bench</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">HumanEval</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">GSM8K</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">Alpaca</th>
<th colspan="2" style="text-align: center; vertical-align: middle;">Mean</th></tr>
<tr><th>Temperature</th><th>Model</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th><th>Speedup</th><th>τ</th></tr>
</thead>
<tbody>
<!-- <tr><td colspan="12" style="text-align: center; vertical-align: middle;"><strong>Temperature=0</strong></td></tr> -->
<tr><td rowspan="3"><strong>T=0</strong></td>
<td>Hunyuan-1.8B-Instruct</td><td>1.97x</td><td>2.90</td><td>2.58x</td><td>3.73</td><td>2.61x</td><td>3.71</td><td>1.71x</td><td>2.43</td><td>2.22x</td><td>3.19</td></tr>
<tr> <td>Hunyuan-4B-Instruct</td><td>1.77x</td><td>2.60</td><td>2.64x</td><td>3.35</td><td>2.14x</td><td>3.17</td><td>1.72x</td><td>2.57</td><td>2.07x</td><td>2.92</td></tr>
<tr><td>Hunyuan-7B-Instruct</td><td>2.22x</td><td>3.58</td><td>3.59x</td><td>5.47</td><td>2.96x</td><td>4.68</td><td>1.64x</td><td>2.56</td><td>2.60x</td><td>4.07</td></tr>
<!-- <tr><td colspan="12" style="text-align: center; vertical-align: middle;"><strong>Temperature=1</strong></td></tr> -->
<tr><td rowspan="3"><strong>T=1</strong></td>
<td>Hunyuan-1.8B-Instruct</td><td>1.58x</td><td>2.36</td><td>2.35x</td><td>3.56</td><td>2.23x</td><td>3.38</td><td>1.26x</td><td>1.87</td><td>1.86x</td><td>2.79</td></tr>
<tr><td>Hunyuan-4B-Instruct</td><td>1.36x</td><td>2.05</td><td>1.97x</td><td>2.86</td><td>1.72x</td><td>2.68</td><td>1.14x</td><td>1.76</td><td>1.55x</td><td>2.34</td></tr>
<tr><td>Hunyuan-7B-Instruct</td><td>1.90x</td><td>3.11</td><td>3.12x</td><td>5.09</td><td>2.74x</td><td>4.34</td><td>1.47x</td><td>2.39</td><td>2.31x</td><td>3.73</td></tr>
</tbody>
</table>
## 📝 License
The code for this project is open-sourced under the [License for AngelSlim](LICENSE).
## 🔗 Citation
```
@software{AngelSlim2025,
title={{AngelSlim}},
author={Tencent AngelSlim Project Contributors},
year={2025},
month={6},
url={https://github.com/Tencent/AngelSlim},
}
```
## 💬 Technical Discussion
* AngelSlim is continuously iterating and new features will be released soon. If you have any questions or suggestions, please open an issue on GitHub or join our [WeChat technical discussion group](https://github.com/Tencent/AngelSlim/blob/main/docs/source/assets/angel_slim_wechat.png?raw=true).
|
Muapi/illustration-concept
|
Muapi
| 2025-08-14T06:28:15Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-14T06:27:57Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Illustration Concept

**Base model**: Flux.1 D
**Trained words**:
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:858800@1619213", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Muapi/cat-ss-citron-secret-styles-illustrious-noobai-flux-pony
|
Muapi
| 2025-08-14T06:27:37Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-14T06:27:19Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# CAT SS - Citron Secret Styles [Illustrious & NoobAI & Flux & Pony]

**Base model**: Flux.1 D
**Trained words**:
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:362766@735703", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
stepfun-ai/NextStep-1-f8ch16-Tokenizer
|
stepfun-ai
| 2025-08-14T06:26:36Z | 0 | 1 | null |
[
"NextStep",
"Image Tokenizer",
"license:apache-2.0",
"region:us"
] | null | 2025-08-14T05:34:26Z |
---
license: apache-2.0
tags:
- NextStep
- Image Tokenizer
---
# Improved Image Tokenizer
This is an improved image tokenizer for NextStep-1, featuring a fine-tuned decoder with a frozen encoder. The decoder refinement **improves performance** while preserving robust reconstruction quality. We **recommend using this Image Tokenizer** for optimal results with NextStep-1 models.
## Usage
```py
import torch
from PIL import Image
import numpy as np
import torchvision.transforms as transforms
from modeling_flux_vae import AutoencoderKL
device = "cuda"
dtype = torch.bfloat16
model_path = "/path/to/vae_dir"
vae = AutoencoderKL.from_pretrained(model_path).to(device=device, dtype=dtype)
pil2tensor = transforms.Compose(
[
transforms.ToTensor(),
transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
]
)
image = Image.open("/path/to/image.jpg")
pixel_values = pil2tensor(image).unsqueeze(0).to(device=device, dtype=dtype)
# encode
latents = vae.encode(pixel_values).latent_dist.sample()
# decode
sampled_images = vae.decode(latents).sample
sampled_images = sampled_images.detach().cpu().to(torch.float32)
def tensor_to_pil(tensor):
image = tensor.detach().cpu().to(torch.float32)
image = (image / 2 + 0.5).clamp(0, 1)
image = image.mul(255).round().to(dtype=torch.uint8)
image = image.permute(1, 2, 0).numpy()
return Image.fromarray(image, mode="RGB")
rec_image = tensor_to_pil(sampled_images[0])
rec_image.save("/path/to/output.jpg")
```
## Evaluation
### Reconstruction Performance on ImageNet-1K 256×256
| Tokenizer | Latent Shape | PSNR ↑ | SSIM ↑ |
| ------------------------- | ------------ | --------- | -------- |
| **Discrete Tokenizers** | | | |
| SBER-MoVQGAN (270M) | 32×32 | 27.04 | 0.74 |
| LlamaGen | 32×32 | 24.44 | 0.77 |
| VAR | 680 | 22.12 | 0.62 |
| TiTok-S-128 | 128 | 17.52 | 0.44 |
| Sefltok | 1024 | 26.30 | 0.81 |
| **Continuous Tokenizers** | | | |
| Stable Diffusion 1.5 | 32×32×4 | 25.18 | 0.73 |
| Stable Diffusion XL | 32×32×4 | 26.22 | 0.77 |
| Stable Diffusion 3 Medium | 32×32×16 | 30.00 | 0.88 |
| Flux.1-dev | 32×32×16 | 31.64 | 0.91 |
| **NextStep-1** | **32×32×16** | **30.60** | **0.89** |
### Robustness of NextStep-1-f8ch16-Tokenizer
Impact of Noise Perturbation on Image Tokenizer Performance. The top panel displays
quantitative metrics (rFID↓, PSNR↑, and SSIM↑) versus noise intensity. The bottom panel presents qualitative reconstruction examples at noise standard deviations of 0.2 and 0.5.
<div align='center'>
<img src="assets/robustness.png" class="interpolation-image" alt="arch." width="100%" />
</div>
|
runchat/lora-24cf691f-1d30-4f05-a39c-70053b2a66cd-alienterrain
|
runchat
| 2025-08-14T06:20:32Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"lora",
"text-to-image",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2025-08-14T06:20:25Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
- stable-diffusion-xl
- lora
- diffusers
- text-to-image
widget:
- text: 'a photo of alienterrain style'
output:
url: "placeholder.jpg"
---
# SDXL LoRA: alienterrain
This is a LoRA (Low-Rank Adaptation) model for Stable Diffusion XL fine-tuned on images with the trigger word `alienterrain`.
## Files
- `pytorch_lora_weights.safetensors`: Diffusers format (use with diffusers library)
- `pytorch_lora_weights_webui.safetensors`: Kohya format (use with AUTOMATIC1111, ComfyUI, etc.)
## Usage
### Diffusers Library
```python
from diffusers import StableDiffusionXLPipeline
import torch
# Load base model
pipe = StableDiffusionXLPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
torch_dtype=torch.float16
)
# Load LoRA weights (diffusers format)
pipe.load_lora_weights("runchat/lora-24cf691f-1d30-4f05-a39c-70053b2a66cd-alienterrain", weight_name="pytorch_lora_weights.safetensors")
pipe = pipe.to("cuda")
# Generate image
prompt = "a photo of alienterrain style"
image = pipe(prompt, num_inference_steps=25, guidance_scale=7.5).images[0]
image.save("output.png")
```
### WebUI (AUTOMATIC1111, ComfyUI, etc.)
Download the `pytorch_lora_weights_webui.safetensors` file and place it in your WebUI's LoRA directory.
Use the trigger word `alienterrain` in your prompts.
## Training Details
- Base model: stabilityai/stable-diffusion-xl-base-1.0
- Training steps: 1000
- Learning rate: 0.0001
- Batch size: 1
- LoRA rank: 32
- Trigger word: `alienterrain`
|
Muapi/cucoloris-casting-shadow-lighting-modifiers-style-xl-sd1.5-f1d-illu-pony
|
Muapi
| 2025-08-14T06:15:28Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-14T06:15:15Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Cucoloris casting shadow (lighting modifiers) style XL + SD1.5 + F1D + Illu + Pony

**Base model**: Flux.1 D
**Trained words**: casting shadows style, shadow lights, shadow
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:391036@804865", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
prithivMLmods/Bootes-Qwen3_Coder-Reasoning
|
prithivMLmods
| 2025-08-14T06:07:10Z | 880 | 6 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"moe",
"text-generation-inference",
"code",
"math",
"mot",
"coder",
"stem",
"trl",
"conversational",
"en",
"dataset:nvidia/OpenCodeReasoning",
"dataset:efficientscaling/Z1-Code-Reasoning-107K",
"dataset:HuggingFaceH4/CodeAlpaca_20K",
"dataset:mlabonne/FineTome-100k",
"arxiv:2412.15115",
"arxiv:2309.00071",
"base_model:prithivMLmods/Qwen3-4B-ft-bf16",
"base_model:finetune:prithivMLmods/Qwen3-4B-ft-bf16",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-12T03:09:43Z |
---
license: apache-2.0
base_model:
- prithivMLmods/Qwen3-4B-ft-bf16
datasets:
- nvidia/OpenCodeReasoning
- efficientscaling/Z1-Code-Reasoning-107K
- HuggingFaceH4/CodeAlpaca_20K
- mlabonne/FineTome-100k
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- moe
- text-generation-inference
- code
- math
- mot
- coder
- stem
- trl
---

# Bootes-Qwen3\_Coder-Reasoning
> Bootes-Qwen3\_Coder-Reasoning is a fine-tuned variant of the Qwen3-4B architecture, optimized for high-accuracy code reasoning and structured logical task completion. Trained on the CodeAlpaca\_20K dataset and additional curated programming corpora, this model is designed to perform technical coding, reasoning, and instruction-following tasks with lightweight computational requirements.
> [!note]
GGUF: https://huggingface.co/prithivMLmods/Bootes-Qwen3_Coder-Reasoning-Q4_K_M-GGUF
## Key Features
1. Code Reasoning with CodeAlpaca\_20K and More
Fine-tuned on CodeAlpaca\_20K and supplementary high-quality datasets focused on:
* Multi-language programming tasks
* Code explanation, completion, and debugging
* Instruction-following with step-wise execution logic
2. Cross-Language Code Understanding
Handles Python, JavaScript, C++, and more. Ideal for code generation, transformation, bug-fixing, and logic validation.
3. Structured Output Generation
Delivers responses in Markdown, JSON, YAML, and structured code blocks. Optimized for IDE workflows, documentation tools, and reproducible computation notebooks.
4. Instruction-Tuned for Developer Use Cases
Maintains strong fidelity to user prompts, especially multi-turn or step-by-step technical instructions across engineering and data workflows.
5. Multilingual Reasoning in Technical Domains
Capable of technical comprehension and explanation in over 20 human languages, supporting global developer audiences.
6. Efficient 4B Architecture
Based on Qwen3-4B for a performance-efficient inference model that scales well on mid-range GPUs and cloud deployment setups.
## Quickstart with Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "prithivMLmods/Bootes-Qwen3_Coder-Reasoning"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Write a Python function to check whether a number is a palindrome. Explain each step."
messages = [
{"role": "system", "content": "You are a precise coding and reasoning assistant trained on CodeAlpaca and developer datasets."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
## Intended Use
* Code generation, completion, and explanation
* Multi-step algorithmic reasoning
* Structured technical document generation (Markdown, JSON, YAML)
* Debugging assistance and refactoring suggestions
* Technical tutoring and developer assistant workflows
* Cross-lingual programming education and translation
## Limitations
* May underperform on non-code-related creative writing
* Limited context window versus larger models
* Sensitive to prompt phrasing for ambiguous instructions
* Occasionally over-justifies code when brevity is desired
## References
1. Qwen2.5 Technical Report – [https://arxiv.org/pdf/2412.15115](https://arxiv.org/pdf/2412.15115)
2. CodeAlpaca Dataset – [https://github.com/sahil280114/codealpaca](https://github.com/sahil280114/codealpaca)
3. YaRN: Context Window Extension for LLMs – [https://arxiv.org/pdf/2309.00071](https://arxiv.org/pdf/2309.00071)
|
Arcod/bonbon_soleil_policy
|
Arcod
| 2025-08-14T06:05:13Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:Arcod/bonbon-soleil",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-14T06:04:59Z |
---
datasets: Arcod/bonbon-soleil
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- robotics
- act
- lerobot
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
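For programmatic use, the policy can also be loaded directly in Python. A minimal sketch (the import path may vary across LeRobot versions):
```python
from lerobot.common.policies.act.modeling_act import ACTPolicy

# Download the checkpoint from the Hub and put it in eval mode
policy = ACTPolicy.from_pretrained("Arcod/bonbon_soleil_policy")
policy.eval()
```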
---
## Model Details
- **License:** apache-2.0
|
wli1995/Qwen3-0.6B
|
wli1995
| 2025-08-14T06:02:12Z | 0 | 0 |
transformers
|
[
"transformers",
"Qwen",
"Qwen3",
"Int8",
"text-generation",
"en",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T02:43:39Z |
---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-0.6B
pipeline_tag: text-generation
library_name: transformers
tags:
- Qwen
- Qwen3
- Int8
---
# Qwen3-0.6B-Int8
This version of Qwen3-0.6B has been converted to run on the Axera NPU using **w8a16** quantization.
Compatible with Pulsar2 version: 4.1-patch1
## Convert tools links:
For those interested in model conversion, you can export the axmodel from the original repo:
https://huggingface.co/Qwen/Qwen3-0.6B
[Pulsar2 Link, How to Convert LLM from Huggingface to axmodel](https://pulsar2-docs.readthedocs.io/en/latest/appendix/build_llm.html)
[AXera NPU LLM Runtime](https://github.com/AXERA-TECH/ax-llm)
## Support Platform
- AX650
- [M4N-Dock(爱芯派Pro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html)
- [M.2 Accelerator card](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html)
- AX630C
- *developing*
|Chips|w8a16|w4a16|
|--|--|--|
|AX650| 20 tokens/sec|TBD|
## How to use
Download all files from this repository to the device
```
root@ax650:/mnt/qtang/llm-test/qwen3-0.6b# tree -L 1
.
|-- main_ax650
|-- main_axcl_aarch64
|-- main_axcl_x86
|-- post_config.json
|-- qwen2.5_tokenizer
|-- qwen3-0.6b-ax650
|-- qwen3_tokenizer
|-- qwen3_tokenizer_uid.py
|-- run_qwen3_0.6b_int8_ctx_ax650.sh
|-- run_qwen3_0.6b_int8_ctx_axcl_aarch64.sh
`-- run_qwen3_0.6b_int8_ctx_axcl_x86.sh
```
#### Start the Tokenizer service
Install requirement
```
pip install transformers jinja2
```
```
root@ax650:/mnt/qtang/llm-test/qwen3-0.6b# python3 qwen3_tokenizer_uid.py
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
Server running at http://0.0.0.0:12345
```
#### Inference with AX650 Host, such as M4N-Dock(爱芯派Pro) or AX650N DEMO Board
Open another terminal and run `run_qwen3_0.6b_int8_ctx_ax650.sh`
```
root@ax650:/mnt/qtang/llm-test/qwen3-0.6b# ./run_qwen3_0.6b_int8_ctx_ax650.sh
[I][ Init][ 110]: LLM init start
[I][ Init][ 34]: connect http://127.0.0.1:12345 ok
[I][ Init][ 57]: uid: 8199112b-da8a-4f39-ae48-9d83f422b2d3
bos_id: -1, eos_id: 151645
3% | ██ | 1 / 31 [3.76s<116.56s, 0.27 count/s] tokenizer init ok
[I][ Init][ 26]: LLaMaEmbedSelector use mmap
100% | ████████████████████████████████ | 31 / 31 [6.18s<6.18s, 5.01 count/s] init post axmodel ok,remain_cmm(10021 MB)
[I][ Init][ 188]: max_token_len : 2559
[I][ Init][ 193]: kv_cache_size : 1024, kv_cache_num: 2559
[I][ Init][ 201]: prefill_token_num : 128
[I][ Init][ 205]: grp: 1, prefill_max_token_num : 1
[I][ Init][ 205]: grp: 2, prefill_max_token_num : 512
[I][ Init][ 205]: grp: 3, prefill_max_token_num : 1024
[I][ Init][ 205]: grp: 4, prefill_max_token_num : 1536
[I][ Init][ 205]: grp: 5, prefill_max_token_num : 2048
[I][ Init][ 209]: prefill_max_token_num : 2048
[I][ load_config][ 282]: load config:
{
"enable_repetition_penalty": false,
"enable_temperature": false,
"enable_top_k_sampling": true,
"enable_top_p_sampling": false,
"penalty_window": 20,
"repetition_penalty": 1.2,
"temperature": 0.9,
"top_k": 1,
"top_p": 0.8
}
[I][ Init][ 218]: LLM init ok
Type "q" to exit, Ctrl+c to stop current running
[I][ GenerateKVCachePrefill][ 270]: input token num : 21, prefill_split_num : 1 prefill_grpid : 2
[I][ GenerateKVCachePrefill][ 307]: input_num_token:21
[I][ main][ 230]: precompute_len: 21
[I][ main][ 231]: system_prompt: You are Qwen, created by Alibaba Cloud. You are a helpful assistant.
prompt >> who are you
[I][ SetKVCache][ 530]: prefill_grpid:2 kv_cache_num:512 precompute_len:57 input_num_token:14
[I][ SetKVCache][ 533]: current prefill_max_token_num:1920
[I][ Run][ 659]: input token num : 14, prefill_split_num : 1
[I][ Run][ 685]: input_num_token:14
[I][ Run][ 808]: ttft: 586.92 ms
<think>
</think>
I'm Qwen, a large language model developed by Alibaba Cloud. I can help with a wide range of tasks,
from answering questions to writing code, providing information, and even assisting with creative projects.
Let me know what you need!
[N][ Run][ 922]: hit eos,avg 19.01 token/s
[I][ GetKVCache][ 499]: precompute_len:123, remaining:1925
prompt >> q
root@ax650:/mnt/qtang/llm-test/qwen3-0.6b#
```
#### Inference with M.2 Accelerator card
[What is an M.2 Accelerator card?](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html). This demo runs on a Raspberry Pi 5.
```
(base) axera@raspberrypi:~/samples/qwen3-0.6b $ ./run_qwen3_0.6b_int8_ctx_axcl_aarch64.sh
[I][ Init][ 136]: LLM init start
[I][ Init][ 34]: connect http://127.0.0.1:12345 ok
[I][ Init][ 57]: uid: afec8311-55c9-4785-9fed-949368362b0e
bos_id: -1, eos_id: 151645
3% | ██ | 1 / 31 [1.00s<31.12s, 1.00 count/s] tokenizer init ok
[I][ Init][ 45]: LLaMaEmbedSelector use mmap
6% | ███ | 2 / 31 [1.00s<15.56s, 1.99 count/s] embed_selector init ok
[I][ run][ 30]: AXCLWorker start with devid 0
100% | ████████████████████████████████ | 31 / 31 [28.32s<28.32s, 1.09 count/s] init post axmodel ok,remain_cmm(5068 MB)
[I][ Init][ 237]: max_token_len : 2559
[I][ Init][ 240]: kv_cache_size : 1024, kv_cache_num: 2559
[I][ Init][ 248]: prefill_token_num : 128
[I][ Init][ 252]: grp: 1, prefill_max_token_num : 1
[I][ Init][ 252]: grp: 2, prefill_max_token_num : 512
[I][ Init][ 252]: grp: 3, prefill_max_token_num : 1024
[I][ Init][ 252]: grp: 4, prefill_max_token_num : 1536
[I][ Init][ 252]: grp: 5, prefill_max_token_num : 2048
[I][ Init][ 256]: prefill_max_token_num : 2048
________________________
| ID| remain cmm(MB)|
========================
| 0| 5068|
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
[I][ load_config][ 282]: load config:
{
"enable_repetition_penalty": false,
"enable_temperature": false,
"enable_top_k_sampling": true,
"enable_top_p_sampling": false,
"penalty_window": 20,
"repetition_penalty": 1.2,
"temperature": 0.9,
"top_k": 1,
"top_p": 0.8
}
[I][ Init][ 279]: LLM init ok
Type "q" to exit, Ctrl+c to stop current running
[I][ GenerateKVCachePrefill][ 335]: input token num : 21, prefill_split_num : 1 prefill_grpid : 2
[I][ GenerateKVCachePrefill][ 372]: input_num_token:21
[I][ main][ 236]: precompute_len: 21
[I][ main][ 237]: system_prompt: You are Qwen, created by Alibaba Cloud. You are a helpful assistant.
prompt >> who are you?
[I][ SetKVCache][ 628]: prefill_grpid:2 kv_cache_num:512 precompute_len:21 input_num_token:16
[I][ SetKVCache][ 631]: current prefill_max_token_num:1920
[I][ Run][ 869]: input token num : 16, prefill_split_num : 1
[I][ Run][ 901]: input_num_token:16
[I][ Run][1030]: ttft: 670.05 ms
<think>
</think>
I am Qwen, a large language model developed by Alibaba Cloud.
I am designed to assist with a wide range of tasks and provide helpful information.
If you have any questions or need assistance, feel free to ask!
[N][ Run][1182]: hit eos,avg 13.06 token/s
[I][ GetKVCache][ 597]: precompute_len:85, remaining:1963
prompt >> what can you do?
[I][ SetKVCache][ 628]: prefill_grpid:2 kv_cache_num:512 precompute_len:85 input_num_token:17
[I][ SetKVCache][ 631]: current prefill_max_token_num:1920
[I][ Run][ 869]: input token num : 17, prefill_split_num : 1
[I][ Run][ 901]: input_num_token:17
[I][ Run][1030]: ttft: 671.29 ms
<think>
</think>
I can help with a variety of tasks and provide assistance in different areas. For example, I can:
- Answer questions about technology, science, culture, and more.
- Help with writing, research, and problem-solving.
- Provide information and support in different languages.
- Assist with tasks such as writing, coding, and data analysis.
Let me know what you need!
[N][ Run][1182]: hit eos,avg 13.05 token/s
[I][ GetKVCache][ 597]: precompute_len:181, remaining:1867
prompt >> q
(base) axera@raspberrypi:~ $ axcl-smi
+------------------------------------------------------------------------------------------------+
| AXCL-SMI V3.4.0_20250423020139 Driver V3.4.0_20250423020139 |
+-----------------------------------------+--------------+---------------------------------------+
| Card Name Firmware | Bus-Id | Memory-Usage |
| Fan Temp Pwr:Usage/Cap | CPU NPU | CMM-Usage |
|=========================================+==============+=======================================|
| 0 AX650N V3.4.0 | 0000:01:00.0 | 182 MiB / 945 MiB |
| -- 35C -- / -- | 1% 0% | 971 MiB / 7040 MiB |
+-----------------------------------------+--------------+---------------------------------------+
+------------------------------------------------------------------------------------------------+
| Processes: |
| Card PID Process Name NPU Memory Usage |
|================================================================================================|
| 0 53261 /home/axera/samples/qwen3-0.6b/main_axcl_aarch64 953772 KiB |
+------------------------------------------------------------------------------------------------+
```
|
DrChamyoung/EvoSEGA
|
DrChamyoung
| 2025-08-14T05:57:50Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-14T05:56:46Z |
---
license: apache-2.0
---
|
webbigdata/VoiceCore
|
webbigdata
| 2025-08-14T05:53:23Z | 234 | 10 | null |
[
"safetensors",
"llama",
"audio",
"tts",
"csm",
"agent",
"japanese",
"voice-synthesis",
"emotional-tts",
"voice generation",
"voice agent",
"orpheus",
"voice dialog",
"text-to-speech",
"ja",
"base_model:canopylabs/orpheus-3b-0.1-pretrained",
"base_model:finetune:canopylabs/orpheus-3b-0.1-pretrained",
"region:us"
] |
text-to-speech
| 2025-05-28T14:31:55Z |
---
language:
- ja
base_model:
- canopylabs/orpheus-3b-0.1-pretrained
tags:
- audio
- tts
- csm
- agent
- japanese
- voice-synthesis
- emotional-tts
- voice generation
- voice agent
- orpheus
- voice dialog
- text-to-speech
---

# News
VoiceCoreがGENIACプロジェクト(経済産業省、NEDO)の国産基盤モデルリストに掲載されました。
これにより日本国内の法人、団体はVoiceCoreを使って[1位の懸賞金が5000万円であるGENIAC-PRIZE(NEDO懸賞金活用型プログラム)に応募](https://geniac-prize.nedo.go.jp/)する事が可能になりました。
単独でもお申込みできますが[VoiceCoreは商用サポートも承っておりますので、ご希望の方はこちら](https://webbigdata.jp/voice-ai-agent/)からお申込みください
VoiceCore has been listed on the domestic foundation model list of the GENIAC project (Ministry of Economy, Trade and Industry, NEDO).
This means that Japanese corporations and organizations can now use VoiceCore to apply for the [GENIAC-PRIZE (NEDO Prize Fund Utilization Program), which offers a first place prize of 50 million yen.](https://geniac-prize.nedo.go.jp/)
You can apply for it alone, but [we also offer commercial support for VoiceCore, so if you are interested, please apply here.](https://webbigdata.jp/voice-ai-agent/)
# VoiceCore - 次世代 日本語Voice AI Agent用モデル (VoiceCore - Next-Gen Japanese Voice AI Agent model)
VoiceCoreはAIが自然な日本語を発声可能にする商用利用可能なVoice AI Agentモデルです。
従来のTTS(Text to Speech:音声合成)ソフトウェアと異なる点は、文章を正確に発声する事は目的ではなく、AIが音声を使った意思疎通を人間とするために設計されており、笑い声などの非言語音声や感情表現が可能な事が特徴です。
VoiceCore is a commercially available Voice AI Agent model that enables AI to speak natural Japanese.
What makes it different from conventional TTS (Text to Speech) software is that its goal is not to accurately recite sentences; rather, it is designed to enable AI to communicate with humans using voice, and it is characterized by its ability to use non-verbal speech (e.g. laughter) and express emotions.
## モデルの動かし方(How to run)
以下のページでVoiceCoreの音声合成品質をオンラインで確認する事ができます
- [VoiceCore_online](https://webbigdata.jp/voice-ai-agent/VoiceCore_online/)
You can check the quality of VoiceCore's voice synthesis online at the following page:
- [VoiceCore online](https://webbigdata.jp/voice-ai-agent/VoiceCore_online/)
以下のサンプルスクリプトを使うとGoogleが提供するColaboratoryで無料で動作確認する事ができます
- [Colab用サンプルスクリプト](https://github.com/webbigdata-jp/VoiceCore/blob/main/VoiceCore_sample.ipynb)
You can use the following sample script to check the operation for free on Colaboratory provided by Google
- [Sample script for Colab](https://github.com/webbigdata-jp/VoiceCore/blob/main/VoiceCore_sample.ipynb)
MacやCPU環境向けに[ggufフォーマット版](https://huggingface.co/webbigdata/VoiceCore_gguf)も提供されています。
A [gguf format version](https://huggingface.co/webbigdata/VoiceCore_gguf) is also provided for Mac and CPU environments.
NvidiaやAMDの高性能GPUをお持ちの方向けに高速推論ツールであるvLLM用の[8bit smoothquant版](https://huggingface.co/webbigdata/VoiceCore_smoothquant)も用意されています。
For those with high-performance Nvidia or AMD GPUs, a [8bit smoothquant version](https://huggingface.co/webbigdata/VoiceCore_smoothquant) for vLLM(High-speed inference tools) is also available.
更にモデルを圧縮し、高速化した[4bit gptq版](https://huggingface.co/webbigdata/VoiceCore_gptq)も用意されています。
A [4-bit gptq version](https://huggingface.co/webbigdata/VoiceCore_gptq) is also available, which further compresses the model and speeds it up.
その他の使い方、声の指定方法や設計思想などの解説は「[VoiceCoreの基本的な使い方 – 感情豊かなAIエージェント向け音声合成モデル](https://webbigdata.jp/post-21268/)」をご覧ください
For other usage, voice specification methods, and design philosophy, please see "[Basic usage of VoiceCore - A speech synthesis model for emotive AI agents](https://webbigdata.jp/post-21268/)".
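A minimal loading sketch with `transformers` is shown below. This is not the official pipeline: VoiceCore emits custom audio tokens, so decoding to a 24 kHz waveform still requires the SNAC codec as in the sample scripts above, and the `voice_name: text` prompt format is an assumption based on Orpheus-style models.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "webbigdata/VoiceCore"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Assumed "voice_name: text" prompt format; see the sample scripts for the exact template
prompt = "matsukaze_male: こんにちは、今日もいい天気ですね。"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
audio_tokens = model.generate(**inputs, max_new_tokens=1200)
# audio_tokens must be decoded with hubertsiuzdak/snac_24khz to obtain a waveform
```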
## ディフォルト利用可能な音声の提供元とそのライセンス (Default voice providers and their licenses)
各声は商用可能ですが、提供者様により用途制限と連絡・クレジット表記義務が異なります。
Each voice can be used commercially, but usage restrictions and contact/credit obligations vary depending on the provider.
女性の声はプレビュー版の位置づけです。現在は高音域でノイズが乗ってしまう傾向があります。
The female voices are a preview version. Currently, they tend to pick up noise in the high-frequency range.
| 声のタイプ | 商用利用 | 使用不可コンテンツ | クレジット表記 | 提供元 | ライセンス詳細/問い合わせ先 |
|----------|---------|-------------------|---------------|--------|-------------------------|
| **amitaro_female**<br>(明るい女の子) | ⭕ 可能<br>(要事後報告) | ❌ エロ・グロ<br>❌ 政治・宗教<br>❌ ヘイト | ✅ 必須<br>「あみたろの声素材工房」 | あみたろの声素材工房 | [ライセンス詳細](https://huggingface.co/webbigdata/VoiceCore/blob/main/voice_license/amitaro/Readme_20240621.txt) / [問い合わせ先](https://huggingface.co/webbigdata/VoiceCore/blob/main/voice_license/amitaro/url.txt) |
| **matsukaze_male**<br>(さわやかな男性) | ⭕ 可能 | 制限なし | ✅ 必須(CC-BY)<br>松風 | 松風 |[ライセンス詳細](https://huggingface.co/webbigdata/VoiceCore/blob/main/voice_license/matsukaze/licence.txt) / [問い合わせ先](https://huggingface.co/webbigdata/VoiceCore/blob/main/voice_license/matsukaze/url.txt)|
| **naraku_female**<br>(落ち着いた女性) | ⭕ 可能<br>(商用は要連絡) | ❌ 反社・政治・宗教<br>❌ 品位を損なう行為 | 個人利用:❌ 不要<br>商用利用:✅ 必須「極楽唯」 | VTuber 奈落ゆい | [ライセンス詳細](https://huggingface.co/webbigdata/VoiceCore/blob/main/voice_license/naraku/readme%EF%BC%88%E5%A5%88%E8%90%BD%E3%82%86%E3%81%84ITA%EF%BC%89.txt) / [問い合わせ先](https://huggingface.co/webbigdata/VoiceCore/blob/main/voice_license/naraku/url.txt)|
| **shiguu_male**<br>(大人びた少年) | ⭕ 可能<br>(商用は要連絡) | ❌ 品位を損なう行為<br>❌ 政治・宗教 | ✅ 必須<br>「刻鳴時雨(CV:丸ころ)」 | 瓶詰め/丸ころ | [ライセンス詳細](https://huggingface.co/webbigdata/VoiceCore/blob/main/voice_license/shiguu/%E8%AA%AD%E3%82%93%E3%81%A7%E3%81%AD.txt) / [利用規約]( https://huggingface.co/webbigdata/VoiceCore/blob/main/voice_license/shiguu/Term_of_use.txt )/ [問い合わせ先](https://huggingface.co/webbigdata/VoiceCore/blob/main/voice_license/shiguu/url.txt)|
| **sayoko_female**<br>(一般81歳女性) | ⭕ 可能 | ❌ エロ・グロ | ✅ 必須<br>「Fusic サヨ子音声コーパス」 | Fusic/bandad | [ライセンス詳細](https://huggingface.co/webbigdata/VoiceCore/blob/main/voice_license/sayoko/README.md) / [問い合わせ先](https://huggingface.co/webbigdata/VoiceCore/blob/main/voice_license/sayoko/url.txt)|
| **nekketsu_female**<br>(熱血ヒロイン) | ⭕ 可能<br>(商用は要連絡) | ❌ 悪用(詐欺的広告、フェイクニュース/動画、差別・中傷など) | ✅ 任意<br>「紅葉美兎及びAI生成音声である事」を明記 | 紅葉美兎 | ライセンス詳細(準備中) / [問い合わせ先](https://webbigdata.jp/webbigdata/inquiry/)|
| **dahara1_male**<br>(一般男性) | ⭕ 可能 | 制限なし | ✅ 任意<br>(apache2) | webbigdata | [ライセンス詳細](https://www.apache.org/licenses/LICENSE-2.0) / [問い合わせ先](https://webbigdata.jp/webbigdata/inquiry/)|
- **商用利用時の連絡**: naraku、shiguu, nekketsuは商用利用時に事前連絡が必要。amitaroは事後連絡可
- **再配布禁止**: 素材としての再配布・販売は禁止
- **加工**: 音声の加工・編集は可能
- **使用許諾**: 上記の声提供者の皆さんには本モデルでの使用許可を直接頂いております。この許可はあらゆるAI/モデル/形態を想定した許可ではない事に留意してください
| Voice Type | Commercial Use | Prohibited Content | Credit Required | Provider | License Details/Contact |
|----------|---------|-------------------|---------------|--------|-------------------------|
| **amitaro_female**<br>(Cheerful girl) | ⭕ Allowed<br>(Post-use notification required) | ❌ Adult/Gore<br>❌ Political/Religious<br>❌ Hate speech | ✅ Required<br>"Amitaro's Voice Material Studio" | Amitaro's Voice Material Studio | [License Details](https://huggingface.co/webbigdata/VoiceCore/blob/main/voice_license/amitaro/Readme_20240621.txt) / [Contact](https://huggingface.co/webbigdata/VoiceCore/blob/main/voice_license/amitaro/url.txt) |
| **matsukaze_male**<br>(Refreshing male) | ⭕ Allowed | No restrictions | ✅ Required (CC-BY)<br>Matsukaze | Matsukaze |[License Details](https://huggingface.co/webbigdata/VoiceCore/blob/main/voice_license/matsukaze/licence.txt) / [Contact](https://huggingface.co/webbigdata/VoiceCore/blob/main/voice_license/matsukaze/url.txt)|
| **naraku_female**<br>(Calm woman) | ⭕ Allowed<br>(Commercial use requires prior contact) | ❌ Anti-social/Political/Religious<br>❌ Dignity-damaging acts | Personal use: ❌ Not required<br>Commercial use: ✅ Required<br>"gokuraku yui" | VTuber Naraku Yui | [License Details](https://huggingface.co/webbigdata/VoiceCore/blob/main/voice_license/naraku/readme%EF%BC%88%E5%A5%88%E8%90%BD%E3%82%86%E3%81%84ITA%EF%BC%89.txt) / [Contact](https://huggingface.co/webbigdata/VoiceCore/blob/main/voice_license/naraku/url.txt)|
| **shiguu_male**<br>(Mature boy) | ⭕ Allowed<br>(Commercial use requires prior contact) | ❌ Dignity-damaging acts<br>❌ Political/Religious | ✅ Required<br>"Tokina Shigure (CV: Marukoro)" | Binzume/Marukoro | [License Details](https://huggingface.co/webbigdata/VoiceCore/blob/main/voice_license/shiguu/%E8%AA%AD%E3%82%93%E3%81%A7%E3%81%AD.txt) / [Terms of Use](https://huggingface.co/webbigdata/VoiceCore/blob/main/voice_license/shiguu/Term_of_use.txt) / [Contact](https://huggingface.co/webbigdata/VoiceCore/blob/main/voice_license/shiguu/url.txt)|
| **sayoko_female**<br>(81-year-old woman) | ⭕ Allowed | ❌ Adult/Gore | ✅ Required<br>"Fusic Sayoko Voice Corpus" | Fusic/bandad | [License Details](https://huggingface.co/webbigdata/VoiceCore/blob/main/voice_license/sayoko/README.md) / [Contact](https://huggingface.co/webbigdata/VoiceCore/blob/main/voice_license/sayoko/url.txt)|
| **nekketsu_female**<br>(Hot-blooded heroine) | ⭕ Allowed<br>(Commercial use requires prior contact) | ❌ Malicious use (fraudulent advertising, fake news/videos, discrimination/defamation, etc.) | ✅ Optional<br>Must specify "Kureha Miu and AI-generated voice" | Kureha Miu | License Details (in preparation) / [Contact](https://webbigdata.jp/webbigdata/inquiry/)|
| **dahara1_male**<br>(General male) | ⭕ Allowed | No restrictions | ✅ Optional<br>(apache2) | webbigdata | [License Details](https://www.apache.org/licenses/LICENSE-2.0) / [Contact](https://webbigdata.jp/webbigdata/inquiry/)|
- **Commercial use contact**: naraku, shiguu and nekketsu require prior contact for commercial use. amitaro allows post-use notification
- **Redistribution prohibited**: Redistribution or sale as material is prohibited
- **Modification**: Voice modification and editing are allowed
- **Permission to Use**: All voice providers listed above have given us direct permission to use their voices in this model. Please note that this permission does not cover all AI/models/forms.
## モデルの微調整方法(How to finetune)
以下のサンプルスクリプトを使うとGoogleが提供するColaboratoryで無料で微調整を行い、英語能力の向上や独自音声追加を体験する事ができます。
- [微調整サンプルスクリプト](https://github.com/webbigdata-jp/VoiceCore)
Using the sample script below, you can fine-tune it for free with Colaboratory provided by Google, and experience improving the model's English skills and adding your own voice.
- [finetune sample script](https://github.com/webbigdata-jp/VoiceCore)
## 使用/参考にした研究/データセット ( Datasets and Research used/referenced )
以下のデータセット / コーパスを開発時に利用/参考にさせて頂いています。データセット、コーパスの提供者の皆様に感謝いたします。
The following datasets/corpora were used/referenced during development. We would like to express our gratitude to the providers of the datasets and corpora.
- [ITAコーパス(text)](https://github.com/mmorise/ita-corpus)
- [mana-corpus(text)](https://github.com/shirowanisan/coeiroink-corpus-manager)
- [あみたろの声素材工房](https://amitaro.net/)
- [松風ITA/manaコーパス朗読データ](https://x.com/mochi_jin_voice/status/1424789247014309888)
- [バーチャルシンガー 刻鳴時雨ITAコーパス朗読データ](https://bindumechan.wixsite.com/shigure222)
- [バーチャルアイドル 奈落ゆいITAコーパス朗読データ](https://narakuyui.wixsite.com/my-site)
- [Fusic/サヨ子音声コーパス](https://huggingface.co/datasets/bandad/sayoko-tts-corpus)
- [freevoice4creators](https://freevoice4creators.com/)
- [JVNV: a Japanese emotional speech corpus with both verbal content and nonverbal vocalizations](https://sites.google.com/site/shinnosuketakamichi/research-topics/jvnv_corpus)
- Webbigdata original dataset (Private)
## モデル詳細 / Model Detail
モデル愛称:webbigdata/VoiceCore
Model nickname: webbigdata/VoiceCore
ベースモデル: [Orpheus TTS](https://huggingface.co/collections/canopylabs/orpheus-tts-67d9ea3f6c05a941c06ad9d2) ([Llamaアーキテクチャ](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct)を利用しています)
Base Model: [Orpheus TTS](https://huggingface.co/collections/canopylabs/orpheus-tts-67d9ea3f6c05a941c06ad9d2) (which utilizes [Llama architecture](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct))
モデルライセンス: 解釈に応じて、[LLAMA 3.2 COMMUNITY LICENSE](https://huggingface.co/meta-llama/Llama-3.2-1B/blob/main/LICENSE.txt)または[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)のいずれかを選択できます。これは、Orpheusが独自のカスタム音声トークンを出力し、Llama3.2の出力ではなく、変形的/派生的な著作物として解釈できるためです。念の為ですがどちらも商用利用を許諾しているライセンスです。
Model License: Depending on your interpretation, you can choose either the [LLAMA 3.2 COMMUNITY LICENSE](https://huggingface.co/meta-llama/Llama-3.2-1B/blob/main/LICENSE.txt) or the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0). This is because Orpheus outputs its own custom voice tokens and can be interpreted as a transformative/derivative work, not the output of Llama 3.2. Just to be clear, both licenses allow commercial use.
本モデルは、[canopylabs/orpheus-3b-0.1-pretrained](https://huggingface.co/canopylabs/orpheus-3b-0.1-pretrained) に継続事前学習と事後学習を行ったモデルです。学術的な正式名称は論文発表時に決定される予定です。
This model is a model that has been subjected to continuous pre-training and post-training on [canopylabs/orpheus-3b-0.1-pretrained](https://huggingface.co/canopylabs/orpheus-3b-0.1-pretrained). The official academic name will be decided at the time of publication of the paper.
## 技術仕様 / Technical Specifications
- モデルパラメータ数 37億(3B)
- bf16推論時の必要GPUメモリ目安 約8GB
- 音声ファイルサンプリングレート 24khz
- TensorRT LLM環境での実測値
RTX 4060ti(メモリ帯域幅:288GB/秒)
bf16 約40 tokens/sec
fp8 約65 tokens/sec
RTX 3090(メモリ帯域幅936GB/秒)
bf16 約100 tokens/sec
リアルタイム会話を実現するためには70 tokens/秒以上の性能が必要です
- Number of model parameters: 3.7 billion (3B)
- Estimated GPU memory required for bf16 inference: approx. 8GB
- Audio file sampling rate: 24khz
- Actual measurements in TensorRT LLM environment
RTX 4060ti (memory bandwidth: 288GB/sec)
bf16 approx. 40 tokens/sec
fp8 approx. 65 tokens/sec
RTX 3090 (memory bandwidth 936GB/sec)
bf16 approx. 100 tokens/sec
To achieve real-time conversation, a performance of 70 tokens/sec or more is required.
## 利用者アンケート / User Survey
私達はユーザーからの反響を非常に重視しています。
[Googleフォームに感想や今後期待する方向性、気が付いた誤りの例、ディフォルトボイスへの採用希望などを是非とも記入](https://docs.google.com/forms/d/e/1FAIpQLScA9L8rQwqhUA9vbpUxKbIVPaQWqy7gnC-tFyrYwHdNnpTP2A/viewform?usp=dialog)してください。
We place great importance on user feedback.
Please fill out the [Google form with your thoughts, your desired future direction, examples of errors you've noticed, and any requests you'd like to see included as default voices.](https://docs.google.com/forms/d/e/1FAIpQLScA9L8rQwqhUA9vbpUxKbIVPaQWqy7gnC-tFyrYwHdNnpTP2A/viewform?usp=dialog)
## 法人・ビジネスでのご利用について (For Business Users)
この[モデルに関する商用サポート、カスタム開発、コンサルティング等のご相談は、以下の法人向けウェブサイト](https://webbigdata.jp/voice-ai-agent/)より承っております。
For [commercial support, custom development, or professional consulting, please visit our corporate website](https://webbigdata.jp/voice-ai-agent/).
## 残TODO
- MCPで動かす方法の解説(how to MCP)
- より様々なツールなどでの動かすための解説(more more documents)
## 謝辞 / Acknowledgment
全ての合成音声の研究者/愛好家/声データ提供者の皆様。彼らの研究成果/データ/熱意がなければ、このモデルは完成できなかったでしょう。直接使用しなかったデータ/知識などにも大いに影響/励ましを受けました。
To all researchers, enthusiasts, and voice data providers in synthetic speech: without their research results, data, and enthusiasm, this model could not have been completed. I was also greatly influenced and encouraged by data and knowledge that I did not use directly.
- [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct)
- [canopylabs/orpheus-tts](https://huggingface.co/collections/canopylabs/orpheus-tts-67d9ea3f6c05a941c06ad9d2)
- [hubertsiuzdak/snac_24khz](https://huggingface.co/hubertsiuzdak/snac_24khz)
- [unslothai/unsloth](https://unsloth.ai/) for providing a memory-efficient training method.
- [pytorch/torchtune](https://github.com/pytorch/torchtune) for providing a variety of training methods.
- [Huggingface](https://huggingface.co/) for storage.
## Developer/開発
- **Developed by:** dahara1@webbigdata
- **Model type:** text audio generation
- **Language(s) (NLP):** Japanese
- **model :** [webbigdata/VoiceCore](https://huggingface.co/webbigdata/VoiceCore)
**BibTeX:**
```
@misc{dahara2025VoiceCore,
author = {dahara1@webbigdata},
title = {VoiceCore - Next-Gen Japanese Voice AI Agent model},
year = {2025},
howpublished = {\url{https://huggingface.co/webbigdata/VoiceCore}},
note = {Accessed: 2025-07-18},
abstract = {This model is designed to enable AI to communicate with humans using voice, and is characterized by its ability to use non-verbal speech and express emotions.},
}
```
|
Wehshi/Call-Evaluation-Automation-model
|
Wehshi
| 2025-08-14T05:43:29Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-14T05:43:29Z |
---
license: apache-2.0
---
|
HelpingAI/hai3.1-checkpoint-0001
|
HelpingAI
| 2025-08-14T05:33:39Z | 0 | 3 |
transformers
|
[
"transformers",
"safetensors",
"helpingai",
"text-generation",
"conversational",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2025-08-13T08:02:09Z |
---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
---
CURRENTLY IN TRAINING :)
Currently, only the LLM section of this model is fully ready.
```py
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
import torch
# Load model and tokenizer
model_name = "Abhaykoul/hai3.1-pretrainedv3"
# Set device to CUDA if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True, torch_dtype="auto")
model.to(device)
print(model)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
# Message role format for chat
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": """hlo"""},
]
# Apply chat template to format prompt
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# Tokenize input and move to device
inputs = tokenizer(prompt, return_tensors="pt")
inputs = {k: v.to(device) for k, v in inputs.items()}
# Set up text streamer for live output
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
# Generate text with streaming
model.generate(
**inputs,
max_new_tokens=4089,
temperature=0.7,
top_p=0.9,
do_sample=True,
streamer=streamer
)
```
The classification section is still under training.
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
ckpt = "Abhaykoul/hai3.1-pretrainedv3"
device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(ckpt, trust_remote_code=True).to(device).eval()
tok = AutoTokenizer.from_pretrained(ckpt, trust_remote_code=True)
if tok.pad_token is None:
tok.pad_token = tok.eos_token
text = "I am thrilled about my new job!"
enc = tok([text], padding=True, truncation=True, max_length=2048, return_tensors="pt")
enc = {k: v.to(device) for k, v in enc.items()}
with torch.no_grad():
out = model(input_ids=enc["input_ids"], attention_mask=enc.get("attention_mask"), output_hidden_states=True, return_dict=True, use_cache=False)
last = out.hidden_states[-1]
idx = (enc["attention_mask"].sum(dim=1) - 1).clamp(min=0)
pooled = last[torch.arange(last.size(0)), idx]
logits = model.structured_lm_head(pooled)
pred_id = logits.argmax(dim=-1).item()
print("Predicted class id:", pred_id)
# Map id -> label using your dataset’s label list, e.g.:
id2label = ["sadness","joy","love","anger","fear","surprise"] # dair-ai/emotion
print("Predicted label:", id2label[pred_id] if pred_id < len(id2label) else "unknown")
```
TTS layers are still in training.
NOTE: we used the Qwen2 tokenizer in this model.
This model contains layers from several of our different models.
To align the layers, we performed post-training after merging them.
|
danbev/basemodel-9-800m-it-qat-GGUF
|
danbev
| 2025-08-14T05:31:46Z | 0 | 0 | null |
[
"gguf",
"region:us"
] | null | 2025-08-14T05:31:44Z |
---
base_model:
- some_org/basemodel-9-800m-it-qat
---
# basemodel-9-800m-it-qat GGUF
Recommended way to run this model:
```sh
llama-server -hf ggml-org/basemodel-9-800m-it-qat-GGUF -c 0 -fa
# Then, access http://localhost:8080
```
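Once the server is up, it exposes an OpenAI-compatible API (default llama-server endpoint; adjust host/port if you changed them):
```sh
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}]}'
```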
|
sanjudal/blockassist-bc-raging_lanky_hippo_1755148828
|
sanjudal
| 2025-08-14T05:21:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging lanky hippo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T05:21:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging lanky hippo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bieriszc/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-majestic_fanged_octopus
|
bieriszc
| 2025-08-14T05:12:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am majestic_fanged_octopus",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T06:02:33Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am majestic_fanged_octopus
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JoshOng/q-Taxi-v1
|
JoshOng
| 2025-08-14T05:08:48Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-14T05:08:45Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
# load_from_hub is the helper defined in the Deep RL course notebook
model = load_from_hub(repo_id="JoshOng/q-Taxi-v1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
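Continuing from the snippet above, a short greedy-rollout sketch (assuming the pickle stores the table under `model["qtable"]` and a Gymnasium-style 5-tuple `step` API; check the file if keys differ):
```python
import numpy as np

state, info = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode reward:", total_reward)
```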
|
doriankim/gemma3_4b_skin_dataset_1000steps_r32_b8_final
|
doriankim
| 2025-08-14T05:07:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T05:06:39Z |
---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** doriankim
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
koloni/blockassist-bc-deadly_graceful_stingray_1755146136
|
koloni
| 2025-08-14T05:02:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T05:02:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
moabasiyan/blockassist-bc-tall_pouncing_capybara_1755147077
|
moabasiyan
| 2025-08-14T04:52:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall pouncing capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T04:52:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall pouncing capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ba2han/qwen3-4b-finetune
|
Ba2han
| 2025-08-14T04:35:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-4B-Thinking-2507",
"base_model:finetune:unsloth/Qwen3-4B-Thinking-2507",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T04:33:35Z |
---
base_model: unsloth/Qwen3-4B-Thinking-2507
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Ba2han
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-4B-Thinking-2507
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
braindeck/whisper_kr_zeroth_e10
|
braindeck
| 2025-08-14T04:16:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-14T04:15:23Z |
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
- generated_from_trainer
model-index:
- name: whisper_kr_zeroth_e10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper_kr_zeroth_e10
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0295
- Cer: 0.5216
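A minimal inference sketch with the `transformers` pipeline (the audio path below is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint for Korean speech recognition
asr = pipeline("automatic-speech-recognition", model="braindeck/whisper_kr_zeroth_e10")
print(asr("sample.wav")["text"])  # "sample.wav" is a hypothetical input file
```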
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.0531 | 1.4731 | 1000 | 0.0414 | 0.6750 |
| 0.013 | 2.9462 | 2000 | 0.0259 | 0.7042 |
| 0.0038 | 4.4186 | 3000 | 0.0262 | 0.6675 |
| 0.001 | 5.8917 | 4000 | 0.0281 | 0.7461 |
| 0.0005 | 7.3640 | 5000 | 0.0292 | 0.5735 |
| 0.0003 | 8.8371 | 6000 | 0.0295 | 0.5216 |
### Framework versions
- Transformers 4.55.2
- Pytorch 2.6.0a0+ecf3bae40a.nv25.01
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Quangvuisme/rl_course_vizdoom_health_gathering_supreme
|
Quangvuisme
| 2025-08-14T04:03:01Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-14T04:02:56Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 9.51 +/- 4.22
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Quangvuisme/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count where it previously concluded.
|
Muapi/hyper-realism-lora-by-aidma-flux-illustrious
|
Muapi
| 2025-08-14T04:02:41Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-14T04:02:30Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Hyper Realism Lora by aidma [FLUX + ILLUSTRIOUS]

**Base model**: Flux.1 D
**Trained words**: aidmaHyperrealism
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:730373@980278", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Muapi/granblue-fantasy-style
|
Muapi
| 2025-08-14T03:48:34Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-14T03:48:15Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Granblue fantasy style

**Base model**: Flux.1 D
**Trained words**:
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:22800@779278", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
rbelanec/train_apps_1754897204
|
rbelanec
| 2025-08-14T03:39:35Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"prompt-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-08-11T07:27:40Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prompt-tuning
- generated_from_trainer
model-index:
- name: train_apps_1754897204
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_apps_1754897204
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the apps dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7171
- Num Input Tokens Seen: 880041568
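To try the adapter, load it on top of the base model with PEFT (a minimal sketch):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "rbelanec/train_apps_1754897204")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
```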
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:------:|:---------------:|:-----------------:|
| 0.7667 | 0.5000 | 13189 | 0.7823 | 44223136 |
| 0.7102 | 1.0000 | 26378 | 0.7484 | 87957952 |
| 0.7129 | 1.5001 | 39567 | 0.7365 | 131814656 |
| 0.6309 | 2.0001 | 52756 | 0.7293 | 175975840 |
| 0.6354 | 2.5001 | 65945 | 0.7260 | 219881664 |
| 0.6694 | 3.0001 | 79134 | 0.7247 | 263949472 |
| 0.7233 | 3.5001 | 92323 | 0.7227 | 307925280 |
| 0.6627 | 4.0002 | 105512 | 0.7221 | 352048320 |
| 0.6283 | 4.5002 | 118701 | 0.7200 | 396106880 |
| 0.6722 | 5.0002 | 131890 | 0.7191 | 440014752 |
| 0.768 | 5.5002 | 145079 | 0.7185 | 484066880 |
| 0.7321 | 6.0002 | 158268 | 0.7182 | 528105600 |
| 0.8997 | 6.5002 | 171457 | 0.7176 | 572089824 |
| 0.6457 | 7.0003 | 184646 | 0.7174 | 616130592 |
| 0.7701 | 7.5003 | 197835 | 0.7173 | 660063168 |
| 0.7298 | 8.0003 | 211024 | 0.7171 | 704033600 |
| 0.8252 | 8.5003 | 224213 | 0.7172 | 747976128 |
| 0.7198 | 9.0003 | 237402 | 0.7172 | 792077152 |
| 0.6224 | 9.5004 | 250591 | 0.7172 | 836063392 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
lejonck/xlsr53-ptbr-mupe-final1
|
lejonck
| 2025-08-14T03:38:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-large-xlsr-53-portuguese",
"base_model:finetune:facebook/wav2vec2-large-xlsr-53-portuguese",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-14T03:38:20Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53-portuguese
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: xlsr53-ptbr-mupe-final1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlsr53-ptbr-mupe-final1
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53-portuguese](https://huggingface.co/facebook/wav2vec2-large-xlsr-53-portuguese) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4832
- Wer: 0.5878
- Cer: 0.3261
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 1.6438 | 1.0 | 1000 | 1.5940 | 0.6624 | 0.3650 |
| 1.4311 | 2.0 | 2000 | 1.5149 | 0.6379 | 0.3488 |
| 1.29 | 3.0 | 3000 | 1.4733 | 0.6117 | 0.3317 |
| 1.1968 | 4.0 | 4000 | 1.4522 | 0.6163 | 0.3330 |
| 0.8005 | 5.0 | 5000 | 1.4504 | 0.5965 | 0.3297 |
| 0.5489 | 6.0 | 6000 | 1.4832 | 0.5878 | 0.3260 |
| 0.8353 | 7.0 | 7000 | 1.4658 | 0.5936 | 0.3248 |
| 1.7251 | 8.0 | 8000 | 1.4922 | 0.5901 | 0.3261 |
| 1.0679 | 9.0 | 9000 | 1.4867 | 0.5913 | 0.3238 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.7.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
FreddyFazbear0209/fine-tuned-qwen-2.5
|
FreddyFazbear0209
| 2025-08-14T03:38:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2_5_vl",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T03:37:54Z |
---
base_model: unsloth/qwen2.5-vl-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** FreddyFazbear0209
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-vl-3b-instruct-bnb-4bit
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
g-assismoraes/Qwen3-4B-Base-fpi-alpha1.0-var-ep10
|
g-assismoraes
| 2025-08-14T03:36:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T03:33:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RaulGuo1/tttaaaa
|
RaulGuo1
| 2025-08-14T03:35:31Z | 0 | 0 | null |
[
"af",
"base_model:RaulGuo1/tet11111",
"base_model:finetune:RaulGuo1/tet11111",
"license:mit",
"region:us"
] | null | 2025-08-14T03:35:19Z |
---
license: mit
language:
- af
base_model:
- RaulGuo1/tet11111
---
|
koloni/blockassist-bc-deadly_graceful_stingray_1755140994
|
koloni
| 2025-08-14T03:34:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T03:34:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Hfkjc/blockassist-bc-fanged_stinging_sandpiper_1755141983
|
Hfkjc
| 2025-08-14T03:34:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fanged stinging sandpiper",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T03:33:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fanged stinging sandpiper
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lemonhat/Qwen2.5-7B-agenttuning_v4_15k_tag5-mini
|
lemonhat
| 2025-08-14T03:29:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B",
"base_model:finetune:Qwen/Qwen2.5-7B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T03:28:07Z |
---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-7B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: agenttuning_v4_15k_tag5-mini
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# agenttuning_v4_15k_tag5-mini
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) on the agenttuning_v4_15k_tag5-mini dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5367
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 4
- total_eval_batch_size: 4
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6036 | 0.2833 | 100 | 0.6065 |
| 0.5331 | 0.5666 | 200 | 0.5574 |
| 0.4477 | 0.8499 | 300 | 0.5375 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
runchat/lora-d2034de9-ce83-409a-9ba1-aae6d338d5fd-rock
|
runchat
| 2025-08-14T02:41:35Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"lora",
"text-to-image",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2025-08-14T02:41:23Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
- stable-diffusion-xl
- lora
- diffusers
- text-to-image
widget:
- text: 'a photo of rock style'
output:
url: "placeholder.jpg"
---
# SDXL LoRA: rock
This is a LoRA (Low-Rank Adaptation) model for Stable Diffusion XL fine-tuned on images with the trigger word `rock`.
## Files
- `pytorch_lora_weights.safetensors`: Diffusers format (use with diffusers library)
- `pytorch_lora_weights_webui.safetensors`: Kohya format (use with AUTOMATIC1111, ComfyUI, etc.)
## Usage
### Diffusers Library
```python
from diffusers import StableDiffusionXLPipeline
import torch
# Load base model
pipe = StableDiffusionXLPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
torch_dtype=torch.float16
)
# Load LoRA weights (diffusers format)
pipe.load_lora_weights("runchat/lora-d2034de9-ce83-409a-9ba1-aae6d338d5fd-rock", weight_name="pytorch_lora_weights.safetensors")
pipe = pipe.to("cuda")
# Generate image
prompt = "a photo of rock style"
image = pipe(prompt, num_inference_steps=25, guidance_scale=7.5).images[0]
image.save("output.png")
```
### WebUI (AUTOMATIC1111, ComfyUI, etc.)
Download the `pytorch_lora_weights_webui.safetensors` file and place it in your WebUI's LoRA directory.
Use the trigger word `rock` in your prompts.
## Training Details
- Base model: stabilityai/stable-diffusion-xl-base-1.0
- Training steps: 1000
- Learning rate: 0.0001
- Batch size: 1
- LoRA rank: 32
- Trigger word: `rock`
|
elmenbillion/blockassist-bc-beaked_sharp_otter_1755137662
|
elmenbillion
| 2025-08-14T02:39:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"beaked sharp otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T02:39:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- beaked sharp otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Talking-Babies/cpo_opt_seqlen_1024_progressive_cefr_parlai_iteration5
|
Talking-Babies
| 2025-08-14T02:31:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T02:30:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Elionbot/elion
|
Elionbot
| 2025-08-14T02:24:27Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-14T02:24:27Z |
---
license: apache-2.0
---
|
ailabstw/bge-m3-reranker-v2_slerp-v0.1
|
ailabstw
| 2025-08-14T02:20:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"mergekit",
"merge",
"base_model:BAAI/bge-reranker-v2-m3",
"base_model:merge:BAAI/bge-reranker-v2-m3",
"base_model:ailabstw/bge-m3-reranker-v2_ft-v0.1",
"base_model:merge:ailabstw/bge-m3-reranker-v2_ft-v0.1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-14T02:14:23Z |
---
base_model:
- ailabstw/bge-m3-reranker-v2_ft-v0.1
- BAAI/bge-reranker-v2-m3
library_name: transformers
tags:
- mergekit
- merge
---
# output
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
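For intuition, here is a minimal PyTorch sketch of SLERP between two flattened weight tensors. This is an illustration only: the actual merge was performed by mergekit with the configuration below, applied per parameter with t=0.5.
```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors, factor t in [0, 1]."""
    a_flat, b_flat = a.flatten().double(), b.flatten().double()
    # Angle between the two tensors, treated as vectors.
    cos_omega = torch.dot(a_flat / (a_flat.norm() + eps), b_flat / (b_flat.norm() + eps))
    omega = torch.arccos(cos_omega.clamp(-1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly collinear: fall back to plain linear interpolation
        return (1.0 - t) * a + t * b
    mixed = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return mixed.reshape(a.shape).to(a.dtype)
```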
### Models Merged
The following models were included in the merge:
* [ailabstw/bge-m3-reranker-v2_ft-v0.1](https://huggingface.co/ailabstw/bge-m3-reranker-v2_ft-v0.1)
* [BAAI/bge-reranker-v2-m3](https://huggingface.co/BAAI/bge-reranker-v2-m3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: BAAI/bge-reranker-v2-m3
- model: ailabstw/bge-m3-reranker-v2_ft-v0.1
merge_method: slerp
base_model: BAAI/bge-reranker-v2-m3
parameters:
t:
- value: 0.5
dtype: float32
```
|
Bearrr310/ds_train_grpo_1.5B-0813-32acc
|
Bearrr310
| 2025-08-14T02:15:09Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"grpo",
"trl",
"conversational",
"dataset:ds-grpo1.5B-0813-32acc",
"arxiv:2402.03300",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T02:10:13Z |
---
datasets: ds-grpo1.5B-0813-32acc
library_name: transformers
model_name: ds_train_grpo_1.5B-0813-32acc
tags:
- generated_from_trainer
- grpo
- trl
licence: license
---
# Model Card for ds_train_grpo_1.5B-0813-32acc
This model is a fine-tuned version of an unspecified base model on the [ds-grpo1.5B-0813-32acc](https://huggingface.co/datasets/ds-grpo1.5B-0813-32acc) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Bearrr310/ds_train_grpo_1.5B-0813-32acc", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
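For intuition, a minimal sketch (an illustration, not TRL's internal code) of the group-relative advantage computation that gives GRPO its name: each sampled completion is scored relative to the other samples drawn for the same prompt.
```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Group-relative advantages: rewards has shape [num_prompts, group_size],
    one row per prompt, one column per sampled completion."""
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    # Normalize each completion's reward within its own group.
    return (rewards - mean) / (std + eps)
```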
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
elmenbillion/blockassist-bc-beaked_sharp_otter_1755135927
|
elmenbillion
| 2025-08-14T02:10:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"beaked sharp otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T02:10:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- beaked sharp otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hobson123/blockassist-bc-mammalian_dense_gibbon_1755136729
|
hobson123
| 2025-08-14T02:08:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mammalian dense gibbon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T02:08:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mammalian dense gibbon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
obadx/muaalem-model-v2
|
obadx
| 2025-08-14T02:08:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"multi_level_ctc",
"generated_from_trainer",
"base_model:facebook/w2v-bert-2.0",
"base_model:finetune:facebook/w2v-bert-2.0",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T17:48:28Z |
---
library_name: transformers
license: mit
base_model: facebook/w2v-bert-2.0
tags:
- generated_from_trainer
model-index:
- name: muaalem-model-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# muaalem-model-v2
This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0082
- Per Phonemes: 0.0025
- Per Hams Or Jahr: 0.0012
- Per Shidda Or Rakhawa: 0.0017
- Per Tafkheem Or Taqeeq: 0.0017
- Per Itbaq: 0.0007
- Per Safeer: 0.0011
- Per Qalqla: 0.0007
- Per Tikraar: 0.0008
- Per Tafashie: 0.0015
- Per Istitala: 0.0006
- Per Ghonna: 0.0010
- Average Per: 0.0012
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Per Phonemes | Per Hams Or Jahr | Per Shidda Or Rakhawa | Per Tafkheem Or Taqeeq | Per Itbaq | Per Safeer | Per Qalqla | Per Tikraar | Per Tafashie | Per Istitala | Per Ghonna | Average Per |
|:-------------:|:-----:|:----:|:---------------:|:------------:|:----------------:|:---------------------:|:----------------------:|:---------:|:----------:|:----------:|:-----------:|:------------:|:------------:|:----------:|:-----------:|
| 0.1584 | 0.2 | 712 | 0.0234 | 0.0063 | 0.0024 | 0.0031 | 0.0065 | 0.0017 | 0.0017 | 0.0018 | 0.0017 | 0.0035 | 0.0015 | 0.0020 | 0.0029 |
| 0.0168 | 0.4 | 1424 | 0.0149 | 0.0051 | 0.0020 | 0.0027 | 0.0031 | 0.0014 | 0.0016 | 0.0015 | 0.0013 | 0.0029 | 0.0013 | 0.0016 | 0.0022 |
| 0.0127 | 0.6 | 2136 | 0.0130 | 0.0045 | 0.0042 | 0.0027 | 0.0047 | 0.0014 | 0.0014 | 0.0016 | 0.0013 | 0.0022 | 0.0013 | 0.0019 | 0.0024 |
| 0.0123 | 0.8 | 2848 | 0.0100 | 0.0034 | 0.0017 | 0.0023 | 0.0025 | 0.0013 | 0.0013 | 0.0013 | 0.0012 | 0.0021 | 0.0014 | 0.0015 | 0.0018 |
| 0.0103 | 1.0 | 3560 | 0.0082 | 0.0025 | 0.0012 | 0.0017 | 0.0017 | 0.0007 | 0.0011 | 0.0007 | 0.0008 | 0.0015 | 0.0006 | 0.0010 | 0.0012 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.8.0+cu128
- Datasets 3.3.2
- Tokenizers 0.21.4
|
BootesVoid/cme0ls7oo07u1gwtcpjlxcgfd_cmeaqagsq09fmrts820hkb6mr
|
BootesVoid
| 2025-08-14T02:05:49Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-14T02:05:45Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: CARINA
---
# Cme0Ls7Oo07U1Gwtcpjlxcgfd_Cmeaqagsq09Fmrts820Hkb6Mr
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `CARINA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "CARINA",
"lora_weights": "https://huggingface.co/BootesVoid/cme0ls7oo07u1gwtcpjlxcgfd_cmeaqagsq09fmrts820hkb6mr/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cme0ls7oo07u1gwtcpjlxcgfd_cmeaqagsq09fmrts820hkb6mr', weight_name='lora.safetensors')
image = pipeline('CARINA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cme0ls7oo07u1gwtcpjlxcgfd_cmeaqagsq09fmrts820hkb6mr/discussions) to add images that show off what you’ve made with this LoRA.
|
BootesVoid/cmeap8g1c09dmrts8kik96nf9_cmeaptoal09emrts81mx04tt0
|
BootesVoid
| 2025-08-14T02:02:20Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-14T02:02:16Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ALEXA2004
---
# Cmeap8G1C09Dmrts8Kik96Nf9_Cmeaptoal09Emrts81Mx04Tt0
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ALEXA2004` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "ALEXA2004",
"lora_weights": "https://huggingface.co/BootesVoid/cmeap8g1c09dmrts8kik96nf9_cmeaptoal09emrts81mx04tt0/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmeap8g1c09dmrts8kik96nf9_cmeaptoal09emrts81mx04tt0', weight_name='lora.safetensors')
image = pipeline('ALEXA2004').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmeap8g1c09dmrts8kik96nf9_cmeaptoal09emrts81mx04tt0/discussions) to add images that show off what you’ve made with this LoRA.
|
EYEDOL/FROM_C3_4
|
EYEDOL
| 2025-08-14T01:49:33Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"sw",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:EYEDOL/FROM_C3_3",
"base_model:finetune:EYEDOL/FROM_C3_3",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-14T01:49:16Z |
---
library_name: transformers
language:
- sw
base_model: EYEDOL/FROM_C3_3
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: ASR_FROM_C3
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: sw
split: None
args: 'config: sw, split: test'
metrics:
- name: Wer
type: wer
value: 17.210807176772132
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASR_FROM_C3
This model is a fine-tuned version of [EYEDOL/FROM_C3_3](https://huggingface.co/EYEDOL/FROM_C3_3) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2388
- Wer: 17.2108
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.0409 | 0.6918 | 2000 | 0.2388 | 17.2108 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|
surfiniaburger/maize-health-diagnosis-adapter
|
surfiniaburger
| 2025-08-14T01:43:59Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/gemma-3n-e2b-it-unsloth-bnb-4bit",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"text-generation",
"conversational",
"arxiv:1910.09700",
"region:us"
] |
text-generation
| 2025-08-14T01:43:53Z |
---
base_model: unsloth/gemma-3n-e2b-it-unsloth-bnb-4bit
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/gemma-3n-e2b-it-unsloth-bnb-4bit
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.16.0
|
elmenbillion/blockassist-bc-beaked_sharp_otter_1755134157
|
elmenbillion
| 2025-08-14T01:41:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"beaked sharp otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T01:41:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- beaked sharp otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hathu11088/blockassist-bc-flightless_unseen_parrot_1755133576
|
hathu11088
| 2025-08-14T01:41:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flightless unseen parrot",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T01:22:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flightless unseen parrot
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
xudong123/custom-resnet50d
|
xudong123
| 2025-08-14T01:41:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"resnet",
"image-classification",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] |
image-classification
| 2025-08-14T01:40:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Levi-Heath/Towards-Errorless-Training-ImageNet-1k
|
Levi-Heath
| 2025-08-14T01:38:07Z | 0 | 0 | null |
[
"ImageNet-1k",
"image classification",
"image-classification",
"en",
"dataset:ILSVRC/imagenet-1k",
"arxiv:2508.04941",
"license:cc-by-nc-nd-4.0",
"region:us"
] |
image-classification
| 2025-08-06T17:18:21Z |
---
license: cc-by-nc-nd-4.0
language:
- en
pipeline_tag: image-classification
tags:
- ImageNet-1k
- image classification
datasets:
- ILSVRC/imagenet-1k
metrics:
- accuracy
authors:
- Bo Deng, University of Nebraska-Lincoln
- Levi Heath, University of Colorado Colorado Springs
---
# Towards Errorless Training ImageNet-1k
This repository hosts MATLAB code and models for the manuscript, *Towards Errorless Training ImageNet-1k*, which is available at https://arxiv.org/abs/2508.04941.
We provide 6 models trained on the ImageNet-1k dataset, listed in the table below.
Each model follows the featured 17x40x2 architecture.
That is, each model is made up of 17x40x2=1360 FNNs, all with a homogeneous architecture (900-256-25 or 900-256-77-25),
working in parallel to produce 1360 predictions that are combined into a final prediction by majority voting.
We trained the 6 models using the following transformation of the 64x64 downsampled ImageNet-1k dataset:
- downsampled images to 32x32 using the mean values of non-overlapping 2x2 grid cells, and
- trimmed off the top row, bottom row, left-most column, and right-most column.
This transformed data results in 30x30 images, hence 900-dimensional input vectors.
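A minimal NumPy sketch of this preprocessing and of the majority-voting protocol (an illustration only; the released MATLAB code is authoritative, and the function names here are ours):
```python
import numpy as np

def preprocess(image_64x64: np.ndarray) -> np.ndarray:
    """64x64 image -> 900-dimensional input vector."""
    # Mean of non-overlapping 2x2 grid cells: 64x64 -> 32x32.
    pooled = image_64x64.reshape(32, 2, 32, 2).mean(axis=(1, 3))
    # Trim top/bottom rows and left/right columns: 32x32 -> 30x30.
    return pooled[1:-1, 1:-1].reshape(-1)

def majority_vote(predictions: np.ndarray) -> int:
    """Final label from the 1360 per-FNN class predictions."""
    return int(np.bincount(predictions).argmax())
```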
For a thorough description of our models trained on the ImageNet-1k dataset, please read our preprint linked above.
| Model | Training Method | FNN Architecture | Accuracy (%) |
| ------------- | ------------- | ------------- | ------------- |
| Model_S_h1_m1 | SGD | 900-256-25 | 98.247 |
| Model_S_h1_m2 | SGD | 900-256-25 | 98.299 |
| Model_S_h2_m1 | SGD | 900-256-77-25 | 96.990 |
| Model_T_h1_m1 | SGD followed by GDT | 900-256-25 | 98.289 |
| Model_T_h1_m2 | SGD followed by GDT | 900-256-25 | 98.300 |
| Model_T_h2_m1 | SGD followed by GDT | 900-256-77-25 | 97.770 |
*SGD = stochastic gradient descent
**GDT = gradient descent tunneling
|
wasabuko/blockassist-bc-noisy_zealous_macaw_1755131893
|
wasabuko
| 2025-08-14T01:30:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"noisy zealous macaw",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T01:27:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- noisy zealous macaw
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
NORI7/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-crested_sniffing_cockroach
|
NORI7
| 2025-08-14T01:22:20Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am crested_sniffing_cockroach",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-26T13:28:16Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am crested_sniffing_cockroach
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Tarun-ak/TurkishLawLM-Q2_K-GGUF
|
Tarun-ak
| 2025-08-14T01:15:09Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:IcosaComputingHF/TurkishLawLM",
"base_model:quantized:IcosaComputingHF/TurkishLawLM",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T01:14:16Z |
---
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model: IcosaComputingHF/TurkishLawLM
---
# Tarun-ak/TurkishLawLM-Q2_K-GGUF
This model was converted to GGUF format from [`IcosaComputingHF/TurkishLawLM`](https://huggingface.co/IcosaComputingHF/TurkishLawLM) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/IcosaComputingHF/TurkishLawLM) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Tarun-ak/TurkishLawLM-Q2_K-GGUF --hf-file turkishlawlm-q2_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Tarun-ak/TurkishLawLM-Q2_K-GGUF --hf-file turkishlawlm-q2_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Tarun-ak/TurkishLawLM-Q2_K-GGUF --hf-file turkishlawlm-q2_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Tarun-ak/TurkishLawLM-Q2_K-GGUF --hf-file turkishlawlm-q2_k.gguf -c 2048
```
|
FlagRelease/MiniCPM-V-4-FlagOS
|
FlagRelease
| 2025-08-14T01:14:22Z | 0 | 1 | null |
[
"safetensors",
"minicpmv",
"custom_code",
"region:us"
] | null | 2025-08-13T06:23:41Z |
# Introduction
**FlagOS** is a unified heterogeneous computing software stack for large models, co-developed with leading global chip manufacturers. With core technologies such as the **FlagScale** distributed training/inference framework, **FlagGems** universal operator library, **FlagCX** communication library, and **FlagTree** unified compiler, the **FlagRelease** platform leverages the FlagOS stack to automatically produce and release various combinations of <chip + open-source model>. This enables efficient and automated model migration across diverse chips, opening a new chapter for large model deployment and application.
Based on this, the **MiniCPM-V-4-FlagOS** model is adapted for Nvidia chips using the FlagOS software stack, enabling:
### Integrated Deployment
- Deep integration with the open-source [FlagScale framework](https://github.com/FlagOpen/FlagScale)
- Out-of-the-box inference scripts with pre-configured hardware and software parameters
- Released **FlagOS** container image supporting deployment within minutes
### Consistency Validation
- Rigorously evaluated through benchmark testing: performance and results from the FlagOS software stack are compared against native stacks on multiple public benchmarks.
# Technical Overview
## **FlagScale Distributed Training and Inference Framework**
FlagScale is an end-to-end framework for large models across heterogeneous computing resources, maximizing computational efficiency and ensuring model validity through core technologies. Its key advantages include:
- **Unified Deployment Interface:** Standardized command-line tools support one-click service deployment across multiple hardware platforms, significantly reducing adaptation costs in heterogeneous environments.
- **Intelligent Parallel Optimization:** Automatically generates optimal distributed parallel strategies based on chip computing characteristics, achieving dynamic load balancing of computation/communication resources.
- **Seamless Operator Switching:** Deep integration with the FlagGems operator library allows high-performance operators to be invoked via environment variables without modifying model code.
## **FlagGems Universal Large-Model Operator Library**
FlagGems is a Triton-based, cross-architecture operator library collaboratively developed with industry partners. Its core strengths include:
- **Full-stack Coverage**: Over 100 operators, with a broader range of operator types than competing libraries.
- **Ecosystem Compatibility**: Supports 7 accelerator backends. Ongoing optimizations have significantly improved performance.
- **High Efficiency**: Employs unique code generation and runtime optimization techniques for faster secondary development and better runtime performance compared to alternatives.
## **FlagEval Evaluation Framework**
**FlagEval (Libra)** is a comprehensive evaluation system and open platform for large models launched in 2023. It aims to establish scientific, fair, and open benchmarks, methodologies, and tools to help researchers assess model and training algorithm performance. It features:
- **Multi-dimensional Evaluation**: Supports 800+ model evaluations across NLP, CV, Audio, and Multimodal fields, covering 20+ downstream tasks including language understanding and image-text generation.
- **Industry-Grade Use Cases**: Has completed horizontal evaluations of mainstream large models, providing authoritative benchmarks for chip-model performance validation.
# Evaluation Results
## Benchmark Result
| Metrics | MiniCPM-V-4-H100-CUDA | MiniCPM-V-4-FlagOS |
| ------------------------- | --------------------- | ------------------ |
| blink_val | 51.13 | 51.45 |
| cii_bench_test | 41.83 | 43.27 |
| cmmmu_val | 37.56 | 35.0 |
| math_vision_test | 21.02 | 21.09 |
| mmmu_pro_standard_test | 25.16 | 25.8 |
| mmmu_pro_vision_test | 20.4 | 20.23 |
| mmmu_val | 40.22 | 42.0 |
| mmvet_v2 | 47.7451 | 50.1373 |
| ocrbench_test | 82.6 | 81.5 |
# User Guide
**Environment Setup**
| Item | Version |
| ------------- | ------------------------------------------------------------ |
| Docker Version | Docker version 27.5.1, build 27.5.1-0ubuntu3~22.04.2 |
| Operating System | Ubuntu 22.04.3 LTS |
| FlagScale | Version: 0.8.0 |
| FlagGems | Version: 3.0 |
## Operation Steps
### Download Open-source Model Weights
```bash
pip install modelscope
modelscope download --model OpenBMB/MiniCPM-V-4 --local_dir /share/models/MiniCPM-V-4
```
### Download FlagOS Image
```bash
docker pull harbor.baai.ac.cn/flagrelease-public/flagrelease_nvidia_minicpmv4
```
### Start the inference service
```bash
# Container Startup
docker run --rm --init --detach --net=host --uts=host --ipc=host --security-opt=seccomp=unconfined --privileged=true --ulimit stack=67108864 --ulimit memlock=-1 --ulimit nofile=1048576:1048576 --shm-size=32G -v /share:/share --gpus all --name flagos harbor.baai.ac.cn/flagrelease-public/flagrelease_nvidia_minicpmv4 sleep infinity
```
### Serve
```bash
flagscale serve minicpm_v_4
```
## Service Invocation
### API-based Invocation Script
```python
import openai

openai.api_key = "EMPTY"
openai.base_url = "http://<server_ip>:9010/v1/"

model = "MiniCPM-V-4-nvidia-flagos"
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the weather like today?"}
]

response = openai.chat.completions.create(
    model=model,
    messages=messages,
    stream=False,
)
# With stream=False the full completion is returned at once.
print(response.choices[0].message.content)
```
### AnythingLLM Integration Guide
#### 1. Download & Install
- Visit the official site: https://anythingllm.com/
- Choose the appropriate version for your OS (Windows/macOS/Linux)
- Follow the installation wizard to complete the setup
#### 2. Configuration
- Launch AnythingLLM
- Open settings (bottom left, fourth tab)
- Configure core LLM parameters
- Click "Save Settings" to apply changes
#### 3. Model Interaction
- After model loading is complete:
- Click **"New Conversation"**
- Enter your question (e.g., “Explain the basics of quantum computing”)
- Click the send button to get a response
# Contributing
We warmly welcome global developers to join us:
1. Submit Issues to report problems
2. Create Pull Requests to contribute code
3. Improve technical documentation
4. Expand hardware adaptation support
# License
The weights of this model are derived from OpenBMB/MiniCPM-V-4 and are released under the Apache 2.0 license (https://www.apache.org/licenses/LICENSE-2.0.txt).
|
tandeshao/Llama-3.1-8B-ascii-cat-gguf
|
tandeshao
| 2025-08-14T01:12:44Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B",
"base_model:quantized:unsloth/Meta-Llama-3.1-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T01:11:13Z |
---
base_model: unsloth/Meta-Llama-3.1-8B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** tandeshao
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Meta-Llama-3.1-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
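For local inference, the GGUF weights can be loaded with `llama-cpp-python`. This is a minimal sketch: the quantization filename pattern below is an assumption, so check the repository's file list for the actual name.
```python
from llama_cpp import Llama

# The filename pattern is hypothetical; pick the actual GGUF file from this repo
llm = Llama.from_pretrained(
    repo_id="tandeshao/Llama-3.1-8B-ascii-cat-gguf",
    filename="*Q4_K_M.gguf",  # assumption: a Q4_K_M quant is available
    n_ctx=4096,
)
out = llm("Draw an ASCII cat:", max_tokens=128)
print(out["choices"][0]["text"])
```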
|
Nofing/qwen3-4B-Thinking-2507-sft-full
|
Nofing
| 2025-08-14T01:12:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T01:06:27Z |
---
base_model: unsloth/qwen3-4b-thinking-2507-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Nofing
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-4b-thinking-2507-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
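A minimal loading sketch with `transformers`, assuming the tokenizer ships a chat template that handles the Qwen3 thinking format:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nofing/qwen3-4B-Thinking-2507-sft-full"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Briefly explain gradient descent."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```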
|
Coaster41/patchtst-sae-32-3
|
Coaster41
| 2025-08-14T01:01:37Z | 0 | 0 |
saelens
|
[
"saelens",
"region:us"
] | null | 2025-08-14T01:01:33Z |
---
library_name: saelens
---
# SAEs for use with the SAELens library
This repository contains the following SAEs:
- blocks.0.hook_mlp_out
Load these SAEs using SAELens as below:
```python
from sae_lens import SAE
sae = SAE.from_pretrained("Coaster41/patchtst-sae-32-3", "<sae_id>")
```
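Once loaded, the SAE can encode activations into sparse features and reconstruct them. A minimal sketch, assuming a SAELens version where `from_pretrained` returns the SAE directly (as in the snippet above) and using a random tensor as a stand-in for real model activations:
```python
import torch
from sae_lens import SAE

sae = SAE.from_pretrained("Coaster41/patchtst-sae-32-3", "blocks.0.hook_mlp_out")

# Random stand-in for real activations, shaped (batch, d_in)
acts = torch.randn(4, sae.cfg.d_in)
feature_acts = sae.encode(acts)   # sparse feature activations
recon = sae.decode(feature_acts)  # reconstructed activations
print(feature_acts.shape, recon.shape)
```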
|
lemonhat/Qwen2.5-Coder-1.5B-Instruct-agenttuning_v4_15k_tag4
|
lemonhat
| 2025-08-14T01:00:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-Coder-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-1.5B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T00:59:38Z |
---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-Coder-1.5B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: agenttuning_v4_15k_tag4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# agenttuning_v4_15k_tag4
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct) on the agenttuning_v4_15k_tag4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4795
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the equivalent optimizer setup follows the list):
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 4
- total_eval_batch_size: 4
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- num_epochs: 1
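For reference, a hedged PyTorch sketch of the optimizer and schedule these hyperparameters describe; the parameter list is a stand-in, and `num_training_steps=254` is an estimate inferred from the results table (one epoch at total batch size 4), not taken from the training logs:
```python
import torch
from transformers import get_cosine_schedule_with_warmup

# Stand-in parameters; in training these come from the fine-tuned model
params = [torch.nn.Parameter(torch.zeros(1))]

optimizer = torch.optim.AdamW(params, lr=5e-6, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=254
)
```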
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.447 | 0.3937 | 100 | 0.5049 |
| 0.4366 | 0.7874 | 200 | 0.4815 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.8.0+cu128
- Datasets 3.1.0
- Tokenizers 0.20.3
|