modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
w3en2g/self_ask-Qwen2.5-3B-Instruct | w3en2g | 2025-09-21T14:20:01Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-3B-Instruct", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-21T13:51:09Z |
---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-3B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: Qwen2.5-3B-Instruct
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen2.5-3B-Instruct
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the self_ask_train_data dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8770
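Usage is not documented in this card; a minimal inference sketch with the 🤗 `pipeline` API (the prompt and generation settings are illustrative assumptions, not from the card):
```python
from transformers import pipeline

# Sketch only: usage is undocumented here; settings are illustrative.
generator = pipeline("text-generation", model="w3en2g/self_ask-Qwen2.5-3B-Instruct")
messages = [{"role": "user", "content": "Who lived longer, Muhammad Ali or Alan Turing?"}]
print(generator(messages, max_new_tokens=256, return_full_text=False)[0]["generated_text"])
```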
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0064 | 0.0889 | 100 | 0.9965 |
| 0.9226 | 0.1778 | 200 | 0.9364 |
| 0.9237 | 0.2667 | 300 | 0.9182 |
| 0.9025 | 0.3556 | 400 | 0.9070 |
| 0.9169 | 0.4444 | 500 | 0.8998 |
| 0.8682 | 0.5333 | 600 | 0.8932 |
| 0.8827 | 0.6222 | 700 | 0.8889 |
| 0.9096 | 0.7111 | 800 | 0.8853 |
| 0.9054 | 0.8 | 900 | 0.8811 |
| 0.86 | 0.8889 | 1000 | 0.8786 |
| 0.9017 | 0.9778 | 1100 | 0.8765 |
| 0.8243 | 1.0667 | 1200 | 0.8799 |
| 0.8015 | 1.1556 | 1300 | 0.8798 |
| 0.7866 | 1.2444 | 1400 | 0.8785 |
| 0.8163 | 1.3333 | 1500 | 0.8744 |
| 0.8066 | 1.4222 | 1600 | 0.8725 |
| 0.8194 | 1.5111 | 1700 | 0.8727 |
| 0.8274 | 1.6 | 1800 | 0.8706 |
| 0.7773 | 1.6889 | 1900 | 0.8697 |
| 0.7985 | 1.7778 | 2000 | 0.8678 |
| 0.7761 | 1.8667 | 2100 | 0.8662 |
| 0.8017 | 1.9556 | 2200 | 0.8655 |
| 0.7595 | 2.0444 | 2300 | 0.8771 |
| 0.7305 | 2.1333 | 2400 | 0.8783 |
| 0.7071 | 2.2222 | 2500 | 0.8790 |
| 0.7342 | 2.3111 | 2600 | 0.8780 |
| 0.7255 | 2.4 | 2700 | 0.8774 |
| 0.7483 | 2.4889 | 2800 | 0.8779 |
| 0.7285 | 2.5778 | 2900 | 0.8776 |
| 0.7462 | 2.6667 | 3000 | 0.8768 |
| 0.7338 | 2.7556 | 3100 | 0.8768 |
| 0.7248 | 2.8444 | 3200 | 0.8769 |
| 0.7053 | 2.9333 | 3300 | 0.8770 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 2.21.0
- Tokenizers 0.20.3
|
luckeciano/Llama-3.1-8B-Instruct-GRPO-Base-v2_4461 | luckeciano | 2025-09-21T14:14:57Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:DigitalLearningGmbH/MATH-lighteval", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-21T10:13:50Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Llama-3.1-8B-Instruct-GRPO-Base-v2_4461
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Llama-3.1-8B-Instruct-GRPO-Base-v2_4461
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Llama-3.1-8B-Instruct-GRPO-Base-v2_4461", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/su9dg15c)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
asrar7787/mac_mlx_completion_mistral_7b_instruct_v03_sports_lora_layers_16 | asrar7787 | 2025-09-21T13:48:34Z | 140 | 1 | mlx | ["mlx", "safetensors", "mistral", "text-generation", "conversational", "base_model:mlx-community/Mistral-7B-Instruct-v0.3", "base_model:finetune:mlx-community/Mistral-7B-Instruct-v0.3", "license:apache-2.0", "region:us"] | text-generation | 2025-09-16T01:52:09Z |
---
license: apache-2.0
tags:
- mlx
extra_gated_description: If you want to learn more about how we process your personal
data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
base_model: mlx-community/Mistral-7B-Instruct-v0.3
library_name: mlx
pipeline_tag: text-generation
---
# asrar7787/mac_mlx_completion_mistral_7b_instruct_v03_sports_lora_layers_16
This model [asrar7787/mac_mlx_completion_mistral_7b_instruct_v03_sports_lora_layers_16](https://huggingface.co/asrar7787/mac_mlx_completion_mistral_7b_instruct_v03_sports_lora_layers_16) was
converted to MLX format from [mlx-community/Mistral-7B-Instruct-v0.3](https://huggingface.co/mlx-community/Mistral-7B-Instruct-v0.3)
using mlx-lm version **0.27.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("asrar7787/mac_mlx_completion_mistral_7b_instruct_v03_sports_lora_layers_16")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
dsfsi/w2v-bert-2.0-lwazi-gpu | dsfsi | 2025-09-21T13:42:50Z | 1 | 0 | transformers | ["transformers", "safetensors", "wav2vec2-bert", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/w2v-bert-2.0", "base_model:finetune:facebook/w2v-bert-2.0", "license:mit", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2025-09-19T15:12:22Z |
---
library_name: transformers
license: mit
base_model: facebook/w2v-bert-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: w2v-bert-2.0-lwazi-gpu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v-bert-2.0-lwazi-gpu
This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the [multilingual Lwazi](https://huggingface.co/datasets/dsfsi/multilingual-lwazi-dataset) dataset.
It achieves the following results on the evaluation set:
- Loss: 66.8561
- Wer: 0.4472
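Usage is not documented; a minimal transcription sketch with the 🤗 `pipeline` API (the file name is an illustrative assumption; the base model expects 16 kHz mono audio):
```python
from transformers import pipeline

# Sketch only: the card documents no usage. Input should be 16 kHz mono audio.
asr = pipeline("automatic-speech-recognition", model="dsfsi/w2v-bert-2.0-lwazi-gpu")
print(asr("sample.wav")["text"])
```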
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 274.7911 | 0.0848 | 200 | 190.5785 | 0.9105 |
| 125.3483 | 0.1697 | 400 | 122.7743 | 0.7101 |
| 107.2834 | 0.2545 | 600 | 100.6818 | 0.6027 |
| 91.1629 | 0.3393 | 800 | 88.3333 | 0.5560 |
| 90.386 | 0.4242 | 1000 | 81.7339 | 0.5223 |
| 79.5318 | 0.5090 | 1200 | 77.8670 | 0.5000 |
| 76.4624 | 0.5938 | 1400 | 73.4570 | 0.4830 |
| 77.446 | 0.6787 | 1600 | 69.9153 | 0.4550 |
| 73.0826 | 0.7635 | 1800 | 68.6940 | 0.4572 |
| 67.0382 | 0.8484 | 2000 | 66.8561 | 0.4472 |
### Framework versions
- Transformers 4.52.0
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.4
|
bakrybakry/blockassist | bakrybakry | 2025-09-21T13:33:08Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump gregarious baboon", "arxiv:2504.07091", "region:us"] | null | 2025-09-10T10:39:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump gregarious baboon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
EkeminiThompson/aviation-llama-mvp | EkeminiThompson | 2025-09-21T13:16:36Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-09-21T10:33:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/NS-12b-DarkSlushCap-GGUF | mradermacher | 2025-09-21T12:50:31Z | 291 | 0 | transformers | ["transformers", "gguf", "mergekit", "merge", "en", "base_model:pot99rta/NS-12b-DarkSlushCap", "base_model:quantized:pot99rta/NS-12b-DarkSlushCap", "endpoints_compatible", "region:us", "conversational"] | null | 2025-09-01T13:41:49Z |
---
base_model: pot99rta/NS-12b-DarkSlushCap
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/pot99rta/NS-12b-DarkSlushCap
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#NS-12b-DarkSlushCap-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/NS-12b-DarkSlushCap-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
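For example, a minimal sketch with `llama-cpp-python` (the quant filename is taken from the "Provided Quants" table that follows; the generation settings are illustrative):
```python
from llama_cpp import Llama

# Sketch: download one quant from this repo and run a short completion.
llm = Llama.from_pretrained(
    repo_id="mradermacher/NS-12b-DarkSlushCap-GGUF",
    filename="NS-12b-DarkSlushCap.Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm("Write one sentence about rivers.", max_tokens=64)
print(out["choices"][0]["text"])
```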
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NS-12b-DarkSlushCap-GGUF/resolve/main/NS-12b-DarkSlushCap.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/NS-12b-DarkSlushCap-GGUF/resolve/main/NS-12b-DarkSlushCap.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/NS-12b-DarkSlushCap-GGUF/resolve/main/NS-12b-DarkSlushCap.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NS-12b-DarkSlushCap-GGUF/resolve/main/NS-12b-DarkSlushCap.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/NS-12b-DarkSlushCap-GGUF/resolve/main/NS-12b-DarkSlushCap.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/NS-12b-DarkSlushCap-GGUF/resolve/main/NS-12b-DarkSlushCap.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NS-12b-DarkSlushCap-GGUF/resolve/main/NS-12b-DarkSlushCap.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NS-12b-DarkSlushCap-GGUF/resolve/main/NS-12b-DarkSlushCap.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/NS-12b-DarkSlushCap-GGUF/resolve/main/NS-12b-DarkSlushCap.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/NS-12b-DarkSlushCap-GGUF/resolve/main/NS-12b-DarkSlushCap.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NS-12b-DarkSlushCap-GGUF/resolve/main/NS-12b-DarkSlushCap.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1758453550 | kapalbalap | 2025-09-21T11:20:07Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peaceful wary owl", "arxiv:2504.07091", "region:us"] | null | 2025-09-21T11:20:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
huseyincavus/gemma-3-270m-anti-sycophancy-final-merged | huseyincavus | 2025-09-21T10:54:37Z | 0 | 0 | transformers | ["transformers", "safetensors", "gemma3_text", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-21T10:54:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
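In the absence of documented usage, a hedged sketch based only on the repo tags (`gemma3_text`, `text-generation`, `conversational`); the prompt and settings are illustrative:
```python
from transformers import pipeline

# Sketch only: the card documents no usage; tags suggest chat-style generation.
generator = pipeline("text-generation", model="huseyincavus/gemma-3-270m-anti-sycophancy-final-merged")
messages = [{"role": "user", "content": "Is my business plan flawless?"}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```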
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
haihp02/d500cff0-99e3-4c42-89c0-180a3f829034 | haihp02 | 2025-09-21T09:47:55Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-09-21T08:13:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758446793 | schooncestiaa | 2025-09-21T09:27:39Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scruffy webbed dragonfly", "arxiv:2504.07091", "region:us"] | null | 2025-09-21T09:27:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
TAI-Research/GTC-2-Large-nano-Base-Model | TAI-Research | 2025-09-21T09:07:27Z | 0 | 0 | null | ["dataset:BEE-spoke-data/fineweb-1M_en-med", "license:mit", "region:us"] | null | 2025-09-21T09:04:35Z |
---
license: mit
datasets:
- BEE-spoke-data/fineweb-1M_en-med
---
|
AlaaWO/ppo-LunarLander-v2 | AlaaWO | 2025-09-21T08:01:00Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2025-09-21T08:00:29Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 269.64 +/- 15.14
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; verify it in the repo's Files & versions tab):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed; check the repo's Files & versions tab for the exact name.
checkpoint = load_from_hub("AlaaWO/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
nharshavardhana/impasto_painting_kontext_new_version-lora | nharshavardhana | 2025-09-21T07:30:36Z | 0 | 0 | diffusers | ["diffusers", "image-to-image", "flux", "lora", "template:sd-lora", "ai-toolkit", "base_model:black-forest-labs/FLUX.1-Kontext-dev", "base_model:adapter:black-forest-labs/FLUX.1-Kontext-dev", "license:creativeml-openrail-m", "region:us"] | image-to-image | 2025-09-21T07:30:11Z |
---
tags:
- image-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
base_model: black-forest-labs/FLUX.1-Kontext-dev
license: creativeml-openrail-m
inference:
parameters:
width: 1024
height: 1024
---
# impasto_painting_kontext_new_version-lora
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
## Trigger words
No trigger words defined.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/nharshavardhana/impasto_painting_kontext_new_version-lora/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-Kontext-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('nharshavardhana/impasto_painting_kontext_new_version-lora', weight_name='impasto_painting_kontext_new_version_000003000.safetensors')
image = pipeline('a beautiful landscape').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Zelyanoth/wav2vec2-bert-swahili-noise | Zelyanoth | 2025-09-21T07:28:00Z | 10 | 0 | transformers | ["transformers", "safetensors", "wav2vec2-bert", "automatic-speech-recognition", "generated_from_trainer", "dataset:generator", "base_model:facebook/w2v-bert-2.0", "base_model:finetune:facebook/w2v-bert-2.0", "license:mit", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2025-09-20T17:04:41Z |
---
library_name: transformers
license: mit
base_model: facebook/w2v-bert-2.0
tags:
- generated_from_trainer
datasets:
- generator
metrics:
- wer
model-index:
- name: wav2vec2-bert-swahili-noise
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: generator
type: generator
config: default
split: train
args: default
metrics:
- name: Wer
type: wer
value: 0.2959163543105149
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-bert-swahili-noise
This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5771
- Wer: 0.2959
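For reference, the WER metric above can be computed with the `evaluate` library; the strings below are illustrative, not taken from the evaluation set:
```python
import evaluate

# Illustrative only: shows how the reported WER metric is computed.
wer = evaluate.load("wer")
score = wer.compute(predictions=["habari ya asubuhi"], references=["habari za asubuhi"])
print(score)  # word-level edit distance divided by reference word count
```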
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 38
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 900
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 3.142 | 0.0817 | 100 | 3.0798 | 0.9999 |
| 1.9368 | 0.1634 | 200 | 1.4137 | 0.9886 |
| 0.9455 | 0.2451 | 300 | 0.7745 | 0.4115 |
| 0.8236 | 0.3268 | 400 | 0.6644 | 0.3485 |
| 0.7723 | 0.4085 | 500 | 0.6475 | 0.3313 |
| 0.7603 | 0.4902 | 600 | 0.6082 | 0.3097 |
| 0.6848 | 0.5719 | 700 | 0.5972 | 0.3072 |
| 0.683 | 0.6536 | 800 | 0.5762 | 0.2986 |
| 0.6967 | 0.7353 | 900 | 0.5771 | 0.2959 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.2
|
haihp02/db918e9d-cf19-4607-87a2-9929057fe531 | haihp02 | 2025-09-21T07:13:55Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-09-21T06:19:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sharon-kurant/egfr_full_augmented | sharon-kurant | 2025-09-21T06:33:22Z | 4 | 0 | null | ["safetensors", "model_hub_mixin", "region:us"] | null | 2025-09-17T20:19:18Z |
---
tags:
- model_hub_mixin
---
This model has been pushed to the Hub using the [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
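The mixin adds `from_pretrained`/`push_to_hub` to a regular `nn.Module`. Since this card does not document the architecture, the class in this sketch is purely hypothetical:
```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Hypothetical class: the real architecture and config are not documented here.
class MyModel(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_dim: int = 128):
        super().__init__()
        self.head = nn.Linear(hidden_dim, 1)

# The mixin fetches the saved config and weights, then re-instantiates the class.
model = MyModel.from_pretrained("sharon-kurant/egfr_full_augmented")
```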
|
Anwaarma/edos_taskA_llama_allyears_lora2 | Anwaarma | 2025-09-21T06:25:43Z | 21 | 0 | peft | ["peft", "safetensors", "base_model:adapter:meta-llama/Llama-3.2-1B", "lora", "transformers", "base_model:meta-llama/Llama-3.2-1B", "license:llama3.2", "region:us"] | null | 2025-09-15T05:08:12Z |
---
library_name: peft
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- base_model:adapter:meta-llama/Llama-3.2-1B
- lora
- transformers
metrics:
- accuracy
model-index:
- name: edos_taskA_llama_allyears_lora2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# edos_taskA_llama_allyears_lora2
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2633
- Accuracy: 0.9273
- F1 Macro: 0.9002
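Loading the adapter is undocumented here; a hedged sketch with PEFT (the sequence-classification setup and `num_labels=2` are assumptions inferred from the accuracy/F1 metrics):
```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumption: a binary classification head, inferred from the reported metrics.
base = AutoModelForSequenceClassification.from_pretrained("meta-llama/Llama-3.2-1B", num_labels=2)
model = PeftModel.from_pretrained(base, "Anwaarma/edos_taskA_llama_allyears_lora2")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")
```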
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: AdamW (torch, fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 1.2
- label_smoothing_factor: 0.05
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro |
|:-------------:|:------:|:----:|:---------------:|:--------:|:--------:|
| 0.9769 | 0.1068 | 100 | 0.4569 | 0.8135 | 0.7665 |
| 0.8868 | 0.2137 | 200 | 0.3733 | 0.8525 | 0.7959 |
| 0.8317 | 0.3205 | 300 | 0.3601 | 0.873 | 0.8254 |
| 0.7603 | 0.4274 | 400 | 0.3358 | 0.8845 | 0.8321 |
| 0.8641 | 0.5342 | 500 | 0.3237 | 0.8925 | 0.8481 |
| 0.6726 | 0.6410 | 600 | 0.2946 | 0.908 | 0.8730 |
| 0.6893 | 0.7479 | 700 | 0.2917 | 0.908 | 0.8621 |
| 0.6251 | 0.8547 | 800 | 0.2781 | 0.916 | 0.8855 |
| 0.636 | 0.9615 | 900 | 0.2657 | 0.9275 | 0.8979 |
| 0.499 | 1.0684 | 1000 | 0.2631 | 0.927 | 0.8991 |
| 0.4897 | 1.1752 | 1100 | 0.2596 | 0.928 | 0.9000 |
### Framework versions
- PEFT 0.17.1
- Transformers 4.56.2
- Pytorch 2.8.0+cu126
- Datasets 4.1.1
- Tokenizers 0.22.0
|
zooai/coder-1 | zooai | 2025-09-21T06:23:28Z | 0 | 0 | transformers | ["transformers", "gguf", "zoo", "coder", "coding", "a3b", "enterprise", "30b", "text-generation", "license:apache-2.0", "endpoints_compatible", "region:us"] | text-generation | 2025-09-12T23:13:11Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/zooai/coder-1/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- zoo
- coder
- coding
- a3b
- enterprise
- gguf
- 30b
---
# Zoo Coder-1 (30B-A3B Coding Model)
<a href="https://zoo.ngo/" target="_blank" style="margin: 2px;">
<img alt="Zoo AI" src="https://img.shields.io/badge/💻%20Zoo%20Coder--1%20-EF4444" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://zoo.ngo/" target="_blank" style="margin: 2px;">
<img alt="501(c)(3)" src="https://img.shields.io/badge/501(c)(3)-Nonprofit-blue" style="display: inline-block; vertical-align: middle;"/>
</a>
## Overview
**Zoo Coder-1** is an enterprise-grade AI model specifically optimized for software development tasks. Built on the revolutionary Qwen3-Coder architecture with A3B technology (roughly 3B parameters activated per token), this model delivers 30B-level coding capabilities while maintaining exceptional efficiency through advanced quantization techniques.
## Key Features
### Architecture Innovations
- **A3B Technology**: Achieves 30B-parameter capability while activating only a small subset of parameters per token, dramatically reducing per-request compute and memory
- **480B Distillation**: Knowledge distilled from a massive 480B parameter teacher model
- **GGUF Quantization**: Multiple quantization options for optimal performance/size tradeoff
- **Production Optimized**: Designed for real-world deployment at scale
### Performance Highlights
- **30B-level coding ability** in a fraction of the size
- **Supports all major programming languages** with emphasis on modern frameworks
- **Advanced code understanding** including complex architectural patterns
- **Intelligent code completion** with context-aware suggestions
- **Bug detection and fixing** with detailed explanations
- **Code refactoring** with best practices enforcement
## Technical Specifications
- **Base Model**: Qwen3-Coder-30B-A3B-Instruct
- **Distillation**: 480B parameter teacher model
- **Format**: GGUF quantized models
- **Context Length**: 32,768 tokens native, extensible to 128K
- **Quantization Options**:
- Q2_K, Q3_K_S/M/L (Ultra-compact, 2-3GB)
- Q4_K_S/M (Balanced, 3-4GB)
- Q5_K_S/M (High quality, 4-5GB)
- Q6_K (Maximum quality, 5-6GB)
- IQ variants for specialized deployments
## Usage
### Quick Start with Ollama/Zoo Node
```bash
# Using Zoo Desktop
zoo model download coder-1
# Using Ollama/Zoo Node API
ollama pull zoo/coder-1
```
### Python Integration
```python
from zoo import CoderModel
# Load the model
model = CoderModel.load("zooai/coder-1")
# Code completion
code = model.complete("""
def fibonacci(n):
# Generate the nth Fibonacci number
""")
# Code review
review = model.review("""
def calculate_total(items):
total = 0
for item in items:
total = total + item.price * item.quantity
return total
""")
# Bug fixing
fixed_code = model.fix("""
def binary_search(arr, target):
left, right = 0, len(arr)
while left < right:
mid = (left + right) / 2
if arr[mid] == target:
return mid
elif arr[mid] < target:
left = mid
else:
right = mid
return -1
""")
```
### API Usage
```bash
curl http://localhost:2000/v1/completions \
-H "Content-Type: application/json" \
-d '{
"model": "zoo/coder-1",
"prompt": "Write a Python function to merge two sorted arrays",
"max_tokens": 500,
"temperature": 0.7
}'
```
## Supported Languages
Zoo Coder-1 excels at:
- **Python**, **JavaScript/TypeScript**, **Java**, **C++**, **Go**
- **Rust**, **Swift**, **Kotlin**, **C#**, **Ruby**
- **SQL**, **Shell**, **HTML/CSS**, **React**, **Vue**
- And 50+ other programming languages
## Model Variants
Choose the quantization that best fits your needs:
| Variant | Size | Use Case |
|---------|------|----------|
| Q2_K | ~2GB | Edge devices, quick prototyping |
| Q3_K_M | ~2.5GB | Mobile apps, lightweight servers |
| Q4_K_M | ~3.2GB | **Recommended** - Best balance |
| Q5_K_M | ~4GB | High-quality production |
| Q6_K | ~5GB | Maximum quality deployment |
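To fetch a single quant programmatically, something like the following should work (the exact GGUF filename is an assumption; check the repo's file list):
```python
from huggingface_hub import hf_hub_download

# Filename is assumed; verify it against the repository's file listing.
path = hf_hub_download(repo_id="zooai/coder-1", filename="coder-1.Q4_K_M.gguf")
print(path)
```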
## Benchmarks
Zoo Coder-1 achieves impressive results across coding benchmarks:
- **HumanEval**: 89.2%
- **MBPP**: 78.5%
- **CodeContests**: 42.3%
- **Apps**: 67.8%
## Best Practices
1. **Temperature Settings**
- Code generation: 0.2-0.4
- Creative tasks: 0.6-0.8
- Debugging: 0.1-0.3
2. **Context Management**
- Include relevant imports and dependencies
- Provide clear function signatures
- Use descriptive variable names in prompts
3. **Production Deployment**
- Use Q4_K_M for optimal balance
- Enable caching for repeated queries
- Implement rate limiting for API endpoints
## License
This model is released under the Apache 2.0 License with additional Zoo AI usage terms. See LICENSE file for details.
## Citation
```bibtex
@misc{zoo2024coder,
title={Zoo Coder-1: Enterprise-grade Coding AI Model},
author={Zoo AI Team},
year={2024},
publisher={Zoo AI},
url={https://huggingface.co/zooai/coder-1}
}
```
## About Zoo AI
Zoo Labs Foundation Inc, a 501(c)(3) nonprofit organization, is pioneering the next generation of AI infrastructure, focusing on efficiency, accessibility, and real-world performance. Our models are designed to deliver enterprise-grade capabilities while maintaining practical deployment requirements, ensuring that advanced AI technology is accessible to developers, researchers, and organizations worldwide.
- **Website**: [zoo.ngo](https://zoo.ngo)
- **HuggingFace**: [huggingface.co/zooai](https://huggingface.co/zooai)
- **Spaces**: [huggingface.co/spaces/zooai](https://huggingface.co/spaces/zooai)
## Support
- Documentation: [docs.zoo.ngo](https://docs.zoo.ngo)
- GitHub: [github.com/zooai](https://github.com/zooai)
- Discord: [discord.gg/zooai](https://discord.gg/zooai)
- Email: [email protected]
|
haihp02/765107ec-55f9-4f35-8369-85d9e3567e0d | haihp02 | 2025-09-21T06:06:42Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-09-21T04:24:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ehartford/VibeVoice-Large | ehartford | 2025-09-21T05:59:01Z | 0 | 0 | null | ["safetensors", "vibevoice", "Podcast", "text-to-speech", "en", "zh", "arxiv:2508.19205", "arxiv:2412.08635", "license:mit", "region:us"] | text-to-speech | 2025-09-21T05:59:01Z |
---
license: mit
language:
- en
- zh
pipeline_tag: text-to-speech
tags:
- Podcast
---
## VibeVoice: A Frontier Open-Source Text-to-Speech Model
> This repository contains a copy of model weights obtained from ModelScope ([microsoft/VibeVoice-Large](https://www.modelscope.cn/models/microsoft/VibeVoice-Large)).
> The license for this model is the `MIT License`, **which permits redistribution**.
>
> My understanding of the MIT License, which is consistent with the broader open-source community's consensus,
> is that it grants the right to distribute copies of the software and its derivatives.
> Therefore, I am lawfully exercising the right to redistribute this model.
>
> If you are a rights holder and believe this understanding of the license is incorrect, please submit a DMCA complaint to Hugging Face at [email protected]
VibeVoice is a novel framework designed for generating expressive, long-form, multi-speaker conversational audio, such as podcasts, from text. It addresses significant challenges in traditional Text-to-Speech (TTS) systems, particularly in scalability, speaker consistency, and natural turn-taking.
A core innovation of VibeVoice is its use of continuous speech tokenizers (Acoustic and Semantic) operating at an ultra-low frame rate of 7.5 Hz. These tokenizers efficiently preserve audio fidelity while significantly boosting computational efficiency for processing long sequences. VibeVoice employs a next-token diffusion framework, leveraging a Large Language Model (LLM) to understand textual context and dialogue flow, and a diffusion head to generate high-fidelity acoustic details.
The model can synthesize speech up to **90 minutes** long with up to **4 distinct speakers**, surpassing the typical 1-2 speaker limits of many prior models.
➡️ **Technical Report:** [VibeVoice Technical Report](https://arxiv.org/abs/2508.19205)
➡️ **Project Page:** [microsoft/VibeVoice](https://microsoft.github.io/VibeVoice)
➡️ **Code:** [microsoft/VibeVoice-Code](https://github.com/microsoft/VibeVoice)
<p align="left">
<img src="figures/Fig1.png" alt="VibeVoice Overview" height="250px">
</p>
## Training Details
VibeVoice combines a Transformer-based Large Language Model (LLM) with specialized acoustic and semantic tokenizers and a diffusion-based decoding head.
- LLM: Qwen2.5 for this release.
- Tokenizers:
  - Acoustic Tokenizer: Based on a σ-VAE variant (proposed in [LatentLM](https://arxiv.org/pdf/2412.08635)), with a mirror-symmetric encoder-decoder structure featuring 7 stages of modified Transformer blocks. Achieves 3200x downsampling from 24kHz input (see the frame-rate check after this list). Encoder/decoder components are ~340M parameters each.
- Semantic Tokenizer: Encoder mirrors the Acoustic Tokenizer's architecture (without VAE components). Trained with an ASR proxy task.
- Diffusion Head: Lightweight module (4 layers, ~600M parameters) conditioned on LLM hidden states. Predicts acoustic VAE features using a Denoising Diffusion Probabilistic Models (DDPM) process. Uses Classifier-Free Guidance (CFG) and DPM-Solver (and variants) during inference.
- Context Length: Trained with a curriculum increasing up to 32,768 tokens.
- Training Stages:
- Tokenizer Pre-training: Acoustic and Semantic tokenizers are pre-trained separately.
  - VibeVoice Training: Pre-trained tokenizers are frozen; only the LLM and diffusion-head parameters are trained. A curriculum learning strategy is used for the input sequence length (4K -> 16K -> 32K). The text tokenizer is not explicitly specified, but the LLM (Qwen2.5) typically uses its own. Audio is "tokenized" via the acoustic and semantic tokenizers.
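As a quick check, the 3200x downsampling of 24 kHz audio reproduces the 7.5 Hz frame rate quoted above (pure arithmetic, not part of the released code):

```python
# Frame-rate sanity check: 3200x downsampling of 24 kHz audio
# yields the 7.5 Hz token rate of the continuous speech tokenizers.
sample_rate_hz = 24_000
downsampling_factor = 3_200
print(sample_rate_hz / downsampling_factor)  # 7.5 tokens per second
```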
## Models
| Model | Context Length | Generation Length | Weight |
|-------|----------------|----------|----------|
| VibeVoice-0.5B-Streaming | - | - | On the way |
| VibeVoice-1.5B | 64K | ~90 min | [HF link](https://huggingface.co/microsoft/VibeVoice-1.5B) |
| VibeVoice-Large | 32K | ~45 min | You are here. |
## Installation and Usage
Please refer to [GitHub README](https://github.com/microsoft/VibeVoice?tab=readme-ov-file#installation)
## Responsible Usage
### Direct intended uses
The VibeVoice model is limited to research use exploring highly realistic audio dialogue generation, as detailed in the [tech report](https://arxiv.org/pdf/2508.19205).
### Out-of-scope uses
Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way prohibited by the MIT License. Use to generate text transcripts. Furthermore, this release is not intended or licensed for any of the following scenarios:
- Voice impersonation without explicit, recorded consent – cloning a real individual’s voice for satire, advertising, ransom, social‑engineering, or authentication bypass.
- Disinformation or impersonation – creating audio presented as genuine recordings of real people or events.
- Real‑time or low‑latency voice conversion – telephone or video‑conference “live deep‑fake” applications.
- Unsupported language – the model is trained only on English and Chinese data; outputs in other languages are unsupported and may be unintelligible or offensive.
- Generation of background ambience, Foley, or music – VibeVoice is speech‑only and will not produce coherent non‑speech audio.
## Risks and limitations
While efforts have been made to optimize the model through various techniques, it may still produce outputs that are unexpected, biased, or inaccurate. VibeVoice inherits any biases, errors, or omissions produced by its base model.
- **Potential for deepfakes and disinformation:** High-quality synthetic speech can be misused to create convincing fake audio content for impersonation, fraud, or spreading disinformation. Users must ensure transcripts are reliable, check content accuracy, and avoid using generated content in misleading ways. Users are expected to use the generated content and to deploy the models in a lawful manner, in full compliance with all applicable laws and regulations in the relevant jurisdictions. It is best practice to disclose the use of AI when sharing AI-generated content.
- **English and Chinese only:** Transcripts in languages other than English or Chinese may result in unexpected audio outputs.
- **Non-speech audio:** The model focuses solely on speech synthesis and does not handle background noise, music, or other sound effects.
- **Overlapping speech:** The current model does not explicitly model or generate overlapping speech segments in conversations.
## Recommendations
We do not recommend using VibeVoice in commercial or real-world applications without further testing and development. This model is intended for research and development purposes only. Please use responsibly.
To mitigate the risks of misuse, we have:
- Embedded an audible disclaimer (e.g. “This segment was generated by AI”) automatically into every synthesized audio file.
- Added an imperceptible watermark to generated audio so third parties can verify VibeVoice provenance. Please see contact information at the end of this model card.
- Logged inference requests (hashed) for abuse-pattern detection, and we publish aggregated statistics quarterly.
Users are responsible for sourcing their datasets legally and ethically. This may include securing appropriate rights and/or anonymizing data prior to use with VibeVoice. Users are reminded to be mindful of data privacy concerns.
## Contact
This project was conducted by members of Microsoft Research. We welcome feedback and collaboration from our audience. If you have suggestions, questions, or observe unexpected/offensive behavior in our technology, please contact us at [email protected].
If the team receives reports of undesired behavior or identifies issues independently, we will update this repository with appropriate mitigations.
|
jialicheng/cifar10_mobilenet-v2
|
jialicheng
| 2025-09-21T04:38:54Z | 0 | 0 | null |
[
"safetensors",
"mobilenet_v2",
"image-classification",
"vision",
"generated_from_trainer",
"base_model:google/mobilenet_v2_1.0_224",
"base_model:finetune:google/mobilenet_v2_1.0_224",
"license:other",
"region:us"
] |
image-classification
| 2025-09-21T04:36:36Z |
---
license: other
base_model: google/mobilenet_v2_1.0_224
tags:
- image-classification
- vision
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: mobilenet_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilenet_v2
This model is a fine-tuned version of [google/mobilenet_v2_1.0_224](https://huggingface.co/google/mobilenet_v2_1.0_224) on the cifar10 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4749
- Accuracy: 0.8446
## Model description
More information needed
## Intended uses & limitations
More information needed
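Pending fuller documentation, here is a minimal inference sketch. It assumes the repository ships an image-processor config alongside the fine-tuned weights; the image path is a placeholder.

```python
from transformers import pipeline
from PIL import Image

# Load the fine-tuned checkpoint with the standard image-classification pipeline.
classifier = pipeline("image-classification", model="jialicheng/cifar10_mobilenet-v2")

image = Image.open("example.png")  # placeholder path; any RGB image works
for pred in classifier(image):
    print(pred["label"], round(pred["score"], 4))
```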
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 333 | 1.4842 | 0.5297 |
| 1.7515 | 2.0 | 666 | 1.6557 | 0.4453 |
| 1.7515 | 3.0 | 999 | 0.8495 | 0.7062 |
| 1.1908 | 4.0 | 1332 | 0.7553 | 0.747 |
| 1.0051 | 5.0 | 1665 | 0.7284 | 0.7479 |
| 1.0051 | 6.0 | 1998 | 0.8906 | 0.6977 |
| 0.9089 | 7.0 | 2331 | 1.0051 | 0.6587 |
| 0.8441 | 8.0 | 2664 | 0.5889 | 0.8025 |
| 0.8441 | 9.0 | 2997 | 0.6794 | 0.7749 |
| 0.7937 | 10.0 | 3330 | 0.9055 | 0.7074 |
| 0.7578 | 11.0 | 3663 | 0.7539 | 0.7619 |
| 0.7578 | 12.0 | 3996 | 0.6955 | 0.7708 |
| 0.7315 | 13.0 | 4329 | 1.1638 | 0.6383 |
| 0.7048 | 14.0 | 4662 | 0.6883 | 0.7777 |
| 0.7048 | 15.0 | 4995 | 0.8076 | 0.7407 |
| 0.6901 | 16.0 | 5328 | 0.7501 | 0.759 |
| 0.6627 | 17.0 | 5661 | 0.6667 | 0.7834 |
| 0.6627 | 18.0 | 5994 | 0.8337 | 0.7508 |
| 0.6457 | 19.0 | 6327 | 0.8104 | 0.7488 |
| 0.6365 | 20.0 | 6660 | 0.6201 | 0.793 |
| 0.6365 | 21.0 | 6993 | 0.6534 | 0.794 |
| 0.6244 | 22.0 | 7326 | 0.4883 | 0.835 |
| 0.6092 | 23.0 | 7659 | 0.6647 | 0.7898 |
| 0.6092 | 24.0 | 7992 | 0.6831 | 0.777 |
| 0.5978 | 25.0 | 8325 | 0.7547 | 0.7608 |
| 0.5838 | 26.0 | 8658 | 0.5030 | 0.8356 |
| 0.5838 | 27.0 | 8991 | 0.4207 | 0.8573 |
| 0.5828 | 28.0 | 9324 | 0.7332 | 0.7726 |
| 0.5716 | 29.0 | 9657 | 0.3767 | 0.8721 |
| 0.5716 | 30.0 | 9990 | 0.5153 | 0.8394 |
| 0.565 | 31.0 | 10323 | 0.5992 | 0.8111 |
| 0.5496 | 32.0 | 10656 | 0.6761 | 0.7903 |
| 0.5496 | 33.0 | 10989 | 0.6412 | 0.7951 |
| 0.5482 | 34.0 | 11322 | 0.7193 | 0.7872 |
| 0.5346 | 35.0 | 11655 | 0.5146 | 0.8348 |
| 0.5346 | 36.0 | 11988 | 0.9719 | 0.7291 |
| 0.5336 | 37.0 | 12321 | 0.6971 | 0.7816 |
| 0.5381 | 38.0 | 12654 | 0.6219 | 0.8095 |
| 0.5381 | 39.0 | 12987 | 0.8059 | 0.7571 |
| 0.5205 | 40.0 | 13320 | 0.5201 | 0.8323 |
| 0.5182 | 41.0 | 13653 | 0.7611 | 0.7731 |
| 0.5182 | 42.0 | 13986 | 0.4614 | 0.8502 |
| 0.5105 | 43.0 | 14319 | 0.7823 | 0.7874 |
| 0.5051 | 44.0 | 14652 | 0.5006 | 0.8431 |
| 0.5051 | 45.0 | 14985 | 0.4780 | 0.8436 |
| 0.5033 | 46.0 | 15318 | 0.7846 | 0.7505 |
| 0.4989 | 47.0 | 15651 | 0.7369 | 0.7783 |
| 0.4989 | 48.0 | 15984 | 0.6269 | 0.8136 |
| 0.4902 | 49.0 | 16317 | 0.6005 | 0.8187 |
| 0.4899 | 50.0 | 16650 | 0.7436 | 0.7906 |
| 0.4899 | 51.0 | 16983 | 0.8028 | 0.777 |
| 0.4837 | 52.0 | 17316 | 0.4615 | 0.8515 |
| 0.481 | 53.0 | 17649 | 0.7034 | 0.7907 |
| 0.481 | 54.0 | 17982 | 0.5976 | 0.8075 |
| 0.481 | 55.0 | 18315 | 0.5986 | 0.8119 |
| 0.4831 | 56.0 | 18648 | 0.5826 | 0.8211 |
| 0.4831 | 57.0 | 18981 | 1.2071 | 0.6883 |
| 0.4844 | 58.0 | 19314 | 0.5116 | 0.8411 |
| 0.4715 | 59.0 | 19647 | 0.3828 | 0.8749 |
| 0.4715 | 60.0 | 19980 | 0.5963 | 0.8205 |
| 0.4689 | 61.0 | 20313 | 0.5510 | 0.8319 |
| 0.472 | 62.0 | 20646 | 0.7266 | 0.79 |
| 0.472 | 63.0 | 20979 | 0.4501 | 0.8508 |
| 0.4668 | 64.0 | 21312 | 0.9535 | 0.7623 |
| 0.4627 | 65.0 | 21645 | 0.7841 | 0.7753 |
| 0.4627 | 66.0 | 21978 | 0.8179 | 0.7753 |
| 0.4549 | 67.0 | 22311 | 0.4133 | 0.8672 |
| 0.4578 | 68.0 | 22644 | 0.7689 | 0.7905 |
| 0.4578 | 69.0 | 22977 | 0.4337 | 0.8656 |
| 0.4581 | 70.0 | 23310 | 0.3573 | 0.8812 |
| 0.4544 | 71.0 | 23643 | 0.4087 | 0.8698 |
| 0.4544 | 72.0 | 23976 | 0.4307 | 0.8599 |
| 0.4547 | 73.0 | 24309 | 0.8750 | 0.7509 |
| 0.4536 | 74.0 | 24642 | 0.5887 | 0.8163 |
| 0.4536 | 75.0 | 24975 | 0.3848 | 0.8718 |
| 0.4573 | 76.0 | 25308 | 0.8057 | 0.7881 |
| 0.4492 | 77.0 | 25641 | 0.8340 | 0.7727 |
| 0.4492 | 78.0 | 25974 | 0.4320 | 0.8619 |
| 0.4437 | 79.0 | 26307 | 0.6830 | 0.7969 |
| 0.4462 | 80.0 | 26640 | 0.6303 | 0.8152 |
| 0.4462 | 81.0 | 26973 | 0.5285 | 0.8282 |
| 0.4419 | 82.0 | 27306 | 0.3664 | 0.8871 |
| 0.449 | 83.0 | 27639 | 0.9199 | 0.7549 |
| 0.449 | 84.0 | 27972 | 0.4462 | 0.8568 |
| 0.4373 | 85.0 | 28305 | 0.4055 | 0.8645 |
| 0.4454 | 86.0 | 28638 | 0.8410 | 0.7686 |
| 0.4454 | 87.0 | 28971 | 0.3777 | 0.8811 |
| 0.4459 | 88.0 | 29304 | 1.0111 | 0.7445 |
| 0.441 | 89.0 | 29637 | 0.9389 | 0.7426 |
| 0.441 | 90.0 | 29970 | 1.0830 | 0.7328 |
| 0.4396 | 91.0 | 30303 | 0.4384 | 0.8569 |
| 0.4381 | 92.0 | 30636 | 0.7627 | 0.795 |
| 0.4381 | 93.0 | 30969 | 0.8045 | 0.7615 |
| 0.439 | 94.0 | 31302 | 0.6230 | 0.8071 |
| 0.4435 | 95.0 | 31635 | 0.6560 | 0.8117 |
| 0.4435 | 96.0 | 31968 | 0.4749 | 0.8503 |
| 0.4428 | 97.0 | 32301 | 0.4037 | 0.8691 |
| 0.4353 | 98.0 | 32634 | 0.7115 | 0.7903 |
| 0.4353 | 99.0 | 32967 | 0.6069 | 0.8124 |
| 0.4433 | 100.0 | 33300 | 0.4749 | 0.8446 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
|
haihp02/thisissmoldatasamelongggg
|
haihp02
| 2025-09-21T03:41:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T03:41:30Z |
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
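In the meantime, a minimal starter sketch, assuming the checkpoint loads as a standard causal LM (prompt and generation settings are illustrative only):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "haihp02/thisissmoldatasamelongggg"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt; replace with your own input.
inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```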
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hitoshura25/webauthn-security-sequential_20250920_211325_stage1_analysis
|
hitoshura25
| 2025-09-21T02:35:28Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"security",
"vulnerability-analysis",
"webauthn",
"mlx-converted",
"license:apache-2.0",
"region:us"
] | null | 2025-09-21T02:35:27Z |
---
base_model: allenai/OLMo-2-1B
base_model_relation: adapter
library_name: peft
peft_type: LORA
tags:
- security
- vulnerability-analysis
- webauthn
- mlx-converted
license: apache-2.0
---
# WebAuthn Security LoRA Adapter
This LoRA adapter specializes the base model for WebAuthn security vulnerability analysis.
**Converted from MLX format to HuggingFace PEFT format for compatibility.**
## Model Details
- **Base Model**: allenai/OLMo-2-1B
- **Adapter Type**: LoRA (Low-Rank Adaptation)
- **Target Modules**: q_proj, v_proj, k_proj, o_proj, gate_proj, up_proj, down_proj
- **LoRA Rank**: 8
- **LoRA Alpha**: 20.0
- **LoRA Dropout**: 0.0
## Training Details
- **Training Framework**: MLX-LM (converted to PEFT format)
- **Training Data**: WebAuthn security vulnerabilities
- **Iterations**: 500
- **Learning Rate**: 5e-06
- **Optimizer**: adamw
- **Fine-tune Type**: lora
## Usage
Load this adapter with the PEFT library:
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load configuration and model
config = PeftConfig.from_pretrained("path/to/this/adapter")
base_model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base_model, "path/to/this/adapter")
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
# Use for inference
inputs = tokenizer("Analyze this WebAuthn vulnerability:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
## Conversion Notes
This adapter was originally trained using MLX-LM and converted to HuggingFace PEFT format using an evidence-based conversion pipeline that:
1. Converts MLX parameter naming (`lora_a/lora_b`) to PEFT format (`lora_A.weight/lora_B.weight`), as sketched after this list
2. Adds proper `base_model.model.` prefixes to parameter names
3. Generates PEFT-compatible configuration with required fields
4. Maintains full compatibility with HuggingFace ecosystem
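For illustration, a minimal sketch of the key renaming in step 1 above (the helper name is hypothetical; this is not the actual conversion pipeline):

```python
def mlx_key_to_peft(key: str) -> str:
    # Rename MLX LoRA parameters to PEFT conventions and add the
    # base_model.model. prefix (hypothetical helper, for illustration only).
    key = key.replace(".lora_a", ".lora_A.weight").replace(".lora_b", ".lora_B.weight")
    return f"base_model.model.{key}"

print(mlx_key_to_peft("model.layers.0.self_attn.q_proj.lora_a"))
# -> base_model.model.model.layers.0.self_attn.q_proj.lora_A.weight
```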
## Performance
This adapter enhances the base model's capability for:
- WebAuthn security vulnerability analysis
- Code fix generation for security issues
- Security-aware code recommendations
## License
Apache 2.0
|
jontgao/cs546-hw1
|
jontgao
| 2025-09-21T01:07:53Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-20T00:44:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
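In the meantime, a minimal starter sketch, assuming the checkpoint loads with the standard text-classification pipeline (the label set depends on the fine-tuning task):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="jontgao/cs546-hw1")
print(clf("An example sentence to classify."))  # returns label/score dicts
```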
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758409879
|
schooncestiaa
| 2025-09-20T23:12:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-20T23:12:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rainfullclub/rainfallmodel_4m_cog
|
rainfullclub
| 2025-09-20T23:01:01Z | 0 | 0 | null |
[
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
] | null | 2025-09-20T22:56:57Z |
---
license: apache-2.0
---
Fine-tuned from rainfall_4m_base on a self-cognition dataset. Because the fine-tuning dataset is very small, the model mainly answers questions such as "Who are you?".
|
nilc-nlp/wang2vec-cbow-100d
|
nilc-nlp
| 2025-09-20T22:57:08Z | 0 | 0 |
safetensors
|
[
"safetensors",
"word-embeddings",
"static",
"portuguese",
"wang2vec",
"cbow",
"100d",
"feature-extraction",
"pt",
"arxiv:1708.06025",
"license:cc-by-4.0",
"region:us"
] |
feature-extraction
| 2025-09-20T21:58:55Z |
---
language: pt
tags:
- word-embeddings
- static
- portuguese
- wang2vec
- cbow
- 100d
license: cc-by-4.0
library_name: safetensors
pipeline_tag: feature-extraction
---
# NILC Portuguese Word Embeddings — Wang2Vec CBOW 100d
This repository contains the **Wang2Vec CBOW 100d** model in **safetensors** format.
## About
NILC-Embeddings is a repository for storing and sharing **word embeddings** for the Portuguese language. The goal is to provide ready-to-use vector resources for **Natural Language Processing (NLP)** and **Machine Learning** tasks.
The embeddings were trained on a large Portuguese corpus (Brazilian + European), composed of 17 corpora (~1.39B tokens). Training was carried out with the following algorithms: **Word2Vec**, **FastText**, **Wang2Vec**, and **GloVe**.
---
## 📂 Files
- `embeddings.safetensors` → embedding matrix (`[vocab_size, 100]`)
- `vocab.txt` → vocabulary (one token per line, aligned with rows)
---
## 🚀 Usage
```python
from huggingface_hub import hf_hub_download
from safetensors.numpy import load_file
path = hf_hub_download(repo_id="nilc-nlp/wang2vec-cbow-100d",
filename="embeddings.safetensors")
data = load_file(path)
vectors = data["embeddings"]
vocab_path = hf_hub_download(repo_id="nilc-nlp/wang2vec-cbow-100d",
filename="vocab.txt")
with open(vocab_path) as f:
vocab = [w.strip() for w in f]
print(vectors.shape)
```
Or in PyTorch:
```python
from safetensors.torch import load_file
tensors = load_file(path)  # reuse the embeddings file downloaded above
vectors = tensors["embeddings"] # torch.Tensor
```
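With the tensors loaded above, a small similarity lookup is straightforward. The word pair below is illustrative; any tokens present in `vocab.txt` work:

```python
import torch.nn.functional as F

# Map each vocabulary word to its row index (vocab was loaded above).
idx = {w: i for i, w in enumerate(vocab)}

# Cosine similarity between two illustrative Portuguese words.
sim = F.cosine_similarity(vectors[idx["rei"]], vectors[idx["rainha"]], dim=0)
print(f"cosine(rei, rainha) = {sim.item():.3f}")
```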
---
## 📊 Corpus
The embeddings were trained on a combination of 17 corpora (~1.39B tokens):
| Corpus | Tokens | Types | Genre | Description |
|--------|--------|-------|-------|-------------|
| LX-Corpus [Rodrigues et al. 2016] | 714,286,638 | 2,605,393 | Mixed genres | Large collection of texts from 19 sources, mostly European Portuguese |
| Wikipedia | 219,293,003 | 1,758,191 | Encyclopedic | Wikipedia dump (2016-10-20) |
| GoogleNews | 160,396,456 | 664,320 | Informative | News crawled from Google News |
| SubIMDB-PT | 129,975,149 | 500,302 | Spoken | Movie subtitles from IMDb |
| G1 | 105,341,070 | 392,635 | Informative | News from G1 portal (2014–2015) |
| PLN-Br [Bruckschen et al. 2008] | 31,196,395 | 259,762 | Informative | Corpus of PLN-BR project (1994–2005) |
| Domínio Público | 23,750,521 | 381,697 | Prose | 138,268 literary works |
| Lacio-Web [Aluísio et al. 2003] | 8,962,718 | 196,077 | Mixed | Literary, informative, scientific, law, didactic texts |
| Literatura Brasileira | 1,299,008 | 66,706 | Prose | Classical Brazilian fiction e-books |
| Mundo Estranho | 1,047,108 | 55,000 | Informative | Texts from Mundo Estranho magazine |
| CHC | 941,032 | 36,522 | Informative | Texts from Ciência Hoje das Crianças |
| FAPESP | 499,008 | 31,746 | Science communication | Texts from Pesquisa FAPESP magazine |
| Textbooks | 96,209 | 11,597 | Didactic | Elementary school textbooks |
| Folhinha | 73,575 | 9,207 | Informative | Children’s news from Folhinha (Folha de São Paulo) |
| NILC subcorpus | 32,868 | 4,064 | Informative | Children’s texts (3rd–4th grade) |
| Para Seu Filho Ler | 21,224 | 3,942 | Informative | Children’s news from Zero Hora |
| SARESP | 13,308 | 3,293 | Didactic | School evaluation texts |
| **Total** | **1,395,926,282** | **3,827,725** | — | — |
---
## 📖 Paper
**Portuguese Word Embeddings: Evaluating on Word Analogies and Natural Language Tasks**
Hartmann, N. et al. (2017), STIL 2017.
[ArXiv Paper](https://arxiv.org/abs/1708.06025)
### BibTeX
```bibtex
@inproceedings{hartmann-etal-2017-portuguese,
title = {{P}ortuguese Word Embeddings: Evaluating on Word Analogies and Natural Language Tasks},
  author = {Hartmann, Nathan and Fonseca, Erick and Shulby, Christopher and Treviso, Marcos and Silva, J{\'e}ssica and Alu{\'i}sio, Sandra},
year = 2017,
month = oct,
booktitle = {Proceedings of the 11th {B}razilian Symposium in Information and Human Language Technology},
publisher = {Sociedade Brasileira de Computa{\c{c}}{\~a}o},
address = {Uberl{\^a}ndia, Brazil},
pages = {122--131},
url = {https://aclanthology.org/W17-6615/},
  editor = {Paetzold, Gustavo Henrique and Pinheiro, Vl{\'a}dia}
}
```
---
## 📜 License
Creative Commons Attribution 4.0 International (CC BY 4.0)
|
nilc-nlp/word2vec-cbow-1000d
|
nilc-nlp
| 2025-09-20T22:56:55Z | 0 | 0 |
safetensors
|
[
"safetensors",
"word-embeddings",
"static",
"portuguese",
"word2vec",
"cbow",
"1000d",
"feature-extraction",
"pt",
"arxiv:1708.06025",
"license:cc-by-4.0",
"region:us"
] |
feature-extraction
| 2025-09-20T21:54:36Z |
---
language: pt
tags:
- word-embeddings
- static
- portuguese
- word2vec
- cbow
- 1000d
license: cc-by-4.0
library_name: safetensors
pipeline_tag: feature-extraction
---
# NILC Portuguese Word Embeddings — Word2Vec CBOW 1000d
This repository contains the **Word2Vec CBOW 1000d** model in **safetensors** format.
## About
NILC-Embeddings is a repository for storing and sharing **word embeddings** for the Portuguese language. The goal is to provide ready-to-use vector resources for **Natural Language Processing (NLP)** and **Machine Learning** tasks.
The embeddings were trained on a large Portuguese corpus (Brazilian + European), composed of 17 corpora (~1.39B tokens). Training was carried out with the following algorithms: **Word2Vec**, **FastText**, **Wang2Vec**, and **GloVe**.
---
## 📂 Files
- `embeddings.safetensors` → embedding matrix (`[vocab_size, 1000]`)
- `vocab.txt` → vocabulary (one token per line, aligned with rows)
---
## 🚀 Usage
```python
from huggingface_hub import hf_hub_download
from safetensors.numpy import load_file
path = hf_hub_download(repo_id="nilc-nlp/word2vec-cbow-1000d",
filename="embeddings.safetensors")
data = load_file(path)
vectors = data["embeddings"]
vocab_path = hf_hub_download(repo_id="nilc-nlp/word2vec-cbow-1000d",
filename="vocab.txt")
with open(vocab_path) as f:
vocab = [w.strip() for w in f]
print(vectors.shape)
```
Or in PyTorch:
```python
from safetensors.torch import load_file
tensors = load_file(path)  # reuse the embeddings file downloaded above
vectors = tensors["embeddings"] # torch.Tensor
```
---
## 📊 Corpus
The embeddings were trained on a combination of 17 corpora (~1.39B tokens):
| Corpus | Tokens | Types | Genre | Description |
|--------|--------|-------|-------|-------------|
| LX-Corpus [Rodrigues et al. 2016] | 714,286,638 | 2,605,393 | Mixed genres | Large collection of texts from 19 sources, mostly European Portuguese |
| Wikipedia | 219,293,003 | 1,758,191 | Encyclopedic | Wikipedia dump (2016-10-20) |
| GoogleNews | 160,396,456 | 664,320 | Informative | News crawled from Google News |
| SubIMDB-PT | 129,975,149 | 500,302 | Spoken | Movie subtitles from IMDb |
| G1 | 105,341,070 | 392,635 | Informative | News from G1 portal (2014–2015) |
| PLN-Br [Bruckschen et al. 2008] | 31,196,395 | 259,762 | Informative | Corpus of PLN-BR project (1994–2005) |
| Domínio Público | 23,750,521 | 381,697 | Prose | 138,268 literary works |
| Lacio-Web [Aluísio et al. 2003] | 8,962,718 | 196,077 | Mixed | Literary, informative, scientific, law, didactic texts |
| Literatura Brasileira | 1,299,008 | 66,706 | Prose | Classical Brazilian fiction e-books |
| Mundo Estranho | 1,047,108 | 55,000 | Informative | Texts from Mundo Estranho magazine |
| CHC | 941,032 | 36,522 | Informative | Texts from Ciência Hoje das Crianças |
| FAPESP | 499,008 | 31,746 | Science communication | Texts from Pesquisa FAPESP magazine |
| Textbooks | 96,209 | 11,597 | Didactic | Elementary school textbooks |
| Folhinha | 73,575 | 9,207 | Informative | Children’s news from Folhinha (Folha de São Paulo) |
| NILC subcorpus | 32,868 | 4,064 | Informative | Children’s texts (3rd–4th grade) |
| Para Seu Filho Ler | 21,224 | 3,942 | Informative | Children’s news from Zero Hora |
| SARESP | 13,308 | 3,293 | Didactic | School evaluation texts |
| **Total** | **1,395,926,282** | **3,827,725** | — | — |
---
## 📖 Paper
**Portuguese Word Embeddings: Evaluating on Word Analogies and Natural Language Tasks**
Hartmann, N. et al. (2017), STIL 2017.
[ArXiv Paper](https://arxiv.org/abs/1708.06025)
### BibTeX
```bibtex
@inproceedings{hartmann-etal-2017-portuguese,
title = {{P}ortuguese Word Embeddings: Evaluating on Word Analogies and Natural Language Tasks},
  author = {Hartmann, Nathan and Fonseca, Erick and Shulby, Christopher and Treviso, Marcos and Silva, J{\'e}ssica and Alu{\'i}sio, Sandra},
year = 2017,
month = oct,
booktitle = {Proceedings of the 11th {B}razilian Symposium in Information and Human Language Technology},
publisher = {Sociedade Brasileira de Computa{\c{c}}{\~a}o},
address = {Uberl{\^a}ndia, Brazil},
pages = {122--131},
url = {https://aclanthology.org/W17-6615/},
  editor = {Paetzold, Gustavo Henrique and Pinheiro, Vl{\'a}dia}
}
```
---
## 📜 License
Creative Commons Attribution 4.0 International (CC BY 4.0)
|
nilc-nlp/fasttext-cbow-1000d
|
nilc-nlp
| 2025-09-20T22:56:44Z | 0 | 0 |
safetensors
|
[
"safetensors",
"word-embeddings",
"static",
"portuguese",
"fasttext",
"cbow",
"1000d",
"feature-extraction",
"pt",
"arxiv:1708.06025",
"license:cc-by-4.0",
"region:us"
] |
feature-extraction
| 2025-09-20T21:51:01Z |
---
language: pt
tags:
- word-embeddings
- static
- portuguese
- fasttext
- cbow
- 1000d
license: cc-by-4.0
library_name: safetensors
pipeline_tag: feature-extraction
---
# NILC Portuguese Word Embeddings — FastText CBOW 1000d
This repository contains the **FastText CBOW 1000d** model in **safetensors** format.
## About
NILC-Embeddings is a repository for storing and sharing **word embeddings** for the Portuguese language. The goal is to provide ready-to-use vector resources for **Natural Language Processing (NLP)** and **Machine Learning** tasks.
The embeddings were trained on a large Portuguese corpus (Brazilian + European), composed of 17 corpora (~1.39B tokens). Training was carried out with the following algorithms: **Word2Vec**, **FastText**, **Wang2Vec**, and **GloVe**.
---
## 📂 Files
- `embeddings.safetensors` → embedding matrix (`[vocab_size, 1000]`)
- `vocab.txt` → vocabulary (one token per line, aligned with rows)
---
## 🚀 Usage
```python
from huggingface_hub import hf_hub_download
from safetensors.numpy import load_file
path = hf_hub_download(repo_id="nilc-nlp/fasttext-cbow-1000d",
filename="embeddings.safetensors")
data = load_file(path)
vectors = data["embeddings"]
vocab_path = hf_hub_download(repo_id="nilc-nlp/fasttext-cbow-1000d",
filename="vocab.txt")
with open(vocab_path) as f:
vocab = [w.strip() for w in f]
print(vectors.shape)
```
Or in PyTorch:
```python
from safetensors.torch import load_file
tensors = load_file(path)  # reuse the embeddings file downloaded above
vectors = tensors["embeddings"] # torch.Tensor
```
---
## 📊 Corpus
The embeddings were trained on a combination of 17 corpora (~1.39B tokens):
| Corpus | Tokens | Types | Genre | Description |
|--------|--------|-------|-------|-------------|
| LX-Corpus [Rodrigues et al. 2016] | 714,286,638 | 2,605,393 | Mixed genres | Large collection of texts from 19 sources, mostly European Portuguese |
| Wikipedia | 219,293,003 | 1,758,191 | Encyclopedic | Wikipedia dump (2016-10-20) |
| GoogleNews | 160,396,456 | 664,320 | Informative | News crawled from Google News |
| SubIMDB-PT | 129,975,149 | 500,302 | Spoken | Movie subtitles from IMDb |
| G1 | 105,341,070 | 392,635 | Informative | News from G1 portal (2014–2015) |
| PLN-Br [Bruckschen et al. 2008] | 31,196,395 | 259,762 | Informative | Corpus of PLN-BR project (1994–2005) |
| Domínio Público | 23,750,521 | 381,697 | Prose | 138,268 literary works |
| Lacio-Web [Aluísio et al. 2003] | 8,962,718 | 196,077 | Mixed | Literary, informative, scientific, law, didactic texts |
| Literatura Brasileira | 1,299,008 | 66,706 | Prose | Classical Brazilian fiction e-books |
| Mundo Estranho | 1,047,108 | 55,000 | Informative | Texts from Mundo Estranho magazine |
| CHC | 941,032 | 36,522 | Informative | Texts from Ciência Hoje das Crianças |
| FAPESP | 499,008 | 31,746 | Science communication | Texts from Pesquisa FAPESP magazine |
| Textbooks | 96,209 | 11,597 | Didactic | Elementary school textbooks |
| Folhinha | 73,575 | 9,207 | Informative | Children’s news from Folhinha (Folha de São Paulo) |
| NILC subcorpus | 32,868 | 4,064 | Informative | Children’s texts (3rd–4th grade) |
| Para Seu Filho Ler | 21,224 | 3,942 | Informative | Children’s news from Zero Hora |
| SARESP | 13,308 | 3,293 | Didactic | School evaluation texts |
| **Total** | **1,395,926,282** | **3,827,725** | — | — |
---
## 📖 Paper
**Portuguese Word Embeddings: Evaluating on Word Analogies and Natural Language Tasks**
Hartmann, N. et al. (2017), STIL 2017.
[ArXiv Paper](https://arxiv.org/abs/1708.06025)
### BibTeX
```bibtex
@inproceedings{hartmann-etal-2017-portuguese,
title = {{P}ortuguese Word Embeddings: Evaluating on Word Analogies and Natural Language Tasks},
  author = {Hartmann, Nathan and Fonseca, Erick and Shulby, Christopher and Treviso, Marcos and Silva, J{\'e}ssica and Alu{\'i}sio, Sandra},
year = 2017,
month = oct,
booktitle = {Proceedings of the 11th {B}razilian Symposium in Information and Human Language Technology},
publisher = {Sociedade Brasileira de Computa{\c{c}}{\~a}o},
address = {Uberl{\^a}ndia, Brazil},
pages = {122--131},
url = {https://aclanthology.org/W17-6615/},
  editor = {Paetzold, Gustavo Henrique and Pinheiro, Vl{\'a}dia}
}
```
---
## 📜 License
Creative Commons Attribution 4.0 International (CC BY 4.0)
|
shiva-sai123/finetuned-gemma-3-270m-medical
|
shiva-sai123
| 2025-09-20T21:35:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-20T21:35:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
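In the meantime, a minimal starter sketch, assuming the checkpoint works with the standard text-generation pipeline (the prompt is illustrative only):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="shiva-sai123/finetuned-gemma-3-270m-medical")
print(generator("What are common symptoms of anemia?", max_new_tokens=64)[0]["generated_text"])
```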
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
takara-ai/qwen_rwkv_projection
|
takara-ai
| 2025-09-20T20:56:10Z | 0 | 0 | null |
[
"safetensors",
"en",
"dataset:takara-ai/micropajama",
"license:mit",
"region:us"
] | null | 2025-09-20T20:36:04Z |
---
license: mit
datasets:
- takara-ai/micropajama
language:
- en
---
<img src="https://takara.ai/images/logo-24/TakaraAi.svg" width="200" alt="Takara.ai Logo" />
From the Frontier Research Team at takara.ai, we present a linear projection model that maps Qwen embeddings to RWKV embeddings for enhanced cross-model compatibility.
## Model Details
- **Input Dimensions**: 4096 (Qwen embeddings)
- **Output Dimensions**: 768 (RWKV embeddings)
- **Architecture**: Linear layer (no bias)
- **Training**: Cosine similarity loss on L2-normalized pairs
- **Dataset**: takara-ai/micropajama_embedded_concat
## Usage
### Quick Start
```python
import torch
from huggingface_hub import PyTorchModelHubMixin
# Define the model class (copy this exactly)
class QwenRwkvProjection(torch.nn.Module, PyTorchModelHubMixin,
library_name="takara-ai",
tags=["embedding", "projection", "qwen", "rwkv"],
license="mit"):
def __init__(self, din=4096, dout=768):
super().__init__()
self.linear = torch.nn.Linear(din, dout, bias=False)
def forward(self, x):
return self.linear(x)
# Load from Hub
model = QwenRwkvProjection.from_pretrained("takara-ai/qwen_rwkv_projection")
model.eval()
# Project embeddings (don't forget to normalize!)
normalized_qwen_embeddings = torch.nn.functional.normalize(your_qwen_embeddings, p=2, dim=-1, eps=1e-8)
projected_embeddings = model(normalized_qwen_embeddings)
```
### Important Notes
- **Dimensions**: Input must be (batch_size, 4096), output will be (batch_size, 768)
- **Bias**: Model uses no bias term (trained on normalized pairs)
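A quick shape check with random stand-ins for Qwen embeddings (illustrative only; real inputs should come from a Qwen embedding model):

```python
import torch
import torch.nn.functional as F

# Random batch standing in for Qwen embeddings, normalized as required.
x = F.normalize(torch.randn(2, 4096), p=2, dim=-1, eps=1e-8)
with torch.no_grad():
    y = model(x)
print(y.shape)  # torch.Size([2, 768])
```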
|
havoic206/merged2
|
havoic206
| 2025-09-20T19:53:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_vl",
"image-to-text",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-09-20T19:42:02Z |
---
base_model: unsloth/qwen2-vl-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_vl
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** havoic206
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2-vl-7b-instruct-unsloth-bnb-4bit
This qwen2_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
abagade/gemma-3-270m-bhagavad-gita-v1-Q8_0-GGUF
|
abagade
| 2025-09-20T19:34:03Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"gemma",
"text-generation",
"bhagavad-gita",
"conversational",
"spiritual-guidance",
"llama-cpp",
"gguf-my-repo",
"base_model:abagade/gemma-3-270m-bhagavad-gita-v1",
"base_model:quantized:abagade/gemma-3-270m-bhagavad-gita-v1",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T19:33:53Z |
---
library_name: transformers
tags:
- gemma
- text-generation
- bhagavad-gita
- conversational
- spiritual-guidance
- llama-cpp
- gguf-my-repo
base_model: abagade/gemma-3-270m-bhagavad-gita-v1
---
# abagade/gemma-3-270m-bhagavad-gita-v1-Q8_0-GGUF
This model was converted to GGUF format from [`abagade/gemma-3-270m-bhagavad-gita-v1`](https://huggingface.co/abagade/gemma-3-270m-bhagavad-gita-v1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/abagade/gemma-3-270m-bhagavad-gita-v1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo abagade/gemma-3-270m-bhagavad-gita-v1-Q8_0-GGUF --hf-file gemma-3-270m-bhagavad-gita-v1-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo abagade/gemma-3-270m-bhagavad-gita-v1-Q8_0-GGUF --hf-file gemma-3-270m-bhagavad-gita-v1-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo abagade/gemma-3-270m-bhagavad-gita-v1-Q8_0-GGUF --hf-file gemma-3-270m-bhagavad-gita-v1-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo abagade/gemma-3-270m-bhagavad-gita-v1-Q8_0-GGUF --hf-file gemma-3-270m-bhagavad-gita-v1-q8_0.gguf -c 2048
```
|
hitoshura25/webauthn-security-sequential_20250920_132710_stage1_analysis
|
hitoshura25
| 2025-09-20T18:49:23Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"security",
"vulnerability-analysis",
"webauthn",
"mlx-converted",
"license:apache-2.0",
"region:us"
] | null | 2025-09-20T18:49:22Z |
---
base_model: allenai/OLMo-2-1B
base_model_relation: adapter
library_name: peft
peft_type: LORA
tags:
- security
- vulnerability-analysis
- webauthn
- mlx-converted
license: apache-2.0
---
# WebAuthn Security LoRA Adapter
This LoRA adapter specializes the base model for WebAuthn security vulnerability analysis.
**Converted from MLX format to HuggingFace PEFT format for compatibility.**
## Model Details
- **Base Model**: allenai/OLMo-2-1B
- **Adapter Type**: LoRA (Low-Rank Adaptation)
- **Target Modules**: q_proj, v_proj, k_proj, o_proj, gate_proj, up_proj, down_proj
- **LoRA Rank**: 8
- **LoRA Alpha**: 20.0
- **LoRA Dropout**: 0.0
## Training Details
- **Training Framework**: MLX-LM (converted to PEFT format)
- **Training Data**: WebAuthn security vulnerabilities
- **Iterations**: 500
- **Learning Rate**: 5e-06
- **Optimizer**: adamw
- **Fine-tune Type**: lora
## Usage
Load this adapter with the PEFT library:
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load configuration and model
config = PeftConfig.from_pretrained("path/to/this/adapter")
base_model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base_model, "path/to/this/adapter")
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
# Use for inference
inputs = tokenizer("Analyze this WebAuthn vulnerability:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
## Conversion Notes
This adapter was originally trained using MLX-LM and converted to HuggingFace PEFT format using an evidence-based conversion pipeline that:
1. Converts MLX parameter naming (`lora_a/lora_b`) to PEFT format (`lora_A.weight/lora_B.weight`)
2. Adds proper `base_model.model.` prefixes to parameter names
3. Generates PEFT-compatible configuration with required fields
4. Maintains full compatibility with HuggingFace ecosystem
## Performance
This adapter enhances the base model's capability for:
- WebAuthn security vulnerability analysis
- Code fix generation for security issues
- Security-aware code recommendations
## License
Apache 2.0
|
cpatonn/Ling-flash-2.0-AWQ-8bit
|
cpatonn
| 2025-09-20T17:30:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation",
"conversational",
"custom_code",
"arxiv:2507.17702",
"base_model:inclusionAI/Ling-flash-2.0",
"base_model:quantized:inclusionAI/Ling-flash-2.0",
"license:mit",
"autotrain_compatible",
"compressed-tensors",
"region:us"
] |
text-generation
| 2025-09-20T16:40:16Z |
---
license: mit
base_model:
- inclusionAI/Ling-flash-2.0
pipeline_tag: text-generation
library_name: transformers
---
<p align="center">
<img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*4QxcQrBlTiAAAAAAQXAAAAgAemJ7AQ/original" width="100"/>
<p>
<p align="center">🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a>   |   🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a></p>
## Introduction
Today, __Ling-flash-2.0__ is officially open-sourced! 🚀
Following the release of the __language model [Ling-mini-2.0](https://huggingface.co/inclusionAI/Ling-mini-2.0)__ and the __thinking model [Ring-mini-2.0](https://huggingface.co/inclusionAI/Ring-mini-2.0)__, we are now open-sourcing the third MoE LLM under the __Ling 2.0 architecture: Ling-flash-2.0__, a language model with __100B total parameters__ and __6.1B activated parameters (4.8B non-embedding)__.
Trained on __20T+ tokens of high-quality data__, together with __supervised fine-tuning__ and __multi-stage reinforcement learning__, Ling-flash-2.0 achieves __SOTA performance among dense models under 40B parameters__, despite activating only ~6B parameters. Compared to MoE models with larger activation/total parameters, it also demonstrates strong competitiveness. Notably, it delivers outstanding performance in __complex reasoning, code generation, and frontend development__.
### Powerful Complex Reasoning Abilities
We conducted a comprehensive evaluation of Ling-flash-2.0’s reasoning capabilities, reporting strong results on representative benchmarks:
* __Multi-disciplinary knowledge reasoning__: GPQA-Diamond, MMLU-Pro
* __Advanced mathematical reasoning__: AIME 2025, Omni-MATH, OptMATH (advanced mathematical optimization tasks)
* __Challenging code generation__: LiveCodeBench v6, CodeForces-Elo
* __Logical reasoning__: KOR-Bench, ARC-Prize
* __Key regulated industries (Finance, Healthcare)__: FinanceReasoning, HealthBench
Compared with __dense models under 40B__ (e.g., Qwen3-32B-Non-Thinking, Seed-OSS-36B-Instruct (think budget=0)) and __larger-activation/total-parameter MoE models__ (e.g., Hunyuan-A13B-Instruct, GPT-OSS-120B/low), __Ling-flash-2.0__ demonstrates stronger complex reasoning power. Moreover, it shows high competitiveness on __creative tasks__ (Creative Writing v3).
<p align="center">
<img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/zxAvQ7QtrAwAAAAAQqAAAAgADkZ7AQFr/fmt.webp"/>
</p>
<p align="center">
<img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/qQ_sTqrxiesAAAAAQuAAAAgADkZ7AQFr/original"/>
</p>
### Efficient Architecture, High-Speed Inference
<p align="center">
<img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/fMdiQZqYKSAAAAAAVdAAAAgADkZ7AQFr/fmt.avif"/>
</p>
Guided by [Ling Scaling Laws](https://arxiv.org/abs/2507.17702), Ling 2.0 adopts a __1/32 activation-ratio MoE architecture__, optimized across multiple design choices: expert granularity, shared-expert ratio, attention balance, __aux-loss-free + sigmoid routing strategy__, MTP layers, QK-Norm, Partial-RoPE, and more. These refinements enable __small-activation MoE__ models to achieve __7× efficiency gains__ over equivalent dense architectures.
In other words, with just __6.1B activated parameters (4.8B non-embedding)__, __Ling-flash-2.0__ can match the performance of ~40B dense models. Thanks to its small activation size, it also delivers major inference speed advantages:
* On __H20 hardware__, Ling-flash-2.0 achieves __200+ tokens/s__, offering __3× speedups__ compared to 36B dense models in everyday use.
* With __YaRN extrapolation__, it supports __128K context length__, and as output length grows, its relative speedup can reach __7× or more__.
<p align="center">
<img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/oR9UTY7S0QgAAAAAgKAAAAgADkZ7AQFr/original"/>
</p>
<p align="center">
<img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/Hid1RrgsCUAAAAAAQYAAAAgADkZ7AQFr/fmt.webp"/>
</p>
## Model Downloads
The following table lists the different stages of Ling-flash-2.0 models available for download. If you are located in mainland China, we also provide the models on ModelScope.cn to speed up the download process.
<center>
| **Model** | **Context Length** | **Download** |
|:----------------------:| :----------------: |:--------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| Ling-flash-base-2.0 | 32K -> 128K (YaRN) | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-flash-base-2.0) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-flash-base-2.0) |
| Ling-flash-2.0 | 32K -> 128K (YaRN) | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-flash-2.0) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-flash-2.0) |
</center>
Note: If you are interested in previous versions, please visit the past model collections on [Huggingface](https://huggingface.co/inclusionAI) or [ModelScope](https://modelscope.cn/organization/inclusionAI).
## Quickstart
### 🤗 Hugging Face Transformers
Here is a code snippet to show you how to use the chat model with `transformers`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "inclusionAI/Ling-flash-2.0"
model = AutoModelForCausalLM.from_pretrained(
model_name,
dtype="auto",
device_map="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language models."
messages = [
{"role": "system", "content": "You are Ling, an assistant created by inclusionAI"},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt", return_token_type_ids=False).to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
### 🤖 ModelScope
If you're in mainland China, we strongly recommend using our model from 🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a>.
## Deployment
### vLLM
vLLM supports both offline batched inference and launching an OpenAI-compatible API service for online inference.
#### Environment Preparation
Since the Pull Request (PR) has not been submitted to the vLLM community at this stage, please prepare the environment by following the steps below:
```bash
git clone -b v0.10.0 https://github.com/vllm-project/vllm.git
cd vllm
wget https://raw.githubusercontent.com/inclusionAI/Ling-V2/refs/heads/main/inference/vllm/bailing_moe_v2.patch
git apply bailing_moe_v2.patch
pip install -e .
```
#### Offline Inference:
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
tokenizer = AutoTokenizer.from_pretrained("inclusionAI/Ling-flash-2.0")
sampling_params = SamplingParams(temperature=0.7, top_p=0.8, repetition_penalty=1.05, max_tokens=16384)
llm = LLM(model="inclusionAI/Ling-flash-2.0", dtype='bfloat16')
prompt = "Give me a short introduction to large language models."
messages = [
{"role": "system", "content": "You are Ling, an assistant created by inclusionAI"},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
outputs = llm.generate([text], sampling_params)
```
#### Online Inference:
```bash
vllm serve inclusionAI/Ling-flash-2.0 \
--tensor-parallel-size 2 \
--pipeline-parallel-size 1 \
--use-v2-block-manager \
--gpu-memory-utilization 0.90
```
To handle long context in vLLM using YaRN, we need to follow these two steps:
1. Add a `rope_scaling` field to the model's `config.json` file, for example:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
2. Use an additional parameter `--max-model-len` to specify the desired maximum context length when starting the vLLM service.
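For example, a minimal launch sketch with the YaRN-extended 128K context (4.0 × 32768 = 131072 tokens; other flags as in the serve command above):

```bash
vllm serve inclusionAI/Ling-flash-2.0 \
    --tensor-parallel-size 2 \
    --max-model-len 131072
```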
For detailed guidance, please refer to the vLLM [`instructions`](https://docs.vllm.ai/en/latest/).
### SGLang
#### Environment Preparation
We will submit our model to the official SGLang release later. For now, prepare the environment with the following steps:
```shell
pip3 install sglang==0.5.2rc0 sgl-kernel==0.3.7.post1
```
You can use docker image as well:
```shell
docker pull lmsysorg/sglang:v0.5.2rc0-cu126
```
Then apply the patch to your SGLang installation:
```shell
# patch command is needed, run `yum install -y patch` if needed
patch -d `python -c 'import sglang;import os; print(os.path.dirname(sglang.__file__))'` -p3 < inference/sglang/bailing_moe_v2.patch
```
#### Run Inference
SGLang now supports both BF16 and FP8 models; which one is used depends on the dtype of the model in ${MODEL_PATH}. Both share the same command:
- Start server:
```shell
python -m sglang.launch_server \
--model-path $MODEL_PATH \
--host 0.0.0.0 --port $PORT \
--trust-remote-code \
--attention-backend fa3
```
MTP is supported for the base model, but not yet for the chat model. To enable it, add `--speculative-algorithm NEXTN` to the start command.
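As a minimal sketch (same flags as the server command above, plus the MTP flag):

```shell
python -m sglang.launch_server \
    --model-path $MODEL_PATH \
    --host 0.0.0.0 --port $PORT \
    --trust-remote-code \
    --attention-backend fa3 \
    --speculative-algorithm NEXTN
```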
- Client:
```shell
curl -s http://localhost:${PORT}/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"model": "auto", "messages": [{"role": "user", "content": "What is the capital of France?"}]}'
```
More usage examples can be found [here](https://docs.sglang.ai/basic_usage/send_request.html).
### Finetuning
We recommend using [Llama-Factory](https://github.com/hiyouga/LLaMA-Factory) to [finetune Ling](https://github.com/inclusionAI/Ling-V2/blob/main/docs/llamafactory_finetuning.md).
## License
This code repository is licensed under [the MIT License](https://github.com/inclusionAI/Ling-V2/blob/master/LICENCE).
|
robert253/mental-health-assistant
|
robert253
| 2025-09-20T16:38:21Z | 0 | 1 | null |
[
"safetensors",
"gpt2",
"region:us"
] | null | 2025-09-20T16:36:35Z |
# Mental Health Counseling Assistant
This is a fine-tuned DistilGPT2 model trained on mental health counseling conversations.
## Training Details
- Base Model: distilgpt2
- Dataset: Synthetic mental health counseling conversations
- Training Epochs: 2
- Final Training Loss: 1.49
- Final Validation Loss: 1.46
## Usage
The model is designed to provide empathetic responses to mental health concerns.
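A minimal generation sketch using the `transformers` pipeline API (the sampling settings below are illustrative assumptions, not tuned values from training):

```python
from transformers import pipeline

# Illustrative settings; adjust sampling to taste.
generator = pipeline("text-generation", model="robert253/mental-health-assistant")
prompt = "I have been feeling anxious about work lately."
result = generator(prompt, max_new_tokens=80, do_sample=True, top_p=0.9)
print(result[0]["generated_text"])
```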
## Limitations
- May generate verbose or repetitive responses
- Should not be used as a substitute for professional mental health care
- Responses should be reviewed by qualified professionals
|
mradermacher/Alpha-Model-1-105B-GGUF
|
mradermacher
| 2025-09-20T14:33:46Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:bruhzair/Alpha-Model-1-105B",
"base_model:quantized:bruhzair/Alpha-Model-1-105B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-20T11:43:37Z |
---
base_model: bruhzair/Alpha-Model-1-105B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/bruhzair/Alpha-Model-1-105B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Alpha-Model-1-105B-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
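For the multi-part quants in the table below, a minimal reassembly sketch (filenames taken from the table; run before loading):

```shell
cat Alpha-Model-1-105B.Q4_K_S.gguf.part1of2 \
    Alpha-Model-1-105B.Q4_K_S.gguf.part2of2 \
    > Alpha-Model-1-105B.Q4_K_S.gguf
```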
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Alpha-Model-1-105B-GGUF/resolve/main/Alpha-Model-1-105B.Q2_K.gguf) | Q2_K | 38.9 | |
| [GGUF](https://huggingface.co/mradermacher/Alpha-Model-1-105B-GGUF/resolve/main/Alpha-Model-1-105B.Q3_K_S.gguf) | Q3_K_S | 45.5 | |
| [PART 1](https://huggingface.co/mradermacher/Alpha-Model-1-105B-GGUF/resolve/main/Alpha-Model-1-105B.Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Alpha-Model-1-105B-GGUF/resolve/main/Alpha-Model-1-105B.Q3_K_M.gguf.part2of2) | Q3_K_M | 50.7 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/Alpha-Model-1-105B-GGUF/resolve/main/Alpha-Model-1-105B.Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Alpha-Model-1-105B-GGUF/resolve/main/Alpha-Model-1-105B.Q3_K_L.gguf.part2of2) | Q3_K_L | 55.2 | |
| [PART 1](https://huggingface.co/mradermacher/Alpha-Model-1-105B-GGUF/resolve/main/Alpha-Model-1-105B.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Alpha-Model-1-105B-GGUF/resolve/main/Alpha-Model-1-105B.IQ4_XS.gguf.part2of2) | IQ4_XS | 56.8 | |
| [PART 1](https://huggingface.co/mradermacher/Alpha-Model-1-105B-GGUF/resolve/main/Alpha-Model-1-105B.Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Alpha-Model-1-105B-GGUF/resolve/main/Alpha-Model-1-105B.Q4_K_S.gguf.part2of2) | Q4_K_S | 59.8 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Alpha-Model-1-105B-GGUF/resolve/main/Alpha-Model-1-105B.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Alpha-Model-1-105B-GGUF/resolve/main/Alpha-Model-1-105B.Q4_K_M.gguf.part2of2) | Q4_K_M | 63.1 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Alpha-Model-1-105B-GGUF/resolve/main/Alpha-Model-1-105B.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Alpha-Model-1-105B-GGUF/resolve/main/Alpha-Model-1-105B.Q5_K_S.gguf.part2of2) | Q5_K_S | 72.3 | |
| [PART 1](https://huggingface.co/mradermacher/Alpha-Model-1-105B-GGUF/resolve/main/Alpha-Model-1-105B.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Alpha-Model-1-105B-GGUF/resolve/main/Alpha-Model-1-105B.Q5_K_M.gguf.part2of2) | Q5_K_M | 74.2 | |
| [PART 1](https://huggingface.co/mradermacher/Alpha-Model-1-105B-GGUF/resolve/main/Alpha-Model-1-105B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Alpha-Model-1-105B-GGUF/resolve/main/Alpha-Model-1-105B.Q6_K.gguf.part2of2) | Q6_K | 86.1 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Alpha-Model-1-105B-GGUF/resolve/main/Alpha-Model-1-105B.Q8_0.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Alpha-Model-1-105B-GGUF/resolve/main/Alpha-Model-1-105B.Q8_0.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Alpha-Model-1-105B-GGUF/resolve/main/Alpha-Model-1-105B.Q8_0.gguf.part3of3) | Q8_0 | 111.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Aeronicr/lora_model_lpwan
|
Aeronicr
| 2025-09-20T14:24:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-20T14:24:18Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Aeronicr
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
WenFengg/RE20Sat_14_23
|
WenFengg
| 2025-09-20T13:05:15Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-20T13:04:34Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
summerstars/Summer-s1-rc
|
summerstars
| 2025-09-20T12:43:17Z | 0 | 0 | null |
[
"safetensors",
"llama4_text",
"llm",
"fast-inference",
"en",
"text-generation",
"conversational",
"base_model:summerstars/Summer-s1-rc",
"base_model:finetune:summerstars/Summer-s1-rc",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-09-20T10:30:21Z |
---
license: apache-2.0
base_model:
- summerstars/Summer-s1-rc
pipeline_tag: text-generation
tags:
- llm
- fast-inference
- en
language:
- en
---
# Summer-S1-RC

A large language model (LLM) based on **`summerstars/Summer-s1-rc`**, optimized for faster inference through a custom acceleration technique.
Developed by high school student **summer**.
## Features
- **Base model:** `summerstars/Summer-s1-rc`
- **Low-latency inference**
- **Supports English text generation**
## Example Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("summerstars/Summer-s1-rc")
tokenizer = AutoTokenizer.from_pretrained("summerstars/Summer-s1-rc")
inputs = tokenizer("hello!!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
BabuskaKukold/wan2model
|
BabuskaKukold
| 2025-09-20T12:39:38Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-20T12:34:55Z |
---
license: apache-2.0
---
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758365320
|
schooncestiaa
| 2025-09-20T10:50:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-20T10:49:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Choco1994/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-prickly_cunning_ladybug
|
Choco1994
| 2025-09-20T10:03:16Z | 175 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am prickly_cunning_ladybug",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-07T00:57:34Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am prickly_cunning_ladybug
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kecoakya/Qwen3-0.6B-Gensyn-Swarm-miniature_ferocious_turkey
|
kecoakya
| 2025-09-20T09:53:41Z | 44 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am miniature_ferocious_turkey",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T14:55:35Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am miniature_ferocious_turkey
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
asadullah797/my-wav2vec2-fine
|
asadullah797
| 2025-09-20T09:37:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-20T09:34:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
John6666/dvine-v54-sdxl
|
John6666
| 2025-09-20T08:29:17Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"hentai",
"girls",
"cute",
"detailed",
"illustrious",
"en",
"base_model:BoRnNo0b/files-mirror",
"base_model:finetune:BoRnNo0b/files-mirror",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-09-20T08:27:31Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- hentai
- girls
- cute
- detailed
- illustrious
base_model: BoRnNo0b/files-mirror
---
Original model is [here](https://huggingface.co/BoRnNo0b/files-mirror) and on [Civitai](https://civitai.com/models/1098213/dvine?modelVersionId=2209815).
The author is [BoRnNo0b](https://huggingface.co/BoRnNo0b) ([Civitai profile](https://civitai.com/user/BoRnNo0b)).
|
mradermacher/Clowncar-dev-v3-RP-ERP-post-training-v0.3-GGUF
|
mradermacher
| 2025-09-20T08:27:19Z | 99 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:SuperbEmphasis/Clowncar-dev-v3-RP-ERP-post-training-v0.3",
"base_model:quantized:SuperbEmphasis/Clowncar-dev-v3-RP-ERP-post-training-v0.3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-19T07:24:49Z |
---
base_model: SuperbEmphasis/Clowncar-dev-v3-RP-ERP-post-training-v0.3
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/SuperbEmphasis/Clowncar-dev-v3-RP-ERP-post-training-v0.3
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Clowncar-dev-v3-RP-ERP-post-training-v0.3-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Clowncar-dev-v3-RP-ERP-post-training-v0.3-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
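As a minimal loading sketch (assuming `llama-cpp-python` is installed; the quant filename is taken from the table below):

```python
from llama_cpp import Llama

# Load a local GGUF quant downloaded from this repo.
llm = Llama(model_path="Clowncar-dev-v3-RP-ERP-post-training-v0.3.Q4_K_S.gguf")
out = llm("Write a one-line greeting.", max_tokens=32)
print(out["choices"][0]["text"])
```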
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Clowncar-dev-v3-RP-ERP-post-training-v0.3-GGUF/resolve/main/Clowncar-dev-v3-RP-ERP-post-training-v0.3.Q2_K.gguf) | Q2_K | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/Clowncar-dev-v3-RP-ERP-post-training-v0.3-GGUF/resolve/main/Clowncar-dev-v3-RP-ERP-post-training-v0.3.Q3_K_S.gguf) | Q3_K_S | 17.0 | |
| [GGUF](https://huggingface.co/mradermacher/Clowncar-dev-v3-RP-ERP-post-training-v0.3-GGUF/resolve/main/Clowncar-dev-v3-RP-ERP-post-training-v0.3.Q3_K_M.gguf) | Q3_K_M | 18.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Clowncar-dev-v3-RP-ERP-post-training-v0.3-GGUF/resolve/main/Clowncar-dev-v3-RP-ERP-post-training-v0.3.Q3_K_L.gguf) | Q3_K_L | 20.3 | |
| [GGUF](https://huggingface.co/mradermacher/Clowncar-dev-v3-RP-ERP-post-training-v0.3-GGUF/resolve/main/Clowncar-dev-v3-RP-ERP-post-training-v0.3.IQ4_XS.gguf) | IQ4_XS | 21.1 | |
| [GGUF](https://huggingface.co/mradermacher/Clowncar-dev-v3-RP-ERP-post-training-v0.3-GGUF/resolve/main/Clowncar-dev-v3-RP-ERP-post-training-v0.3.Q4_K_S.gguf) | Q4_K_S | 22.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Clowncar-dev-v3-RP-ERP-post-training-v0.3-GGUF/resolve/main/Clowncar-dev-v3-RP-ERP-post-training-v0.3.Q4_K_M.gguf) | Q4_K_M | 23.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Clowncar-dev-v3-RP-ERP-post-training-v0.3-GGUF/resolve/main/Clowncar-dev-v3-RP-ERP-post-training-v0.3.Q5_K_S.gguf) | Q5_K_S | 26.8 | |
| [GGUF](https://huggingface.co/mradermacher/Clowncar-dev-v3-RP-ERP-post-training-v0.3-GGUF/resolve/main/Clowncar-dev-v3-RP-ERP-post-training-v0.3.Q5_K_M.gguf) | Q5_K_M | 27.6 | |
| [GGUF](https://huggingface.co/mradermacher/Clowncar-dev-v3-RP-ERP-post-training-v0.3-GGUF/resolve/main/Clowncar-dev-v3-RP-ERP-post-training-v0.3.Q6_K.gguf) | Q6_K | 31.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Clowncar-dev-v3-RP-ERP-post-training-v0.3-GGUF/resolve/main/Clowncar-dev-v3-RP-ERP-post-training-v0.3.Q8_0.gguf) | Q8_0 | 41.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758353064
|
schooncestiaa
| 2025-09-20T07:25:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-20T07:25:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
UnifiedHorusRA/Aether_Crash_Zoom_-_Wan_2.2_5b_i2v_LoRA
|
UnifiedHorusRA
| 2025-09-20T07:11:53Z | 9 | 0 | null |
[
"custom",
"region:us"
] | null | 2025-09-10T06:09:09Z |
<!-- CIVITAI_MODEL_ID: 1830265 -->
<!-- TITLE_BLOCK_START -->
# Aether Crash Zoom - Wan 2.2 5b i2v LoRA
**Creator**: [joachim_s](https://civitai.com/user/joachim_s)
**Civitai Model Page**: [https://civitai.com/models/1830265](https://civitai.com/models/1830265)
<!-- TITLE_BLOCK_END -->
<!-- VERSIONS_TABLE_START -->
## Versions Included
| Preview | Version Name | Folder on Hugging Face | Civitai Link |
|---|---|---|---|
| <img src="https://huggingface.co/UnifiedHorusRA/Aether_Crash_Zoom_-_Wan_2.2_5b_i2v_LoRA/resolve/main/v1.0/previews/91780799.jpg" width="150" alt="Preview for v1.0"> | v1.0 | [`v1.0`](https://huggingface.co/UnifiedHorusRA/Aether_Crash_Zoom_-_Wan_2.2_5b_i2v_LoRA/tree/main/v1.0) | [Link](https://civitai.com/models/1830265?modelVersionId=2071223) |
<!-- VERSIONS_TABLE_END -->
|
UnifiedHorusRA/Lesbian_Analingus_Wan_2.2_5B_LoRA
|
UnifiedHorusRA
| 2025-09-20T07:09:44Z | 22 | 0 | null |
[
"custom",
"region:us"
] | null | 2025-09-04T05:29:03Z |
<!-- CIVITAI_MODEL_ID: 1837989 -->
<!-- TITLE_BLOCK_START -->
# Lesbian Analingus Wan 2.2 5B LoRA
**Creator**: [vfx_ai](https://civitai.com/user/vfx_ai)
**Civitai Model Page**: [https://civitai.com/models/1837989](https://civitai.com/models/1837989)
<!-- TITLE_BLOCK_END -->
<!-- VERSIONS_TABLE_START -->
## Versions Included
| Preview | Version Name | Folder on Hugging Face | Civitai Link |
|---|---|---|---|
| <img src="https://huggingface.co/UnifiedHorusRA/Lesbian_Analingus_Wan_2.2_5B_LoRA/resolve/main/v1.0/previews/92242318.jpg" width="150" alt="Preview for v1.0"> | v1.0 | [`v1.0`](https://huggingface.co/UnifiedHorusRA/Lesbian_Analingus_Wan_2.2_5B_LoRA/tree/main/v1.0) | [Link](https://civitai.com/models/1837989?modelVersionId=2079932) |
<!-- VERSIONS_TABLE_END -->
|
UnifiedHorusRA/Realistic_Fire_Wan_2.2_I2V_5B
|
UnifiedHorusRA
| 2025-09-20T07:08:05Z | 23 | 0 | null |
[
"custom",
"region:us"
] | null | 2025-09-04T05:07:56Z |
<!-- CIVITAI_MODEL_ID: 1922135 -->
<!-- TITLE_BLOCK_START -->
# Realistic Fire Wan 2.2 I2V 5B
**Creator**: [T_E_S1](https://civitai.com/user/T_E_S1)
**Civitai Model Page**: [https://civitai.com/models/1922135](https://civitai.com/models/1922135)
<!-- TITLE_BLOCK_END -->
<!-- VERSIONS_TABLE_START -->
## Versions Included
| Preview | Version Name | Folder on Hugging Face | Civitai Link |
|---|---|---|---|
| <img src="https://huggingface.co/UnifiedHorusRA/Realistic_Fire_Wan_2.2_I2V_5B/resolve/main/v1.0/previews/97971339.jpg" width="150" alt="Preview for v1.0"> | v1.0 | [`v1.0`](https://huggingface.co/UnifiedHorusRA/Realistic_Fire_Wan_2.2_I2V_5B/tree/main/v1.0) | [Link](https://civitai.com/models/1922135?modelVersionId=2175490) |
<!-- VERSIONS_TABLE_END -->
|
AnkitaSondhi/finetuned-gtp-MedicalQA-instruct
|
AnkitaSondhi
| 2025-09-20T06:40:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-20T06:40:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AmberYifan/qwen2.5-0.5b-instruct-full-pretrain-mix-low-tweet-1m-en-gpt
|
AmberYifan
| 2025-09-20T03:55:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T00:43:10Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: qwen2.5-0.5b-instruct-full-pretrain-mix-low-tweet-1m-en-gpt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2.5-0.5b-instruct-full-pretrain-mix-low-tweet-1m-en-gpt
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the mix_low_tweet_1m_en_gpt dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
seraphimzzzz/112905
|
seraphimzzzz
| 2025-09-20T02:32:10Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-20T02:32:06Z |
[View on Civ Archive](https://civarchive.com/models/137401?modelVersionId=151673)
|
amethyst9/726558
|
amethyst9
| 2025-09-20T02:09:03Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-20T02:09:00Z |
[View on Civ Archive](https://civarchive.com/models/726514?modelVersionId=812381)
|
luckeciano/Qwen-2.5-7B-DrGRPO-SGD-FisherMaskToken-1e-3-HessianMaskToken-5e-4-v3_8360
|
luckeciano
| 2025-09-19T22:23:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T18:42:27Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-DrGRPO-SGD-FisherMaskToken-1e-3-HessianMaskToken-5e-4-v3_8360
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-DrGRPO-SGD-FisherMaskToken-1e-3-HessianMaskToken-5e-4-v3_8360
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-DrGRPO-SGD-FisherMaskToken-1e-3-HessianMaskToken-5e-4-v3_8360", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/3tzkzxrg)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
onnxmodelzoo/mobilenetv2_100_Opset18
|
onnxmodelzoo
| 2025-09-19T21:12:38Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T21:12:33Z |
---
language: en
license: apache-2.0
model_name: mobilenetv2_100_Opset18.onnx
tags:
- Computer_Vision
---
|
aamijar/Llama-2-7b-hf-lora-r32-boolq-epochs1
|
aamijar
| 2025-09-19T20:28:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T20:27:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jinx2321/byt5-tagged-all-araea-1e4
|
jinx2321
| 2025-09-19T19:04:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/byt5-small",
"base_model:finetune:google/byt5-small",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T11:44:35Z |
---
library_name: transformers
license: apache-2.0
base_model: google/byt5-small
tags:
- generated_from_trainer
model-index:
- name: byt5-tagged-all-araea-1e4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5-tagged-all-araea-1e4
This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
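Mapped onto `transformers.TrainingArguments`, these values would look roughly like the sketch below (an illustrative reconstruction; the actual training script is not part of this repository, and `output_dir` is a placeholder):
```python
from transformers import TrainingArguments

# Illustrative reconstruction of the listed hyperparameters.
args = TrainingArguments(
    output_dir="byt5-tagged-all-araea-1e4",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```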
### Training results
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
onnxmodelzoo/efficientnet_em_Opset17
|
onnxmodelzoo
| 2025-09-19T16:53:56Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T16:53:51Z |
---
language: en
license: apache-2.0
model_name: efficientnet_em_Opset17.onnx
tags:
- Computer_Vision
---
|
thibaultmaho/medgemma-4b-it-sft-lora-crc100k-1500subset-max448-bis
|
thibaultmaho
| 2025-09-19T11:10:28Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-09-18T20:59:41Z |
---
base_model: google/medgemma-4b-it
library_name: transformers
model_name: medgemma-4b-it-sft-lora-crc100k-1500subset-max448-bis
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for medgemma-4b-it-sft-lora-crc100k-1500subset-max448-bis
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="thibaultmaho/medgemma-4b-it-sft-lora-crc100k-1500subset-max448-bis", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
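As a rough illustration of what such a run could look like with TRL's `SFTTrainer` (the dataset below is a public placeholder, not the actual CRC100k subset, and the LoRA setup is omitted):
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset for illustration; the CRC100k subset used for this
# model is not published here.
train_dataset = load_dataset("trl-lib/Capybara", split="train[:100]")

trainer = SFTTrainer(
    model="google/medgemma-4b-it",
    args=SFTConfig(output_dir="medgemma-4b-it-sft-lora", max_length=448),
    train_dataset=train_dataset,
)
trainer.train()
```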
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.6.0
- Datasets: 4.1.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
merve/Isaac-0.1
|
merve
| 2025-09-19T10:35:18Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"isaac",
"text-generation",
"perceptron",
"issac-0.1",
"conversational",
"custom_code",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2025-09-19T09:31:22Z |
---
license: cc-by-nc-4.0
base_model:
- Qwen/Qwen3-1.7B
- google/siglip2-so400m-patch14-384
library_name: transformers
tags:
- perceptron
- issac-0.1
---
# [Isaac-0.1 by Perceptron](https://www.perceptron.inc/blog/introducing-isaac-0-1)
*Note: this is the post-trained model.* [Try out the model on our playground](https://www.perceptron.inc/demo)
We're introducing Isaac 0.1, our first perceptive-language model and a major step toward building AI systems that can understand and interact with the physical world. Isaac 0.1 is an open-source, 2B-parameter model built for real-world applications. It sets a new standard for efficiency, delivering capabilities that meet or exceed those of models over 50 times its size.
Founded by the team behind Meta's Chameleon multimodal models, Perceptron is tackling a fundamental challenge: bringing the power of physical AI to the dynamic, multimodal, and real-time environments we live and work in.
Isaac 0.1 is the first in our family of models built to be the intelligence layer for the physical world. It's now available open source for researchers and developers everywhere.
## What’s new in Isaac 0.1
**Visual QA, simply trained**
Strong results on standard understanding benchmarks with a straightforward, reproducible training recipe.
**Grounded spatial intelligence**
Precise pointing and localization with robust spatial reasoning. Ask “what’s broken in this machine?” and get grounded answers with highlighted regions—handling occlusions, relationships, and object interactions.
**In-context learning for perception**
Show a few annotated examples (defects, safety conditions, etc.) in the prompt and the model adapts—no YOLO-style fine-tuning or custom detector stacks required.
**OCR & fine-grained detail**
Reads small text and dense scenes reliably, across resolutions, with dynamic image handling for tiny features and cluttered layouts.
**Conversational Pointing**
A new interaction pattern where language and vision stay in lockstep: every claim is grounded and visually cited, reducing hallucinations and making reasoning auditable.
## Benchmarks


## Example
```bash
pip install perceptron
```
## Example using transformers
Learn more: [Huggingface Example Repo](https://github.com/perceptron-ai-inc/perceptron/tree/main/huggingface)
```bash
!git clone https://github.com/perceptron-ai-inc/perceptron.git
!cp -r perceptron/huggingface ./huggingface
```
```python
from transformers import AutoTokenizer, AutoConfig, AutoModelForCausalLM
from huggingface.modular_isaac import IsaacProcessor
tokenizer = AutoTokenizer.from_pretrained("PerceptronAI/Isaac-0.1", trust_remote_code=True, use_fast=False)
config = AutoConfig.from_pretrained("PerceptronAI/Isaac-0.1", trust_remote_code=True)
processor = IsaacProcessor(tokenizer=tokenizer, config=config)
model = AutoModelForCausalLM.from_pretrained("PerceptronAI/Isaac-0.1", trust_remote_code=True)
```
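With the processor and model loaded, a minimal text-only generation sketch might look as follows (assuming the standard `generate()` API; the prompt is illustrative, and full multimodal usage with images is covered in the example repository linked above):
```python
# Text-only sketch; for image inputs, build model inputs with the
# IsaacProcessor as shown in the example repository.
prompt = "Describe what a perceptive-language model can do."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```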
|
ImproverLabs/tracing_1
|
ImproverLabs
| 2025-09-19T02:36:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-18T19:18:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
r-three/moose-object_counting
|
r-three
| 2025-09-17T19:46:06Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"region:us"
] |
text-generation
| 2025-09-17T19:46:04Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:meta-llama/Llama-3.1-8B-Instruct
- lora
- sft
- transformers
- trl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.16.0
|
luckeciano/Qwen-2.5-7B-DrGRPO-Adam-FisherMaskToken-1e-6-v3_5357
|
luckeciano
| 2025-09-17T07:48:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T03:20:04Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-DrGRPO-Adam-FisherMaskToken-1e-6-v3_5357
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-DrGRPO-Adam-FisherMaskToken-1e-6-v3_5357
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-DrGRPO-Adam-FisherMaskToken-1e-6-v3_5357", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/mn7inaca)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
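As a schematic of what a GRPO setup looks like in TRL (the dataset and reward function below are placeholders adapted from the TRL documentation, not the verifiable math reward actually used for this run):
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Placeholder dataset and reward; the actual run used MATH-lighteval with a
# task-specific reward, which is not reproduced here.
dataset = load_dataset("trl-lib/tldr", split="train")

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 20 characters.
    return [-abs(20 - len(completion)) for completion in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-Math-7B",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="Qwen-2.5-7B-GRPO"),  # placeholder output_dir
    train_dataset=dataset,
)
trainer.train()
```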
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
qualcomm/PidNet
|
qualcomm
| 2025-09-16T06:20:08Z | 82 | 0 |
pytorch
|
[
"pytorch",
"tflite",
"real_time",
"android",
"image-segmentation",
"arxiv:2206.02066",
"license:other",
"region:us"
] |
image-segmentation
| 2025-03-13T22:55:54Z |
---
library_name: pytorch
license: other
tags:
- real_time
- android
pipeline_tag: image-segmentation
---

# PidNet: Optimized for Mobile Deployment
## Segment images or video by class in real-time on device
PIDNet (Proportional-Integral-Derivative Network) is a real-time semantic segmentation model based on PID controllers.
This model is an implementation of PidNet found [here](https://github.com/XuJiacong/PIDNet).
This repository provides scripts to run PidNet on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/pidnet).
### Model Details
- **Model Type:** Semantic segmentation
- **Model Stats:**
- Model checkpoint: PIDNet_S_Cityscapes_val.pt
- Inference latency: RealTime
- Input resolution: 1024x2048
- Number of output classes: 19
- Number of parameters: 8.06M
- Model size (float): 29.1 MB
- Model size (w8a8): 8.02 MB
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| PidNet | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 136.672 ms | 2 - 57 MB | NPU | [PidNet.tflite](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet.tflite) |
| PidNet | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 117.007 ms | 24 - 94 MB | NPU | [PidNet.dlc](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet.dlc) |
| PidNet | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 57.713 ms | 2 - 66 MB | NPU | [PidNet.tflite](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet.tflite) |
| PidNet | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 68.364 ms | 22 - 106 MB | NPU | [PidNet.dlc](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet.dlc) |
| PidNet | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 48.068 ms | 2 - 34 MB | NPU | [PidNet.tflite](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet.tflite) |
| PidNet | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 36.924 ms | 24 - 55 MB | NPU | [PidNet.dlc](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet.dlc) |
| PidNet | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | ONNX | 31.313 ms | 24 - 59 MB | NPU | [PidNet.onnx.zip](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet.onnx.zip) |
| PidNet | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 58.102 ms | 0 - 54 MB | NPU | [PidNet.tflite](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet.tflite) |
| PidNet | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 45.859 ms | 24 - 93 MB | NPU | [PidNet.dlc](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet.dlc) |
| PidNet | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 136.672 ms | 2 - 57 MB | NPU | [PidNet.tflite](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet.tflite) |
| PidNet | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 117.007 ms | 24 - 94 MB | NPU | [PidNet.dlc](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet.dlc) |
| PidNet | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 48.367 ms | 2 - 34 MB | NPU | [PidNet.tflite](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet.tflite) |
| PidNet | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 36.747 ms | 24 - 51 MB | NPU | [PidNet.dlc](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet.dlc) |
| PidNet | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 65.641 ms | 2 - 58 MB | NPU | [PidNet.tflite](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet.tflite) |
| PidNet | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 50.384 ms | 24 - 101 MB | NPU | [PidNet.dlc](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet.dlc) |
| PidNet | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 48.809 ms | 2 - 29 MB | NPU | [PidNet.tflite](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet.tflite) |
| PidNet | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 36.901 ms | 26 - 56 MB | NPU | [PidNet.dlc](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet.dlc) |
| PidNet | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 58.102 ms | 0 - 54 MB | NPU | [PidNet.tflite](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet.tflite) |
| PidNet | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 45.859 ms | 24 - 93 MB | NPU | [PidNet.dlc](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet.dlc) |
| PidNet | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 48.204 ms | 2 - 32 MB | NPU | [PidNet.tflite](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet.tflite) |
| PidNet | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 36.981 ms | 24 - 51 MB | NPU | [PidNet.dlc](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet.dlc) |
| PidNet | float | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 31.317 ms | 24 - 86 MB | NPU | [PidNet.onnx.zip](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet.onnx.zip) |
| PidNet | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 31.622 ms | 2 - 66 MB | NPU | [PidNet.tflite](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet.tflite) |
| PidNet | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 25.174 ms | 23 - 99 MB | NPU | [PidNet.dlc](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet.dlc) |
| PidNet | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 21.495 ms | 30 - 90 MB | NPU | [PidNet.onnx.zip](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet.onnx.zip) |
| PidNet | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 30.244 ms | 2 - 59 MB | NPU | [PidNet.tflite](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet.tflite) |
| PidNet | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 22.594 ms | 24 - 103 MB | NPU | [PidNet.dlc](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet.dlc) |
| PidNet | float | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 17.804 ms | 31 - 98 MB | NPU | [PidNet.onnx.zip](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet.onnx.zip) |
| PidNet | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 38.1 ms | 24 - 24 MB | NPU | [PidNet.dlc](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet.dlc) |
| PidNet | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 31.078 ms | 24 - 24 MB | NPU | [PidNet.onnx.zip](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet.onnx.zip) |
| PidNet | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 103.402 ms | 1 - 44 MB | NPU | [PidNet.tflite](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet_w8a8.tflite) |
| PidNet | w8a8 | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 124.571 ms | 6 - 69 MB | NPU | [PidNet.dlc](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet_w8a8.dlc) |
| PidNet | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 52.882 ms | 1 - 60 MB | NPU | [PidNet.tflite](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet_w8a8.tflite) |
| PidNet | w8a8 | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 68.899 ms | 6 - 83 MB | NPU | [PidNet.dlc](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet_w8a8.dlc) |
| PidNet | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 52.251 ms | 0 - 23 MB | NPU | [PidNet.tflite](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet_w8a8.tflite) |
| PidNet | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 64.786 ms | 6 - 30 MB | NPU | [PidNet.dlc](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet_w8a8.dlc) |
| PidNet | w8a8 | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | ONNX | 102.938 ms | 85 - 163 MB | NPU | [PidNet.onnx.zip](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet_w8a8.onnx.zip) |
| PidNet | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 53.094 ms | 0 - 44 MB | NPU | [PidNet.tflite](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet_w8a8.tflite) |
| PidNet | w8a8 | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 65.304 ms | 5 - 68 MB | NPU | [PidNet.dlc](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet_w8a8.dlc) |
| PidNet | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | TFLITE | 179.694 ms | 1 - 121 MB | NPU | [PidNet.tflite](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet_w8a8.tflite) |
| PidNet | w8a8 | RB3 Gen 2 (Proxy) | Qualcomm® QCS6490 (Proxy) | ONNX | 755.587 ms | 236 - 252 MB | CPU | [PidNet.onnx.zip](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet_w8a8.onnx.zip) |
| PidNet | w8a8 | RB5 (Proxy) | Qualcomm® QCS8250 (Proxy) | ONNX | 753.186 ms | 219 - 230 MB | CPU | [PidNet.onnx.zip](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet_w8a8.onnx.zip) |
| PidNet | w8a8 | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 103.402 ms | 1 - 44 MB | NPU | [PidNet.tflite](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet_w8a8.tflite) |
| PidNet | w8a8 | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 124.571 ms | 6 - 69 MB | NPU | [PidNet.dlc](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet_w8a8.dlc) |
| PidNet | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 52.557 ms | 0 - 16 MB | NPU | [PidNet.tflite](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet_w8a8.tflite) |
| PidNet | w8a8 | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 64.604 ms | 6 - 29 MB | NPU | [PidNet.dlc](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet_w8a8.dlc) |
| PidNet | w8a8 | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 61.301 ms | 1 - 51 MB | NPU | [PidNet.tflite](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet_w8a8.tflite) |
| PidNet | w8a8 | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 75.103 ms | 6 - 73 MB | NPU | [PidNet.dlc](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet_w8a8.dlc) |
| PidNet | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 52.345 ms | 0 - 18 MB | NPU | [PidNet.tflite](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet_w8a8.tflite) |
| PidNet | w8a8 | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 64.592 ms | 6 - 27 MB | NPU | [PidNet.dlc](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet_w8a8.dlc) |
| PidNet | w8a8 | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 53.094 ms | 0 - 44 MB | NPU | [PidNet.tflite](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet_w8a8.tflite) |
| PidNet | w8a8 | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 65.304 ms | 5 - 68 MB | NPU | [PidNet.dlc](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet_w8a8.dlc) |
| PidNet | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | TFLITE | 52.322 ms | 0 - 19 MB | NPU | [PidNet.tflite](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet_w8a8.tflite) |
| PidNet | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | QNN_DLC | 64.469 ms | 6 - 23 MB | NPU | [PidNet.dlc](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet_w8a8.dlc) |
| PidNet | w8a8 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 Mobile | ONNX | 107.317 ms | 87 - 167 MB | NPU | [PidNet.onnx.zip](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet_w8a8.onnx.zip) |
| PidNet | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 39.369 ms | 0 - 58 MB | NPU | [PidNet.tflite](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet_w8a8.tflite) |
| PidNet | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 48.68 ms | 6 - 79 MB | NPU | [PidNet.dlc](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet_w8a8.dlc) |
| PidNet | w8a8 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 80.552 ms | 98 - 423 MB | NPU | [PidNet.onnx.zip](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet_w8a8.onnx.zip) |
| PidNet | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | TFLITE | 47.749 ms | 1 - 49 MB | NPU | [PidNet.tflite](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet_w8a8.tflite) |
| PidNet | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_DLC | 51.033 ms | 6 - 81 MB | NPU | [PidNet.dlc](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet_w8a8.dlc) |
| PidNet | w8a8 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | ONNX | 76.755 ms | 98 - 436 MB | NPU | [PidNet.onnx.zip](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet_w8a8.onnx.zip) |
| PidNet | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 67.749 ms | 30 - 30 MB | NPU | [PidNet.dlc](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet_w8a8.dlc) |
| PidNet | w8a8 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 105.363 ms | 123 - 123 MB | NPU | [PidNet.onnx.zip](https://huggingface.co/qualcomm/PidNet/blob/main/PidNet_w8a8.onnx.zip) |
## Installation
Install the package via pip:
```bash
pip install qai-hub-models
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on
cloud-hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.pidnet.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.pidnet.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Performance check on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android.
* Accuracy check between PyTorch and on-device outputs.
```bash
python -m qai_hub_models.models.pidnet.export
```
## How does this work?
This [export script](https://aihub.qualcomm.com/models/pidnet/qai_hub_models/models/PidNet/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.pidnet import Model
# Load the model
torch_model = Model.from_pretrained()
# Device
device = hub.Device("Samsung Galaxy S24")
# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
# Compile model on a specific device
compile_job = hub.submit_compile_job(
model=pt_model,
device=device,
input_specs=torch_model.get_input_spec(),
)
# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**
After compiling the model in Step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
model=target_model,
device=device,
)
```
Step 3: **Verify on-device accuracy**
To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
model=target_model,
device=device,
inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics like PSNR or relative
error, or spot-check the output against the expected output.
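As a rough sketch of such a check (the `psnr` helper and the placeholder arrays below are illustrative, not part of `qai_hub_models`):
```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio between two arrays of the same shape."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(peak**2 / mse)

# In practice these would be the PyTorch reference output and the downloaded
# on-device output for the same input; random placeholders are used here.
torch_out = np.random.rand(1, 19, 1024, 2048).astype(np.float32)
device_out = torch_out + np.random.normal(0, 1e-3, torch_out.shape)
print(f"PSNR: {psnr(torch_out, device_out):.2f} dB")
```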
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
## Run demo on a cloud-hosted device
You can also run the demo on-device.
```bash
python -m qai_hub_models.models.pidnet.demo --eval-mode on-device
```
**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.pidnet.demo -- --eval-mode on-device
```
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
## View on Qualcomm® AI Hub
Get more details on PidNet's performance across various devices [here](https://aihub.qualcomm.com/models/pidnet).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).
## License
* The license for the original implementation of PidNet can be found
[here](https://github.com/XuJiacong/PIDNet/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf).
## References
* [PIDNet A Real-time Semantic Segmentation Network Inspired from PID Controller Segmentation of Road Scenes](https://arxiv.org/abs/2206.02066)
* [Source Model Implementation](https://github.com/XuJiacong/PIDNet)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).
|
swnometal666/finnish-healthcare-terminology-v4
|
swnometal666
| 2025-08-26T00:35:17Z | 0 | 0 | null |
[
"safetensors",
"healthcare",
"finnish",
"medical-terminology",
"lora",
"text-generation",
"fi",
"dataset:custom-finnish-healthcare-articles",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-26T00:35:12Z |
---
language: fi
license: apache-2.0
tags:
- healthcare
- finnish
- medical-terminology
- lora
- text-generation
datasets:
- custom-finnish-healthcare-articles
metrics:
- perplexity
model_type: gpt
---
# Finnish Healthcare Terminology Model v4
This model is a fine-tuned version of TurkuNLP/gpt3-finnish-small, specialized for Finnish healthcare terminology and medical knowledge.
## Model Details
- **Base Model**: TurkuNLP/gpt3-finnish-small
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation)
- **Training Data**: Finnish healthcare articles and medical terminology
- **Language**: Finnish (fi)
- **Use Case**: Healthcare terminology, medical Q&A, educational content
## Training Configuration
- **LoRA rank (r)**: 6
- **LoRA alpha**: 12
- **Target modules**: query_key_value
- **Learning rate**: 5e-6 (ultra-conservative)
- **Training approach**: Early stopping with quality focus
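Expressed with the `peft` library, these settings would look roughly like the sketch below (an illustrative reconstruction; the exact training script is not published, and `task_type` is assumed):
```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("TurkuNLP/gpt3-finnish-small")

# Reconstruction of the reported LoRA settings; task_type is an assumption.
lora_config = LoraConfig(
    r=6,
    lora_alpha=12,
    target_modules=["query_key_value"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```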
## Performance
Based on comparative testing, this final model version outperformed intermediate checkpoints in:
- **Text diversity**: 0.809 vs 0.804
- **Healthcare terminology usage**: 13 vs 8 medical terms
- **Medical knowledge accuracy**: Better responses on healthcare topics
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
# Load the model
tokenizer = AutoTokenizer.from_pretrained("TurkuNLP/gpt3-finnish-small")
base_model = AutoModelForCausalLM.from_pretrained("TurkuNLP/gpt3-finnish-small")
model = PeftModel.from_pretrained(base_model, "swnometal666/finnish-healthcare-terminology-v4")
# Generate healthcare-related text
prompt = "Kysymys: Mitä on diabetes? Vastaus:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
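Since perplexity is the repository's listed metric, a simple way to evaluate the adapter on a held-out Finnish sentence is sketched below (reusing the `tokenizer` and `model` from the snippet above; the sentence is illustrative):
```python
import torch

# Illustrative held-out sentence ("Diabetes is a metabolic disease...").
text = "Diabetes on aineenvaihduntasairaus, joka vaikuttaa verensokeriin."
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    out = model(**enc, labels=enc["input_ids"])
perplexity = torch.exp(out.loss)
print(f"Perplexity: {perplexity.item():.2f}")
```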
## Limitations
⚠️ **Important Notice**: This model is designed for educational and terminology purposes only. It should NOT be used for:
- Medical diagnosis
- Treatment recommendations
- Clinical decision-making
- Emergency medical situations
Always consult qualified healthcare professionals for medical advice.
## Training Data
The model was trained on carefully filtered Finnish healthcare articles focusing on:
- Medical terminology
- Healthcare system information
- Public health topics
- Disease prevention and education
## Ethical Considerations
- Model outputs should be verified by medical professionals
- Not suitable for clinical applications
- Designed for educational and linguistic purposes
- May contain inaccuracies - use with appropriate caution
|
AnerYubo/blockassist-bc-armored_climbing_rooster_1756168506
|
AnerYubo
| 2025-08-26T00:35:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored climbing rooster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T00:35:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored climbing rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnerYubo/blockassist-bc-shaggy_elusive_giraffe_1756168499
|
AnerYubo
| 2025-08-26T00:35:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"shaggy elusive giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T00:35:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- shaggy elusive giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vangard703/output_full_no_multiview
|
vangard703
| 2025-08-26T00:32:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-26T00:26:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1756166837
|
mang3dd
| 2025-08-26T00:32:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T00:32:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
MMS-uppal-farm-leak-Official-videos/uppal.farm.leak.viral.video.Clip
|
MMS-uppal-farm-leak-Official-videos
| 2025-08-26T00:32:24Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-26T00:32:13Z |
|
qinuoitu/blockassist-bc-dappled_purring_bobcat_1756168204
|
qinuoitu
| 2025-08-26T00:30:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dappled purring bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T00:30:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dappled purring bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
opkamne/blockassist-bc-crested_clawed_wasp_1756168121
|
opkamne
| 2025-08-26T00:29:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"crested clawed wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T00:29:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- crested clawed wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
uppal-farm-leak-viral-video-Orginal-Clip/New.full.videos.uppal.farm.Viral.Video.Official.Tutorial
|
uppal-farm-leak-viral-video-Orginal-Clip
| 2025-08-26T00:26:59Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-26T00:26:48Z |
|
motza0025/blockassist-bc-crested_flightless_dove_1756166400
|
motza0025
| 2025-08-26T00:26:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"crested flightless dove",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T00:26:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- crested flightless dove
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sa7270/harm10_fin56_l9
|
sa7270
| 2025-08-26T00:25:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-25T22:49:25Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
opkamne/blockassist-bc-crested_clawed_wasp_1756167817
|
opkamne
| 2025-08-26T00:24:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"crested clawed wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T00:24:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- crested clawed wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756167768
|
Dejiat
| 2025-08-26T00:23:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T00:23:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
indoempatnol/blockassist-bc-fishy_wary_swan_1756166050
|
indoempatnol
| 2025-08-26T00:23:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T00:22:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1756166113
|
katanyasekolah
| 2025-08-26T00:22:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silky sprightly cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T00:22:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky sprightly cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
maxibillion1975/blockassist-bc-iridescent_squeaky_sandpiper_1756166087
|
maxibillion1975
| 2025-08-26T00:21:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"iridescent squeaky sandpiper",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T00:21:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- iridescent squeaky sandpiper
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DreamGallery/task-14-microsoft-Phi-4-mini-instruct
|
DreamGallery
| 2025-08-26T00:20:17Z | 622 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Phi-4-mini-instruct",
"base_model:adapter:microsoft/Phi-4-mini-instruct",
"region:us"
] | null | 2025-08-17T03:04:11Z |
---
base_model: microsoft/Phi-4-mini-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
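As a placeholder while the card is incomplete, a hypothetical loading sketch, assuming the repo holds a PEFT adapter for the documented base model (the prompt and generation settings are illustrative):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "microsoft/Phi-4-mini-instruct"  # documented base_model
adapter_id = "DreamGallery/task-14-microsoft-Phi-4-mini-instruct"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the adapter weights

inputs = tokenizer("Hello!", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```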
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
bottaz/chat-sentiment-finetuned-English
|
bottaz
| 2025-08-26T00:19:16Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:tabularisai/multilingual-sentiment-analysis",
"base_model:finetune:tabularisai/multilingual-sentiment-analysis",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-25T23:50:04Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: tabularisai/multilingual-sentiment-analysis
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: chat-sentiment-finetuned-English
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chat-sentiment-finetuned-English
This model is a fine-tuned version of [tabularisai/multilingual-sentiment-analysis](https://huggingface.co/tabularisai/multilingual-sentiment-analysis) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8225
- Accuracy: 0.6932
- F1: 0.6947
## Model description
More information needed
## Intended uses & limitations
More information needed
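In the absence of documented usage, a hypothetical inference sketch with the `pipeline` API; the input sentence is illustrative and the label set depends on the fine-tuned classification head:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="bottaz/chat-sentiment-finetuned-English",  # repo id from this card
)
print(classifier("The support chat resolved my issue quickly."))
```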
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 89 | 0.9510 | 0.625 | 0.6122 |
| No log | 2.0 | 178 | 0.8132 | 0.6932 | 0.6970 |
| No log | 3.0 | 267 | 0.8225 | 0.6932 | 0.6947 |
### Framework versions
- Transformers 4.55.2
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
ChenWu98/numina_qwen_2.5_sft_cluster_weighted_alpha4.0_split_0
|
ChenWu98
| 2025-08-26T00:18:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"endpoints_compatible",
"region:us"
] | null | 2025-08-26T00:17:37Z |
---
base_model: Qwen/Qwen2.5-1.5B
library_name: transformers
model_name: numina_qwen_2.5_sft_cluster_weighted_alpha4.0_split_0
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for numina_qwen_2.5_sft_cluster_weighted_alpha4.0_split_0
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_sft_cluster_weighted_alpha4.0_split_0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/32mmcp0z)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
nnilayy/dreamer-binary-valence-LOSO-Subject-17
|
nnilayy
| 2025-08-26T00:18:28Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-08-26T00:18:25Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
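The card only notes the mixin integration, so the following is a generic, hypothetical sketch of how a `PyTorchModelHubMixin` model is defined and reloaded; the real architecture behind this checkpoint is undocumented, and loading this specific repo requires the original class definition and saved config:
```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Placeholder architecture: the actual model class is not documented on the card.
class DreamerClassifier(nn.Module, PyTorchModelHubMixin):
    def __init__(self, in_features: int = 128, num_classes: int = 2):
        super().__init__()
        self.head = nn.Linear(in_features, num_classes)

    def forward(self, x):
        return self.head(x)

# The mixin adds save_pretrained/from_pretrained to any nn.Module subclass.
model = DreamerClassifier.from_pretrained("nnilayy/dreamer-binary-valence-LOSO-Subject-17")
```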
|
dgsilvia/ppo-SnowballTarget
|
dgsilvia
| 2025-08-26T00:17:11Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2025-08-26T00:17:03Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents' official environments, go to https://huggingface.co/unity
2. Find your model_id: dgsilvia/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756167365
|
Dejiat
| 2025-08-26T00:16:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T00:16:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dlrbcks/deepfake-vit
|
dlrbcks
| 2025-08-26T00:15:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-08-26T00:15:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
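As an illustration only, a hypothetical inference sketch for a ViT image classifier; the file name is a placeholder and the label set (e.g. real vs. fake) is not documented on the card:
```python
from transformers import pipeline

clf = pipeline("image-classification", model="dlrbcks/deepfake-vit")
print(clf("face.jpg"))  # path or URL to an image; "face.jpg" is a placeholder
```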
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1756165683
|
coelacanthxyz
| 2025-08-26T00:15:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky thriving grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T00:14:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky thriving grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zerofata/MS3.2-PaintedFantasy-v2-24B
|
zerofata
| 2025-08-26T00:13:18Z | 218 | 24 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"dataset:zerofata/Roleplay-Anime-Characters",
"dataset:zerofata/Instruct-Anime-CreativeWriting",
"dataset:zerofata/Summaries-Anime-FandomPages",
"dataset:zerofata/Instruct-Anime",
"base_model:ConicCat/Mistral-Small-3.2-AntiRep-24B",
"base_model:finetune:ConicCat/Mistral-Small-3.2-AntiRep-24B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-27T05:05:42Z |
---
library_name: transformers
license: apache-2.0
datasets:
- zerofata/Roleplay-Anime-Characters
- zerofata/Instruct-Anime-CreativeWriting
- zerofata/Summaries-Anime-FandomPages
- zerofata/Instruct-Anime
base_model:
- ConicCat/Mistral-Small-3.2-AntiRep-24B
---
<!DOCTYPE html>
<style>
body {
font-family: 'Georgia', 'Times New Roman', serif;
color: #dce4f0; /* Soft off-white */
line-height: 1.6;
margin: 0;
padding: 0;
background-color: #161a25; /* Deep blue from dark sky */
}
.lemonade-text {
color: #89d8ff; /* Bright blue from city lights */
position: relative;
z-index: 2;
margin-left: 0.2em;
text-shadow: 0 0 15px #89d8ff;
}
/* Section styling */
.section-container {
background-color: rgba(32, 40, 56, 0.7); /* Slightly transparent dark blue */
margin-bottom: 30px;
position: relative;
overflow: hidden;
border-bottom: 1px solid #ff9966; /* Sunset orange */
box-shadow: 0 4px 15px rgba(255, 153, 102, 0.05);
}
.section-header {
display: flex;
align-items: center;
background-color: rgba(255, 153, 102, 0.12);
padding: 10px 20px;
}
.section-indicator {
width: 8px;
height: 20px;
background-color: #ff9966; /* Sunset orange */
margin-right: 15px;
box-shadow: 0 0 8px rgba(255, 153, 102, 0.2);
}
.section-title {
font-family: 'Playfair Display', serif; /* Using the new font */
color: #ffb399; /* Lighter sunset shade */
font-size: 1.4rem;
margin: 0;
letter-spacing: 1px;
font-weight: 400;
text-transform: capitalize;
}
.section-content {
padding: 20px;
font-family: 'Crimson Text', serif; /* Using the new font */
color: #dce4f0;
line-height: 1.6;
}
/* Title styling */
.title-container {
background-color: #202838;
position: relative;
overflow: hidden;
margin-bottom: 40px;
border-left: 3px solid #ff9966; /* Sunset orange */
box-shadow: 0 6px 20px rgba(255, 153, 102, 0.07);
}
.title-wrapper {
position: relative;
z-index: 2;
padding: 25px 20px 30px 30px;
font-family: 'Playfair Display', serif;
}
.title-main {
color: #ffb399; /* Lighter sunset shade */
font-size: 2.5rem;
font-weight: 700;
margin: 0;
letter-spacing: 2px;
display: inline-block;
position: relative;
text-transform: uppercase;
}
.title-prefix {
position: relative;
z-index: 2;
}
.title-subtitle {
padding-left: 15px;
margin-top: 5px;
margin-left: 5px;
}
.subtitle-text {
color: #a6c8e0; /* Muted sky blue */
font-size: 1.2rem;
font-family: 'Crimson Text', serif;
font-weight: 300;
letter-spacing: 3px;
text-transform: uppercase;
display: inline-block;
}
.glitchy-overlay {
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
background-image: repeating-linear-gradient(0deg, rgba(0,0,0,0) 0, rgba(137, 216, 255, 0.08) 1px, rgba(0,0,0,0) 2px); /* Rain effect with blue tint */
z-index: 1;
}
/* Data box styling */
.data-box {
background-color: rgba(22, 26, 37, 0.6);
padding: 15px;
border-left: 2px solid #ff9966; /* Sunset orange */
margin-bottom: 20px;
box-shadow: 0 2px 10px rgba(255, 153, 102, 0.05);
}
.data-row {
display: flex;
margin-bottom: 8px;
}
.data-arrow {
color: #ff9966; /* Sunset orange */
width: 20px;
display: inline-block;
}
.data-label {
color: #a6c8e0; /* Muted sky blue */
width: 80px;
display: inline-block;
}
/* Subheading styling */
.subheading {
color: #a6c8e0; /* Muted sky blue */
font-size: 1.1rem;
margin-top: 20px;
margin-bottom: 15px;
font-weight: 400;
border-bottom: 1px dashed rgba(166, 200, 224, 0.4);
display: inline-block;
text-transform: uppercase;
letter-spacing: 1px;
font-family: 'Playfair Display', serif;
}
/* Links */
a {
color: #89d8ff; /* Bright blue from city lights */
text-decoration: none;
}
a:hover {
text-decoration: underline;
color: #ffb399; /* Lighter sunset shade on hover */
}
/* Container */
.container {
max-width: 1200px;
margin: 20px auto;
padding: 40px 20px;
background-color: #202838; /* Darker container background */
background-image:
radial-gradient(circle at 20% 80%, rgba(255, 153, 102, 0.04) 0%, transparent 50%), /* Sunset glow */
radial-gradient(circle at 80% 20%, rgba(137, 216, 255, 0.04) 0%, transparent 50%), /* Blue glow */
radial-gradient(circle at 40% 40%, rgba(224, 230, 241, 0.02) 0%, transparent 50%); /* Faint cloud/light glow */
min-height: calc(100vh - 40px);
border: 1px solid #ff9966; /* Sunset orange */
border-radius: 8px;
box-shadow: 0 8px 32px rgba(255, 153, 102, 0.07);
}
/* Dropdown styling */
.dropdown-container {
margin-top: 20px;
}
.dropdown-summary {
cursor: pointer;
padding: 10px 0;
border-bottom: 1px dashed rgba(166, 200, 224, 0.4);
color: #a6c8e0; /* Muted sky blue */
font-size: 1.1rem;
font-weight: 400;
text-transform: uppercase;
letter-spacing: 1px;
font-family: 'Playfair Display', serif;
list-style: none;
display: flex;
align-items: center;
}
.dropdown-summary::-webkit-details-marker {
display: none;
}
.dropdown-arrow {
color: #ff9966; /* Sunset orange */
margin-right: 10px;
transition: transform 0.3s ease;
}
.dropdown-container[open] .dropdown-arrow {
transform: rotate(90deg);
}
.dropdown-content {
margin-top: 15px;
padding: 15px;
background-color: rgba(22, 26, 37, 0.6);
border-left: 2px solid #ff9966; /* Sunset orange */
box-shadow: 0 2px 10px rgba(255, 153, 102, 0.05);
}
.config-title {
color: #a6c8e0; /* Muted sky blue */
font-size: 1rem;
margin-bottom: 10px;
font-family: 'Playfair Display', serif;
text-transform: uppercase;
letter-spacing: 1px;
}
</style>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Painted Fantasy</title>
<link href="https://fonts.googleapis.com/css2?family=Crimson+Text:wght@400;600;700&family=Playfair+Display:wght@400;700&display=swap" rel="stylesheet">
</head>
<body>
<div class="container">
<div class="title-container">
<!-- Glitchy overlay -->
<div class="glitchy-overlay"></div>
<!-- Main title -->
<div class="title-wrapper">
<h1 class="title-main">
<span class="title-prefix">PAINTED FANTASY</span>
<span class="lemonade-text">v2</span>
</h1>
<div class="title-subtitle">
<span class="subtitle-text">MS3.2-24B</span>
</div>
</div>
</div>

<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">Overview</h2>
</div>
<div class="section-content">
<p>This is an uncensored creative model intended to excel at character-driven RP / ERP.</p>
<p>Version 2 feels quite different from the original, with a heavy focus on reducing repetition across conversations and improving instruction following.</p>
<p>Has a pretty unique writing style and sense of creativity (IMO). Pays the price with intermittent brain farts though.</p>
</div>
</div>
<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">SillyTavern Settings</h2>
</div>
<div class="section-content">
<h3 class="subheading">Recommended Roleplay Format</h3>
<div class="data-box">
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">Actions:</span>
<span>In plaintext</span>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">Dialogue:</span>
<span>"In quotes"</span>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">Thoughts:</span>
<span>*In asterisks*</span>
</div>
</div>
<h3 class="subheading">Suggested Samplers</h3>
<div class="data-box">
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">Temp:</span>
<span>0.5-0.6</span>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">MinP:</span>
<span>0.1</span>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">TopP:</span>
<span>0.95</span>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">Dry:</span>
<span>0.8, 1.75, 4</span>
</div>
</div>
<h3 class="subheading">Instruct</h3>
<div class="data-box">
<p style="margin: 0;">Mistral v7 Tekken</p>
</div>
</div>
</div>
<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">Quantizations</h2>
</div>
<div class="section-content">
<div style="margin-bottom: 20px;">
<h3 class="subheading">GGUF</h3>
<div class="data-box">
<div class="data-row">
<span style="color: #ff9966; display: inline-block; margin-right: 10px;">> </span><a href="https://huggingface.co/mradermacher/MS3.2-PaintedFantasy-v2-24B-GGUF">Static (mradermacher)</a>
</div>
<div class="data-row">
<span style="color: #ff9966; display: inline-block; margin-right: 10px;">> </span><a href="https://huggingface.co/mradermacher/MS3.2-PaintedFantasy-v2-24B-i1-GGUF">iMatrix (mradermacher)</a>
</div>
</div>
</div>
<div>
<h3 class="subheading">EXL3</h3>
<div class="data-box">
<div class="data-row">
<span style="color: #ff9966; display: inline-block; margin-right: 10px;">> </span><a href="https://huggingface.co/zerofata/MS3.2-PaintedFantasy-v2-24b-exl3-3bpw">3bpw</a>
</div>
<div class="data-row">
<span style="color: #ff9966; display: inline-block; margin-right: 10px;">> </span><a href="https://huggingface.co/zerofata/MS3.2-PaintedFantasy-v2-24b-exl3-3.5bpw">3.5bpw</a>
</div>
<div class="data-row">
<span style="color: #ff9966; display: inline-block; margin-right: 10px;">> </span><a href="https://huggingface.co/zerofata/MS3.2-PaintedFantasy-v2-24b-exl3-4bpw">4bpw</a>
</div>
<div class="data-row">
<span style="color: #ff9966; display: inline-block; margin-right: 10px;">> </span><a href="https://huggingface.co/zerofata/MS3.2-PaintedFantasy-v2-24b-exl3-5bpw">5bpw</a>
</div>
<div class="data-row">
<span style="color: #ff9966; display: inline-block; margin-right: 10px;">> </span><a href="https://huggingface.co/zerofata/MS3.2-PaintedFantasy-v2-24b-exl3-6bpw">6bpw</a>
</div>
</div>
</div>
</div>
</div>
<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">Training Process</h2>
</div>
<div class="section-content">
<p>Training process: SFT > DPO > KTO</p>
<p>SFT with RP/ERP, Stories and in character assistant data.</p>
<p>DPO focused on reducing repetition, misgendered characters and slop.</p>
<p>KTO focused on further reducing repetition and slop.</p>
<div class="dropdown-container">
<details>
<summary class="dropdown-summary">
<span class="dropdown-arrow">></span>
Axolotl configs
</summary>
<div class="dropdown-content">
<p>Not optimized for cost / performance efficiency, YMMV.</p>
<div class="config-title">SFT 1*H100</div>
<pre><code># ====================
# MODEL CONFIGURATION
# ====================
base_model: ConicCat/Mistral-Small-3.2-AntiRep-24B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
chat_template: mistral_v7_tekken
# ====================
# DATASET CONFIGURATION
# ====================
datasets:
- path: ./dataset.jsonl
type: chat_template
split: train
chat_template_strategy: tokenizer
field_messages: messages
message_property_mappings:
role: role
content: content
roles:
user: ["user"]
assistant: ["assistant"]
system: ["system"]
dataset_prepared_path:
train_on_inputs: false # Only train on assistant responses
# ====================
# QLORA CONFIGURATION
# ====================
adapter: qlora
load_in_4bit: true
lora_r: 128
lora_alpha: 128
lora_dropout: 0.1
lora_target_linear: true
# lora_modules_to_save: # Uncomment only if you added NEW tokens
# ====================
# TRAINING PARAMETERS
# ====================
num_epochs: 3
micro_batch_size: 8
gradient_accumulation_steps: 1
learning_rate: 1e-5
optimizer: paged_adamw_8bit
lr_scheduler: rex
warmup_ratio: 0.05
weight_decay: 0.01
max_grad_norm: 1.0
# ====================
# SEQUENCE & PACKING
# ====================
sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
# ====================
# HARDWARE OPTIMIZATIONS
# ====================
bf16: auto
flash_attention: true
gradient_checkpointing: true
# ====================
# EVALUATION & CHECKPOINTING
# ====================
save_strategy: steps
save_steps: 20
save_total_limit: 5 # Keep best + last few checkpoints
load_best_model_at_end: true
metric_for_best_model: eval_loss
greater_is_better: false
# ====================
# LOGGING & OUTPUT
# ====================
output_dir: ./PT-SFT_1
logging_steps: 2
save_safetensors: true
# ====================
# WANDB TRACKING
# ====================
wandb_project: PF-SFT
wandb_entity: your_entity
wandb_name: run_name</code></pre>
</div>
</details>
</div>
</div>
</div>
</div>
</body>
</html>
|
Astrall2007/Qwen3-0.6B-Gensyn-Swarm-mammalian_snappy_weasel
|
Astrall2007
| 2025-08-26T00:13:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am mammalian_snappy_weasel",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-26T00:12:55Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am mammalian_snappy_weasel
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
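Until the card is completed, a minimal, hypothetical quick-start sketch using the `pipeline` API, mirroring the quick-start style used elsewhere on the Hub; the prompt and generation settings are illustrative:
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Astrall2007/Qwen3-0.6B-Gensyn-Swarm-mammalian_snappy_weasel",
)
out = generator([{"role": "user", "content": "Hi!"}], max_new_tokens=32, return_full_text=False)
print(out[0]["generated_text"])
```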
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lilTAT/blockassist-bc-gentle_rugged_hare_1756167140
|
lilTAT
| 2025-08-26T00:13:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle rugged hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T00:12:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756167077
|
Dejiat
| 2025-08-26T00:11:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T00:11:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
marinebark/blockassist-bc-durable_wary_alligator_1756164681
|
marinebark
| 2025-08-26T00:10:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"durable wary alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T00:10:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- durable wary alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/llama3.2-ColdBrew-4B-Discovery-f16-GGUF
|
mradermacher
| 2025-08-26T00:10:04Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"en",
"base_model:SvalTek/llama3.2-ColdBrew-4B-Discovery-f16",
"base_model:quantized:SvalTek/llama3.2-ColdBrew-4B-Discovery-f16",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-25T23:35:17Z |
---
base_model: SvalTek/llama3.2-ColdBrew-4B-Discovery-f16
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/SvalTek/llama3.2-ColdBrew-4B-Discovery-f16
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#llama3.2-ColdBrew-4B-Discovery-f16-GGUF).***
weighted/imatrix quants do not appear to be available (from me) at this time. If they have not shown up a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
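For illustration only (not part of this card): one way to fetch and run a single-file quant, assuming the `llama-cpp-python` bindings; the chosen file and parameters are examples, not recommendations beyond the table below:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # llama-cpp-python bindings; any GGUF runtime works similarly

path = hf_hub_download(
    repo_id="mradermacher/llama3.2-ColdBrew-4B-Discovery-f16-GGUF",
    filename="llama3.2-ColdBrew-4B-Discovery-f16.Q4_K_M.gguf",  # "fast, recommended" row below
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Q: What is a K-quant?\nA:", max_tokens=64)["choices"][0]["text"])
```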
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama3.2-ColdBrew-4B-Discovery-f16-GGUF/resolve/main/llama3.2-ColdBrew-4B-Discovery-f16.Q2_K.gguf) | Q2_K | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama3.2-ColdBrew-4B-Discovery-f16-GGUF/resolve/main/llama3.2-ColdBrew-4B-Discovery-f16.Q3_K_S.gguf) | Q3_K_S | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama3.2-ColdBrew-4B-Discovery-f16-GGUF/resolve/main/llama3.2-ColdBrew-4B-Discovery-f16.Q3_K_M.gguf) | Q3_K_M | 2.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama3.2-ColdBrew-4B-Discovery-f16-GGUF/resolve/main/llama3.2-ColdBrew-4B-Discovery-f16.Q3_K_L.gguf) | Q3_K_L | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/llama3.2-ColdBrew-4B-Discovery-f16-GGUF/resolve/main/llama3.2-ColdBrew-4B-Discovery-f16.IQ4_XS.gguf) | IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/llama3.2-ColdBrew-4B-Discovery-f16-GGUF/resolve/main/llama3.2-ColdBrew-4B-Discovery-f16.Q4_K_S.gguf) | Q4_K_S | 2.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama3.2-ColdBrew-4B-Discovery-f16-GGUF/resolve/main/llama3.2-ColdBrew-4B-Discovery-f16.Q4_K_M.gguf) | Q4_K_M | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama3.2-ColdBrew-4B-Discovery-f16-GGUF/resolve/main/llama3.2-ColdBrew-4B-Discovery-f16.Q5_K_S.gguf) | Q5_K_S | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama3.2-ColdBrew-4B-Discovery-f16-GGUF/resolve/main/llama3.2-ColdBrew-4B-Discovery-f16.Q5_K_M.gguf) | Q5_K_M | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama3.2-ColdBrew-4B-Discovery-f16-GGUF/resolve/main/llama3.2-ColdBrew-4B-Discovery-f16.Q6_K.gguf) | Q6_K | 3.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llama3.2-ColdBrew-4B-Discovery-f16-GGUF/resolve/main/llama3.2-ColdBrew-4B-Discovery-f16.Q8_0.gguf) | Q8_0 | 3.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/llama3.2-ColdBrew-4B-Discovery-f16-GGUF/resolve/main/llama3.2-ColdBrew-4B-Discovery-f16.f16.gguf) | f16 | 7.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1756165430
|
quantumxnode
| 2025-08-26T00:09:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant peckish seahorse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T00:09:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant peckish seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1756166936
|
Dejiat
| 2025-08-26T00:09:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T00:09:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
opkamne/blockassist-bc-crested_clawed_wasp_1756166889
|
opkamne
| 2025-08-26T00:08:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"crested clawed wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-26T00:08:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- crested clawed wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|