| Column | Type | Range / cardinality |
|---|---|---|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-23 18:28:48 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 573 classes |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-23 18:28:01 |
| card | string | length 11 – 1.01M |
melsiddieg/qwen3-4b-arud-full-880-v1
|
melsiddieg
| 2025-09-23T06:13:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen3-4B-Instruct-2507",
"base_model:finetune:unsloth/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T06:11:14Z |
---
base_model: unsloth/Qwen3-4B-Instruct-2507
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** melsiddieg
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-4B-Instruct-2507
This Qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
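The card ships no usage snippet, so below is a hedged sketch of loading this checkpoint with the standard 🤗 Transformers text-generation pipeline; the prompt and generation settings are illustrative, not the author's recommended configuration.
```python
# Hedged example: loading this fine-tune with the Transformers text-generation
# pipeline. The prompt and generation settings are illustrative placeholders.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="melsiddieg/qwen3-4b-arud-full-880-v1",
    device_map="auto",
)
messages = [{"role": "user", "content": "Summarize what supervised fine-tuning (SFT) is in one sentence."}]
output = generator(messages, max_new_tokens=64, return_full_text=False)[0]
print(output["generated_text"])
```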
|
Alicia22/23SAT_KY10_l17
|
Alicia22
| 2025-09-23T06:13:00Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-23T05:54:25Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758607868
|
poolkiltzn
| 2025-09-23T06:12:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T06:12:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
athenahq/ACE-classifier-doc2vec-2025_09_22
|
athenahq
| 2025-09-23T06:12:06Z | 0 | 0 | null |
[
"doc2vec-classifier",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T06:11:53Z |
---
title: ACE Classifier Doc2Vec
emoji: 🤖
colorFrom: blue
colorTo: green
sdk: custom
app_port: 8080
---
# ACE Content Attribution Classifier (Doc2Vec)
This model classifies content as either "attributed" or "unattributed" using Doc2Vec embeddings and machine learning classifiers.
## Model Details
- **Training Date**: 2025_09_22
- **Architecture**: Doc2Vec + Machine Learning Classifier
- **Task**: Binary text classification
- **Classes**: attributed, unattributed
## Usage
### API Format
Send POST requests to the inference endpoint:
```json
{
"inputs": {
"content": "Your content text here",
"meta_description": "Optional meta description"
}
}
```
### Response Format
```json
[
{
"label": "attributed",
"score": 0.75
},
{
"label": "unattributed",
"score": 0.25
}
]
```
### Python Example
```python
import requests
api_url = "https://api-inference.huggingface.co/models/athenahq/ACE-classifier-doc2vec"
headers = {"Authorization": "Bearer YOUR_HF_TOKEN"}
data = {
"inputs": {
"content": "Machine learning models for content attribution analysis",
"meta_description": "A comprehensive guide to ML-based content classification"
}
}
response = requests.post(api_url, headers=headers, json=data)
result = response.json()
print(result)
```
### cURL Example
```bash
curl -X POST \
https://api-inference.huggingface.co/models/athenahq/ACE-classifier-doc2vec \
-H "Authorization: Bearer YOUR_HF_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"inputs": {
"content": "Your content text here",
"meta_description": "Optional meta description"
}
}'
```
## Model Performance
The model uses the best-performing combination from extensive hyperparameter tuning across multiple Doc2Vec configurations and classifiers.
## Files
- `handler.py`: Custom inference handler
- `best_model_summary.json` or `model_summary.json`: Model overview (optional - handler can work without it)
- `*_classifier.pkl`: Best performing classifier
- `*_doc2vec.model`: Best performing Doc2Vec model
- `*_metadata.json`: Model metadata and configuration
## Technical Details
- **Doc2Vec**: Uses both PV-DM and PV-DBOW algorithms
- **Preprocessing**: Text cleaning, tokenization, and filtering
- **Classifiers**: Random Forest, SVM, Logistic Regression, Neural Networks
- **Evaluation**: Comprehensive accuracy and confidence analysis
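For offline experimentation with the artifacts listed above, a minimal loading sketch might look like the following; the file names, tokenization, and the sklearn-style classifier API are assumptions drawn from this card, and `handler.py` remains the authoritative implementation.
```python
# Hedged sketch of local inference with the pickled Doc2Vec model and classifier.
# File names and preprocessing are assumptions; see handler.py for the real logic.
import pickle

from gensim.models.doc2vec import Doc2Vec
from gensim.utils import simple_preprocess

doc2vec = Doc2Vec.load("best_doc2vec.model")            # hypothetical file name
with open("best_classifier.pkl", "rb") as f:            # hypothetical file name
    classifier = pickle.load(f)

text = "Your content text here. Optional meta description."
vector = doc2vec.infer_vector(simple_preprocess(text))   # tokenize and embed
scores = classifier.predict_proba([vector])[0]           # assumes an sklearn-style classifier
print([{"label": label, "score": float(score)} for label, score in zip(classifier.classes_, scores)])
```
The printed list mirrors the hosted endpoint's response format shown above.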
|
amitkp621/AR-1-lora
|
amitkp621
| 2025-09-23T06:11:35Z | 0 | 0 |
diffusers
|
[
"diffusers",
"image-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-Kontext-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Kontext-dev",
"license:creativeml-openrail-m",
"region:us"
] |
image-to-image
| 2025-09-23T06:11:23Z |
---
tags:
- image-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
base_model: black-forest-labs/FLUX.1-Kontext-dev
license: creativeml-openrail-m
inference:
parameters:
width: 256
height: 256
instance_prompt: tryon
---
# AR-1-lora
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
## Trigger words
You should use `tryon` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/amitkp621/AR-1-lora/tree/main) them from the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-Kontext-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('amitkp621/AR-1-lora', weight_name='AR-1_000000250.safetensors')
image = pipeline('tryon').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
|
okezieowen/garrulous_chipmunk
|
okezieowen
| 2025-09-23T06:07:11Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:okezieowen/germane_cinder",
"base_model:finetune:okezieowen/germane_cinder",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T11:30:04Z |
---
library_name: transformers
license: apache-2.0
base_model: okezieowen/germane_cinder
tags:
- generated_from_trainer
model-index:
- name: garrulous_chipmunk
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# garrulous_chipmunk
This model is a fine-tuned version of [okezieowen/germane_cinder](https://huggingface.co/okezieowen/germane_cinder) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough mapping to `TrainingArguments` is sketched after the list):
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 1
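Purely as illustration, these hyperparameters roughly correspond to the following `TrainingArguments`; the output directory is a placeholder and the actual training script is not published here.
```python
# Hedged sketch: the listed hyperparameters expressed as TrainingArguments.
# The output_dir is a placeholder; this is not the author's training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="garrulous_chipmunk",       # placeholder
    learning_rate=1e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,         # 8 x 8 = total train batch size 64
    seed=42,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    num_train_epochs=1,
)
```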
### Training results
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
shubhamprshr/Llama-3.2-3B-Instruct_blocksworld1246_grpo_vrex_0.5_0.5_SEC1.0DRO0.0G0.0_minp0.0_1200
|
shubhamprshr
| 2025-09-23T06:06:58Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"dataset:blocksworld-dataset",
"arxiv:2402.03300",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T02:07:51Z |
---
base_model: meta-llama/Llama-3.2-3B-Instruct
datasets: blocksworld-dataset
library_name: transformers
model_name: Llama-3.2-3B-Instruct_blocksworld1246_grpo_vrex_0.5_0.5_SEC1.0DRO0.0G0.0_minp0.0_1200
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Llama-3.2-3B-Instruct_blocksworld1246_grpo_vrex_0.5_0.5_SEC1.0DRO0.0G0.0_minp0.0_1200
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) on the [blocksworld-dataset](https://huggingface.co/datasets/blocksworld-dataset) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="shubhamprshr/Llama-3.2-3B-Instruct_blocksworld1246_grpo_vrex_0.5_0.5_SEC1.0DRO0.0G0.0_minp0.0_1200", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/shubhamprshr27-tamu/auto/runs/li3aqg9k)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
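As a rough illustration of that setup, the sketch below shows how a TRL `GRPOTrainer` run is typically wired together; the reward function, dataset, and hyperparameters are placeholders, not the configuration used to produce this checkpoint.
```python
# Hedged sketch of a TRL GRPO run; the reward function and dataset below are
# illustrative placeholders, not this model's actual training setup.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

def reward_len(completions, **kwargs):
    # Toy reward: prefer shorter completions (stand-in for a real verifier).
    return [-float(len(completion)) for completion in completions]

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder dataset

trainer = GRPOTrainer(
    model="meta-llama/Llama-3.2-3B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-demo", num_generations=4),
    train_dataset=dataset,
)
trainer.train()
```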
### Framework versions
- TRL: 0.19.1
- Transformers: 4.53.1
- Pytorch: 2.7.0
- Datasets: 4.1.1
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
husjfry/blockassist-bc-climbing_pouncing_dragonfly_1758607399
|
husjfry
| 2025-09-23T06:06:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"climbing pouncing dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T06:04:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- climbing pouncing dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
george2cool36/hw2_image_automl_autogluon
|
george2cool36
| 2025-09-23T06:05:33Z | 0 | 0 |
autogluon
|
[
"autogluon",
"automl",
"image-classification",
"neural-network",
"dataset:ecopus/sign_identification",
"license:mit",
"region:us"
] |
image-classification
| 2025-09-22T06:17:16Z |
---
license: mit
tags:
- automl
- autogluon
- image-classification
- neural-network
library_name: autogluon
datasets:
- ecopus/sign_identification # replace if needed
---
# HW2 Neural AutoML — AutoGluon MultiModalPredictor
Artifacts:
- `ag_image_predictor.pkl` — predictor pickled with cloudpickle
- `ag_image_predictor_dir.zip` — zipped native AutoGluon predictor directory
Trained for HW2 (image classification) using a classmate's dataset.
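A hedged sketch of loading the zipped predictor and classifying a single image follows; the extraction path and test image are placeholders, and the exact folder layout inside the zip is an assumption.
```python
# Hedged sketch: unzip the native AutoGluon predictor directory and run one prediction.
# The extraction path and test image are placeholders.
import zipfile

import pandas as pd
from autogluon.multimodal import MultiModalPredictor

with zipfile.ZipFile("ag_image_predictor_dir.zip") as archive:
    archive.extractall("ag_image_predictor_dir")

predictor = MultiModalPredictor.load("ag_image_predictor_dir")  # adjust if the zip nests a subfolder
test = pd.DataFrame({"image": ["example_sign.jpg"]})            # hypothetical test image path
print(predictor.predict(test))
```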
|
xinyan233333/test
|
xinyan233333
| 2025-09-23T06:04:15Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T06:04:15Z |
---
license: apache-2.0
---
|
shui1010/shui1010_epoch10
|
shui1010
| 2025-09-23T06:04:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T06:04:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yushangru/uuu_fine_tune_gpt2
|
yushangru
| 2025-09-23T06:00:48Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:20:32Z |
---
license: apache-2.0
---
|
yuanlinwen/tcp2023
|
yuanlinwen
| 2025-09-23T05:55:57Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:55:56Z |
---
license: apache-2.0
---
|
f0857057/uuu_fine_tune_gpt2
|
f0857057
| 2025-09-23T05:55:38Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:20:38Z |
---
license: apache-2.0
---
|
ying718/uuu_fine_tune_gpt2
|
ying718
| 2025-09-23T05:52:19Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:24:16Z |
---
license: apache-2.0
---
|
Alicia22/23SAT_KY10_l16
|
Alicia22
| 2025-09-23T05:51:36Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-23T05:46:40Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
ChenWu98/numina_qwen_2.5_0.5b_sft_teachers_no_reasoning_source_split_1_2048_0.5
|
ChenWu98
| 2025-09-23T05:50:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T05:48:58Z |
---
base_model: Qwen/Qwen2.5-0.5B
library_name: transformers
model_name: numina_qwen_2.5_0.5b_sft_teachers_no_reasoning_source_split_1_2048_0.5
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for numina_qwen_2.5_0.5b_sft_teachers_no_reasoning_source_split_1_2048_0.5
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_0.5b_sft_teachers_no_reasoning_source_split_1_2048_0.5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/cltc6d50)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mihirr01/gemma3-270M-it
|
mihirr01
| 2025-09-23T05:49:15Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/gemma-3-270m-it",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:unsloth/gemma-3-270m-it",
"region:us"
] |
text-generation
| 2025-09-23T05:40:00Z |
---
base_model: unsloth/gemma-3-270m-it
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/gemma-3-270m-it
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
chengtaoyang/uuu_glora
|
chengtaoyang
| 2025-09-23T05:48:48Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:48:48Z |
---
license: apache-2.0
---
|
OPEA/Qwen2.5-1.5B-Instruct-int4-sym-inc
|
OPEA
| 2025-09-23T05:48:15Z | 1,132 | 0 | null |
[
"safetensors",
"qwen2",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:NeelNanda/pile-10k",
"arxiv:2309.05516",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"4-bit",
"auto-round",
"region:us"
] | null | 2024-11-29T08:19:59Z |
---
license: apache-2.0
datasets:
- NeelNanda/pile-10k
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
## Model Details
This model is an int4 model with group_size 128 and symmetric quantization of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct), generated by [intel/auto-round](https://github.com/intel/auto-round). Load the model with `revision="14dbc8"` to use the AutoGPTQ format.
⚠️ Important: This model is used for internal testing with Hugging Face and vLLM. Please do not delete or modify it without approval.
## How To Use
### INT4 Inference(CPU/HPU/CUDA)
CPU inference requires auto-round version > 0.3.1.
```python
from auto_round import AutoRoundConfig ##must import for auto-round format
from transformers import AutoModelForCausalLM,AutoTokenizer
quantized_model_dir = "OPEA/Qwen2.5-1.5B-Instruct-int4-inc"
tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir)
model = AutoModelForCausalLM.from_pretrained(
quantized_model_dir,
torch_dtype='auto',
device_map="auto",
##revision="14dbc8" ## AutoGPTQ format
)
##import habana_frameworks.torch.core as htcore ## uncomment it for HPU
##import habana_frameworks.torch.hpu as hthpu ## uncomment it for HPU
##model = model.to(torch.bfloat16).to("hpu") ## uncomment it for HPU
prompt = "There is a girl who likes adventure,"
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=200, ##change this to align with the official usage
do_sample=False ##change this to align with the official usage
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
prompt = "There is a girl who likes adventure,"
##INT4:
"""That's great! What kind of adventures do you like to go on? Do you prefer outdoor activities or indoor ones? Maybe we could come up with some ideas together!
"""
##BF16:
"""That's great! Adventure can be an exciting and fulfilling experience for many people. What kind of adventures do you like to go on? Do you enjoy hiking, camping, or exploring new places? Or maybe you prefer more extreme activities like skydiving or bungee jumping? Whatever your interests may be, there are plenty of opportunities out there for someone who loves adventure.
"""
prompt = "9.11和9.8哪个数字大"
#INT4:
"""
9.11 和 9.8 都是小数,它们的大小比较如下:
- 9.11 大于 9.8
具体来说:
- 9.11 的十位和个位都是 9,十分位是 1。
- 9.8 的十位和个位都是 9,十分位也是 8。
由于 1 > 8,在相同的小数部分相同时,较大的数字在十位上。因此,9.11 比 9.8 更大。
"""
##BF16:
"""9.11 和 9.8 都是小数,比较它们的大小需要从左到右逐位进行比较。
首先看整数部分:
- 9.11 的整数部分是 9。
- 9.8 的整数部分也是 9。
因为两者的整数部分相同,所以继续比较小数部分:
- 9.11 的小数部分是 0.11。
- 9.8 的小数部分是 0.8。
现在我们来比较这两个小数点后的数字:
- 0.11 和 0.8
显然,0.11 小于 0.8。因此,9.11 比 9.8 大。
所以,答案是:9.11 > 9.8。
"""
prompt = "Once upon a time,"
##INT4:
"""I'm sorry, but I don't understand what you're asking. Could you please provide more context or clarify your question?"""
##BF16:
"""I'm ready to listen! What's the story you'd like me to hear?"""
prompt = "请简短介绍一下阿里巴巴公司"
##INT4:
"""阿里巴巴集团是一家中国领先的电子商务和数字技术公司,成立于1999年。它最初是淘宝网的运营者,后来发展成为全球最大的电商平台之一,并且在云计算、金融科技等领域也取得了显著成就。
阿里巴巴旗下的主要业务包括:
1. **淘宝**:一个在线购物平台,提供各种商品和服务。
2. **天猫**:另一个大型电商平台,专注于销售品牌商品。
3. **阿里云**:提供云计算服务,帮助企业和个人实现数字化转型。
4. **蚂蚁金服**:提供金融服务,如支付宝支付系统、余额宝等。
5. **菜鸟网络**:负责物流配送,支持电商交易和快递服务。
6. **Lazada** 和 **Global Markets**:分别是中国和东南亚地区的电商平台。
阿里巴巴通过这些业务为消费者提供了便捷的购物体验,同时也为企业和个人提供了多种商业解决方案和技术支持。此外,阿里巴巴还积极参与社会公益事业,致力于推动数字经济的发展和社会的可持续性。。"""
##BF16:
"""阿里巴巴集团是一家中国领先的电子商务和数字技术公司,成立于1999年。它以B2C(企业对消费者)和B2B(企业对企业)为主要业务模式,提供包括淘宝、天猫、速卖通等在内的在线购物平台,并通过支付宝为用户提供支付服务。阿里巴巴还涉足云计算、人工智能等领域,致力于推动数字经济的发展。
"""
```
### Evaluate the model
Install the evaluation harness first: `pip3 install lm-eval==0.4.5`
```bash
auto-round --model "OPEA/Qwen2.5-1.5B-Instruct-int4-inc" --eval --eval_bs 16 --tasks leaderboard_ifeval,leaderboard_mmlu_pro,gsm8k,lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,cmmlu,ceval-valid
```
| Metric | BF16 | INT4 |
| :----------------------------------------- | :----: | :----: |
| Avg | 0.5203 | 0.5133 |
| leaderboard_mmlu_pro 5 shots | 0.2930 | 0.2771 |
| leaderboard_ifeval inst_level_strict_acc | 0.4173 | 0.3765 |
| leaderboard_ifeval prompt_level_strict_acc | 0.2847 | 0.2440 |
| mmlu | 0.6016 | 0.5903 |
| cmmlu | 0.6482 | 0.6092 |
| ceval-valid | 0.6568 | 0.6181 |
| gsm8k 5 shots | 0.3086 | 0.4306 |
| lambada_openai | 0.6033 | 0.5882 |
| hellaswag | 0.5086 | 0.4979 |
| winogrande | 0.6259 | 0.6361 |
| piqa | 0.7650 | 0.7557 |
| truthfulqa_mc1 | 0.3133 | 0.3195 |
| openbookqa | 0.3180 | 0.3120 |
| boolq | 0.7804 | 0.7526 |
| arc_easy | 0.7647 | 0.7622 |
| arc_challenge | 0.4352 | 0.4420 |
### Generate the model
Here is a sample command to generate the model. We observed a larger accuracy drop on Chinese tasks and recommend calibrating with a high-quality Chinese dataset or using a smaller group_size such as 32.
```bash
auto-round \
--model Qwen/Qwen2.5-1.5B-Instruct \
--device 0 \
--group_size 128 \
--nsamples 512 \
--bits 4 \
--iter 1000 \
--disable_eval \
--model_dtype "fp16" \
--format 'auto_gptq,auto_round' \
--output_dir "./tmp_autoround"
```
## Ethical Considerations and Limitations
The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
Therefore, before deploying any applications of the model, developers should perform safety testing.
## Caveats and Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Here are a couple of useful links to learn more about Intel's AI software:
- Intel Neural Compressor [link](https://github.com/intel/neural-compressor)
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Cite
@article{cheng2023optimize, title={Optimize weight rounding via signed gradient descent for the quantization of llms}, author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi}, journal={arXiv preprint arXiv:2309.05516}, year={2023} }
[arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)
|
harry56183/uuu_fine_tune_gpt2
|
harry56183
| 2025-09-23T05:48:06Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:16:05Z |
---
license: apache-2.0
---
|
kb24ysh/uuu_fine_tune_taipower
|
kb24ysh
| 2025-09-23T05:47:52Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:47:15Z |
---
license: apache-2.0
---
|
CHIHAO-LIN/llama2_uuu_news_qlora
|
CHIHAO-LIN
| 2025-09-23T05:47:39Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:47:39Z |
---
license: apache-2.0
---
|
CHIHAO-LIN/tcp2023
|
CHIHAO-LIN
| 2025-09-23T05:47:21Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:47:21Z |
---
license: apache-2.0
---
|
CynthChen/uuu_fine_tune_taipower
|
CynthChen
| 2025-09-23T05:39:06Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:33:36Z |
---
license: apache-2.0
---
|
EllenLin/uuu_fine_tune_taipower
|
EllenLin
| 2025-09-23T05:38:35Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:38:00Z |
---
license: apache-2.0
---
|
finfinder/uuu_fine_tune_taipower
|
finfinder
| 2025-09-23T05:38:25Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:16:08Z |
---
license: apache-2.0
---
|
katachang/llama2_uuu_news_qlora
|
katachang
| 2025-09-23T05:37:43Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:37:43Z |
---
license: apache-2.0
---
|
21et/uuu_fine_tune_taipower
|
21et
| 2025-09-23T05:35:02Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:20:12Z |
---
license: apache-2.0
---
|
uujjdd/llama2_uuu_news_qlora
|
uujjdd
| 2025-09-23T05:34:24Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:34:24Z |
---
license: apache-2.0
---
|
keystats/whisper-swahili-finetuned
|
keystats
| 2025-09-23T05:33:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-23T04:39:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Alicia22/23SAT_KY10_l12
|
Alicia22
| 2025-09-23T05:32:17Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-23T05:19:56Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758605397
|
poolkiltzn
| 2025-09-23T05:31:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T05:30:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kagyvro48/pi0fast_finetuned_so101_dataset1_arracher_la_mauvaise_herbe_policy
|
kagyvro48
| 2025-09-23T05:30:02Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"pi0fast",
"robotics",
"dataset:kagyvro48/so101_dataset1_arracher_la_mauvaise_herbe",
"arxiv:2501.09747",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-23T05:27:59Z |
---
datasets: kagyvro48/so101_dataset1_arracher_la_mauvaise_herbe
library_name: lerobot
license: apache-2.0
model_name: pi0fast
pipeline_tag: robotics
tags:
- pi0fast
- robotics
- lerobot
---
# Model Card for pi0fast
<!-- Provide a quick summary of what the model is/does. -->
[Pi0-Fast](https://huggingface.co/papers/2501.09747) is a variant of Pi0 that uses a new tokenization method called FAST, which enables training of an autoregressive vision-language-action policy for high-frequency robotic tasks with improved performance and reduced training time.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=pi0fast \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
poonai/imagenet-caption
|
poonai
| 2025-09-23T05:26:22Z | 0 | 0 | null |
[
"en",
"dataset:visual-layer/imagenet-1k-vl-enriched",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T04:04:10Z |
---
license: apache-2.0
datasets:
- visual-layer/imagenet-1k-vl-enriched
language:
- en
metrics:
- bleu
base_model:
- timm/vit_mediumd_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k
- openai-community/gpt2
results:
- tasks:
type: text-generation
metrics:
- name: bleu
type: bleu
value: 0.040
verified: true
---
# About
This project provides an image captioning model trained on the [visual-layer/imagenet-1k-vl-enriched](https://huggingface.co/datasets/visual-layer/imagenet-1k-vl-enriched) dataset. The model architecture combines a ViT backbone [timm/vit_mediumd_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k](https://huggingface.co/timm/vit_mediumd_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k) for image feature extraction and a GPT-2 language model [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) for text generation.
A custom projection layer was implemented to map the image features from the vision backbone to the input space of the language model, enabling seamless integration between the two modalities.
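As a rough illustration of that design, the sketch below bridges the timm ViT backbone and GPT-2 with a learned linear projection; the pooling, dimensions, and prefixing strategy are assumptions rather than this repository's exact code (see `inference.py` for the real implementation).
```python
# Hedged sketch of the vision-to-language bridge described above.
# Pooling, dimensions, and prefixing are assumptions; inference.py is authoritative.
import timm
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel

vit = timm.create_model(
    "vit_mediumd_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k",
    pretrained=True,
    num_classes=0,   # return pooled image features instead of classification logits
)
gpt2 = GPT2LMHeadModel.from_pretrained("gpt2")

# Map pooled ViT features into GPT-2's embedding space so they can act as a caption prefix.
projection = nn.Linear(vit.num_features, gpt2.config.n_embd)

images = torch.randn(1, 3, 384, 384)            # dummy image batch
prefix = projection(vit(images)).unsqueeze(1)   # (batch, 1, n_embd)
outputs = gpt2(inputs_embeds=prefix)            # caption tokens would be appended here
print(outputs.logits.shape)
```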
## How to use
To run this app, follow these steps:
## Install dependencies
This project uses uv for fast dependency management.
To install all dependencies, run:
`uv sync`
## Run inference
To test the model and generate captions, run:
`uv run inference.py`
This will process your input images and output captions using the trained model.
## Example
#### Input

#### Output
`a boy holding a fish in the woods`
|
chia-lung/uuu_fine_tune_taipower
|
chia-lung
| 2025-09-23T05:24:57Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:24:57Z |
---
license: apache-2.0
---
|
kennydaglish/Qwen3-0.6B-Gensyn-Swarm-pensive_elusive_stingray
|
kennydaglish
| 2025-09-23T05:22:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am pensive_elusive_stingray",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T14:22:16Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am pensive_elusive_stingray
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aamijar/llm-streamline-Llama-2-4.7B-lora-r8-sst2-epochs2
|
aamijar
| 2025-09-23T05:22:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T05:22:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Jakemu/gemma-3-finetune
|
Jakemu
| 2025-09-23T05:22:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-09-23T05:11:03Z |
---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Jakemu
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
yuanlinwen/test
|
yuanlinwen
| 2025-09-23T05:21:32Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:21:32Z |
---
license: apache-2.0
---
|
f0857057/llama2_uuu_news_qlora
|
f0857057
| 2025-09-23T05:20:49Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:20:49Z |
---
license: apache-2.0
---
|
finfinder/lama2_uuu_news_qlora
|
finfinder
| 2025-09-23T05:20:26Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:20:26Z |
---
license: apache-2.0
---
|
chengtaoyang/tcp2023
|
chengtaoyang
| 2025-09-23T05:20:19Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:20:19Z |
---
license: apache-2.0
---
|
Lien-an/tcp2023
|
Lien-an
| 2025-09-23T05:18:20Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:18:20Z |
---
license: apache-2.0
---
|
tomal66/qwen2.5-1.5b-emotion-sft
|
tomal66
| 2025-09-23T05:17:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T05:17:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vijithsai/my-vit-deep-fake-detection-finetuned
|
vijithsai
| 2025-09-23T05:16:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-09-23T05:07:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Lansy/llama2_uuu_news_qlora
|
Lansy
| 2025-09-23T05:16:13Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:16:13Z |
---
license: apache-2.0
---
|
katachang/tcp2023
|
katachang
| 2025-09-23T05:16:06Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:16:06Z |
---
license: apache-2.0
---
|
f0857057/tcp2023
|
f0857057
| 2025-09-23T05:16:04Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:16:04Z |
---
license: apache-2.0
---
|
finfinder/tcp2023
|
finfinder
| 2025-09-23T05:15:56Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:15:56Z |
---
license: apache-2.0
---
|
harry56183/tcp2023
|
harry56183
| 2025-09-23T05:15:35Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:15:35Z |
---
license: apache-2.0
---
|
Lansy/tcp2023
|
Lansy
| 2025-09-23T05:15:28Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T05:15:28Z |
---
license: apache-2.0
---
|
stewy33/Llama-3.2-1B-Instruct-original_augmented_original_pkc_kansas_abortion-2fd6c80a
|
stewy33
| 2025-09-23T05:15:07Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.2-1B-Instruct-Reference__TOG__FT",
"base_model:adapter:togethercomputer/Meta-Llama-3.2-1B-Instruct-Reference__TOG__FT",
"region:us"
] | null | 2025-09-23T05:14:39Z |
---
base_model: togethercomputer/Meta-Llama-3.2-1B-Instruct-Reference__TOG__FT
library_name: peft
---
### Framework versions
- PEFT 0.15.1
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
GANGodfather/Affine-5HMkezj9CE9X1LpCybtCfvRUNYP1e3x8PBnG4WF1yBr64n8N
|
GANGodfather
| 2025-09-23T05:07:56Z | 0 | 0 | null |
[
"safetensors",
"gpt_oss",
"8-bit",
"mxfp4",
"region:us"
] | null | 2025-09-22T13:23:01Z |
GANGodfather/Affine-5HMkezj9CE9X1LpCybtCfvRUNYP1e3x8PBnG4WF1yBr64n8N
|
forstseh/blockassist
|
forstseh
| 2025-09-23T05:06:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"arctic soaring heron",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T12:50:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- arctic soaring heron
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
thewh1teagle/whisper-heb-ipa-large-v3-turbo-ct2
|
thewh1teagle
| 2025-09-23T05:05:03Z | 0 | 1 | null |
[
"he",
"region:us"
] | null | 2025-09-23T04:56:48Z |
---
language:
- he
---
CTranslate2 version of the fine-tuned Whisper small model that transcribes Hebrew into IPA.
For training and inference code, see https://github.com/thewh1teagle/whisper-heb-ipa
For the original model weights, see https://huggingface.co/thewh1teagle/whisper-heb-ipa
This project is part of the Phonikud project. See https://phonikud.github.io
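As a rough illustration (not part of the original card), the CTranslate2 weights can be loaded with the `faster-whisper` package; the audio path below is a placeholder and loading directly by repository id is an assumption:
```python
# Hedged sketch: assumes the faster-whisper package is installed and that the
# CTranslate2 weights in this repo can be loaded directly by repository id.
from faster_whisper import WhisperModel

model = WhisperModel("thewh1teagle/whisper-heb-ipa-large-v3-turbo-ct2", device="cpu")
segments, info = model.transcribe("hebrew_audio.wav", language="he")  # placeholder audio path
print(" ".join(segment.text for segment in segments))  # IPA transcription
```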
|
Kei-Sanada/task-15-Qwen-Qwen2.5-3B-Instruct-trial2
|
Kei-Sanada
| 2025-09-23T05:00:38Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"region:us"
] | null | 2025-09-23T05:00:34Z |
---
base_model: Qwen/Qwen2.5-3B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF
|
mradermacher
| 2025-09-23T05:00:09Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"moe",
"mixture-of-experts",
"compression",
"top-k-reduction",
"qwen3",
"30b",
"en",
"base_model:kyne0127/Qwen3-30B-A3B-TopK4-Compressed",
"base_model:quantized:kyne0127/Qwen3-30B-A3B-TopK4-Compressed",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-22T20:25:53Z |
---
base_model: kyne0127/Qwen3-30B-A3B-TopK4-Compressed
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- moe
- mixture-of-experts
- compression
- top-k-reduction
- qwen3
- 30b
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/kyne0127/Qwen3-30B-A3B-TopK4-Compressed
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
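As a rough illustration (not part of the original card), one of the quant files listed in the table below can be downloaded and loaded with `llama-cpp-python`; the chosen file name, context size, and prompt are assumptions:
```python
# Hedged sketch: assumes the huggingface_hub and llama-cpp-python packages are installed
# and that the chosen quant file exists in this repository (see the table below).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF",
    filename="Qwen3-30B-A3B-TopK4-Compressed.i1-Q4_K_M.gguf",  # any quant from the table works
)

llm = Llama(model_path=gguf_path, n_ctx=4096)  # context size is an arbitrary example value
out = llm("Explain mixture-of-experts routing in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```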
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.imatrix.gguf) | imatrix | 0.2 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-IQ1_S.gguf) | i1-IQ1_S | 6.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-IQ1_M.gguf) | i1-IQ1_M | 7.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-IQ2_XS.gguf) | i1-IQ2_XS | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-IQ2_S.gguf) | i1-IQ2_S | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-IQ2_M.gguf) | i1-IQ2_M | 10.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-Q2_K_S.gguf) | i1-Q2_K_S | 10.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-Q2_K.gguf) | i1-Q2_K | 11.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 11.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-IQ3_XS.gguf) | i1-IQ3_XS | 12.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-Q3_K_S.gguf) | i1-Q3_K_S | 13.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-IQ3_S.gguf) | i1-IQ3_S | 13.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-IQ3_M.gguf) | i1-IQ3_M | 13.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-Q3_K_M.gguf) | i1-Q3_K_M | 14.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-Q3_K_L.gguf) | i1-Q3_K_L | 16.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-IQ4_XS.gguf) | i1-IQ4_XS | 16.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-Q4_0.gguf) | i1-Q4_0 | 17.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-Q4_K_S.gguf) | i1-Q4_K_S | 17.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-Q4_K_M.gguf) | i1-Q4_K_M | 18.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-Q4_1.gguf) | i1-Q4_1 | 19.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-Q5_K_S.gguf) | i1-Q5_K_S | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-Q5_K_M.gguf) | i1-Q5_K_M | 21.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-TopK4-Compressed-i1-GGUF/resolve/main/Qwen3-30B-A3B-TopK4-Compressed.i1-Q6_K.gguf) | i1-Q6_K | 25.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
caphe/paa15
|
caphe
| 2025-09-23T05:00:00Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-23T04:57:38Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
alberto-lorente/Meta-Llama-3_1-8B-Instruct-bnb-4bit-GENERAL-TASK-all_inmigants
|
alberto-lorente
| 2025-09-23T04:56:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"sft",
"trl",
"endpoints_compatible",
"region:us"
] | null | 2025-09-16T08:06:34Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
library_name: transformers
model_name: Meta-Llama-3_1-8B-Instruct-bnb-4bit-GENERAL-TASK-all_inmigants
tags:
- generated_from_trainer
- unsloth
- sft
- trl
licence: license
---
# Model Card for Meta-Llama-3_1-8B-Instruct-bnb-4bit-GENERAL-TASK-all_inmigants
This model is a fine-tuned version of [unsloth/meta-llama-3.1-8b-instruct-bnb-4bit](https://huggingface.co/unsloth/meta-llama-3.1-8b-instruct-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="alberto-lorente/Meta-Llama-3_1-8B-Instruct-bnb-4bit-GENERAL-TASK-all_inmigants", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.8.0+cu126
- Datasets: 3.6.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
nambn0321/ASR_french_3
|
nambn0321
| 2025-09-23T04:54:47Z | 14 | 0 | null |
[
"safetensors",
"whisper",
"automatic-speech-recognition",
"fr",
"dataset:facebook/multilingual_librispeech",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:mit",
"region:us"
] |
automatic-speech-recognition
| 2025-09-21T04:18:03Z |
---
license: mit
datasets:
- facebook/multilingual_librispeech
language:
- fr
base_model:
- openai/whisper-small
pipeline_tag: automatic-speech-recognition
---
# Fine-Tuned Whisper-small Model for French ASR
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small), trained on the French version of the [CV17 dataset](https://huggingface.co/datasets/mozilla-foundation/common_voice_17_0).
# Live demo
Click [here](https://huggingface.co/spaces/nambn0321/ASR_french) (press restart to run the space)
- Then you have two options: either upload a French audio file or record yourself speaking French by clicking on the mic and then the orange dot.
- Hit submit and the model will output the transcription.
# Performance and Evaluation
# Usage
```python
import torch
from datasets import load_dataset
from transformers import pipeline
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Load pipeline
pipe = pipeline("automatic-speech-recognition", model="nambn0321/ASR_french_3", device=device)
pipe.model.config.forced_decoder_ids = pipe.tokenizer.get_decoder_prompt_ids(language="fr", task="transcribe")
# Load data (this is an example but when you load your own data, make sure to use torchaudio or librosa to load the audio into the dataset)
ds_mcv_test = load_dataset("mozilla-foundation/common_voice_11_0", "fr", split="test", streaming=True)
test_segment = next(iter(ds_mcv_test))
waveform = test_segment["audio"]
# Run
generated_sentences = pipe(waveform, max_new_tokens=225)["text"] # greedy
# generated_sentences = pipe(waveform, max_new_tokens=225, generate_kwargs={"num_beams": 5})["text"] # beam search
```
|
maidacundo/annie-lite-v0.3.6-sft-qwen3-8b
|
maidacundo
| 2025-09-23T04:54:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T04:47:25Z |
---
base_model: unsloth/Qwen3-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** maidacundo
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
zjhhhh/qwen2.5_3B_Instruct_fixed_beta_1_eta_2.5e3_step_312_final
|
zjhhhh
| 2025-09-23T04:46:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T04:45:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ChenWu98/numina_qwen_2.5_3b_sft_numina_10k_cluster2_split_1
|
ChenWu98
| 2025-09-23T04:44:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-3B",
"base_model:finetune:Qwen/Qwen2.5-3B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T04:42:43Z |
---
base_model: Qwen/Qwen2.5-3B
library_name: transformers
model_name: numina_qwen_2.5_3b_sft_numina_10k_cluster2_split_1
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for numina_qwen_2.5_3b_sft_numina_10k_cluster2_split_1
This model is a fine-tuned version of [Qwen/Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_3b_sft_numina_10k_cluster2_split_1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/x2651zy3)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
rbcurzon/opus-ph-ph
|
rbcurzon
| 2025-09-23T04:42:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T03:11:10Z |
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: opus-ph-ph
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-ph-ph
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9171
- Bleu Global: 28.0878
- Gen Len: 7.4973
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 5e-06
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
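A minimal sketch (not from the original card) of how the hyperparameters above might be expressed as `Seq2SeqTrainingArguments`; the output directory is an assumed placeholder:
```python
# Hedged sketch: mirrors the hyperparameters listed above; output_dir is a placeholder.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="opus-ph-ph",            # assumed name, not stated in the card
    learning_rate=5e-6,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch_fused",
    lr_scheduler_type="linear",
    num_train_epochs=15,
    fp16=True,                          # "Native AMP" mixed precision
)
```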
### Training results
| Training Loss | Epoch | Step | Bleu Global | Gen Len | Validation Loss |
|:-------------:|:-----:|:----:|:-----------:|:-------:|:---------------:|
| 0.1176 | 1.0 | 634 | 27.2355 | 7.6117 | 2.3737 |
| 0.1024 | 2.0 | 1268 | 28.4868 | 7.5173 | 2.4315 |
| 0.0788 | 3.0 | 1902 | 28.2414 | 7.5385 | 2.5622 |
| 0.0438 | 4.0 | 2536 | 27.6541 | 7.4658 | 2.6708 |
| 0.0344 | 5.0 | 3170 | 28.4412 | 7.5115 | 2.6867 |
| 0.0294 | 6.0 | 3804 | 28.9421 | 7.5008 | 2.7144 |
| 0.0253 | 7.0 | 4438 | 28.5901 | 7.5542 | 2.8013 |
| 0.0176 | 8.0 | 5072 | 28.4891 | 7.5348 | 2.8497 |
| 0.0155 | 9.0 | 5706 | 28.5233 | 7.5419 | 2.8761 |
| 0.014 | 10.0 | 6340 | 28.3278 | 7.5328 | 2.8908 |
| 0.0167        | 11.0  | 6974 | 28.2921     | 7.502   | 2.8892          |
| 0.0161        | 12.0  | 7608 | 28.0878     | 7.4973  | 2.9171          |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
Kei-Sanada/task-15-Qwen-Qwen2.5-3B-Instruct-trial1
|
Kei-Sanada
| 2025-09-23T04:42:01Z | 104 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"region:us"
] | null | 2025-09-18T13:18:48Z |
---
base_model: Qwen/Qwen2.5-3B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
ChenWu98/numina_qwen_2.5_3b_sft_numina_10k_cluster2_split_0
|
ChenWu98
| 2025-09-23T04:41:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-3B",
"base_model:finetune:Qwen/Qwen2.5-3B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T04:39:38Z |
---
base_model: Qwen/Qwen2.5-3B
library_name: transformers
model_name: numina_qwen_2.5_3b_sft_numina_10k_cluster2_split_0
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for numina_qwen_2.5_3b_sft_numina_10k_cluster2_split_0
This model is a fine-tuned version of [Qwen/Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_3b_sft_numina_10k_cluster2_split_0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/alfdcn4i)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
bankimds/blockassist
|
bankimds
| 2025-09-23T04:41:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"padded scented otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-11T12:05:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- padded scented otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758602311
|
poolkiltzn
| 2025-09-23T04:39:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T04:39:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/zeta-30b-a3b-i1-GGUF
|
mradermacher
| 2025-09-23T04:38:23Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3_moe",
"en",
"dataset:zed-industries/zeta",
"base_model:Woutermans/zeta-30b-a3b",
"base_model:quantized:Woutermans/zeta-30b-a3b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-22T20:25:15Z |
---
base_model: Woutermans/zeta-30b-a3b
datasets:
- zed-industries/zeta
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3_moe
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/Woutermans/zeta-30b-a3b
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#zeta-30b-a3b-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/zeta-30b-a3b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
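For single-file quants, one option is downloading directly with `huggingface_hub`; a minimal sketch, where the filename is simply one entry from the table below:

```python
# Minimal sketch: fetch one quant file from this repo with huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/zeta-30b-a3b-i1-GGUF",
    filename="zeta-30b-a3b.i1-Q4_K_S.gguf",  # pick any quant listed below
)
print(path)  # local cache path of the downloaded GGUF
```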
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.imatrix.gguf) | imatrix | 0.2 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-IQ1_S.gguf) | i1-IQ1_S | 6.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-IQ1_M.gguf) | i1-IQ1_M | 7.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-IQ2_S.gguf) | i1-IQ2_S | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-IQ2_M.gguf) | i1-IQ2_M | 10.3 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 10.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-Q2_K.gguf) | i1-Q2_K | 11.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 11.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 12.7 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 13.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-IQ3_S.gguf) | i1-IQ3_S | 13.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-IQ3_M.gguf) | i1-IQ3_M | 13.6 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 14.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 16.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 16.5 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-Q4_0.gguf) | i1-Q4_0 | 17.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 17.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 18.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-Q4_1.gguf) | i1-Q4_1 | 19.3 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 21.8 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-Q6_K.gguf) | i1-Q6_K | 25.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
ZZZ1223/vlm_rl
|
ZZZ1223
| 2025-09-23T04:38:10Z | 0 | 0 | null |
[
"safetensors",
"qwen2_5_vl",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T04:33:40Z |
---
license: apache-2.0
---
|
nikhil958/my-tesing-model
|
nikhil958
| 2025-09-23T04:37:14Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T04:37:14Z |
---
license: apache-2.0
---
|
ultratopaz/1910464
|
ultratopaz
| 2025-09-23T04:33:26Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:33:26Z |
[View on Civ Archive](https://civarchive.com/models/897219?modelVersionId=2013547)
|
amethyst9/1116666
|
amethyst9
| 2025-09-23T04:33:09Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:33:09Z |
[View on Civ Archive](https://civarchive.com/models/86466?modelVersionId=1211309)
|
seraphimzzzz/1199640
|
seraphimzzzz
| 2025-09-23T04:32:52Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:32:51Z |
[View on Civ Archive](https://civarchive.com/models/897219?modelVersionId=1295401)
|
crystalline7/1206481
|
crystalline7
| 2025-09-23T04:32:31Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:32:33Z |
[View on Civ Archive](https://civarchive.com/models/1157765?modelVersionId=1302210)
|
ultratopaz/987605
|
ultratopaz
| 2025-09-23T04:32:25Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:32:26Z |
[View on Civ Archive](https://civarchive.com/models/44600?modelVersionId=1082621)
|
crystalline7/55795
|
crystalline7
| 2025-09-23T04:32:18Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:32:20Z |
[View on Civ Archive](https://civarchive.com/models/76646?modelVersionId=81420)
|
ultratopaz/1893890
|
ultratopaz
| 2025-09-23T04:31:46Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:31:46Z |
[View on Civ Archive](https://civarchive.com/models/897219?modelVersionId=1996727)
|
a3ilab-llm-uncertainty/new_2560_3_epoch_xlam_FC
|
a3ilab-llm-uncertainty
| 2025-09-23T04:31:38Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"base_model:adapter:Salesforce/Llama-xLAM-2-8b-fc-r",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:Salesforce/Llama-xLAM-2-8b-fc-r",
"region:us"
] |
text-generation
| 2025-09-23T04:13:38Z |
---
base_model: Salesforce/Llama-xLAM-2-8b-fc-r
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:Salesforce/Llama-xLAM-2-8b-fc-r
- lora
- sft
- transformers
- trl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
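No official snippet is provided. A minimal sketch, assuming the LoRA adapter weights live in this repo and load cleanly onto the base model named in the card:

```python
# Hedged sketch: load the LoRA adapter on top of its base model with PEFT.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Salesforce/Llama-xLAM-2-8b-fc-r"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "a3ilab-llm-uncertainty/new_2560_3_epoch_xlam_FC")
model.eval()  # ready for generation
```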
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
sumukha2002/carnatic-raga-classifier-lgbm
|
sumukha2002
| 2025-09-23T04:31:31Z | 0 | 0 | null |
[
"joblib",
"audio-classification",
"music",
"carnatic-music",
"raga-identification",
"lightgbm",
"license:mit",
"region:us"
] |
audio-classification
| 2025-09-23T04:31:26Z |
---
license: mit
tags: [audio-classification, music, carnatic-music, raga-identification, lightgbm]
---
# Carnatic Raga Identification Model
This is a LightGBM model trained to classify 15 Carnatic Ragas from statistical features derived from pitch contours.
- **Model Type:** LightGBM
- **Accuracy:** Achieved an average cross-validation accuracy of **84.44%**.
- **Ragas:** Behāg, Bhairavi, Bēgaḍa, Kamās, Kāmavardani, Madhyamāvati, Mukhāri, Mōhanaṁ, Sindhubhairavi, Suraṭi, Sāvēri, Tōḍi, Varāḷi, Ānandabhairavi, Ṣanmukhapriya
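No loading snippet is provided. Since the repo is tagged `joblib`, a hedged sketch might look like the following; the filename `model.joblib` is an assumption, not confirmed by the card:

```python
# Hedged sketch: download and load the LightGBM classifier; the filename is assumed.
import joblib
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="sumukha2002/carnatic-raga-classifier-lgbm",
    filename="model.joblib",  # assumption: the actual filename may differ
)
clf = joblib.load(path)
# clf.predict(...) then expects the same pitch-contour statistical features used in training
```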
|
ultratopaz/1225833
|
ultratopaz
| 2025-09-23T04:31:07Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:31:07Z |
[View on Civ Archive](https://civarchive.com/models/1125770?modelVersionId=1321909)
|
ultratopaz/1171389
|
ultratopaz
| 2025-09-23T04:30:57Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:30:56Z |
[View on Civ Archive](https://civarchive.com/models/897219?modelVersionId=1266410)
|
Pacovit/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-vigilant_prehistoric_clam
|
Pacovit
| 2025-09-23T04:30:51Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am vigilant_prehistoric_clam",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T22:35:54Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am vigilant_prehistoric_clam
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
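The card gives no snippet; a minimal sketch using the standard chat-style `text-generation` pipeline, with an illustrative prompt:

```python
# Hedged sketch: generic chat inference with the transformers pipeline.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Pacovit/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-vigilant_prehistoric_clam",
)
out = generator([{"role": "user", "content": "Hello!"}], max_new_tokens=32, return_full_text=False)
print(out[0]["generated_text"])
```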
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
crystalline7/1199519
|
crystalline7
| 2025-09-23T04:30:47Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:30:49Z |
[View on Civ Archive](https://civarchive.com/models/76646?modelVersionId=1295270)
|
jhuapl-bio/microbert
|
jhuapl-bio
| 2025-09-23T04:30:44Z | 0 | 0 | null |
[
"joblib",
"safetensors",
"metagenomics",
"taxonomic-classification",
"antimicrobial-resistance",
"pathogen-detection",
"text-classification",
"en",
"base_model:InstaDeepAI/nucleotide-transformer-v2-50m-multi-species",
"base_model:finetune:InstaDeepAI/nucleotide-transformer-v2-50m-multi-species",
"license:mit",
"region:us"
] |
text-classification
| 2025-09-11T01:30:50Z |
---
license: mit
language:
- en
base_model:
- LongSafari/hyenadna-large-1m-seqlen-hf
- zhihan1996/DNABERT-2-117M
- InstaDeepAI/nucleotide-transformer-v2-50m-multi-species
pipeline_tag: text-classification
tags:
- metagenomics
- taxonomic-classification
- antimicrobial-resistance
- pathogen-detection
---
# Genomic Language Models for Metagenomic Sequence Analysis
We provide genomic language models fine-tuned for the following tasks:
- **Taxonomic hierarchical classification**
- **Anti-microbial resistance gene identification**
- **Pathogenicity detection**
See [code](https://github.com/jhuapl-bio/microbert) for details on fine-tuning, evaluation, and implementation.
These are the official models implemented in [Evaluating the Effectiveness of Parameter-Efficient Fine-Tuning in Genomic Classification Tasks](https://www.biorxiv.org/content/10.1101/2025.08.21.671544v1).
---
## Pretrained Foundation Models
Our models are built upon several pretrained genomic foundation models:
### Nucleotide Transformer (NT)
- [InstaDeepAI/nucleotide-transformer-v2-50m-multi-species](https://huggingface.co/InstaDeepAI/nucleotide-transformer-v2-50m-multi-species)
- [InstaDeepAI/nucleotide-transformer-v2-100m-multi-species](https://huggingface.co/InstaDeepAI/nucleotide-transformer-v2-100m-multi-species)
- [InstaDeepAI/nucleotide-transformer-v2-250m-multi-species](https://huggingface.co/InstaDeepAI/nucleotide-transformer-v2-250m-multi-species)
### DNABERT
- [zhihan1996/DNABERT-2-117M](https://huggingface.co/zhihan1996/DNABERT-2-117M)
- [zhihan1996/DNABERT-S](https://huggingface.co/zhihan1996/DNABERT-S)
### HyenaDNA
- [LongSafari/hyenadna-large-1m-seqlen-hf](https://huggingface.co/LongSafari/hyenadna-large-1m-seqlen-hf)
- [LongSafari/hyenadna-medium-450k-seqlen-hf](https://huggingface.co/LongSafari/hyenadna-medium-450k-seqlen-hf)
- [LongSafari/hyenadna-medium-160k-seqlen-hf](https://huggingface.co/LongSafari/hyenadna-medium-160k-seqlen-hf)
- [LongSafari/hyenadna-small-32k-seqlen-hf](https://huggingface.co/LongSafari/hyenadna-small-32k-seqlen-hf)
We sincerely thank the teams behind NT, DNABERT, and HyenaDNA for making their tokenizers and pre-trained models available for use :)
---
## Available Fine-Tuned Models
We provide the following fine-tuned models for use:
- `taxonomy/DNABERT-2-117M-taxonomy`
- `taxonomy/hyenadna-large-1m-seqlen-hf-taxonomy`
- `taxonomy/nucleotide-transformer-v2-50m-multi-species-taxonomy`
- `amr/binary/hyenadna-small-32k-seqlen-hf`
- `amr/binary/nucleotide-transformer-v2-100m-multi-species`
- `amr/multiclass/DNABERT-S`
- `amr/multiclass/hyenadna-medium-450k-seqlen-hf`
- `amr/multiclass/nucleotide-transformer-v2-250m-multi-species`
- `pathogenicity/hyenadna-small-32k-seqlen-hf-DeePaC-fungal`
- `pathogenicity/hyenadna-small-32k-seqlen-hf-DeePaC-viral`
- `pathogenicity/hyenadna-small-32k-seqlen-hf-DeepSim-bacterial`
- `pathogenicity/hyenadna-small-32k-seqlen-hf-DeepSim-viral`
- `pathogenicity/nucleotide-transformer-v2-50m-multi-species-DeePaC-fungal`
- `pathogenicity/nucleotide-transformer-v2-50m-multi-species-DeePaC-viral`
- `pathogenicity/nucleotide-transformer-v2-50m-multi-species-DeepSim-bacterial`
- `pathogenicity/nucleotide-transformer-v2-50m-multi-species-DeepSim-viral`
To use these models, download the corresponding model directories from this repository and follow the installation instructions in our [code](https://github.com/jhuapl-bio/microbert) repository.
Two modes of operation are available: setup from source code, or setup from our pre-built [docker image](https://hub.docker.com/r/jhuaplbio/microbert-classify).
Assuming you have set up from source and downloaded a model directory, here is sample code to run inference:
```python
import json
from pathlib import Path
import torch
import torch.nn.functional as F
from transformers import (
AutoTokenizer,
)
from safetensors.torch import load_file
from analysis.experiment.utils.data_processor import DataProcessor
from analysis.experiment.models.hierarchical_model import (
HierarchicalClassificationModel,
)
# Replace with base directory containing all data processor, base model tokenizers, and trained model weights files
model_dir = Path('data/LongSafari__hyenadna-large-1m-seqlen-hf')
data_processor_dir = model_dir / "data_processor" # replace with directory containing your data processor
metadata_path = data_processor_dir / "metadata.json"
base_model_dir = model_dir / "base_model" # replace with directory containing your base model files
trained_model_dir = model_dir / "model" # replace with directory containing your trained model files
trained_model_path = trained_model_dir / "model.safetensors"
# Load metadata
with open(metadata_path, "r") as f:
metadata = json.load(f)
sequence_column = metadata["sequence_column"]
labels = metadata["labels"]
data_processor_filename = 'data_processor.pkl'
# load data processor
data_processor = DataProcessor(
sequence_column=sequence_column,
labels=labels,
save_file=data_processor_filename,
)
data_processor.load_processor(data_processor_dir)
# Get metadata-driven values
num_labels = data_processor.num_labels
class_weights = data_processor.class_weights
# Load tokenizer from Hugging Face Hub or local path
tokenizer = AutoTokenizer.from_pretrained(
pretrained_model_name_or_path=base_model_dir.as_posix(),
trust_remote_code=True,
local_files_only=True,
)
# Load fine-tuned model weights
model = HierarchicalClassificationModel(base_model_dir.as_posix(), num_labels, class_weights)
state_dict = load_file(trained_model_path)
model.load_state_dict(state_dict, strict=False)
sequence = "ATCG"  # example nucleotide sequence; replace with your own read
# Run inference
tokenized_input = tokenizer(
    sequence,
    return_tensors="pt",  # Return results as PyTorch tensors
)
with torch.no_grad():
outputs = model(**tokenized_input)
for idx, col in enumerate(labels):
logits = outputs['logits'][idx] # [num_classes]
probs = F.softmax(logits, dim=-1).cpu()
topk = torch.topk(probs, k=1, dim=-1)
topk_index = topk.indices.numpy().ravel()
topk_prob = topk.values
topk_label = data_processor.encoders[col].inverse_transform(topk_index)
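    print(f"{col}: {topk_label[0]} (p={topk_prob.item():.3f})")  # added for illustration: report the top prediction at each rank level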
```
---
## Authors & Contact
- Daniel Berman — [email protected]
- Daniel Jimenez — [email protected]
- Stanley Ta — [email protected]
- Brian Merritt — [email protected]
- Jeremy Ratcliff — [email protected]
- Vijay Narayan — [email protected]
- Molly Gallaghar — [email protected]
---
## Acknowledgement
This work was supported by funding from the **U.S. Centers for Disease Control and Prevention** through the **Office of Readiness and Response** under **Contract # 75D30124C20202**.
|
seraphimzzzz/1013161
|
seraphimzzzz
| 2025-09-23T04:30:40Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:30:41Z |
[View on Civ Archive](https://civarchive.com/models/897219?modelVersionId=1108254)
|
crystalline7/933625
|
crystalline7
| 2025-09-23T04:29:48Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:29:49Z |
[View on Civ Archive](https://civarchive.com/models/897219?modelVersionId=1027833)
|
crystalline7/36253
|
crystalline7
| 2025-09-23T04:29:40Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:29:39Z |
[View on Civ Archive](https://civarchive.com/models/44600?modelVersionId=49231)
|
crystalline7/1130561
|
crystalline7
| 2025-09-23T04:29:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:29:32Z |
[View on Civ Archive](https://civarchive.com/models/897219?modelVersionId=1225191)
|
crystalline7/1137213
|
crystalline7
| 2025-09-23T04:29:23Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:29:25Z |
[View on Civ Archive](https://civarchive.com/models/897219?modelVersionId=1231755)
|
chuxuan/gpt-4o-sql-tool3
|
chuxuan
| 2025-09-23T04:28:56Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen3-8B",
"lora",
"transformers",
"text-generation",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-8B",
"region:us"
] |
text-generation
| 2025-09-23T04:28:51Z |
---
base_model: Qwen/Qwen3-8B
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:Qwen/Qwen3-8B
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
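No usage snippet is given. A hedged sketch for loading this adapter over its Qwen3-8B base and running one generation; the SQL prompt is illustrative, inferred only from the repo name:

```python
# Hedged sketch: load the LoRA adapter and run one generation.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B")
model = PeftModel.from_pretrained(base, "chuxuan/gpt-4o-sql-tool3")

prompt = "Write a SQL query that counts the rows in a table named users."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```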
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
amethyst9/1226593
|
amethyst9
| 2025-09-23T04:28:26Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:28:25Z |
[View on Civ Archive](https://civarchive.com/models/897219?modelVersionId=1322666)
|
ultratopaz/936507
|
ultratopaz
| 2025-09-23T04:27:17Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:27:19Z |
[View on Civ Archive](https://civarchive.com/models/897219?modelVersionId=1030703)
|
Anhlq/qwen-2.5-1b-exercise-instruct-23.09
|
Anhlq
| 2025-09-23T04:24:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-1.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T04:21:38Z |
---
base_model: unsloth/Qwen2.5-1.5B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Anhlq
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-1.5B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
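A minimal sketch for chat-style inference with this instruct model; the prompt is illustrative:

```python
# Hedged sketch: chat inference via the tokenizer's chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Anhlq/qwen-2.5-1b-exercise-instruct-23.09"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Create a short English exercise."}]  # illustrative prompt
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```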
|
lmtri0312/tramy-text-generation
|
lmtri0312
| 2025-09-23T04:23:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T04:22:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
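The card provides no snippet; a generic hedged sketch for loading and sampling from the model, since nothing model-specific is documented:

```python
# Hedged sketch: plain causal-LM generation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lmtri0312/tramy-text-generation"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

input_ids = tokenizer("Hello, ", return_tensors="pt").input_ids
output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```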
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wjhuah/SikuRoBERTa_Bronze
|
wjhuah
| 2025-09-23T04:19:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"fill-mask",
"bronze",
"paleography",
"epigraphy",
"zh",
"dataset:wjhuah/BIRD",
"base_model:SIKU-BERT/sikuroberta",
"base_model:finetune:SIKU-BERT/sikuroberta",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-09-23T03:51:24Z |
---
license: cc-by-4.0
datasets:
- wjhuah/BIRD
language:
- zh
metrics:
- f1
- exact_match
- accuracy
base_model:
- SIKU-BERT/sikuroberta
pipeline_tag: fill-mask
library_name: transformers
tags:
- bronze
- paleography
- epigraphy
---
# SikuRoBERTa (GN) for Bronze Inscription Restoration and Dating
## Model Description
This model is adapted from [SIKU-BERT/sikuroberta](https://huggingface.co/SIKU-BERT/sikuroberta), with additional **domain-adaptive pretraining (DAPT)**, **task-adaptive pretraining (TAPT)**, and integration of a **Glyph Net (GN)** module.
It is trained on the **BIRD dataset** (Bronze Inscription Restoration and Dating), the first fully encoded bronze inscription corpus with chronological labels.
- **Backbone**: RoBERTa trained on the Siku Quanshu corpus
- **Enhancements**: Glyph Net (GN), glyph-biased sampling, DAPT, TAPT
- **Tasks**:
- Masked language modeling (inscription restoration)
- Dynasty- and period-level classification (dating)
---
## Intended Use
- Restoration of damaged or missing characters in bronze inscriptions
- Chronological classification (dynasty / period dating)
- Research in **digital humanities**, **ancient Chinese NLP**, and **paleography**
---
## Training Data
The model was trained on the **BIRD dataset**:
- **41k tokens** of transcribed and chronologically labeled bronze inscriptions
- Deduplicated, filtered, and corrected based on *Complete Collection of Yin and Zhou Bronze Inscriptions* (CASS, 2007)
- `[UNK]` placeholders for undeciphered glyphs
- Supplemented with **2M tokens of Pre-Qin texts** (Analects, Mencius, Zuo Zhuan, Mozi, Guanzi, etc.) for domain-adaptive pretraining
Dataset repo: [wjhuah/BIRD](https://huggingface.co/datasets/wjhuah/BIRD)
---
## Case Study: Hu Ding Restoration
We employed the **SikuRoBERTa (GN)** model with two decoding strategies:
parallel mask filling and greedy iterative decoding.
The table below compares predicted tokens with expert gold restorations.
### Top-1 and Top-5 predictions versus gold characters
*(excerpt of the first six damaged positions in the Hu Ding inscription)*
| Mask Position | Gold | Pred@1 | Top-5 Predictions |
|---------------|------|--------|-----------------------|
| 01 | 室 | 廟 | 廟, 室, 宮, 寢, 廷 |
| 02 | 王 | 王 | 王, 公, 君, 伯, 尹 |
| 03 | 芾 | 芾 | 芾, 純, 衡, 衣, 韍 |
| 05 | 命 | 於 | 於, 于, 揚, 無, 多 |
| 06 | 于 | 于 | 于, 揚, 穆, 於, 侑 |
| 07 | 年 | 年 | 年, 人, 世, 壽, 歲 |
On 22 expert restorations (Huang, 2022), the model achieved:
- **Exact@1**: 50.00% (11/22)
- **Exact@5**: 59.09% (13/22)
- **Exact@10**: 68.18% (15/22)
Greedy decoding yielded comparable coverage, though with slightly lower accuracy.
In addition to reproducing expert restorations, the system also generated **plausible candidates for undeciphered characters**, providing potential references for paleographic analysis.
### Model completions for undeciphered positions (Top-10 shown)
| Mask Position | Top-10 Predictions |
|---------------|--------------------|
| 04 | 鑾, 旂, 舄, 筆, 㫃, 金, 矢, 黃, 弓, 璋 |
| 08 | 介, 伯, 市, 限, 客, 期, 制, 政, 宰, 人 |
| 15 | 之, 外, 一, 若, 內, 賜, 邑, 大, 下, 又 |
| 16 | 賜, 折, 喬, 杜, 乘, 造, 擇, 柞, 之, 于 |
| 17 | 則, 許, 弗, 不, 人, 亦, 也, 而, 帛, 乃 |
| 18 | 則, 曰, 不, 弗, 許, 告, 厥, 多, 有, 用 |
| 28 | 其, 厥, 若, 越, 乃, 我, 以, 汝, 如, 余 |
---
## How to Use
### Quick start
A minimal example for masked language modeling:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
model_name = "wjhuah/SikuRoBERTa_Bronze"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
text = "唯王元年六月既朢乙亥,王在周穆王太[MASK]"
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
```
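To reproduce rankings like the Top-5 columns above, one hedged extension of the quick start is the following; it builds the prompt from the tokenizer's own mask token to avoid token-string mismatches:

```python
# Hedged sketch: rank the top-5 candidate characters for the masked position.
import torch

text = "唯王元年六月既朢乙亥,王在周穆王太" + tokenizer.mask_token  # use the tokenizer's mask token
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
top5 = logits[0, mask_pos].softmax(dim=-1).topk(5, dim=-1).indices[0]
print(tokenizer.convert_ids_to_tokens(top5.tolist()))
```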
---
## Citation
If you use this model, please cite our **EMNLP 2025** paper:
```bibtex
@inproceedings{hua2025bird,
title = {BIRD: Bronze Inscription Restoration and Dating},
author = {Hua, Wenjie and Nguyen, Hoang H. and Ge, Gangyan},
booktitle = {Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing},
year = {2025},
publisher = {Association for Computational Linguistics}
}
```
|
mradermacher/zeta-30b-a3b-GGUF
|
mradermacher
| 2025-09-23T04:18:33Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3_moe",
"en",
"dataset:zed-industries/zeta",
"base_model:Woutermans/zeta-30b-a3b",
"base_model:quantized:Woutermans/zeta-30b-a3b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-22T13:55:45Z |
---
base_model: Woutermans/zeta-30b-a3b
datasets:
- zed-industries/zeta
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3_moe
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Woutermans/zeta-30b-a3b
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#zeta-30b-a3b-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
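Once a quant is downloaded, one common route is `llama-cpp-python`; a hedged sketch, where the package, model path, and context size are assumptions:

```python
# Hedged sketch: run a downloaded quant locally with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="zeta-30b-a3b.Q4_K_M.gguf", n_ctx=4096)  # path and context are illustrative
result = llm("def fibonacci(n):", max_tokens=64)
print(result["choices"][0]["text"])
```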
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-GGUF/resolve/main/zeta-30b-a3b.Q2_K.gguf) | Q2_K | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-GGUF/resolve/main/zeta-30b-a3b.Q3_K_S.gguf) | Q3_K_S | 13.4 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-GGUF/resolve/main/zeta-30b-a3b.Q3_K_M.gguf) | Q3_K_M | 14.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-GGUF/resolve/main/zeta-30b-a3b.Q3_K_L.gguf) | Q3_K_L | 16.0 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-GGUF/resolve/main/zeta-30b-a3b.IQ4_XS.gguf) | IQ4_XS | 16.7 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-GGUF/resolve/main/zeta-30b-a3b.Q4_K_S.gguf) | Q4_K_S | 17.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-GGUF/resolve/main/zeta-30b-a3b.Q4_K_M.gguf) | Q4_K_M | 18.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-GGUF/resolve/main/zeta-30b-a3b.Q5_K_S.gguf) | Q5_K_S | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-GGUF/resolve/main/zeta-30b-a3b.Q5_K_M.gguf) | Q5_K_M | 21.8 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-GGUF/resolve/main/zeta-30b-a3b.Q6_K.gguf) | Q6_K | 25.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-GGUF/resolve/main/zeta-30b-a3b.Q8_0.gguf) | Q8_0 | 32.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
nikilr/zephyr_catorig
|
nikilr
| 2025-09-23T04:18:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T04:17:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Bobedge/deepseek_r1_GRPO_finetune
|
Bobedge
| 2025-09-23T04:17:28Z | 0 | 0 | null |
[
"safetensors",
"unsloth",
"license:mit",
"region:us"
] | null | 2025-09-23T03:41:20Z |
---
license: mit
tags:
- unsloth
---
|