modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
onnxmodelzoo/nfnet_l0_Opset17
|
onnxmodelzoo
| 2025-09-22T05:18:13Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-22T05:17:57Z |
---
language: en
license: apache-2.0
model_name: nfnet_l0_Opset17.onnx
tags:
- Computer_Vision
---
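The card ships metadata only, so as a starting point, here is a minimal `onnxruntime` inference sketch; the input shape, dummy preprocessing, and single-output assumption are illustrative guesses rather than details taken from the repo:
```python
import numpy as np
import onnxruntime as ort

# Load the exported graph (file name from the card's model_name field)
session = ort.InferenceSession("nfnet_l0_Opset17.onnx")

# Query the input spec instead of hard-coding it; name and shape vary per export
inp = session.get_inputs()[0]
print(inp.name, inp.shape)

# Dummy NCHW batch; real use would apply the model's own preprocessing
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
(logits,) = session.run(None, {inp.name: x})
print(logits.shape)
```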
|
onnxmodelzoo/nf_resnet50_Opset17
|
onnxmodelzoo
| 2025-09-22T05:17:44Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-22T05:17:32Z |
---
language: en
license: apache-2.0
model_name: nf_resnet50_Opset17.onnx
tags:
- Computer_Vision
---
|
hoan17/saving_LAVilas50e1_100
|
hoan17
| 2025-09-22T05:16:23Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"trl",
"o2o",
"reinforcement-learning",
"text-to-image",
"stable-diffusion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-09-22T05:15:57Z |
---
license: apache-2.0
tags:
- trl
- o2o
- diffusers
- reinforcement-learning
- text-to-image
- stable-diffusion
---
# TRL O2O Model
This is a diffusion model that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for image generation conditioned on text.
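Since the card stops short of a usage snippet, here is a minimal text-to-image sketch, assuming the `StableDiffusionPipeline` interface that the repo's `diffusers` tags advertise (the prompt and dtype are illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned pipeline; the repo tags indicate StableDiffusionPipeline compatibility
pipe = StableDiffusionPipeline.from_pretrained(
    "hoan17/saving_LAVilas50e1_100", torch_dtype=torch.float16
).to("cuda")

# Generate an image from a text prompt
image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
image.save("sample.png")
```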
|
onnxmodelzoo/mobilevitv2_125_Opset18
|
onnxmodelzoo
| 2025-09-22T05:16:10Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-22T05:16:05Z |
---
language: en
license: apache-2.0
model_name: mobilevitv2_125_Opset18.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/mobilevitv2_100_Opset18
|
onnxmodelzoo
| 2025-09-22T05:15:55Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-22T05:15:51Z |
---
language: en
license: apache-2.0
model_name: mobilevitv2_100_Opset18.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/mobilevitv2_100_Opset16
|
onnxmodelzoo
| 2025-09-22T05:15:46Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-22T05:15:42Z |
---
language: en
license: apache-2.0
model_name: mobilevitv2_100_Opset16.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/mobilevitv2_075_Opset18
|
onnxmodelzoo
| 2025-09-22T05:15:42Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-22T05:15:38Z |
---
language: en
license: apache-2.0
model_name: mobilevitv2_075_Opset18.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/mobilevitv2_075_Opset16
|
onnxmodelzoo
| 2025-09-22T05:15:33Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-22T05:15:30Z |
---
language: en
license: apache-2.0
model_name: mobilevitv2_075_Opset16.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/mobilevitv2_050_Opset16
|
onnxmodelzoo
| 2025-09-22T05:15:22Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-22T05:15:18Z |
---
language: en
license: apache-2.0
model_name: mobilevitv2_050_Opset16.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/mobilevit_xxs_Opset18
|
onnxmodelzoo
| 2025-09-22T05:15:18Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-22T05:15:14Z |
---
language: en
license: apache-2.0
model_name: mobilevit_xxs_Opset18.onnx
tags:
- Computer_Vision
---
|
ChenWu98/numina_qwen_2.5_7b_sft_teachers_no_reasoning_source_condition_2048_0.5
|
ChenWu98
| 2025-09-22T05:14:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-7B",
"base_model:finetune:Qwen/Qwen2.5-7B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T04:42:56Z |
---
base_model: Qwen/Qwen2.5-7B
library_name: transformers
model_name: numina_qwen_2.5_7b_sft_teachers_no_reasoning_source_condition_2048_0.5
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for numina_qwen_2.5_7b_sft_teachers_no_reasoning_source_condition_2048_0.5
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_7b_sft_teachers_no_reasoning_source_condition_2048_0.5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/34mfen5r)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mradermacher/llama70B-3.1-40layer-GGUF
|
mradermacher
| 2025-09-22T05:00:11Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:japawblob/llama70B-3.1-40layer",
"base_model:quantized:japawblob/llama70B-3.1-40layer",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-22T00:28:15Z |
---
base_model: japawblob/llama70B-3.1-40layer
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/japawblob/llama70B-3.1-40layer
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#llama70B-3.1-40layer-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
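As a concrete starting point, here is a minimal sketch with `llama-cpp-python`, assuming one of the quant files from the table below has been downloaded locally (context size and prompt are illustrative):
```python
from llama_cpp import Llama

# Point at a downloaded quant file, e.g. the recommended Q4_K_M
llm = Llama(model_path="llama70B-3.1-40layer.Q4_K_M.gguf", n_ctx=4096)

out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```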
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama70B-3.1-40layer-GGUF/resolve/main/llama70B-3.1-40layer.Q2_K.gguf) | Q2_K | 13.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama70B-3.1-40layer-GGUF/resolve/main/llama70B-3.1-40layer.Q3_K_S.gguf) | Q3_K_S | 16.1 | |
| [GGUF](https://huggingface.co/mradermacher/llama70B-3.1-40layer-GGUF/resolve/main/llama70B-3.1-40layer.Q3_K_M.gguf) | Q3_K_M | 17.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama70B-3.1-40layer-GGUF/resolve/main/llama70B-3.1-40layer.Q3_K_L.gguf) | Q3_K_L | 19.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama70B-3.1-40layer-GGUF/resolve/main/llama70B-3.1-40layer.IQ4_XS.gguf) | IQ4_XS | 19.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama70B-3.1-40layer-GGUF/resolve/main/llama70B-3.1-40layer.Q4_K_S.gguf) | Q4_K_S | 21.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama70B-3.1-40layer-GGUF/resolve/main/llama70B-3.1-40layer.Q4_K_M.gguf) | Q4_K_M | 22.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama70B-3.1-40layer-GGUF/resolve/main/llama70B-3.1-40layer.Q5_K_S.gguf) | Q5_K_S | 25.2 | |
| [GGUF](https://huggingface.co/mradermacher/llama70B-3.1-40layer-GGUF/resolve/main/llama70B-3.1-40layer.Q5_K_M.gguf) | Q5_K_M | 25.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama70B-3.1-40layer-GGUF/resolve/main/llama70B-3.1-40layer.Q6_K.gguf) | Q6_K | 29.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llama70B-3.1-40layer-GGUF/resolve/main/llama70B-3.1-40layer.Q8_0.gguf) | Q8_0 | 38.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
luckeciano/Qwen-2.5-7B-GRPO-Adam-FisherMaskToken-1e-4-HessianMaskToken-0.01-CAPOOnly-v2_4646
|
luckeciano
| 2025-09-22T04:53:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T00:02:37Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-Adam-FisherMaskToken-1e-4-HessianMaskToken-0.01-CAPOOnly-v2_4646
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-Adam-FisherMaskToken-1e-4-HessianMaskToken-0.01-CAPOOnly-v2_4646
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-Adam-FisherMaskToken-1e-4-HessianMaskToken-0.01-CAPOOnly-v2_4646", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/muqtkocz)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
a3ilab-llm-uncertainty/new_2560_3_epoch_xlam_apigen
|
a3ilab-llm-uncertainty
| 2025-09-22T04:27:11Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"base_model:adapter:Salesforce/Llama-xLAM-2-8b-fc-r",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:Salesforce/Llama-xLAM-2-8b-fc-r",
"region:us"
] |
text-generation
| 2025-09-22T04:12:07Z |
---
base_model: Salesforce/Llama-xLAM-2-8b-fc-r
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:Salesforce/Llama-xLAM-2-8b-fc-r
- lora
- sft
- transformers
- trl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
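Until the card is filled in, a minimal adapter-loading sketch, assuming the standard `peft` API and the base model named in the metadata (the prompt and generation settings are illustrative):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base checkpoint from the card metadata; the LoRA adapter weights live in this repo
base = AutoModelForCausalLM.from_pretrained("Salesforce/Llama-xLAM-2-8b-fc-r", device_map="auto")
model = PeftModel.from_pretrained(base, "a3ilab-llm-uncertainty/new_2560_3_epoch_xlam_apigen")
tokenizer = AutoTokenizer.from_pretrained("Salesforce/Llama-xLAM-2-8b-fc-r")

# Illustrative prompt; the card does not document the expected input format
inputs = tokenizer("List the parameters of a weather-lookup function call.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```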
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
rl-rag/qwen3-8B-v20250915_sampled_ablations
|
rl-rag
| 2025-09-22T04:13:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T04:12:25Z |
---
library_name: transformers
license: other
base_model: Qwen/Qwen3-8B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: qwen3-8B-v20250915_sampled_ablations
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen3-8B-v20250915_sampled_ablations
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the rl-rag/v20250915_sampled_ablations dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
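For reference, the effective batch sizes above follow directly from the per-device settings: 1 (per-device train batch) × 8 (devices) × 16 (accumulation steps) = 128 for training, and 8 × 8 = 64 for evaluation.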
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Yuchiwang02/DelaySentinel
|
Yuchiwang02
| 2025-09-22T04:11:51Z | 4 | 0 | null |
[
"safetensors",
"llama",
"text-classification",
"en",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2025-09-20T03:09:59Z |
---
license: apache-2.0
language:
- en
base_model:
- meta-llama/Llama-3.2-1B
pipeline_tag: text-classification
---
# Model Card for DelaySentinel: AI-Powered Logistics Delay Prediction
## Model Details
### Model Description
- **Developed by:** Yuchi Wang
- **Model type:** Instruction fine-tuned large language model (LLM) for binary classification
- **Language(s):** English (structured prompts with business/logistics features)
- **License:** Apache-2.0
- **Finetuned from model:** `meta-llama/Llama-3.2-1B-Instruct`
This model, **DelaySentinel**, was fine-tuned to predict whether a logistics order will be delayed (`1`) or not (`0`) before shipment, using structured order-level features. The project demonstrates how **instruction fine-tuning of LLMs** can be applied to **supply chain risk management**.
### Model Sources
- **Repository:** [Hugging Face Model Repo]
- **Demo:** Hugging Face Gradio Space (interactive CSV/Excel upload)
---
## Uses
### Direct Use
- Pre-shipment **logistics delay prediction**
- Business analytics demos for **supply chain risk management**
- Educational showcase of **LLM instruction fine-tuning** on structured business data
### Downstream Use
- Adaptation to other supply chain KPIs (e.g., demand forecasting, lead-time prediction)
- Further fine-tuning on proprietary logistics datasets
### Out-of-Scope Use
- Not intended for sensitive decision-making in live operations without validation
- Not suitable for medical, legal, or financial advisory
---
## Bias, Risks, and Limitations
- Dataset comes from Kaggle (synthetic/aggregated logistics data), so real-world generalization may be limited.
- Model outputs are strictly `0`/`1` and do not provide uncertainty estimates unless re-trained for probabilities.
- Risk of **data drift** if deployed in real supply chains with different carriers/regions.
**Recommendations:**
Users should validate predictions against recent operational data before deployment in practice.
---
## How to Get Started with the Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned classifier and its tokenizer from this repo
model = AutoModelForCausalLM.from_pretrained("Yuchiwang02/DelaySentinel")
tokenizer = AutoTokenizer.from_pretrained("Yuchiwang02/DelaySentinel")

# Structured order features are serialized into an instruction-style prompt
system = "You are a supply-chain analyst. Output only 0 or 1: 1=Delay, 0=Not delay."
user = "order_id: 123\norigin_region: OH\ndest_region: CA\ncarrier: A1\nservice_level: ground\nweight_kg: 10.5\ndistance_km: 3500\nholiday_flag: 0"
prompt = f"<|system|>{system}\n<|user|>{user}\n<|assistant|>"

# The model answers with a single label token: 1 = delay, 0 = on time
out = model.generate(**tokenizer(prompt, return_tensors="pt"), max_new_tokens=2)
print(tokenizer.decode(out[0], skip_special_tokens=True).split("<|assistant|>")[-1].strip())
```
|
lemonhat/Qwen2.5-7B-Instruct-SEvolve3_re_21k_tag5_progress_processed
|
lemonhat
| 2025-09-22T04:08:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T03:55:52Z |
---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: SEvolve3_re_21k_tag5_progress_processed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SEvolve3_re_21k_tag5_progress_processed
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the SEvolve3_re_21k_tag5_progress_processed dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2325
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.2647 | 0.8230 | 300 | 0.2442 |
| 0.2147 | 1.6447 | 600 | 0.2330 |
### Framework versions
- Transformers 4.51.0
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
jonaji/blockassist
|
jonaji
| 2025-09-22T03:55:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"prowling waddling chinchilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T15:25:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- prowling waddling chinchilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
anhtuan15082023/gemma-3n-vneid-merged
|
anhtuan15082023
| 2025-09-22T03:48:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"vietnamese",
"gemma",
"fine-tuned",
"unsloth",
"lora",
"conversational",
"vi",
"base_model:google/gemma-2-2b",
"base_model:adapter:google/gemma-2-2b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T03:31:45Z |
---
language: vi
license: apache-2.0
base_model: google/gemma-2-2b
tags:
- vietnamese
- gemma
- fine-tuned
- unsloth
- lora
- text-generation
library_name: transformers
pipeline_tag: text-generation
model_type: gemma
---
# gemma-3n-vneid-merged
🇻🇳 **Vietnamese Fine-tuned Gemma Model**
This is a Vietnamese fine-tuned version of Google's Gemma 2B model using Unsloth and LoRA adapters, optimized for Vietnamese text generation.
## 📊 Model Details
- **Base Model**: [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b)
- **Language**: Vietnamese (vi)
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation)
- **Framework**: Unsloth
- **Model Type**: Causal Language Model
- **License**: Apache 2.0
## 🚀 Quick Start
### Using Transformers
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Load model and tokenizer
model_name = "anhtuan15082023/gemma-3n-vneid-merged"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map="auto",
trust_remote_code=True
)
# Generate Vietnamese text
def generate_vietnamese_text(prompt, max_length=100):
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
outputs = model.generate(
**inputs,
max_length=max_length,
temperature=0.7,
do_sample=True,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
return response[len(prompt):].strip()
# Example usage
prompt = "Xin chào, tôi là"
result = generate_vietnamese_text(prompt)
print(f"Input: {prompt}")
print(f"Output: {result}")
```
### Using Inference API
```python
import requests
API_URL = "https://api-inference.huggingface.co/models/anhtuan15082023/gemma-3n-vneid-merged"
headers = {"Authorization": f"Bearer {YOUR_HF_TOKEN}"}
def query(payload):
response = requests.post(API_URL, headers=headers, json=payload)
return response.json()
# Generate text
output = query({
"inputs": "Việt Nam là",
"parameters": {
"max_length": 100,
"temperature": 0.7
}
})
print(output)
```
## 🎯 Use Cases
- Vietnamese text completion
- Creative writing in Vietnamese
- Chatbot responses in Vietnamese
- Content generation for Vietnamese applications
## ⚙️ Training Details
- **Dataset**: Vietnamese text corpus
- **Training Framework**: Unsloth (optimized training)
- **Fine-tuning Method**: LoRA adapters merged into base model
- **Base Model**: Google Gemma 2B
## 🏷️ Model Tags
- Vietnamese language model
- Text generation
- Fine-tuned Gemma
- LoRA adaptation
## 📜 License
This model inherits the Apache 2.0 license from the base Gemma model.
## 🤝 Citation
If you use this model, please consider citing:
```bibtex
@misc{vietnamese-gemma-finetuned,
title={Vietnamese Fine-tuned Gemma Model},
author={anhtuan15082023},
year={2024},
url={https://huggingface.co/anhtuan15082023/gemma-3n-vneid-merged}
}
```
## 📞 Contact
For questions or issues, please open an issue on the model's repository page.
|
pandoradox/qwen2.5-7b-instruct_stressstrain_200
|
pandoradox
| 2025-09-22T03:40:11Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"grpo",
"lora",
"transformers",
"trl",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"region:us"
] | null | 2025-09-22T03:40:09Z |
---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: peft
tags:
- base_model:adapter:Qwen/Qwen2.5-7B-Instruct
- grpo
- lora
- transformers
- trl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
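Pending the missing snippet, a minimal adapter-loading sketch under the standard `peft` API, using the base model named in the metadata:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base checkpoint from the card metadata; this repo holds the GRPO-trained LoRA adapter
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B-Instruct", device_map="auto")
model = PeftModel.from_pretrained(base, "pandoradox/qwen2.5-7b-instruct_stressstrain_200")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
```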
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
aamijar/ReplaceME-Mistral-7B-Instruct-v0.3-lora-r8-sst2-epochs3
|
aamijar
| 2025-09-22T03:27:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T03:27:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ChenWu98/numina_qwen_2.5_3b_sft_teachers_no_reasoning_2048_0.5
|
ChenWu98
| 2025-09-22T03:22:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-3B",
"base_model:finetune:Qwen/Qwen2.5-3B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T03:18:47Z |
---
base_model: Qwen/Qwen2.5-3B
library_name: transformers
model_name: numina_qwen_2.5_3b_sft_teachers_no_reasoning_2048_0.5
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for numina_qwen_2.5_3b_sft_teachers_no_reasoning_2048_0.5
This model is a fine-tuned version of [Qwen/Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_3b_sft_teachers_no_reasoning_2048_0.5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/fnieqw78)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
yuuki367/llama-3-8B-chat-lawyer-webtest2
|
yuuki367
| 2025-09-22T03:07:05Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-16T03:29:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
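Absent an official snippet, a minimal generation sketch, assuming the standard `transformers` pipeline interface (the prompt is illustrative and the model's expected chat format is not documented):
```python
from transformers import pipeline

# Plain text-generation call; adjust the prompt format if the model expects a chat template
generator = pipeline("text-generation", model="yuuki367/llama-3-8B-chat-lawyer-webtest2")
print(generator("What should I review before signing a commercial lease?", max_new_tokens=128)[0]["generated_text"])
```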
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
chantomkit/qwen3-risk-mnr-education
|
chantomkit
| 2025-09-22T02:51:19Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"qwen3",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:31656",
"loss:MultipleNegativesSymmetricRankingLoss",
"arxiv:1908.10084",
"base_model:Qwen/Qwen3-Embedding-0.6B",
"base_model:finetune:Qwen/Qwen3-Embedding-0.6B",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-22T02:51:00Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:31656
- loss:MultipleNegativesSymmetricRankingLoss
base_model: Qwen/Qwen3-Embedding-0.6B
widget:
- source_sentence: '**Email to Classmate:** "Yo, can you send me your calculus homework
answers? I’ll just tweak the numbers so it doesn’t look obvious. My mom’s on my
case about grades, and I didn’t have time to do it myself."'
sentences:
- '**Teacher’s Assignment Prompt:** "Final Paper: ''Critique a modern innovation.''
No page limit, citation rules, or submission checks. Be imaginative!"'
- '**Discussion Board Post (Class Forum):** "For the AI-Free essay due tomorrow,
I just had ChatGPT draft my thesis and main arguments. Saved me hours—anyone else
try this?"'
- "**Lab Report Submission:** \n\"Results section fully written by an AI science\
\ assistant. It claims our chemistry experiment produced 200% yield, which is\
\ impossible, but I’m too tired to fix it. Maybe the TA won’t read closely.\""
- source_sentence: 'Parent email to teacher: "Why did Mia get a zero on her history
paper? She worked so hard with that new homework app all weekend! We paid good
money for EssayGenius Premium!"'
sentences:
- 'Peer review comment: "Your essay on ‘leadership’ feels robotic. Did you use an
AI? Mine did the same thing because the rubric didn’t ask for real-life examples."'
- 'Parent email to teacher: *"My daughter’s AI writing assistant developed her science
fair hypothesis and experimental design. Since the tool is school-approved, we
assumed this was acceptable. Please confirm."*'
- '**Group Chat Message (Student):** "Bro, just use ChatGPT for the calculus project.
It doesn’t just solve the problems—it explains the concepts in the ‘show your
work’ section. Zero brainpower needed."'
- source_sentence: '**Academic Forum Comment:** "PM me for a link to a bot that writes
flawless philosophy essays. It’s free, and teachers never check the sources as
long as the grammar’s good."'
sentences:
- '**Discussion Board Post (Class Forum):** "For the AI-Free essay due tomorrow,
I just had ChatGPT draft my thesis and main arguments. Saved me hours—anyone else
try this?"'
- 'Syllabus note: "Final paper: Compare two historical periods. No need to cite
specific events or scholars—focus on abstract concepts."'
- '*Reddit Post in Student Forum:* "Pro tip: Use DALL-E to ‘hand-draw’ diagrams
for your biology project. Teachers can’t tell the difference, and it saves hours."'
- source_sentence: '*Creative writing assignment:* *[Submits short story containing
verbatim passages from Sudowrite’s narrative generator]* "This is based on my
cousin’s camping trip! Any similarities to existing stories are coincidental."'
sentences:
- '**Assignment Submission (Essay Intro):** "In my analysis of Shakespeare''s Hamlet,
I found that the play''s exploration of existential dread mirrors modern anxieties.
This essay was generated using ChatGPT to save time, but I edited a few sentences
to make it sound more like me."'
- 'Submitted essay introduction with no AI disclosure: "The symbolism in *To Kill
a Mockingbird* reflects systemic racism through the trial of Tom Robinson. This
analysis demonstrates how Lee uses courtroom dynamics as microcosms of societal
prejudice." (Essay contains verbatim ChatGPT phrasing from public query logs)'
- '**Peer Review Comment:** "Your thesis statement is weak. I used an AI paraphrasing
tool to rewrite mine—here’s the link. Just paste yours in, and it’ll spit out
something the teacher will love. No need to cite it."'
- source_sentence: '*Peer Review Feedback:* "Your analysis feels a bit generic. You
should try HyperWrite’s ‘academic tone’ feature. It makes your writing sound smarter
without changing the content."'
sentences:
- '**Creative Writing Assignment:** *[Submission includes a poem identical to one
published on a poetry blog, with only the title and line breaks altered. No attribution
provided.]*'
- '**Class Presentation Slide:** "Special thanks to ‘StudyGenius AI’ for drafting
this presentation! *[spoken verbally during the talk, omitted from all written
citations]*"'
- 'Discussion board post: "Hey guys, for the AI-Free history essay, just run your
draft through QuillBot first—it totally bypasses the plagiarism checkers they
use to detect AI."'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
model-index:
- name: SentenceTransformer based on Qwen/Qwen3-Embedding-0.6B
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: Unknown
type: unknown
metrics:
- type: pearson_cosine
value: 0.628976409054562
name: Pearson Cosine
- type: spearman_cosine
value: 0.6217481040134443
name: Spearman Cosine
---
# SentenceTransformer based on Qwen/Qwen3-Embedding-0.6B
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Qwen/Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Qwen/Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) <!-- at revision c54f2e6e80b2d7b7de06f51cec4959f6b3e03418 -->
- **Maximum Sequence Length:** 32768 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 32768, 'do_lower_case': False, 'architecture': 'Qwen3Model'})
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': True, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("chantomkit/qwen3-risk-mnr-education")
# Run inference
queries = [
"*Peer Review Feedback:* \"Your analysis feels a bit generic. You should try HyperWrite\u2019s \u2018academic tone\u2019 feature. It makes your writing sound smarter without changing the content.\"",
]
documents = [
'Discussion board post: "Hey guys, for the AI-Free history essay, just run your draft through QuillBot first—it totally bypasses the plagiarism checkers they use to detect AI."',
'**Creative Writing Assignment:** *[Submission includes a poem identical to one published on a poetry blog, with only the title and line breaks altered. No attribution provided.]*',
'**Class Presentation Slide:** "Special thanks to ‘StudyGenius AI’ for drafting this presentation! *[spoken verbally during the talk, omitted from all written citations]*"',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 1024] [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[0.9552, 0.9264, 0.9305]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.629 |
| **spearman_cosine** | **0.6217** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 31,656 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 15 tokens</li><li>mean: 44.32 tokens</li><li>max: 75 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 44.12 tokens</li><li>max: 75 tokens</li></ul> |
* Samples:
| anchor | positive |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>**Assignment Submission Comment (Student):** "Attached is my history project on the Civil War. I used ThinkBot to generate the thesis and all the analysis sections. It even added counterarguments automatically. Hope that’s cool!"</code> | <code>*Discussion board comment:* "For the group project, I came up with the analysis framework using this cool new brainstorming app. Don’t worry about sources—it’s all original! Let’s just split the slides evenly." *(No links to AI-generated mind maps used for framework development)*</code> |
| <code>**Peer Review Comment (Online Platform):** <br>"Your thesis about symbolism in *The Great Gatsby* is identical to the one my AI chatbot generated. Maybe tweak a few words so we don’t both get flagged?"</code> | <code>**Assignment Submission Intro:** "The French Revolution was a pivotal moment in European history, fundamentally reshaping socio-political structures. [Generated by DeepL Write, edited for clarity]." (No attribution included in bibliography.)</code> |
| <code>*Parent Message to School Forum:* "My daughter’s AI-generated book report got flagged for inaccuracies. Why does it matter? The tool is expensive and ‘educational.’ Schools should trust technology over nitpicking teachers."</code> | <code>*Peer Review Feedback:* "Your analysis feels a bit generic. You should try HyperWrite’s ‘academic tone’ feature. It makes your writing sound smarter without changing the content."</code> |
* Loss: [<code>MultipleNegativesSymmetricRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativessymmetricrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
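For illustration only (not the exact training script), a minimal sketch of fine-tuning with this loss in Sentence Transformers; the base checkpoint and the example pair are assumptions:

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

# Assumed 1024-dim Qwen3 embedding base model, for illustration only
model = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")

# Tiny stand-in for the real anchor/positive dataset described above
train_dataset = Dataset.from_dict({
    "anchor": ["Post recommending a tool to evade AI-writing detectors."],
    "positive": ["Comment suggesting paraphrasing software to bypass plagiarism checks."],
})

# Symmetric multiple-negatives ranking loss with the parameters listed above
loss = losses.MultipleNegativesSymmetricRankingLoss(model, scale=20.0)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```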
### Training Hyperparameters
#### Non-Default Hyperparameters
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3.0
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | spearman_cosine |
|:------:|:-----:|:-------------:|:---------------:|
| -1 | -1 | - | 0.3055 |
| 0.1264 | 500 | 1.3976 | - |
| 0.2527 | 1000 | 0.9666 | - |
| 0.3791 | 1500 | 0.7903 | - |
| 0.5054 | 2000 | 0.6094 | - |
| 0.6318 | 2500 | 0.5508 | - |
| 0.7582 | 3000 | 0.4897 | - |
| 0.8845 | 3500 | 0.415 | - |
| 1.0109 | 4000 | 0.3774 | - |
| 1.1372 | 4500 | 0.3221 | - |
| 1.2636 | 5000 | 0.3026 | - |
| 1.3899 | 5500 | 0.2685 | - |
| 1.5163 | 6000 | 0.272 | - |
| 1.6427 | 6500 | 0.2479 | - |
| 1.7690 | 7000 | 0.2277 | - |
| 1.8954 | 7500 | 0.2339 | - |
| 2.0217 | 8000 | 0.1832 | - |
| 2.1481 | 8500 | 0.1759 | - |
| 2.2745 | 9000 | 0.1814 | - |
| 2.4008 | 9500 | 0.1625 | - |
| 2.5272 | 10000 | 0.1574 | - |
| 2.6535 | 10500 | 0.145 | - |
| 2.7799 | 11000 | 0.1526 | - |
| 2.9062 | 11500 | 0.1517 | - |
| -1 | -1 | - | 0.6217 |
### Framework Versions
- Python: 3.10.18
- Sentence Transformers: 5.1.0
- Transformers: 4.56.2
- PyTorch: 2.8.0+cu128
- Accelerate: 1.10.1
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
tobykim/results_bs
|
tobykim
| 2025-09-22T02:49:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"electra",
"text-classification",
"generated_from_trainer",
"base_model:monologg/koelectra-base-v3-discriminator",
"base_model:finetune:monologg/koelectra-base-v3-discriminator",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-22T02:49:19Z |
---
library_name: transformers
license: apache-2.0
base_model: monologg/koelectra-base-v3-discriminator
tags:
- generated_from_trainer
model-index:
- name: results_bs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results_bs
This model is a fine-tuned version of [monologg/koelectra-base-v3-discriminator](https://huggingface.co/monologg/koelectra-base-v3-discriminator) on an unknown dataset.
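For quick testing, a minimal inference sketch (the input sentence is a placeholder, and the label names depend on the unknown training dataset):

```python
from transformers import pipeline

# Load the fine-tuned KoELECTRA classifier from the Hub
classifier = pipeline("text-classification", model="tobykim/results_bs")

# Placeholder Korean input; the label mapping depends on the (unknown) training data
print(classifier("이 제품 정말 마음에 들어요."))
```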
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
lucadellalib/focalcodec_25hz
|
lucadellalib
| 2025-09-22T02:25:44Z | 29 | 1 |
pytorch
|
[
"pytorch",
"safetensors",
"audio-to-audio",
"dataset:mythicinfinity/libritts",
"arxiv:2203.11926",
"arxiv:2502.04465",
"arxiv:2509.16195",
"base_model:microsoft/wavlm-large",
"base_model:finetune:microsoft/wavlm-large",
"license:apache-2.0",
"region:us"
] |
audio-to-audio
| 2025-02-11T04:12:35Z |
---
license: apache-2.0
base_model:
- microsoft/wavlm-large
pipeline_tag: audio-to-audio
datasets:
- mythicinfinity/libritts
library_name: pytorch
---
# ⚡ FocalCodec
A low-bitrate single-codebook 16 / 24 kHz speech codec based on [focal modulation](https://arxiv.org/abs/2203.11926).
This repository contains the **25 Hz checkpoint** trained on **LibriTTS 960**, as described in the preprints.
- 📜 **Preprints**:
- [FocalCodec: Low-Bitrate Speech Coding via Focal Modulation Networks](https://arxiv.org/abs/2502.04465)
- [FocalCodec-Stream: Streaming Low-Bitrate Speech Coding via Causal Distillation](https://arxiv.org/abs/2509.16195)
- 🌐 **Project Page**: https://lucadellalib.github.io/focalcodec-web/
- 💾 **GitHub**: https://github.com/lucadellalib/focalcodec
<img src="focalcodec.png" width="700">
---------------------------------------------------------------------------------------------------------
## ▶️ Quickstart
See the readme at: https://github.com/lucadellalib/focalcodec
---------------------------------------------------------------------------------------------------------
## @ Citing
```bibtex
@article{dellalibera2025focalcodec,
title = {{FocalCodec}: Low-Bitrate Speech Coding via Focal Modulation Networks},
author = {Luca {Della Libera} and Francesco Paissan and Cem Subakan and Mirco Ravanelli},
journal = {arXiv preprint arXiv:2502.04465},
year = {2025},
}
@article{dellalibera2025focalcodecstream,
title = {{FocalCodec-Stream}: Streaming Low-Bitrate Speech Coding via Causal Distillation},
author = {Luca {Della Libera} and Cem Subakan and Mirco Ravanelli},
journal = {arXiv preprint arXiv:2509.16195},
year = {2025},
}
```
---------------------------------------------------------------------------------------------------------
## 📧 Contact
[[email protected]](mailto:[email protected])
---------------------------------------------------------------------------------------------------------
|
mestersop3/blockassist
|
mestersop3
| 2025-09-22T02:18:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"cunning tangled robin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-22T02:00:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- cunning tangled robin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Pakorn2112/whisper-large-v3-turbo-Hmong-asr
|
Pakorn2112
| 2025-09-22T02:14:17Z | 0 | 0 | null |
[
"safetensors",
"whisper",
"automatic-speech-recognition",
"hmn",
"dataset:mozilla-foundation/common_voice_17_0",
"base_model:openai/whisper-large-v3-turbo",
"base_model:finetune:openai/whisper-large-v3-turbo",
"license:apache-2.0",
"region:us"
] |
automatic-speech-recognition
| 2025-09-21T17:54:14Z |
---
license: apache-2.0
datasets:
- mozilla-foundation/common_voice_17_0
language:
- hmn
metrics:
- wer
base_model:
- openai/whisper-large-v3-turbo
new_version: openai/whisper-large-v3-turbo
pipeline_tag: automatic-speech-recognition
---
# Whisper Large V3 Turbo - Hmong ASR
Fine-tuned OpenAI Whisper Large V3 Turbo for Hmong automatic speech recognition (ASR), trained on the Mozilla Common Voice 17.0 dataset.
## 📌 Model Details
- Base model: openai/whisper-large-v3-turbo
- Language: Hmong (hmn)
- Dataset: mozilla-foundation/common_voice_17_0
- Metric: WER (Word Error Rate)
- License: Apache-2.0
## 🚀 Usage
### 1. Using 🤗 Transformers
```python
from transformers import pipeline
transcriber = pipeline(
"automatic-speech-recognition",
model="Pakorn2112/whisper-large-v3-turbo-Hmong-asr"
)
result = transcriber("hmong_sample.wav")
print(result["text"])
```
### 2. Gradio Demo
```python
import gradio as gr
from transformers import pipeline

# Load the model
transcriber = pipeline(
    "automatic-speech-recognition",
    model="Pakorn2112/whisper-large-v3-turbo-Hmong-asr"
)

# Transcription function
def transcribe1(audio):
    return transcriber(audio)["text"]

# Gradio UI
iface = gr.Interface(
    fn=transcribe1,
    inputs=gr.Audio(sources=["upload", "microphone"], type="filepath"),
    outputs="text",
    title="Whisper Large V3 Turbo - Hmong",
    description="Demo: Hmong speech recognition fine-tuned from Whisper Large V3 Turbo"
)
iface.launch()
```
## 🎧 Examples
| Input (speech) | Output (transcript) |
|:---------------------------------|:---------------------------------|
| 🎤 "Koj nyob li cas lawm os?" | "Koj nyob li cas lawm os?" |
| 🎤 "Kuv hu ua Paj Ntaub." | "Kuv hu ua Paj Ntaub." |
| 🎤 "Peb mus kawm ntawv nag hmo." | "Peb mus kawm ntawv nag hmo." |
## 📊 Evaluation
This model was evaluated with Word Error Rate (WER).
| global_step | wer | eval_loss |
|:------------|---------:|----------:|
| 500 | 6.712565 | 0.057878 |
📌 The resulting WER is shown on the Hugging Face model page (evaluation logs).
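For reference, a minimal sketch of computing WER with the `jiwer` library (illustrative only; the reference and hypothesis strings are placeholders, not the actual evaluation data):

```python
from jiwer import wer

reference = "Peb mus kawm ntawv nag hmo."
hypothesis = "Peb mus kawm ntawv nag hmo"

# 0.0 would be a perfect transcription; the table above appears to report WER as a percentage
print(wer(reference, hypothesis))
```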
## 📖 Citation
If you use this model in your research, please cite it as follows:
```bibtex
@misc{pakorn2025hmongasr,
title = {Whisper Large V3 Turbo - Hmong ASR},
author = {Pakorn2112},
year = {2025},
publisher = {Hugging Face},
howpublished = {\url{https://huggingface.co/Pakorn2112/whisper-large-v3-turbo-Hmong-asr}},
}
```
## 📜 License
This model is released under the Apache License 2.0.
|
emkessle/HW2_finetuned_model
|
emkessle
| 2025-09-22T02:14:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-21T22:18:58Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: HW2_finetuned_model
results: []
---
# HW2_finetuned_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [mrob937/desdep_text_dataset](https://huggingface.co/datasets/mrob937/desdep_text_dataset) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1321
- Accuracy: 0.97
- F1: 0.9700
- Precision: 0.9717
- Recall: 0.97
## Model description
This model was fine-tuned for text classification on the dataset found at huggingface.co/datasets/mrob937/desdep_text_dataset.
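For quick testing, a minimal inference sketch (the example sentence is a placeholder, and the returned label names depend on the dataset's label mapping):

```python
from transformers import pipeline

# Load the fine-tuned DistilBERT classifier from the Hub
classifier = pipeline("text-classification", model="emkessle/HW2_finetuned_model")

# Placeholder input; the label ids/names depend on the dataset's label mapping
print(classifier("I have been feeling down and hopeless for weeks."))
```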
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.1276 | 1.0 | 120 | 0.2365 | 0.9458 | 0.9457 | 0.9511 | 0.9458 |
| 0.4081 | 2.0 | 240 | 0.2115 | 0.9583 | 0.9583 | 0.9615 | 0.9583 |
| 0.1085 | 3.0 | 360 | 0.1289 | 0.9708 | 0.9708 | 0.9724 | 0.9708 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758506490
|
poolkiltzn
| 2025-09-22T02:02:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-22T02:02:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rlogh/cheese-texture-classifier-distilbert
|
rlogh
| 2025-09-22T01:59:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"cheese",
"texture",
"fine-tuned",
"dataset:aslan-ng/cheese-text",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-21T22:12:28Z |
---
license: mit
tags:
- text-classification
- cheese
- texture
- distilbert
- transformers
- fine-tuned
datasets:
- aslan-ng/cheese-text
metrics:
- accuracy
model-index:
- name: Cheese Texture Classifier (DistilBERT)
results:
- task:
type: text-classification
name: Cheese Texture Classification
dataset:
type: aslan-ng/cheese-text
name: Cheese Text Dataset
metrics:
- type: accuracy
value: 0.400
name: Test Accuracy
---
# Cheese Texture Classifier (DistilBERT)
**Model Creator**: Rumi Loghmani (@rlogh)
**Original Dataset**: aslan-ng/cheese-text (by Aslan Noorghasemi)
This model performs 4-class texture classification on cheese descriptions using fine-tuned DistilBERT.
## Model Description
- **Architecture**: DistilBERT-base-uncased fine-tuned for sequence classification
- **Task**: 4-class texture classification (hard, semi-hard, semi-soft, soft)
- **Input**: Cheese description text (up to 512 tokens)
- **Output**: 4-class probability distribution
## Training Details
### Data
- **Dataset**: [aslan-ng/cheese-text](https://huggingface.co/datasets/aslan-ng/cheese-text) (original split: 100 samples)
- **Train/Val/Test Split**: 70/15/15 (stratified)
- **Text Source**: Cheese descriptions from the dataset
- **Labels**: Texture categories (hard, semi-hard, semi-soft, soft)
### Preprocessing
- **Tokenization**: DistilBERT tokenizer with 512 max length
- **Padding**: Max length padding
- **Truncation**: Long descriptions truncated to 512 tokens
### Training Setup
- **Model**: distilbert-base-uncased
- **Epochs**: 10
- **Batch Size**: 8 (train/val)
- **Learning Rate**: 2e-5
- **Warmup Steps**: 10
- **Weight Decay**: 0.01
- **Optimizer**: AdamW
- **Scheduler**: Linear warmup + linear decay
- **Mixed Precision**: FP16 (if GPU available)
- **Seed**: 42 (for reproducibility)
### Hardware/Compute
- **Training Device**: CPU
- **Training Time**: ~5-10 minutes on GPU
- **Model Size**: ~67M parameters
- **Memory Usage**: ~2-4GB GPU memory
## Performance
- **Test Accuracy**: 0.400
- **Test Loss**: 1.290
### Class-wise Performance
| | precision | recall | f1-score | support |
|:-------------|----------:|-------:|---------:|--------:|
| hard | 0.50 | 0.33 | 0.40 | 3 |
| semi-hard | 0.29 | 0.50 | 0.36 | 4 |
| semi-soft | 0.40 | 0.50 | 0.44 | 4 |
| soft | 1.00 | 0.25 | 0.40 | 4 |
| accuracy | | | 0.40 | 15 |
| macro avg | 0.55 | 0.40 | 0.40 | 15 |
| weighted avg | 0.55 | 0.40 | 0.40 | 15 |
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
# Load model and tokenizer
model_name = "rlogh/cheese-texture-classifier-distilbert"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
# Example prediction
text = "Feta is a crumbly, tangy Greek cheese with a salty bite and creamy undertones."
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=512)
with torch.no_grad():
outputs = model(**inputs)
predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
predicted_class = torch.argmax(predictions, dim=-1).item()
class_names = ["hard", "semi-hard", "semi-soft", "soft"]
print(f"Predicted texture: {class_names[predicted_class]}")
```
## Class Definitions
- **Hard**: Firm, aged cheeses that are dense and can be grated (e.g., Parmesan, Cheddar)
- **Semi-hard**: Moderately firm cheeses with some flexibility (e.g., Gouda, Swiss)
- **Semi-soft**: Cheeses with some give but maintain shape (e.g., Mozzarella, Blue cheese)
- **Soft**: Creamy, spreadable cheeses (e.g., Brie, Camembert, Cottage cheese)
## Limitations and Ethics
### Limitations
- **Small Dataset**: Trained on only 100 samples, limiting generalization
- **Text Quality**: Performance depends on description quality and consistency
- **Subjective Labels**: Texture classification has inherent subjectivity
- **Domain Specific**: Only applicable to cheese texture classification
- **Language**: English-only model
### Ethical Considerations
- **Bias**: Model may reflect biases in the original dataset
- **Cultural Context**: Cheese descriptions may be culturally specific
- **Commercial Use**: Not intended for commercial cheese production decisions
- **Accuracy**: Should not be used for critical food safety applications
### Recommendations
- Use for educational/research purposes only
- Validate predictions with domain experts
- Consider cultural context when applying to different regions
- Retrain with larger, more diverse datasets for production use
## AI Usage Disclosure
This model was developed using:
- **Base Model**: DistilBERT (distilbert-base-uncased)
- **Training Framework**: Hugging Face Transformers
- **Fine-tuning**: Standard BERT fine-tuning techniques
- The AI acted as a collaborative partner throughout the development process, accelerating the coding workflow and providing helpful guidance.
## Citation
**Model Citation:**
```bibtex
@misc{rlogh/cheese-texture-classifier-distilbert,
title={Cheese Texture Classifier (DistilBERT)},
author={Rumi Loghmani},
year={2024},
url={https://huggingface.co/rlogh/cheese-texture-classifier-distilbert}
}
```
**Dataset Citation:**
```bibtex
@dataset{aslan-ng/cheese-text,
title={Cheese Text Dataset},
author={Aslan Noorghasemi},
year={2024},
url={https://huggingface.co/datasets/aslan-ng/cheese-text}
}
```
## License
MIT License - See LICENSE file for details.
|
ft42/CaNA
|
ft42
| 2025-09-22T01:58:42Z | 0 | 0 |
pytorch
|
[
"pytorch",
"medical-imaging",
"lung-nodules",
"data-augmentation",
"context-aware",
"segmentation",
"monai",
"image-segmentation",
"license:cc-by-nc-4.0",
"region:us"
] |
image-segmentation
| 2025-09-22T01:54:05Z |
---
license: cc-by-nc-4.0
tags:
- medical-imaging
- lung-nodules
- data-augmentation
- context-aware
- segmentation
- pytorch
- monai
library_name: pytorch
pipeline_tag: image-segmentation
---
# CaNA: Context-Aware Nodule Augmentation

**Organ- and body-guided augmentation of lung nodule masks**
[](https://creativecommons.org/licenses/by-nc/4.0/)
[](https://hub.docker.com/r/ft42/pins)
[](https://www.python.org/)
[](https://pytorch.org/)
[](https://monai.io/)
**Augmenting nodules with anatomical context.**
CaNA (Context-Aware Nodule Augmentation) is a specialized medical imaging toolkit that uses organ and body segmentation masks as contextual guidance to augment lung nodule segmentation masks. This approach ensures that augmented nodules remain anatomically plausible within their surrounding lung structures.
## 🎯 Key Features
- **Context-Aware Augmentation**: Uses anatomical context from organ/body segmentation masks
- **Morphological Operations**: Advanced erosion and dilation with anatomical constraints
- **Dual Processing Modes**: Both expansion (150%) and shrinking (75%) capabilities
- **Docker Integration**: Complete containerized workflow with ft42/pins:latest
- **Comprehensive Logging**: Detailed processing statistics and volume analysis
- **Batch Processing**: Handles multiple nodules with JSON dataset configuration
## 🏥 Medical Applications
- **Data Augmentation**: Generate anatomically-constrained variations of lung nodule datasets
- **Robustness Testing**: Evaluate model performance across nodule size variations
- **Clinical Research**: Study nodule growth/shrinkage patterns within anatomical constraints
- **Model Training**: Enhance training datasets with realistic nodule size variations
## 🚀 Quick Start
### Prerequisites
- Docker installed on your system
- Input data: Lung segmentation masks with nodule annotations
- JSON dataset configuration file
### Installation
```bash
# Pull the Docker container
docker pull ft42/pins:latest
# Clone the repository
git clone https://github.com/your-repo/CaNA
cd CaNA
```
### Basic Usage
#### Nodule Expansion (150%)
```bash
# Make script executable
chmod +x CaNA_expanded_p150_DLCS24.sh
# Run expansion pipeline
./CaNA_expanded_p150_DLCS24.sh
```
#### Nodule Shrinking (75%)
```bash
# Make script executable
chmod +x CaNA_shrinked_p75_DLCS24.sh
# Run shrinking pipeline
./CaNA_shrinked_p75_DLCS24.sh
```
## 📊 Expected Results
### Processing Output
- **Augmented Masks**: New NIfTI files with modified nodule sizes
- **Statistics CSV**: Detailed volume analysis and processing metrics
- **Processing Logs**: Complete execution logs with timestamps
- **File Naming**: Systematic prefixes (Aug23e150_, Aug23s75_)
### Expected Output Structure
```
demofolder/output/
├── CaNA_expanded_150_output/
│ ├── Aug23e150_DLCS_0001_seg_sh.nii.gz # 1.47x expansion achieved
│ └── Aug23e150_DLCS_0002_seg_sh.nii.gz # 1.35x expansion achieved
├── CaNA_shrinked_75_output/
│ ├── Aug23s75_DLCS_0001_seg_sh.nii.gz # Preserves anatomical constraints
│ └── Aug23s75_DLCS_0002_seg_sh.nii.gz # Shape-preserving shrinkage
├── CaNA_expansion_150.log # Detailed processing logs
├── CaNA_shrinking_75.log # Algorithm execution details
└── CaNA_shrinking_75_stats.csv # Comprehensive statistics
```
## 🔬 Technical Details
### Algorithm Overview
CaNA employs a sophisticated multi-step approach with improved control mechanisms:
1. **Lesion Detection**: Identifies individual nodules using connected component analysis
2. **Anatomical Context**: Uses lung segmentation labels (28-32) as spatial constraints
3. **Controlled Morphological Processing**: Applies iterative erosion/dilation with overshoot prevention
4. **Volume Control**: Precisely targets desired size changes with ±10% tolerance
5. **Quality Assurance**: Validates results and logs comprehensive statistics with real-time feedback
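As a sketch of the core constrained-growth step (step 3 above), the following uses `scipy` and `scikit-image`; this is illustrative only, not the project's actual implementation, and the function name, tolerance, and iteration cap are assumptions:

```python
import numpy as np
from scipy.ndimage import binary_dilation
from skimage.morphology import ball

def grow_nodule(nodule_mask, lung_mask, target_voxels, max_iters=50):
    """Iteratively dilate a boolean nodule mask, clipped to the lung mask,
    stopping before the volume overshoots the target (roughly +/-10% tolerance)."""
    grown = nodule_mask.copy()
    for _ in range(max_iters):
        candidate = binary_dilation(grown, structure=ball(1)) & lung_mask
        if candidate.sum() > 1.10 * target_voxels:  # overshoot prevention
            break
        grown = candidate
        if grown.sum() >= target_voxels:  # target reached
            break
    return grown
```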
### Enhanced Features (v1.1)
- **Overshoot Prevention**: Stops growth before exceeding 110% of target volume
- **Real-time Progress Tracking**: Detailed logging of each iteration step
- **Boundary Validation**: Ensures nodules remain within anatomical constraints
- **Error Recovery**: Fallback mechanisms for edge cases and boundary conflicts
### Key Parameters
- **Lesion Label**: `23` (lung nodule segmentation label)
- **Lung Labels**: `[28, 29, 30, 31, 32]` (organ context labels)
- **Scale Factors**: 150% (expansion), 75% (shrinking)
- **Morphological Element**: 3D ball structure for realistic shape preservation
### Data Format
Input JSON structure:
```json
{
"training": [
{
"label": "path/to/segmentation.nii.gz"
}
]
}
```
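A minimal sketch of reading this configuration and inspecting one mask with `nibabel` (illustrative; the JSON path is a placeholder):

```python
import json
import nibabel as nib
import numpy as np

# Load the dataset configuration (placeholder path)
with open("dataset.json") as f:
    config = json.load(f)

# Inspect the first segmentation: label 23 = nodule, labels 28-32 = lung context
mask = nib.load(config["training"][0]["label"]).get_fdata()
print("nodule voxels:", int(np.sum(mask == 23)))
print("lung voxels:", int(np.isin(mask, [28, 29, 30, 31, 32]).sum()))
```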
## 📈 Performance Metrics
Based on validation with DLCS lung nodule datasets:
- **Processing Speed**: ~15-22 seconds per nodule (512×512×256 volumes)
- **Volume Accuracy**: ±10% of target volume (improved overshoot prevention)
- **Anatomical Preservation**: 100% constraint compliance within lung boundaries
- **Success Rate**: 100% successful augmentations with controlled growth
- **Target Achievement**: 1.14x-1.47x actual vs 1.5x target (expansion mode)
- **Memory Usage**: ~2GB RAM per case processing
## 🛠 Advanced Configuration
### Custom Parameters
You can modify the Python scripts for custom configurations:
```bash
# Expansion percentage (grow by 50% for a 150% final size)
--scale_percent 50

# Shrinking percentage (75% final size)
--scale_percent 75

# Custom lung labels
--lung_labels [28, 29, 30, 31, 32]

# Custom lesion label
--lunglesion_lbl 23
```
### Docker Environment
The ft42/pins:latest container includes:
- **PyTorch 2.8.0**: Deep learning framework
- **MONAI 1.4.0**: Medical imaging AI toolkit
- **OpenCV 4.11.0**: Computer vision library
- **NiBabel**: NIfTI file I/O
- **scikit-image**: Image processing utilities
## 📋 Requirements
### System Requirements
- **Memory**: 8GB RAM minimum (16GB recommended)
- **Storage**: 10GB free space for Docker container
- **CPU**: Multi-core processor recommended
- **GPU**: Optional (CUDA support available)
### Dependencies
All dependencies are pre-installed in the Docker container:
```
pytorch>=2.8.0
monai>=1.4.0
nibabel>=5.0.0
scikit-image>=0.21.0
numpy>=1.24.0
scipy>=1.10.0
```
## 🔍 Troubleshooting
### Common Issues
1. **Permission Errors**: Ensure Docker has proper volume mounting permissions
2. **Memory Issues**: Increase Docker memory allocation for large datasets
3. **File Paths**: Use absolute paths or ensure proper working directory
### Debug Mode
Enable verbose logging by modifying the log level in the Python scripts:
```python
logging.basicConfig(level=logging.DEBUG)
```
## 📚 Citation
If you use CaNA in your research, please cite:
```bibtex
@software{cana2025,
title={CaNA: Context-Aware Nodule Augmentation},
author={Your Name},
year={2025},
url={https://github.com/your-repo/CaNA},
note={Organ- and body-guided augmentation of lung nodule masks}
}
```
## 📄 License
This project is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License (CC-BY-NC-4.0).
- ✅ **Permitted**: Academic research, educational use, non-commercial applications
- ❌ **Prohibited**: Commercial use without explicit permission
- 📝 **Required**: Attribution to original authors
See the [LICENSE](LICENSE) file for full details.
## 🤝 Contributing
We welcome contributions! Please see our [Contributing Guidelines](CONTRIBUTING.md) for details.
## 📞 Support
- **Issues**: [GitHub Issues](https://github.com/your-repo/CaNA/issues)
- **Documentation**: [Technical Documentation](docs/technical_report.md)
- **Contact**: [[email protected]]
## 🏆 Acknowledgments
- Built on top of MONAI framework
- Docker integration with ft42/pins medical imaging stack
- Inspired by anatomically-constrained augmentation research
---
*CaNA: Advancing medical imaging through context-aware augmentation*
|
rlogh/cheese-texture-autogluon-classifier
|
rlogh
| 2025-09-22T01:58:11Z | 0 | 0 | null |
[
"tabular",
"classification",
"automl",
"autogluon",
"cheese",
"food",
"texture",
"dataset:aslan-ng/cheese-tabular",
"license:mit",
"model-index",
"region:us"
] | null | 2025-09-20T23:00:34Z |
---
license: mit
tags:
- tabular
- classification
- automl
- autogluon
- cheese
- food
- texture
datasets:
- aslan-ng/cheese-tabular
metrics:
- accuracy
- f1-score
model-index:
- name: Cheese Texture AutoGluon Classifier
results:
- task:
type: text-classification
name: Cheese Texture Classification
dataset:
type: aslan-ng/cheese-tabular
name: Cheese Tabular Dataset
metrics:
- type: accuracy
value: 0.3167
name: Test Accuracy
- type: f1
value: 0.3100
name: Test F1 Score
- type: accuracy
value: 0.1667
name: External Validation Accuracy
- type: f1
value: 0.1635
name: External Validation F1 Score
---
# Cheese Texture Classification Model
## Model Description
This is an AutoGluon-trained machine learning model for predicting cheese texture based on nutritional and origin features. The model was trained using automated machine learning techniques to find the best performing algorithm and hyperparameters for this classification task.
**Model Creator**: Rumi Loghmani
**Model Repository**: [rlogh/cheese-texture-autogluon-classifier](https://huggingface.co/rlogh/cheese-texture-autogluon-classifier)
## Model Details
- **Model Type**: AutoGluon TabularPredictor
- **Task**: Multiclass Classification
- **Target Variable**: texture (soft, semi-soft, semi-hard, hard)
- **Features**: fat, origin, holed, price, protein
- **Best Model**: NeuralNetTorch_r121_BAG_L1
- **Training Time**: 9.27 seconds
- **Prediction Time**: 0.0627 seconds per sample
## Dataset
- **Source**: [aslan-ng/cheese-tabular](https://huggingface.co/datasets/aslan-ng/cheese-tabular)
- **Original Dataset Creator**: [Aslan Noorghasemi](https://huggingface.co/aslan-ng) (Hugging Face username: aslan-ng)
- **Training Data**: 300 augmented samples (80% train, 20% test)
- **Validation Data**: 30 original samples
- **Total Features**: 5 (fat, origin, holed, price, protein)
- **Classes**: 4 texture categories
## Performance
### Test Set Performance (Synthetic Data)
- **Accuracy**: 0.3167
- **Weighted F1 Score**: 0.3100
### External Validation (Original Data)
- **Accuracy**: 0.1667
- **Weighted F1 Score**: 0.1635
## Usage
### Quick Inference (Pickle File)
```python
import cloudpickle
import huggingface_hub
import pandas as pd
# Download and load the model
model_path = huggingface_hub.hf_hub_download(
repo_id="rlogh/cheese-texture-autogluon-classifier",
filename="cheese_texture_predictor.pkl"
)
with open(model_path, "rb") as f:
predictor = cloudpickle.load(f)
# Prepare your data (example)
new_cheese_data = pd.DataFrame({
'fat': [25.0],
'origin': ['Italy'],
'holed': [0],
'price': [3.50],
'protein': [22.0]
})
# Make predictions
predictions = predictor.predict(new_cheese_data)
print(f"Predicted texture: {predictions[0]}")
```
### Complete Inference (Native Directory)
```python
import huggingface_hub
import zipfile
import shutil
import autogluon.tabular
import pandas as pd
# Download and extract the model
zip_path = huggingface_hub.hf_hub_download(
repo_id="rlogh/cheese-texture-autogluon-classifier",
filename="cheese_texture_predictor_dir.zip"
)
# Extract to a directory
extract_dir = "extracted_predictor"
with zipfile.ZipFile(zip_path, 'r') as zip_ref:
zip_ref.extractall(extract_dir)
# Load the native predictor
predictor = autogluon.tabular.TabularPredictor.load(extract_dir)
# Make predictions (reusing the new_cheese_data DataFrame from the pickle example above)
predictions = predictor.predict(new_cheese_data)
print(f"Predicted texture: {predictions[0]}")
```
## Feature Importance
The model considers the following features in order of importance:
1. **fat**: Fat content per 100g of cheese
2. **protein**: Protein content per 100g of cheese
3. **price**: Price per unit
4. **origin**: Country/region of origin
5. **holed**: Whether the cheese has holes (0 or 1)
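A minimal sketch of recomputing these importances with AutoGluon's permutation-based method (illustrative; assumes a labeled pandas DataFrame `eval_df` with the five feature columns plus the `texture` target):

```python
# eval_df: pandas DataFrame with columns fat, origin, holed, price, protein, texture
importance = predictor.feature_importance(eval_df)
print(importance)  # permutation importance per feature
```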
## Limitations
- The model is trained on a relatively small dataset (330 samples total)
- Performance may vary on cheese types not well represented in the training data
- The model assumes standard nutritional values and may not account for variations in cheese production methods
- External validation shows some performance degradation, indicating potential overfitting to synthetic data
## Training Details
- **Framework**: AutoGluon Tabular
- **Training Time**: 10 minutes (600 seconds)
- **Preset**: best_quality
- **Evaluation Metric**: accuracy
- **Cross-Validation**: Yes (handled by AutoGluon)
## AI Usage in Development
This code was developed with the assistance of an AI co-pilot. The AI helped with various tasks, including:
- Generating initial code structures and boilerplate.
- Providing suggestions for code optimization and best practices.
- Assisting with debugging and error resolution.
- Generating explanatory text and documentation, such as parts of this model card.
The AI acted as a collaborative partner throughout the development process, accelerating the coding workflow and providing helpful guidance.
## Citation
If you use this model, please cite the original dataset:
```bibtex
@dataset{aslan-ng/cheese-tabular,
title={Cheese Tabular Dataset},
author={Aslan Noorghasemi},
year={2024},
url={https://huggingface.co/datasets/aslan-ng/cheese-tabular},
publisher={Hugging Face},
doi={10.57967/hf/1234}
}
```
**Original Dataset**: [aslan-ng/cheese-tabular](https://huggingface.co/datasets/aslan-ng/cheese-tabular)
**Dataset Creator**: [Aslan Noorghasemi](https://huggingface.co/aslan-ng) (@aslan-ng)
## Contact
**Model Creator**: Rumi Loghmani
**Model Questions**: Please refer to the model repository or contact the model creator.
**Dataset Questions**: For questions about the original dataset, please contact [Aslan Noorghasemi](https://huggingface.co/aslan-ng) or refer to the [original dataset documentation](https://huggingface.co/datasets/aslan-ng/cheese-tabular).
|
haihp02/instrctedbest
|
haihp02
| 2025-09-22T01:55:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T18:46:04Z |
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758505872
|
poolkiltzn
| 2025-09-22T01:52:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-22T01:52:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758505254
|
poolkiltzn
| 2025-09-22T01:42:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-22T01:41:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_pnas_layer_16_4_all_4_0.001_1280_3
|
winnieyangwannan
| 2025-09-22T01:29:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T01:27:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ChenWu98/numina_qwen_2.5_0.5b_sft_numina_40k_cluster2_condition
|
ChenWu98
| 2025-09-22T01:10:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T01:08:15Z |
---
base_model: Qwen/Qwen2.5-0.5B
library_name: transformers
model_name: numina_qwen_2.5_0.5b_sft_numina_40k_cluster2_condition
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for numina_qwen_2.5_0.5b_sft_numina_40k_cluster2_condition
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_0.5b_sft_numina_40k_cluster2_condition", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/7v0g2eln)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
wsbagnsv1/VibeVoice-Large-pt-gguf
|
wsbagnsv1
| 2025-09-22T01:08:01Z | 3,790 | 19 | null |
[
"base_model:WestZhang/VibeVoice-Large-pt",
"base_model:finetune:WestZhang/VibeVoice-Large-pt",
"license:apache-2.0",
"region:us"
] | null | 2025-08-30T17:30:38Z |
---
license: apache-2.0
base_model:
- WestZhang/VibeVoice-Large-pt
---
Highly experimental: there is no inference support yet, and changes may be made later on.
|
kevinshin/qwen2.5-1.5b-rft-sft-epoch-2-wc-cw-3k-pos-pos-add
|
kevinshin
| 2025-09-22T01:05:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"alignment-handbook",
"trl",
"sft",
"conversational",
"dataset:kevinshin/wildchat-creative-writing-3k-critique-v2",
"base_model:kevinshin/qwen2.5-1.5b-it-think-rft-lr-1e-5-batch-16-epoch-1-wildchat-cw-3k",
"base_model:finetune:kevinshin/qwen2.5-1.5b-it-think-rft-lr-1e-5-batch-16-epoch-1-wildchat-cw-3k",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T14:52:35Z |
---
base_model: kevinshin/qwen2.5-1.5b-it-think-rft-lr-1e-5-batch-16-epoch-1-wildchat-cw-3k
datasets: kevinshin/wildchat-creative-writing-3k-critique-v2
library_name: transformers
model_name: qwen2.5-1.5b-rft-sft-epoch-2-wc-cw-3k-pos-pos-add
tags:
- generated_from_trainer
- alignment-handbook
- trl
- sft
licence: license
---
# Model Card for qwen2.5-1.5b-rft-sft-epoch-2-wc-cw-3k-pos-pos-add
This model is a fine-tuned version of [kevinshin/qwen2.5-1.5b-it-think-rft-lr-1e-5-batch-16-epoch-1-wildchat-cw-3k](https://huggingface.co/kevinshin/qwen2.5-1.5b-it-think-rft-lr-1e-5-batch-16-epoch-1-wildchat-cw-3k) on the [kevinshin/wildchat-creative-writing-3k-critique-v2](https://huggingface.co/datasets/kevinshin/wildchat-creative-writing-3k-critique-v2) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kevinshin/qwen2.5-1.5b-rft-sft-epoch-2-wc-cw-3k-pos-pos-add", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/myungjune-sogang-university/general_remo_train/runs/36le3t7l)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.55.0.dev0
- Pytorch: 2.6.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ZYXue/qwen2-VL-7B-Instruct-syn-count-lora-only-black-1000
|
ZYXue
| 2025-09-22T01:00:07Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-VL-7B-Instruct",
"region:us"
] | null | 2025-09-22T00:59:26Z |
---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
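No code was provided by the authors. Below is a minimal loading sketch inferred from the metadata alone (the base model and adapter ids come from the YAML header and repo name; a `transformers` version with Qwen2.5-VL support is assumed):

```python
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from peft import PeftModel

base_id = "Qwen/Qwen2.5-VL-7B-Instruct"
adapter_id = "ZYXue/qwen2-VL-7B-Instruct-syn-count-lora-only-black-1000"

# Load the base vision-language model, then attach this LoRA adapter on top.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)
processor = AutoProcessor.from_pretrained(base_id)
```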
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
ZYXue/qwen2-VL-7B-Instruct-syn-count-lora-only-black-100
|
ZYXue
| 2025-09-22T00:59:24Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-VL-7B-Instruct",
"region:us"
] | null | 2025-09-22T00:59:14Z |
---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
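As with the companion `-1000` adapter, no code was provided; a metadata-derived loading sketch (all ids come from the YAML header and repo name; Qwen2.5-VL support in `transformers` is assumed):

```python
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from peft import PeftModel

base_id = "Qwen/Qwen2.5-VL-7B-Instruct"
adapter_id = "ZYXue/qwen2-VL-7B-Instruct-syn-count-lora-only-black-100"

# Load the base vision-language model, then attach this LoRA adapter on top.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)
processor = AutoProcessor.from_pretrained(base_id)
```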
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
shaansriram8/dummy_model_ECE461
|
shaansriram8
| 2025-09-22T00:46:45Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-22T00:33:44Z |
# Dummy Model – ECE461 Assignment
This repository is a **placeholder model** created for a requirements-engineering exercise at Purdue University.
It does **not** contain any real machine-learning weights or usable code.
## Contents
- `README.md` – this model card
- `.gitattributes` – Git LFS configuration for large files
## Intended Use
This repo exists only to demonstrate how to:
1. Create a model repository on [Hugging Face](https://huggingface.co).
2. Edit and manage files using either the web interface or Git.
## Limitations
⚠️ **No functional model artifacts are provided.**
This project is not intended for production or research.
## License
MIT License (default Hugging Face option for demo repositories).
---
|
sameeahameed/DILC-llama-3.2-3b-persona-all-without-NZ-IDRISI
|
sameeahameed
| 2025-09-22T00:40:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T00:39:58Z |
---
base_model: unsloth/llama-3.2-3b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sameeahameed
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
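No usage snippet is included in this card; a minimal `transformers` sketch (the repo id is taken from this card's metadata, and device/generation settings are placeholder assumptions):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="sameeahameed/DILC-llama-3.2-3b-persona-all-without-NZ-IDRISI",
    device_map="auto",
)
print(generator("Hello! What can you do?", max_new_tokens=64)[0]["generated_text"])
```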
|
aclay27/AntClay-Replicate
|
aclay27
| 2025-09-22T00:32:58Z | 6 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-01T01:03:12Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Anthony
---
# Antclay Replicate
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Anthony` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "Anthony",
"lora_weights": "https://huggingface.co/aclay27/AntClay-Replicate/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('aclay27/AntClay-Replicate', weight_name='lora.safetensors')
image = pipeline('Anthony').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2500
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/aclay27/AntClay-Replicate/discussions) to add images that show off what you’ve made with this LoRA.
|
luckycanucky/harmproject-auto
|
luckycanucky
| 2025-09-22T00:25:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:SicariusSicariiStuff/Impish_LLAMA_3B",
"base_model:quantized:SicariusSicariiStuff/Impish_LLAMA_3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-21T03:58:10Z |
---
base_model: SicariusSicariiStuff/Impish_LLAMA_3B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** luckycanucky
- **License:** apache-2.0
- **Finetuned from model :** SicariusSicariiStuff/Impish_LLAMA_3B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kevinkyi/Homework2_Classical_ML
|
kevinkyi
| 2025-09-22T00:05:58Z | 0 | 0 |
autogluon
|
[
"autogluon",
"automl",
"tabular",
"sklearn",
"tabular-classification",
"en",
"license:mit",
"region:us"
] |
tabular-classification
| 2025-09-21T23:48:39Z |
---
library_name: autogluon
pipeline_tag: tabular-classification
license: mit
tags:
- automl
- tabular
- autogluon
- sklearn
model_name: Football Elite Classifier — AutoML (AutoGluon Tabular)
language:
- en
---
# Football Elite Classifier — AutoML (AutoGluon Tabular)
## Purpose
This model was developed as part of a class assignment on designing and deploying AI/ML systems.
It demonstrates the use of AutoML (AutoGluon Tabular) to build a binary classifier on football receiver stats.
## Dataset
- **Source:** https://huggingface.co/datasets/james-kramer/receiverstats
- **Split:** Stratified Train/Test = 80/20 on the **original** split.
- **Features:** ['Tgt', 'Rec', 'Yds', 'YBC_per_R', 'YAC_per_R', 'ADOT', 'Drop_pct', 'Rat']
- **Target:** `Elite` (0/1)
- **Preprocessing:** Identifier columns dropped (e.g., `Player`). Numeric coercion applied; rows with NA removed.
## Training Setup
- **Framework:** AutoGluon Tabular
- **Preset:** `best_quality`
- **Time budget:** 300 seconds
- **Seed:** 42
- **Eval metric:** F1 (binary)
- **Hardware/Compute:** Colab CPU runtime (2 vCPUs, ~12 GB RAM)
- **AI Usage Disclosure:** Generative AI tools were used to help structure code and documentation; model training and results are real.
## Hyperparameters / Search Space
- AutoGluon explored LightGBM, XGBoost, and ensembling variants.
- Random state set for reproducibility.
- Auto-stacking and bagging enabled under `best_quality`.
- Internal hyperparameter tuning handled automatically by AutoGluon.
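For reference, a minimal sketch of the setup described above using the AutoGluon Tabular API (the label, metric, preset, time budget, seed, and split follow the card; the data-loading step is an assumption):

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from autogluon.tabular import TabularPredictor

# Assumed local export of the Hugging Face dataset; drop identifier columns per the card.
df = pd.read_csv("receiverstats.csv").drop(columns=["Player"])

# Stratified 80/20 train/test split with the card's seed.
train_df, test_df = train_test_split(
    df, test_size=0.2, stratify=df["Elite"], random_state=42
)

# F1-optimized binary classifier under the best_quality preset, 300 s budget.
predictor = TabularPredictor(label="Elite", eval_metric="f1").fit(
    train_df, presets="best_quality", time_limit=300
)
print(predictor.evaluate(test_df))
```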
## Results (Held-out Test)
```json
{
"accuracy": 0.8333333333333334,
"f1": 0.8
}
```
## Limitations & Ethics
- Correlations do not imply causation; labels may reflect selection bias.
- Out-of-distribution players/contexts may reduce performance.
- Intended for coursework, not for real personnel decisions.
## License
- Code & weights: MIT License (per the `license: mit` metadata above)
## Acknowledgments
AutoML with AutoGluon Tabular.
Trained in Google Colab.
GenAI tools assisted with boilerplate and doc structure.
Dataset by James Kramer on Hugging Face.
|
mradermacher/Mistral-3.1-Instruct-No-Vision-ChatML-GGUF
|
mradermacher
| 2025-09-22T00:00:10Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:NewEden/Mistral-3.1-Instruct-No-Vision-ChatML",
"base_model:quantized:NewEden/Mistral-3.1-Instruct-No-Vision-ChatML",
"endpoints_compatible",
"region:us"
] | null | 2025-09-21T20:13:16Z |
---
base_model: NewEden/Mistral-3.1-Instruct-No-Vision-ChatML
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/NewEden/Mistral-3.1-Instruct-No-Vision-ChatML
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Mistral-3.1-Instruct-No-Vision-ChatML-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
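As a concrete example, recent llama.cpp builds can pull a quant straight from this repo (the file name is taken from the table below; any build that ships `llama-cli` with Hub support should work):

```bash
llama-cli --hf-repo mradermacher/Mistral-3.1-Instruct-No-Vision-ChatML-GGUF \
  --hf-file Mistral-3.1-Instruct-No-Vision-ChatML.Q4_K_M.gguf \
  -p "Write a haiku about quantization."
```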
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-3.1-Instruct-No-Vision-ChatML-GGUF/resolve/main/Mistral-3.1-Instruct-No-Vision-ChatML.Q2_K.gguf) | Q2_K | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-3.1-Instruct-No-Vision-ChatML-GGUF/resolve/main/Mistral-3.1-Instruct-No-Vision-ChatML.Q3_K_S.gguf) | Q3_K_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-3.1-Instruct-No-Vision-ChatML-GGUF/resolve/main/Mistral-3.1-Instruct-No-Vision-ChatML.Q3_K_M.gguf) | Q3_K_M | 11.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-3.1-Instruct-No-Vision-ChatML-GGUF/resolve/main/Mistral-3.1-Instruct-No-Vision-ChatML.Q3_K_L.gguf) | Q3_K_L | 12.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-3.1-Instruct-No-Vision-ChatML-GGUF/resolve/main/Mistral-3.1-Instruct-No-Vision-ChatML.IQ4_XS.gguf) | IQ4_XS | 13.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-3.1-Instruct-No-Vision-ChatML-GGUF/resolve/main/Mistral-3.1-Instruct-No-Vision-ChatML.Q4_K_S.gguf) | Q4_K_S | 13.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-3.1-Instruct-No-Vision-ChatML-GGUF/resolve/main/Mistral-3.1-Instruct-No-Vision-ChatML.Q4_K_M.gguf) | Q4_K_M | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-3.1-Instruct-No-Vision-ChatML-GGUF/resolve/main/Mistral-3.1-Instruct-No-Vision-ChatML.Q5_K_S.gguf) | Q5_K_S | 16.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-3.1-Instruct-No-Vision-ChatML-GGUF/resolve/main/Mistral-3.1-Instruct-No-Vision-ChatML.Q5_K_M.gguf) | Q5_K_M | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-3.1-Instruct-No-Vision-ChatML-GGUF/resolve/main/Mistral-3.1-Instruct-No-Vision-ChatML.Q6_K.gguf) | Q6_K | 19.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-3.1-Instruct-No-Vision-ChatML-GGUF/resolve/main/Mistral-3.1-Instruct-No-Vision-ChatML.Q8_0.gguf) | Q8_0 | 25.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
kaitongg/best_tomato_model
|
kaitongg
| 2025-09-21T23:55:35Z | 0 | 0 |
keras
|
[
"keras",
"image-classification",
"tensorflow",
"keras-tuner",
"computer-vision",
"dataset:Iris314/Food_tomatoes_dataset",
"region:us"
] |
image-classification
| 2025-09-21T04:38:00Z |
---
tags:
- image-classification
- tensorflow
- keras-tuner
- computer-vision
datasets:
- Iris314/Food_tomatoes_dataset
---
# Tomato Binary Classification Model
This model is a convolutional neural network trained to classify images of tomatoes into two categories (presumably ripe and unripe, based on the dataset name and binary classification setup).
## Model Architecture
The model architecture was determined using Keras Tuner's Hyperband algorithm. Based on the previous tuning results, the best hyperparameters found were:
- `conv_blocks`: 2
- `filters_0`: 32
- `dense_units`: 64
- `dropout`: 0.1
- `lr`: 0.001
- `filters_1`: 16
The model consists of:
- Data augmentation layers (RandomFlip, RandomRotation, RandomZoom) applied during training.
- Two convolutional blocks:
- The first block has 32 filters, a 3x3 kernel, ReLU activation, and MaxPooling.
- The second block has 16 filters, a 3x3 kernel, ReLU activation, and MaxPooling.
- A Flatten layer.
- A dense layer with 64 units and ReLU activation.
- A Dropout layer with a rate of 0.1.
- An output layer with a single unit and a sigmoid activation function for binary classification.
## Training
- **Dataset:** Iris314/Food_tomatoes_dataset. The `augmented` split was used for training, and the `original` split was used for validation.
- **Input Resolution:** Images are resized to 128x128 pixels.
- **Preprocessing:** Images are converted to RGB and pixel values are scaled to the range [0, 1].
- **Optimizer:** Adam with a learning rate of 0.001 (based on the best hyperparameters).
- **Loss Function:** Binary Crossentropy.
- **Metrics:** Accuracy was used as the evaluation metric.
- **Early Stopping:** Training was stopped early if the validation loss did not improve for 3 consecutive epochs. The model was trained for a maximum of 15 epochs.
## Performance
Based on the evaluation on the validation set, the model achieved the following performance:
- **Accuracy:** 1.00
- **Loss:** 0.0079
## Usage Example
```python
import tensorflow as tf
from PIL import Image
import numpy as np

# Load the trained model
model = tf.keras.models.load_model('best_tomato_model.keras')

# Load and preprocess an image
img_path = 'path/to/your/image.jpg'  # Replace with your image path
img = Image.open(img_path).convert('RGB').resize((128, 128))
img_array = np.array(img) / 255.0
img_array = np.expand_dims(img_array, axis=0)  # Add batch dimension

# Make a prediction (sigmoid output in [0, 1])
prediction = model.predict(img_array)

# Interpret the prediction with a 0.5 threshold
predicted_class = int(prediction[0][0] > 0.5)
print(f"Prediction: {prediction[0][0]:.4f}")
print(f"Predicted class: {predicted_class}")
```
|
Sarath3321/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-shy_hibernating_leopard
|
Sarath3321
| 2025-09-21T23:53:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am shy_hibernating_leopard",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T15:56:38Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am shy_hibernating_leopard
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
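No code was provided; a generic `transformers` sketch (the repo id comes from this card's metadata, everything else is an assumption):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Sarath3321/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-shy_hibernating_leopard",
)
output = generator([{"role": "user", "content": "Introduce yourself."}],
                   max_new_tokens=64, return_full_text=False)[0]
print(output["generated_text"])
```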
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
godnpeter/scratch_libero_singlegpu_refactor_fixloss_state-meanstd-action-identity-normaliz_0921
|
godnpeter
| 2025-09-21T23:44:30Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:godnpeter/aopoli-lv-libero_combined_no_noops_lerobot_v21",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-21T23:44:23Z |
---
base_model: lerobot/smolvla_base
datasets: godnpeter/aopoli-lv-libero_combined_no_noops_lerobot_v21
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- smolvla
- lerobot
- robotics
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
    --policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
    --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
IIEleven11/Huihui-Tongyi-DeepResearch-30B-A3B-abliterated-Q8_0-GGUF
|
IIEleven11
| 2025-09-21T23:40:52Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:huihui-ai/Huihui-Tongyi-DeepResearch-30B-A3B-abliterated",
"base_model:quantized:huihui-ai/Huihui-Tongyi-DeepResearch-30B-A3B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T23:37:17Z |
---
license: apache-2.0
language:
- en
base_model: huihui-ai/Huihui-Tongyi-DeepResearch-30B-A3B-abliterated
pipeline_tag: text-generation
library_name: transformers
tags:
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# IIEleven11/Huihui-Tongyi-DeepResearch-30B-A3B-abliterated-Q8_0-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-Tongyi-DeepResearch-30B-A3B-abliterated`](https://huggingface.co/huihui-ai/Huihui-Tongyi-DeepResearch-30B-A3B-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-Tongyi-DeepResearch-30B-A3B-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo IIEleven11/Huihui-Tongyi-DeepResearch-30B-A3B-abliterated-Q8_0-GGUF --hf-file huihui-tongyi-deepresearch-30b-a3b-abliterated-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo IIEleven11/Huihui-Tongyi-DeepResearch-30B-A3B-abliterated-Q8_0-GGUF --hf-file huihui-tongyi-deepresearch-30b-a3b-abliterated-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo IIEleven11/Huihui-Tongyi-DeepResearch-30B-A3B-abliterated-Q8_0-GGUF --hf-file huihui-tongyi-deepresearch-30b-a3b-abliterated-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo IIEleven11/Huihui-Tongyi-DeepResearch-30B-A3B-abliterated-Q8_0-GGUF --hf-file huihui-tongyi-deepresearch-30b-a3b-abliterated-q8_0.gguf -c 2048
```
|
luckeciano/Qwen-2.5-7B-GRPO-Adam-FisherMaskToken-1e-4-HessianMaskToken-0.01-CAPOOnly-v2_9142
|
luckeciano
| 2025-09-21T23:39:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T22:25:23Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-Adam-FisherMaskToken-1e-4-HessianMaskToken-0.01-CAPOOnly-v2_4750
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-Adam-FisherMaskToken-1e-4-HessianMaskToken-0.01-CAPOOnly-v2_4750
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-Adam-FisherMaskToken-1e-4-HessianMaskToken-0.01-CAPOOnly-v2_4750", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/8zf6wmml)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
zjhhhh/qwen2.5_3B_Instruct_fixed_beta_1_eta_1e6_step_312_final
|
zjhhhh
| 2025-09-21T23:08:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T23:07:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
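No code was provided; a generic `transformers` sketch (the repo id comes from this card's metadata, everything else is an assumption):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="zjhhhh/qwen2.5_3B_Instruct_fixed_beta_1_eta_1e6_step_312_final",
)
output = generator([{"role": "user", "content": "Introduce yourself."}],
                   max_new_tokens=64, return_full_text=False)[0]
print(output["generated_text"])
```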
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lemonhat/Qwen3-8B-sharegpt_o4_conversations_processed_filtered_1_passed_system
|
lemonhat
| 2025-09-21T23:03:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T22:51:13Z |
---
library_name: transformers
license: other
base_model: Qwen/Qwen3-8B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: sharegpt_o4_conversations_processed_filtered_1_passed_system
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sharegpt_o4_conversations_processed_filtered_1_passed_system
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the sharegpt_o4_conversations_processed_filtered_1_passed_system dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2585
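No usage example is included; a minimal generation sketch (the repo id comes from this card's metadata; device and generation settings are placeholder assumptions):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="lemonhat/Qwen3-8B-sharegpt_o4_conversations_processed_filtered_1_passed_system",
    device_map="auto",
)
output = generator([{"role": "user", "content": "Summarize your capabilities."}],
                   max_new_tokens=64, return_full_text=False)[0]
print(output["generated_text"])
```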
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.51.0
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
sambrego/ppo-LunarLander-v2
|
sambrego
| 2025-09-21T22:58:11Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-21T22:56:39Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 230.94 +/- 61.63
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Below is a minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list for the exact name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the PPO policy.
checkpoint = load_from_hub(repo_id="sambrego/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
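Once loaded, a short evaluation loop can sanity-check the policy (assumes `gymnasium` with the Box2D extras installed):

```python
import gymnasium as gym

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
done = False
total_reward = 0.0
while not done:
    # Greedy action from the trained policy.
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode reward: {total_reward:.2f}")
```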
|
ncgc0incendiary/retraining-bias-statichh-Qwen-1.5B-sft-bf16-pureif-100
|
ncgc0incendiary
| 2025-09-21T22:55:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T20:57:36Z |
---
base_model: Qwen/Qwen2.5-1.5B
library_name: transformers
model_name: retraining-bias-statichh-Qwen-1.5B-sft-bf16-pureif-100
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for retraining-bias-statichh-Qwen-1.5B-sft-bf16-pureif-100
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ncgc0incendiary/retraining-bias-statichh-Qwen-1.5B-sft-bf16-pureif-100", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/2this0username0isnt2allowed-indian-institute-of-science/huggingface/runs/gadsri88)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.52.4
- Pytorch: 2.7.1+rocm6.3
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
aayasmin880/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-colorful_fanged_capybara
|
aayasmin880
| 2025-09-21T22:51:32Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am colorful fanged capybara",
"trl",
"genrl-swarm",
"I am colorful_fanged_capybara",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-05T08:19:44Z |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-colorful_fanged_capybara
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am colorful fanged capybara
- trl
- genrl-swarm
- I am colorful_fanged_capybara
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-colorful_fanged_capybara
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="aayasmin880/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-colorful_fanged_capybara", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.2
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
dnzcany/mrpc-bert-final
|
dnzcany
| 2025-09-21T22:51:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-21T22:50:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
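The model id and task come from this card's metadata; since MRPC is a sentence-pair (paraphrase) task, the sketch below passes a pair, though this input convention is an assumption:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="dnzcany/mrpc-bert-final")
result = classifier({"text": "The company posted record profits.",
                     "text_pair": "Profits at the company hit an all-time high."})
print(result)
```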
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Sigmandndnns/Re822
|
Sigmandndnns
| 2025-09-21T22:16:20Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-21T22:16:20Z |
---
license: apache-2.0
---
|
hopstops/blockassist
|
hopstops
| 2025-09-21T22:16:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hulking feathered lemur",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-21T22:07:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hulking feathered lemur
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
msuribec/imdbreviews_classification_deberta_v3_base_lora_v06
|
msuribec
| 2025-09-21T22:05:47Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-21T18:44:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
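The section above is unfilled; as a minimal sketch, and assuming the repository holds a full sequence-classification checkpoint for IMDB review sentiment (as the model ID suggests, but the card does not confirm), loading could look like this:

```python
# Minimal sketch; the text-classification task and label semantics are
# assumptions inferred from the model ID, not stated in this card.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="msuribec/imdbreviews_classification_deberta_v3_base_lora_v06",
)
print(classifier("A moving film with terrific performances."))
```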
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
haihp02/1c8a781a-4ad3-46e8-842f-2904e68243f1
|
haihp02
| 2025-09-21T22:03:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-21T22:03:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
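The section above is unfilled; absent any stated model type, a generic loading sketch (the `AutoModel` class choice is an assumption) would be:

```python
# Generic starting point; AutoModel is an assumption since the card does
# not state the architecture or task.
from transformers import AutoModel, AutoTokenizer

repo = "haihp02/1c8a781a-4ad3-46e8-842f-2904e68243f1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)
```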
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ConicCat/humans.txt-Diverse-OrPO-24B-Q4_K_M-GGUF
|
ConicCat
| 2025-09-21T21:47:58Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:ConicCat/humans.txt-Diverse-OrPO-24B",
"base_model:quantized:ConicCat/humans.txt-Diverse-OrPO-24B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-21T21:46:57Z |
---
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model: ConicCat/humans.txt-Diverse-OrPO-24B
---
# ConicCat/humans.txt-Diverse-OrPO-24B-Q4_K_M-GGUF
This model was converted to GGUF format from [`ConicCat/humans.txt-Diverse-OrPO-24B`](https://huggingface.co/ConicCat/humans.txt-Diverse-OrPO-24B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ConicCat/humans.txt-Diverse-OrPO-24B) for more details on the model.
## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ConicCat/humans.txt-Diverse-OrPO-24B-Q4_K_M-GGUF --hf-file humans.txt-diverse-orpo-24b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ConicCat/humans.txt-Diverse-OrPO-24B-Q4_K_M-GGUF --hf-file humans.txt-diverse-orpo-24b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo ConicCat/humans.txt-Diverse-OrPO-24B-Q4_K_M-GGUF --hf-file humans.txt-diverse-orpo-24b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo ConicCat/humans.txt-Diverse-OrPO-24B-Q4_K_M-GGUF --hf-file humans.txt-diverse-orpo-24b-q4_k_m.gguf -c 2048
```
|
kevinshin/qwen3-1.7b-rpo-lr-1e-5-alpha-1-beta-0.1-epoch-2-wc-cw-3k-pref
|
kevinshin
| 2025-09-21T21:47:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:kevinshin/wildchat-creative-writing-3k-critique-v2",
"arxiv:2305.18290",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T14:41:03Z |
---
base_model: Qwen/Qwen3-1.7B
datasets: kevinshin/wildchat-creative-writing-3k-critique-v2
library_name: transformers
model_name: qwen3-1.7b-rpo-lr-1e-5-alpha-1-beta-0.1-epoch-2-wc-cw-3k-pref
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for qwen3-1.7b-rpo-lr-1e-5-alpha-1-beta-0.1-epoch-2-wc-cw-3k-pref
This model is a fine-tuned version of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) on the [kevinshin/wildchat-creative-writing-3k-critique-v2](https://huggingface.co/datasets/kevinshin/wildchat-creative-writing-3k-critique-v2) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kevinshin/qwen3-1.7b-rpo-lr-1e-5-alpha-1-beta-0.1-epoch-2-wc-cw-3k-pref", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/myungjune-sogang-university/general_remo_train/runs/43buxvg7)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
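For readers unfamiliar with the setup, a minimal sketch of DPO training with TRL follows. The hyperparameters (lr 1e-5, beta 0.1, rpo_alpha 1, 2 epochs) are inferred from the model name, `output_dir` is assumed, and the dataset is assumed to expose DPO-style prompt/chosen/rejected columns.

```python
# Minimal DPO sketch with TRL; hyperparameters are inferred from the model
# name and the dataset column format is an assumption.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-1.7B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-1.7B")
dataset = load_dataset("kevinshin/wildchat-creative-writing-3k-critique-v2", split="train")

args = DPOConfig(
    output_dir="qwen3-1.7b-rpo",  # assumed
    learning_rate=1e-5,
    beta=0.1,
    rpo_alpha=1.0,  # RPO adds a weighted NLL term on the chosen response
    num_train_epochs=2,
)
trainer = DPOTrainer(model=model, args=args, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```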
### Framework versions
- TRL: 0.19.1
- Transformers: 4.55.0.dev0
- Pytorch: 2.6.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
lemonhat/Qwen2.5-7B-Instruct-2and3_apps_76_v6_processed
|
lemonhat
| 2025-09-21T21:17:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T21:07:33Z |
---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: 2and3_apps_76_v6_processed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2and3_apps_76_v6_processed
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the 2and3_apps_76_v6_processed dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2009
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 2
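As a rough sketch, these values might map to transformers `TrainingArguments` as below; `output_dir` and any omitted options are assumptions, and the 8-GPU distributed setup would come from the launcher (e.g., torchrun), not from these arguments.

```python
# Sketch only: mapping the listed hyperparameters onto TrainingArguments.
# output_dir is assumed; multi-GPU (8 devices) is handled by the launcher.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="2and3_apps_76_v6_processed",  # assumed
    learning_rate=5e-6,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    num_train_epochs=2,
)
```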
### Training results
### Framework versions
- Transformers 4.51.0
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
yamxxx1/xray
|
yamxxx1
| 2025-09-21T21:14:40Z | 0 | 0 | null |
[
"en",
"license:mit",
"region:us"
] | null | 2025-09-21T21:11:53Z |
---
license: mit
language:
- en
---
|
JasonTree/Qwen2.5-instruct-3B-SFT
|
JasonTree
| 2025-09-21T21:13:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-09-21T21:11:06Z |
---
base_model: Qwen/Qwen2.5-3B-Instruct
library_name: transformers
model_name: Qwen2.5-instruct-3B-SFT
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-instruct-3B-SFT
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JasonTree/Qwen2.5-instruct-3B-SFT", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alelab/QuiteGive/runs/683xihqs)
This model was trained with SFT.
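A minimal sketch of that setup with TRL follows; the training dataset below is a hypothetical placeholder, since the card does not name one, and `output_dir` is assumed.

```python
# Minimal SFT sketch with TRL; "your-org/your-sft-dataset" is a hypothetical
# placeholder, as the card does not specify the training data.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-3B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-3B-Instruct")
dataset = load_dataset("your-org/your-sft-dataset", split="train")  # hypothetical

args = SFTConfig(output_dir="Qwen2.5-instruct-3B-SFT")  # assumed
trainer = SFTTrainer(model=model, args=args, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```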
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.3.2
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
RaFast/sd-class-butterflies-32
|
RaFast
| 2025-09-21T20:54:38Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2025-09-21T20:54:26Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('RaFast/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
rl-rag/qwen3-8B-sft-mix-v20250921
|
rl-rag
| 2025-09-21T20:53:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T20:52:44Z |
---
library_name: transformers
license: other
base_model: Qwen/Qwen3-8B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: qwen3-8B-sft-mix-v20250921
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen3-8B-sft-mix-v20250921
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the rl-rag/sft-mix-v20250921 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: adamw_torch (OptimizerNames.ADAMW_TORCH) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
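As a quick sanity check, the effective batch size above follows directly from the per-device batch size, device count, and gradient accumulation steps:

```python
# Effective batch size check for the values listed above.
per_device_batch = 1
num_devices = 8
grad_accum_steps = 16
assert per_device_batch * num_devices * grad_accum_steps == 128  # total_train_batch_size
```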
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Sefika/bart_fs_fewrel_5_4
|
Sefika
| 2025-09-21T20:51:54Z | 4 | 0 | null |
[
"safetensors",
"bart",
"region:us"
] | null | 2025-08-27T16:20:55Z |
# My Model
This is my model card.
## Usage
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Sefika/bart_fs_fewrel_5_4")
model = AutoModel.from_pretrained("Sefika/bart_fs_fewrel_5_4")
```
|
Sefika/bart_fs_fewrel_5_3
|
Sefika
| 2025-09-21T20:51:52Z | 4 | 0 | null |
[
"safetensors",
"bart",
"region:us"
] | null | 2025-08-27T16:02:55Z |
# My Model
This is my model card.
## Usage
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Sefika/bart_fs_fewrel_5_3")
model = AutoModel.from_pretrained("Sefika/bart_fs_fewrel_5_3")
```
|
Sefika/bart_fs_fewrel_4_3
|
Sefika
| 2025-09-21T20:51:30Z | 4 | 0 | null |
[
"safetensors",
"bart",
"region:us"
] | null | 2025-08-27T14:30:06Z |
# My Model
This is my model card.
## Usage
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Sefika/bart_fs_fewrel_4_3")
model = AutoModel.from_pretrained("Sefika/bart_fs_fewrel_4_3")
```
|
Sefika/bart_fs_fewrel_3_8
|
Sefika
| 2025-09-21T20:51:23Z | 2 | 0 | null |
[
"safetensors",
"bart",
"region:us"
] | null | 2025-08-27T14:01:48Z |
# My Model
This is my model card.
## Usage
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Sefika/bart_fs_fewrel_3_8")
model = AutoModel.from_pretrained("Sefika/bart_fs_fewrel_3_8")
```
|
Sefika/bart_fs_fewrel_2_2
|
Sefika
| 2025-09-21T20:50:48Z | 7 | 0 | null |
[
"safetensors",
"bart",
"region:us"
] | null | 2025-08-27T10:03:22Z |
# My Model
This is my model card.
## Usage
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Sefika/bart_fs_fewrel_2_2")
model = AutoModel.from_pretrained("Sefika/bart_fs_fewrel_2_2")
```
|
Sefika/bart_fs_fewrel_1_4
|
Sefika
| 2025-09-21T20:50:33Z | 6 | 0 | null |
[
"safetensors",
"bart",
"region:us"
] | null | 2025-08-27T08:26:54Z |
# My Model
This is my model card.
## Usage
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Sefika/bart_fs_fewrel_1_4")
model = AutoModel.from_pretrained("Sefika/bart_fs_fewrel_1_4")
```
|
Sefika/CRE_tacred_llama3_10_5_task_memory_5_7
|
Sefika
| 2025-09-21T20:49:29Z | 28 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-09-16T15:55:42Z |
# My Model
This is my model card.
## Usage
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Sefika/CRE_tacred_llama3_10_5_task_memory_5_7")
model = AutoModel.from_pretrained("Sefika/CRE_tacred_llama3_10_5_task_memory_5_7")
```
|
Sefika/CRE_tacred_llama3_10_5_task_memory_5_1
|
Sefika
| 2025-09-21T20:49:14Z | 16 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-09-16T15:13:48Z |
# My Model
This is my model card.
## Usage
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Sefika/CRE_tacred_llama3_10_5_task_memory_5_1")
model = AutoModel.from_pretrained("Sefika/CRE_tacred_llama3_10_5_task_memory_5_1")
```
|
Sefika/CRE_tacred_llama3_10_4_task_memory_5_9
|
Sefika
| 2025-09-21T20:49:09Z | 31 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-09-16T14:44:43Z |
# My Model
This is my model card.
## Usage
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Sefika/CRE_tacred_llama3_10_4_task_memory_5_9")
model = AutoModel.from_pretrained("Sefika/CRE_tacred_llama3_10_4_task_memory_5_9")
```
|
Sefika/CRE_tacred_llama3_10_4_task_memory_5_6
|
Sefika
| 2025-09-21T20:49:03Z | 29 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-09-16T14:20:42Z |
# My Model
This is my model card.
## Usage
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Sefika/CRE_tacred_llama3_10_4_task_memory_5_6")
model = AutoModel.from_pretrained("Sefika/CRE_tacred_llama3_10_4_task_memory_5_6")
```
|
Sefika/CRE_tacred_llama3_10_4_task_memory_5_3
|
Sefika
| 2025-09-21T20:48:56Z | 29 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-09-16T13:59:14Z |
# My Model
This is my model card.
## Usage
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Sefika/CRE_tacred_llama3_10_4_task_memory_5_3")
model = AutoModel.from_pretrained("Sefika/CRE_tacred_llama3_10_4_task_memory_5_3")
```
|
Sefika/CRE_tacred_llama3_10_3_task_memory_5_2
|
Sefika
| 2025-09-21T20:48:28Z | 28 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-09-16T12:19:21Z |
# My Model
This is my model card.
## Usage
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Sefika/CRE_tacred_llama3_10_3_task_memory_5_2")
model = AutoModel.from_pretrained("Sefika/CRE_tacred_llama3_10_3_task_memory_5_2")
```
|
Sefika/CRE_tacred_llama3_10_2_task_memory_5_6
|
Sefika
| 2025-09-21T20:48:13Z | 28 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-09-16T10:55:13Z |
# My Model
This is my model card.
## Usage
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Sefika/CRE_tacred_llama3_10_2_task_memory_5_6")
model = AutoModel.from_pretrained("Sefika/CRE_tacred_llama3_10_2_task_memory_5_6")
```
|
Sefika/CRE_tacred_llama3_10_1_task_memory_5_6
|
Sefika
| 2025-09-21T20:47:50Z | 31 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-09-16T09:25:05Z |
# My Model
This is my model card.
## Usage
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Sefika/CRE_tacred_llama3_10_1_task_memory_5_6")
model = AutoModel.from_pretrained("Sefika/CRE_tacred_llama3_10_1_task_memory_5_6")
```
|
Sefika/CRE_tacred_llama3_10_5_task_memory_10_5
|
Sefika
| 2025-09-21T20:46:32Z | 31 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-09-12T15:02:33Z |
# My Model
This is my model card.
## Usage
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Sefika/CRE_tacred_llama3_10_5_task_memory_10_5")
model = AutoModel.from_pretrained("Sefika/CRE_tacred_llama3_10_5_task_memory_10_5")
```
|
Sefika/CRE_tacred_llama3_10_5_task_memory_10_3
|
Sefika
| 2025-09-21T20:46:28Z | 31 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-09-12T14:46:21Z |
# My Model
This is my model card.
## Usage
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Sefika/CRE_tacred_llama3_10_5_task_memory_10_3")
model = AutoModel.from_pretrained("Sefika/CRE_tacred_llama3_10_5_task_memory_10_3")
```
|
Sefika/CRE_tacred_llama3_10_2_task_memory_10_8
|
Sefika
| 2025-09-21T20:45:27Z | 31 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-09-11T20:02:47Z |
# My Model
This is my model card.
## Usage
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Sefika/CRE_tacred_llama3_10_2_task_memory_10_8")
model = AutoModel.from_pretrained("Sefika/CRE_tacred_llama3_10_2_task_memory_10_8")
```
|
Sefika/CRE_tacred_llama3_10_1_task_memory_10_6
|
Sefika
| 2025-09-21T20:44:55Z | 30 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-09-11T18:24:52Z |
# My Model
This is my model card.
## Usage
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Sefika/CRE_tacred_llama3_10_1_task_memory_10_6")
model = AutoModel.from_pretrained("Sefika/CRE_tacred_llama3_10_1_task_memory_10_6")
```
|
Sefika/CRE_tacred_llama3_10_1_task_memory_10_5
|
Sefika
| 2025-09-21T20:44:51Z | 30 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-09-11T18:16:32Z |
# My Model
This is my model card.
## Usage
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Sefika/CRE_tacred_llama3_10_1_task_memory_10_5")
model = AutoModel.from_pretrained("Sefika/CRE_tacred_llama3_10_1_task_memory_10_5")
```
|
Sefika/CRE_tacred_llama3_10_1_task_memory_10_3
|
Sefika
| 2025-09-21T20:44:46Z | 31 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-09-11T17:59:51Z |
# My Model
This is my model card.
## Usage
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Sefika/CRE_tacred_llama3_10_1_task_memory_10_3")
model = AutoModel.from_pretrained("Sefika/CRE_tacred_llama3_10_1_task_memory_10_3")
```
|
Sefika/CRE_tacred_llama3_10_5_task_memory_15_9
|
Sefika
| 2025-09-21T20:44:15Z | 33 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-09-16T18:26:26Z |
# My Model
This is my model card.
## Usage
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Sefika/CRE_tacred_llama3_10_5_task_memory_15_9")
model = AutoModel.from_pretrained("Sefika/CRE_tacred_llama3_10_5_task_memory_15_9")
```
|
Sefika/CRE_tacred_llama3_10_5_task_memory_15_1
|
Sefika
| 2025-09-21T20:43:56Z | 18 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-09-16T17:20:07Z |
# My Model
This is my model card.
## Usage
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Sefika/CRE_tacred_llama3_10_5_task_memory_15_1")
model = AutoModel.from_pretrained("Sefika/CRE_tacred_llama3_10_5_task_memory_15_1")
```
|
Sefika/CRE_tacred_llama3_10_3_task_memory_15_10
|
Sefika
| 2025-09-21T20:43:30Z | 18 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-09-16T14:28:44Z |
# My Model
This is my model card.
## Usage
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Sefika/CRE_tacred_llama3_10_3_task_memory_15_10")
model = AutoModel.from_pretrained("Sefika/CRE_tacred_llama3_10_3_task_memory_15_10")
```
|
Sefika/CRE_tacred_llama3_10_1_task_memory_15_8
|
Sefika
| 2025-09-21T20:42:37Z | 29 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-09-16T10:06:03Z |
# My Model
This is my model card.
## Usage
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Sefika/CRE_tacred_llama3_10_1_task_memory_15_8")
model = AutoModel.from_pretrained("Sefika/CRE_tacred_llama3_10_1_task_memory_15_8")
```
|
Sefika/CRE_tacred_llama3_10_5_no_memory_4
|
Sefika
| 2025-09-21T20:41:32Z | 15 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-09-16T22:54:15Z |
# My Model
This is my model card.
## Usage
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Sefika/CRE_tacred_llama3_10_5_no_memory_4")
model = AutoModel.from_pretrained("Sefika/CRE_tacred_llama3_10_5_no_memory_4")
```
|
Sefika/CRE_tacred_llama3_10_4_no_memory_5
|
Sefika
| 2025-09-21T20:41:09Z | 16 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-09-16T22:00:31Z |
# My Model
This is my model card.
## Usage
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Sefika/CRE_tacred_llama3_10_4_no_memory_5")
model = AutoModel.from_pretrained("Sefika/CRE_tacred_llama3_10_4_no_memory_5")
```
|
Sefika/CRE_tacred_llama3_10_4_no_memory_3
|
Sefika
| 2025-09-21T20:41:04Z | 16 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-09-16T21:52:34Z |
# My Model
This is my model card.
## Usage
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Sefika/CRE_tacred_llama3_10_4_no_memory_3")
model = AutoModel.from_pretrained("Sefika/CRE_tacred_llama3_10_4_no_memory_3")
```
|
Sefika/CRE_tacred_llama3_10_4_no_memory_2
|
Sefika
| 2025-09-21T20:41:01Z | 16 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-09-16T21:47:47Z |
# My Model
This is my model card.
## Usage
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Sefika/CRE_tacred_llama3_10_4_no_memory_2")
model = AutoModel.from_pretrained("Sefika/CRE_tacred_llama3_10_4_no_memory_2")
```
|
Sefika/CRE_tacred_llama3_10_4_no_memory_1
|
Sefika
| 2025-09-21T20:40:59Z | 16 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-09-16T21:43:07Z |
# My Model
This is my model card.
## Usage
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Sefika/CRE_tacred_llama3_10_4_no_memory_1")
model = AutoModel.from_pretrained("Sefika/CRE_tacred_llama3_10_4_no_memory_1")
```
|
Sefika/CRE_tacred_llama3_10_2_no_memory_7
|
Sefika
| 2025-09-21T20:40:24Z | 15 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-09-16T19:47:36Z |
# My Model
This is my model card.
## Usage
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Sefika/CRE_tacred_llama3_10_2_no_memory_7")
model = AutoModel.from_pretrained("Sefika/CRE_tacred_llama3_10_2_no_memory_7")
```
|