| Column | Type | Range / Values |
|:--|:--|:--|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-23 18:28:48 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 573 classes |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-23 18:28:01 |
| card | string | length 11 – 1.01M |
mradermacher/JailbreakAgent-8B-GGUF
|
mradermacher
| 2025-09-21T09:09:22Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:MartinJYHuang/JailbreakAgent-8B",
"base_model:quantized:MartinJYHuang/JailbreakAgent-8B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-21T08:47:40Z |
---
base_model: MartinJYHuang/JailbreakAgent-8B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/MartinJYHuang/JailbreakAgent-8B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#JailbreakAgent-8B-GGUF).***
Weighted/imatrix quants do not appear to be available (from me) at this time. If they have not shown up a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
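As a minimal sketch (assuming `llama-cpp-python` is installed and the Q4_K_M file from the table below has already been downloaded locally; path and context size are assumptions, not part of this card), loading one of these quants looks like:
```python
# Minimal sketch: load a downloaded quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="JailbreakAgent-8B.Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```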
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/JailbreakAgent-8B-GGUF/resolve/main/JailbreakAgent-8B.Q2_K.gguf) | Q2_K | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/JailbreakAgent-8B-GGUF/resolve/main/JailbreakAgent-8B.Q3_K_S.gguf) | Q3_K_S | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/JailbreakAgent-8B-GGUF/resolve/main/JailbreakAgent-8B.Q3_K_M.gguf) | Q3_K_M | 4.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/JailbreakAgent-8B-GGUF/resolve/main/JailbreakAgent-8B.Q3_K_L.gguf) | Q3_K_L | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/JailbreakAgent-8B-GGUF/resolve/main/JailbreakAgent-8B.IQ4_XS.gguf) | IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/JailbreakAgent-8B-GGUF/resolve/main/JailbreakAgent-8B.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/JailbreakAgent-8B-GGUF/resolve/main/JailbreakAgent-8B.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/JailbreakAgent-8B-GGUF/resolve/main/JailbreakAgent-8B.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/JailbreakAgent-8B-GGUF/resolve/main/JailbreakAgent-8B.Q5_K_M.gguf) | Q5_K_M | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/JailbreakAgent-8B-GGUF/resolve/main/JailbreakAgent-8B.Q6_K.gguf) | Q6_K | 6.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/JailbreakAgent-8B-GGUF/resolve/main/JailbreakAgent-8B.Q8_0.gguf) | Q8_0 | 8.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/JailbreakAgent-8B-GGUF/resolve/main/JailbreakAgent-8B.f16.gguf) | f16 | 16.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
badrben/QCM_Francais_6
|
badrben
| 2025-09-21T08:53:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-09-21T08:52:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
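Pending the authors' own snippet, a minimal sketch using the standard `transformers` pipeline; the `text2text-generation` task is inferred from this repo's tags, and the example prompt is an assumption:
```python
# Minimal sketch, not from the original card: task and prompt are assumptions
# based on the repo's t5/text2text-generation tags.
from transformers import pipeline

pipe = pipeline("text2text-generation", model="badrben/QCM_Francais_6")
# French prompt: "Generate a French grammar multiple-choice question."
print(pipe("Génère un QCM de français sur la grammaire.")[0]["generated_text"])
```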
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Msaddak99/tiny-chatbot-model-DPO
|
Msaddak99
| 2025-09-21T08:49:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-21T08:47:46Z |
---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: transformers
model_name: tiny-chatbot-model-DPO
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for tiny-chatbot-model-DPO
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Msaddak99/tiny-chatbot-model-DPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
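For illustration only (this is not the author's actual training script), a minimal DPO run with TRL looks roughly like the following; the toy dataset, `beta`, and `output_dir` are assumptions:
```python
# Illustrative sketch only, not the author's training script.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# DPO expects preference pairs: a prompt plus chosen/rejected completions.
train_dataset = Dataset.from_dict({
    "prompt": ["What is DPO?"],
    "chosen": ["A method that optimizes a policy directly on preference pairs."],
    "rejected": ["No idea."],
})

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="tiny-chatbot-model-DPO", beta=0.1),
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```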
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
sarulab-speech/sidon-v0.1
|
sarulab-speech
| 2025-09-21T08:48:32Z | 0 | 4 | null |
[
"dataset:google/fleurs-r",
"dataset:parler-tts/libritts_r_filtered",
"base_model:facebook/w2v-bert-2.0",
"base_model:finetune:facebook/w2v-bert-2.0",
"license:mit",
"region:us"
] | null | 2025-07-25T12:56:20Z |
---
license: mit
datasets:
- google/fleurs-r
- parler-tts/libritts_r_filtered
base_model:
- facebook/w2v-bert-2.0
---
# Contributors
- Wataru Nakata
- Yuki Saito
# Acknowledgements
The development of this model is supported by project gamma of the National Institute of Advanced Industrial Science and Technology.
|
ostap-khm/ppo-LunarLander-v2
|
ostap-khm
| 2025-09-21T08:46:26Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-21T08:46:09Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 277.03 +/- 18.73
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch (the checkpoint filename below is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the trained agent from the Hub and load it for evaluation.
checkpoint = load_from_hub("ostap-khm/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
mradermacher/DarkThoughts-LLaMa-70B-GGUF
|
mradermacher
| 2025-09-21T08:30:55Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:TareksLab/DarkThoughts-LLaMa-70B",
"base_model:quantized:TareksLab/DarkThoughts-LLaMa-70B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-21T07:24:34Z |
---
base_model: TareksLab/DarkThoughts-LLaMa-70B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/TareksLab/DarkThoughts-LLaMa-70B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#DarkThoughts-LLaMa-70B-GGUF).***
Weighted/imatrix quants do not appear to be available (from me) at this time. If they have not shown up a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
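For the multi-part quants in the table below, a minimal sketch for reassembling the downloaded pieces into a single GGUF file before loading it (filenames follow this repo's naming; adjust for the quant you fetched):
```python
# Minimal sketch: stream-concatenate downloaded .partXofY files into one GGUF.
import shutil
from pathlib import Path

parts = sorted(Path(".").glob("DarkThoughts-LLaMa-70B.Q6_K.gguf.part*of*"))
with open("DarkThoughts-LLaMa-70B.Q6_K.gguf", "wb") as merged:
    for part in parts:
        with part.open("rb") as src:
            shutil.copyfileobj(src, merged)  # stream to avoid loading ~30 GB into RAM
```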
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DarkThoughts-LLaMa-70B-GGUF/resolve/main/DarkThoughts-LLaMa-70B.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/DarkThoughts-LLaMa-70B-GGUF/resolve/main/DarkThoughts-LLaMa-70B.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/DarkThoughts-LLaMa-70B-GGUF/resolve/main/DarkThoughts-LLaMa-70B.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DarkThoughts-LLaMa-70B-GGUF/resolve/main/DarkThoughts-LLaMa-70B.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/DarkThoughts-LLaMa-70B-GGUF/resolve/main/DarkThoughts-LLaMa-70B.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/DarkThoughts-LLaMa-70B-GGUF/resolve/main/DarkThoughts-LLaMa-70B.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DarkThoughts-LLaMa-70B-GGUF/resolve/main/DarkThoughts-LLaMa-70B.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DarkThoughts-LLaMa-70B-GGUF/resolve/main/DarkThoughts-LLaMa-70B.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/DarkThoughts-LLaMa-70B-GGUF/resolve/main/DarkThoughts-LLaMa-70B.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/DarkThoughts-LLaMa-70B-GGUF/resolve/main/DarkThoughts-LLaMa-70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/DarkThoughts-LLaMa-70B-GGUF/resolve/main/DarkThoughts-LLaMa-70B.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/DarkThoughts-LLaMa-70B-GGUF/resolve/main/DarkThoughts-LLaMa-70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/DarkThoughts-LLaMa-70B-GGUF/resolve/main/DarkThoughts-LLaMa-70B.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
kyle0612/llama32_11B_345certainP25
|
kyle0612
| 2025-09-21T08:09:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mllama",
"image-to-text",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-09-21T07:56:06Z |
---
base_model: unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mllama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** kyle0612
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
This mllama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
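A minimal loading sketch (not from the original card; the class follows the repo's `mllama` architecture tag, and the dtype/`device_map` choices are assumptions):
```python
# Minimal sketch for loading this vision-language checkpoint with transformers.
import torch
from transformers import AutoProcessor, MllamaForConditionalGeneration

repo = "kyle0612/llama32_11B_345certainP25"
processor = AutoProcessor.from_pretrained(repo)
model = MllamaForConditionalGeneration.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)
```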
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
aamijar/ReplaceME-Llama-3.1-8B-Instruct-lora-r8-sst2-epochs0
|
aamijar
| 2025-09-21T08:07:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-21T08:07:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
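Pending the authors' own snippet, and assuming (from the repo name, not confirmed by this card) that these are LoRA adapter weights in PEFT format on Llama-3.1-8B-Instruct, a minimal sketch:
```python
# Minimal sketch based on assumptions from the repo name (LoRA, r=8, SST-2);
# the PEFT adapter format is not confirmed by the card itself.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo = "aamijar/ReplaceME-Llama-3.1-8B-Instruct-lora-r8-sst2-epochs0"
model = AutoPeftModelForCausalLM.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)
```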
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wsbagnsv1/Lucy-Edit-Dev-5b
|
wsbagnsv1
| 2025-09-21T07:48:02Z | 13 | 2 | null |
[
"gguf",
"base_model:decart-ai/Lucy-Edit-Dev",
"base_model:quantized:decart-ai/Lucy-Edit-Dev",
"license:apache-2.0",
"region:us"
] | null | 2025-09-20T19:56:10Z |
---
license: apache-2.0
base_model:
- decart-ai/Lucy-Edit-Dev
---
For testing only, as it is not supported in ComfyUI!
|
bedio/MobileLLM-R1-360M_exp_32_copy_init
|
bedio
| 2025-09-21T07:45:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama4_text",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T07:45:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
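Pending the authors' own snippet, a minimal sketch with the `transformers` pipeline; the task follows this repo's `text-generation` tag, and the prompt is an assumption:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="bedio/MobileLLM-R1-360M_exp_32_copy_init")
print(generator("The capital of France is", max_new_tokens=20)[0]["generated_text"])
```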
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yav1327/qwen-3-2b-intent-tokenizer-V1
|
yav1327
| 2025-09-21T07:41:55Z | 0 | 0 |
transformers
|
[
"transformers",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-21T07:41:53Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
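Pending the authors' own snippet, and given that the repo name suggests this repo primarily ships a tokenizer (an assumption, not confirmed by the card), a minimal loading sketch:
```python
# Minimal sketch; "tokenizer-only repo" is an assumption from the repo name.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("yav1327/qwen-3-2b-intent-tokenizer-V1")
print(tokenizer("Book a table for two at 7pm.").input_ids)
```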
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
huijelee/mistral-7b-qlora-nemotron-code
|
huijelee
| 2025-09-21T07:39:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-09-21T07:37:58Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
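Pending the authors' own snippet, a minimal loading sketch; the repo tags indicate 4-bit `bitsandbytes` weights, and `device_map` is an assumption:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "huijelee/mistral-7b-qlora-nemotron-code"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
```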
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
epakko23/blockassist
|
epakko23
| 2025-09-21T07:29:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"foxy playful jay",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-15T11:42:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- foxy playful jay
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/l3-stack-coffee-3-GGUF
|
mradermacher
| 2025-09-21T07:17:25Z | 26 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:CuriousCat29/l3-stack-coffee-3",
"base_model:quantized:CuriousCat29/l3-stack-coffee-3",
"endpoints_compatible",
"region:us"
] | null | 2025-09-20T11:54:36Z |
---
base_model: CuriousCat29/l3-stack-coffee-3
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/CuriousCat29/l3-stack-coffee-3
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#l3-stack-coffee-3-GGUF).***
Weighted/imatrix quants do not appear to be available (from me) at this time. If they have not shown up a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
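A minimal sketch (assuming `huggingface_hub` is installed) for fetching a single-file quant from this repo before merging or loading it:
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/l3-stack-coffee-3-GGUF",
    filename="l3-stack-coffee-3.Q2_K.gguf",
)
print(path)  # local cache path of the downloaded quant
```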
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/l3-stack-coffee-3-GGUF/resolve/main/l3-stack-coffee-3.Q2_K.gguf) | Q2_K | 45.2 | |
| [PART 1](https://huggingface.co/mradermacher/l3-stack-coffee-3-GGUF/resolve/main/l3-stack-coffee-3.Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/l3-stack-coffee-3-GGUF/resolve/main/l3-stack-coffee-3.Q3_K_S.gguf.part2of2) | Q3_K_S | 52.9 | |
| [PART 1](https://huggingface.co/mradermacher/l3-stack-coffee-3-GGUF/resolve/main/l3-stack-coffee-3.Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/l3-stack-coffee-3-GGUF/resolve/main/l3-stack-coffee-3.Q3_K_M.gguf.part2of2) | Q3_K_M | 58.9 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/l3-stack-coffee-3-GGUF/resolve/main/l3-stack-coffee-3.Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/l3-stack-coffee-3-GGUF/resolve/main/l3-stack-coffee-3.Q3_K_L.gguf.part2of2) | Q3_K_L | 64.1 | |
| [PART 1](https://huggingface.co/mradermacher/l3-stack-coffee-3-GGUF/resolve/main/l3-stack-coffee-3.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/l3-stack-coffee-3-GGUF/resolve/main/l3-stack-coffee-3.IQ4_XS.gguf.part2of2) | IQ4_XS | 66.0 | |
| [PART 1](https://huggingface.co/mradermacher/l3-stack-coffee-3-GGUF/resolve/main/l3-stack-coffee-3.Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/l3-stack-coffee-3-GGUF/resolve/main/l3-stack-coffee-3.Q4_K_S.gguf.part2of2) | Q4_K_S | 69.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/l3-stack-coffee-3-GGUF/resolve/main/l3-stack-coffee-3.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/l3-stack-coffee-3-GGUF/resolve/main/l3-stack-coffee-3.Q4_K_M.gguf.part2of2) | Q4_K_M | 73.3 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/l3-stack-coffee-3-GGUF/resolve/main/l3-stack-coffee-3.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/l3-stack-coffee-3-GGUF/resolve/main/l3-stack-coffee-3.Q5_K_S.gguf.part2of2) | Q5_K_S | 84.1 | |
| [PART 1](https://huggingface.co/mradermacher/l3-stack-coffee-3-GGUF/resolve/main/l3-stack-coffee-3.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/l3-stack-coffee-3-GGUF/resolve/main/l3-stack-coffee-3.Q5_K_M.gguf.part2of2) | Q5_K_M | 86.3 | |
| [PART 1](https://huggingface.co/mradermacher/l3-stack-coffee-3-GGUF/resolve/main/l3-stack-coffee-3.Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/l3-stack-coffee-3-GGUF/resolve/main/l3-stack-coffee-3.Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/l3-stack-coffee-3-GGUF/resolve/main/l3-stack-coffee-3.Q6_K.gguf.part3of3) | Q6_K | 100.1 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/l3-stack-coffee-3-GGUF/resolve/main/l3-stack-coffee-3.Q8_0.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/l3-stack-coffee-3-GGUF/resolve/main/l3-stack-coffee-3.Q8_0.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/l3-stack-coffee-3-GGUF/resolve/main/l3-stack-coffee-3.Q8_0.gguf.part3of3) | Q8_0 | 129.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
tamewild/4b_v113_merged_e5
|
tamewild
| 2025-09-21T07:13:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T07:12:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
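Pending the authors' own snippet, a minimal chat sketch with the `transformers` pipeline; the task follows this repo's tags, and the message is an assumption:
```python
from transformers import pipeline

chat = pipeline("text-generation", model="tamewild/4b_v113_merged_e5")
out = chat([{"role": "user", "content": "Give me one fun fact."}], max_new_tokens=64)
print(out[0]["generated_text"])
```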
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
John6666/van-derer-spell-v10-sdxl
|
John6666
| 2025-09-21T06:48:38Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-09-21T06:32:56Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1975691/vandererspell?modelVersionId=2236282).
This model was created by [Dark_Schneider](https://civitai.com/user/Dark_Schneider).
|
hdnfnfn/blockassist-bc-giant_leggy_rhino_1758436999
|
hdnfnfn
| 2025-09-21T06:43:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"giant leggy rhino",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-21T06:43:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- giant leggy rhino
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
MercuryNex/newer
|
MercuryNex
| 2025-09-21T06:06:46Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-09-21T06:06:23Z |
---
license: other
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
---
Converted from [https://civitai.com/api/download/models/246931?type=Model&format=SafeTensor&size=full&fp=fp16](https://civitai.com/api/download/models/246931?type=Model&format=SafeTensor&size=full&fp=fp16).
|
ehartford/VibeVoice-Large
|
ehartford
| 2025-09-21T05:59:01Z | 0 | 0 | null |
[
"safetensors",
"vibevoice",
"Podcast",
"text-to-speech",
"en",
"zh",
"arxiv:2508.19205",
"arxiv:2412.08635",
"license:mit",
"region:us"
] |
text-to-speech
| 2025-09-21T05:59:01Z |
---
license: mit
language:
- en
- zh
pipeline_tag: text-to-speech
tags:
- Podcast
---
## VibeVoice: A Frontier Open-Source Text-to-Speech Model
> This repository contains a copy of the model weights obtained from ModelScope ([microsoft/VibeVoice-Large](https://www.modelscope.cn/models/microsoft/VibeVoice-Large)).
> The license for this model is the `MIT License`, **which permits redistribution**.
>
> My understanding of the MIT License, which is consistent with the broader open-source community's consensus,
> is that it grants the right to distribute copies of the software and its derivatives.
> Therefore, I am lawfully exercising the right to redistribute this model.
>
> If you are a rights holder and believe this understanding of the license is incorrect, please submit a DMCA complaint to Hugging Face at [email protected].
VibeVoice is a novel framework designed for generating expressive, long-form, multi-speaker conversational audio, such as podcasts, from text. It addresses significant challenges in traditional Text-to-Speech (TTS) systems, particularly in scalability, speaker consistency, and natural turn-taking.
A core innovation of VibeVoice is its use of continuous speech tokenizers (Acoustic and Semantic) operating at an ultra-low frame rate of 7.5 Hz. These tokenizers efficiently preserve audio fidelity while significantly boosting computational efficiency for processing long sequences. VibeVoice employs a next-token diffusion framework, leveraging a Large Language Model (LLM) to understand textual context and dialogue flow, and a diffusion head to generate high-fidelity acoustic details.
The model can synthesize speech up to **90 minutes** long with up to **4 distinct speakers**, surpassing the typical 1-2 speaker limits of many prior models.
➡️ **Technical Report:** [VibeVoice Technical Report](https://arxiv.org/abs/2508.19205)
➡️ **Project Page:** [microsoft/VibeVoice](https://microsoft.github.io/VibeVoice)
➡️ **Code:** [microsoft/VibeVoice-Code](https://github.com/microsoft/VibeVoice)
<p align="left">
<img src="figures/Fig1.png" alt="VibeVoice Overview" height="250px">
</p>
## Training Details
Transformer-based Large Language Model (LLM) integrated with specialized acoustic and semantic tokenizers and a diffusion-based decoding head.
- LLM: Qwen2.5 for this release.
- Tokenizers:
- Acoustic Tokenizer: Based on a σ-VAE variant (proposed in [LatentLM](https://arxiv.org/pdf/2412.08635)), with a mirror-symmetric encoder-decoder structure featuring 7 stages of modified Transformer blocks. Achieves 3200x downsampling from 24kHz input. Encoder/decoder components are ~340M parameters each.
- Semantic Tokenizer: Encoder mirrors the Acoustic Tokenizer's architecture (without VAE components). Trained with an ASR proxy task.
- Diffusion Head: Lightweight module (4 layers, ~600M parameters) conditioned on LLM hidden states. Predicts acoustic VAE features using a denoising diffusion probabilistic model (DDPM) process. Uses Classifier-Free Guidance (CFG) and DPM-Solver (and variants) during inference.
- Context Length: Trained with a curriculum increasing up to 32,768 tokens.
- Training Stages:
- Tokenizer Pre-training: Acoustic and Semantic tokenizers are pre-trained separately.
- VibeVoice Training: Pre-trained tokenizers are frozen; only the LLM and diffusion head parameters are trained. A curriculum learning strategy is used for input sequence length (4K -> 16K -> 32K). The text tokenizer is not explicitly specified, but the LLM (Qwen2.5) typically uses its own; audio is "tokenized" via the acoustic and semantic tokenizers.
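As a quick sanity check on the tokenizer figures above, the 7.5 Hz frame rate follows directly from the 3200x downsampling of 24 kHz input. A back-of-the-envelope sketch (arithmetic only, not code from the VibeVoice repository):
```python
# Figures from this card: 24 kHz input audio, 3200x downsampling.
sample_rate_hz = 24_000
downsampling_factor = 3_200

frame_rate_hz = sample_rate_hz / downsampling_factor  # 7.5 Hz
frames_per_90_min = frame_rate_hz * 90 * 60           # 40,500 acoustic frames

print(frame_rate_hz, int(frames_per_90_min))
```
At 7.5 frames per second, even a 90-minute generation stays in the tens of thousands of acoustic frames, which is what keeps long-form synthesis tractable for the LLM backbone.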
## Models
| Model | Context Length | Generation Length | Weight |
|-------|----------------|----------|----------|
| VibeVoice-0.5B-Streaming | - | - | On the way |
| VibeVoice-1.5B | 64K | ~90 min | [HF link](https://huggingface.co/microsoft/VibeVoice-1.5B) |
| VibeVoice-Large | 32K | ~45 min | You are here. |
## Installation and Usage
Please refer to [GitHub README](https://github.com/microsoft/VibeVoice?tab=readme-ov-file#installation)
## Responsible Usage
### Direct intended uses
The VibeVoice model is limited to research use exploring highly realistic audio dialogue generation, as detailed in the [tech report](https://arxiv.org/pdf/2508.19205).
### Out-of-scope uses
Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the MIT License. Use to generate any text transcript. Furthermore, this release is not intended or licensed for any of the following scenarios:
- Voice impersonation without explicit, recorded consent – cloning a real individual’s voice for satire, advertising, ransom, social‑engineering, or authentication bypass.
- Disinformation or impersonation – creating audio presented as genuine recordings of real people or events.
- Real‑time or low‑latency voice conversion – telephone or video‑conference “live deep‑fake” applications.
- Unsupported language – the model is trained only on English and Chinese data; outputs in other languages are unsupported and may be unintelligible or offensive.
- Generation of background ambience, Foley, or music – VibeVoice is speech‑only and will not produce coherent non‑speech audio.
## Risks and limitations
While efforts have been made to optimize the model through various techniques, it may still produce outputs that are unexpected, biased, or inaccurate. VibeVoice inherits any biases, errors, or omissions produced by its base model.
- **Potential for Deepfakes and Disinformation:** High-quality synthetic speech can be misused to create convincing fake audio content for impersonation, fraud, or spreading disinformation. Users must ensure transcripts are reliable, check content accuracy, and avoid using generated content in misleading ways. Users are expected to use the generated content and to deploy the models in a lawful manner, in full compliance with all applicable laws and regulations in the relevant jurisdictions. It is best practice to disclose the use of AI when sharing AI-generated content.
- **English and Chinese only:** Transcripts in languages other than English or Chinese may result in unexpected audio outputs.
- **Non-Speech Audio:** The model focuses solely on speech synthesis and does not handle background noise, music, or other sound effects.
- **Overlapping Speech:** The current model does not explicitly model or generate overlapping speech segments in conversations.
## Recommendations
We do not recommend using VibeVoice in commercial or real-world applications without further testing and development. This model is intended for research and development purposes only. Please use responsibly.
To mitigate the risks of misuse, we have:
- Embedded an audible disclaimer (e.g. “This segment was generated by AI”) automatically into every synthesized audio file.
- Added an imperceptible watermark to generated audio so third parties can verify VibeVoice provenance. Please see contact information at the end of this model card.
- Logged inference requests (hashed) for abuse-pattern detection, and we publish aggregated statistics quarterly.
Users are responsible for sourcing their datasets legally and ethically. This may include securing appropriate rights and/or anonymizing data prior to use with VibeVoice. Users are reminded to be mindful of data privacy concerns.
## Contact
This project was conducted by members of Microsoft Research. We welcome feedback and collaboration from our audience. If you have suggestions, questions, or observe unexpected/offensive behavior in our technology, please contact us at [email protected].
If the team receives reports of undesired behavior or identifies issues independently, we will update this repository with appropriate mitigations.
|
om-ai/om-DocOCR-vi-3B
|
om-ai
| 2025-09-21T05:44:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"text-generation-inference",
"unsloth",
"en",
"base_model:ChatDOC/OCRFlux-3B",
"base_model:finetune:ChatDOC/OCRFlux-3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-09-21T05:42:49Z |
---
base_model: ChatDOC/OCRFlux-3B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** om-ai
- **License:** apache-2.0
- **Finetuned from model:** ChatDOC/OCRFlux-3B
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
gbcfchc/Qwen2.5-Math-1.5B-Open-Math-GRPO
|
gbcfchc
| 2025-09-21T04:58:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T12:10:19Z |
---
library_name: transformers
model_name: Qwen2.5-Math-1.5B-Open-Math-GRPO
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen2.5-Math-1.5B-Open-Math-GRPO
This model is a fine-tuned version of an unspecified base model.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="gbcfchc/Qwen2.5-Math-1.5B-Open-Math-GRPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/eLLM-han2024/huggingface/runs/6o53b1nk)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.0
- Transformers: 4.52.3
- Pytorch: 2.6.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
jialicheng/cifar10_mobilenet-v2
|
jialicheng
| 2025-09-21T04:38:54Z | 0 | 0 | null |
[
"safetensors",
"mobilenet_v2",
"image-classification",
"vision",
"generated_from_trainer",
"base_model:google/mobilenet_v2_1.0_224",
"base_model:finetune:google/mobilenet_v2_1.0_224",
"license:other",
"region:us"
] |
image-classification
| 2025-09-21T04:36:36Z |
---
license: other
base_model: google/mobilenet_v2_1.0_224
tags:
- image-classification
- vision
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: mobilenet_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilenet_v2
This model is a fine-tuned version of [google/mobilenet_v2_1.0_224](https://huggingface.co/google/mobilenet_v2_1.0_224) on the cifar10 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4749
- Accuracy: 0.8446
## Model description
More information needed
## Intended uses & limitations
More information needed
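For quick experimentation, the checkpoint can be loaded with the 🤗 image-classification pipeline (a minimal sketch; the repo id is assumed from this card's location):
```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned checkpoint and classify an image.
classifier = pipeline("image-classification", model="jialicheng/cifar10_mobilenet-v2")

# Accepts a local path, URL, or PIL image; returns labels with scores.
print(classifier("path/to/image.png"))
```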
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 333 | 1.4842 | 0.5297 |
| 1.7515 | 2.0 | 666 | 1.6557 | 0.4453 |
| 1.7515 | 3.0 | 999 | 0.8495 | 0.7062 |
| 1.1908 | 4.0 | 1332 | 0.7553 | 0.747 |
| 1.0051 | 5.0 | 1665 | 0.7284 | 0.7479 |
| 1.0051 | 6.0 | 1998 | 0.8906 | 0.6977 |
| 0.9089 | 7.0 | 2331 | 1.0051 | 0.6587 |
| 0.8441 | 8.0 | 2664 | 0.5889 | 0.8025 |
| 0.8441 | 9.0 | 2997 | 0.6794 | 0.7749 |
| 0.7937 | 10.0 | 3330 | 0.9055 | 0.7074 |
| 0.7578 | 11.0 | 3663 | 0.7539 | 0.7619 |
| 0.7578 | 12.0 | 3996 | 0.6955 | 0.7708 |
| 0.7315 | 13.0 | 4329 | 1.1638 | 0.6383 |
| 0.7048 | 14.0 | 4662 | 0.6883 | 0.7777 |
| 0.7048 | 15.0 | 4995 | 0.8076 | 0.7407 |
| 0.6901 | 16.0 | 5328 | 0.7501 | 0.759 |
| 0.6627 | 17.0 | 5661 | 0.6667 | 0.7834 |
| 0.6627 | 18.0 | 5994 | 0.8337 | 0.7508 |
| 0.6457 | 19.0 | 6327 | 0.8104 | 0.7488 |
| 0.6365 | 20.0 | 6660 | 0.6201 | 0.793 |
| 0.6365 | 21.0 | 6993 | 0.6534 | 0.794 |
| 0.6244 | 22.0 | 7326 | 0.4883 | 0.835 |
| 0.6092 | 23.0 | 7659 | 0.6647 | 0.7898 |
| 0.6092 | 24.0 | 7992 | 0.6831 | 0.777 |
| 0.5978 | 25.0 | 8325 | 0.7547 | 0.7608 |
| 0.5838 | 26.0 | 8658 | 0.5030 | 0.8356 |
| 0.5838 | 27.0 | 8991 | 0.4207 | 0.8573 |
| 0.5828 | 28.0 | 9324 | 0.7332 | 0.7726 |
| 0.5716 | 29.0 | 9657 | 0.3767 | 0.8721 |
| 0.5716 | 30.0 | 9990 | 0.5153 | 0.8394 |
| 0.565 | 31.0 | 10323 | 0.5992 | 0.8111 |
| 0.5496 | 32.0 | 10656 | 0.6761 | 0.7903 |
| 0.5496 | 33.0 | 10989 | 0.6412 | 0.7951 |
| 0.5482 | 34.0 | 11322 | 0.7193 | 0.7872 |
| 0.5346 | 35.0 | 11655 | 0.5146 | 0.8348 |
| 0.5346 | 36.0 | 11988 | 0.9719 | 0.7291 |
| 0.5336 | 37.0 | 12321 | 0.6971 | 0.7816 |
| 0.5381 | 38.0 | 12654 | 0.6219 | 0.8095 |
| 0.5381 | 39.0 | 12987 | 0.8059 | 0.7571 |
| 0.5205 | 40.0 | 13320 | 0.5201 | 0.8323 |
| 0.5182 | 41.0 | 13653 | 0.7611 | 0.7731 |
| 0.5182 | 42.0 | 13986 | 0.4614 | 0.8502 |
| 0.5105 | 43.0 | 14319 | 0.7823 | 0.7874 |
| 0.5051 | 44.0 | 14652 | 0.5006 | 0.8431 |
| 0.5051 | 45.0 | 14985 | 0.4780 | 0.8436 |
| 0.5033 | 46.0 | 15318 | 0.7846 | 0.7505 |
| 0.4989 | 47.0 | 15651 | 0.7369 | 0.7783 |
| 0.4989 | 48.0 | 15984 | 0.6269 | 0.8136 |
| 0.4902 | 49.0 | 16317 | 0.6005 | 0.8187 |
| 0.4899 | 50.0 | 16650 | 0.7436 | 0.7906 |
| 0.4899 | 51.0 | 16983 | 0.8028 | 0.777 |
| 0.4837 | 52.0 | 17316 | 0.4615 | 0.8515 |
| 0.481 | 53.0 | 17649 | 0.7034 | 0.7907 |
| 0.481 | 54.0 | 17982 | 0.5976 | 0.8075 |
| 0.481 | 55.0 | 18315 | 0.5986 | 0.8119 |
| 0.4831 | 56.0 | 18648 | 0.5826 | 0.8211 |
| 0.4831 | 57.0 | 18981 | 1.2071 | 0.6883 |
| 0.4844 | 58.0 | 19314 | 0.5116 | 0.8411 |
| 0.4715 | 59.0 | 19647 | 0.3828 | 0.8749 |
| 0.4715 | 60.0 | 19980 | 0.5963 | 0.8205 |
| 0.4689 | 61.0 | 20313 | 0.5510 | 0.8319 |
| 0.472 | 62.0 | 20646 | 0.7266 | 0.79 |
| 0.472 | 63.0 | 20979 | 0.4501 | 0.8508 |
| 0.4668 | 64.0 | 21312 | 0.9535 | 0.7623 |
| 0.4627 | 65.0 | 21645 | 0.7841 | 0.7753 |
| 0.4627 | 66.0 | 21978 | 0.8179 | 0.7753 |
| 0.4549 | 67.0 | 22311 | 0.4133 | 0.8672 |
| 0.4578 | 68.0 | 22644 | 0.7689 | 0.7905 |
| 0.4578 | 69.0 | 22977 | 0.4337 | 0.8656 |
| 0.4581 | 70.0 | 23310 | 0.3573 | 0.8812 |
| 0.4544 | 71.0 | 23643 | 0.4087 | 0.8698 |
| 0.4544 | 72.0 | 23976 | 0.4307 | 0.8599 |
| 0.4547 | 73.0 | 24309 | 0.8750 | 0.7509 |
| 0.4536 | 74.0 | 24642 | 0.5887 | 0.8163 |
| 0.4536 | 75.0 | 24975 | 0.3848 | 0.8718 |
| 0.4573 | 76.0 | 25308 | 0.8057 | 0.7881 |
| 0.4492 | 77.0 | 25641 | 0.8340 | 0.7727 |
| 0.4492 | 78.0 | 25974 | 0.4320 | 0.8619 |
| 0.4437 | 79.0 | 26307 | 0.6830 | 0.7969 |
| 0.4462 | 80.0 | 26640 | 0.6303 | 0.8152 |
| 0.4462 | 81.0 | 26973 | 0.5285 | 0.8282 |
| 0.4419 | 82.0 | 27306 | 0.3664 | 0.8871 |
| 0.449 | 83.0 | 27639 | 0.9199 | 0.7549 |
| 0.449 | 84.0 | 27972 | 0.4462 | 0.8568 |
| 0.4373 | 85.0 | 28305 | 0.4055 | 0.8645 |
| 0.4454 | 86.0 | 28638 | 0.8410 | 0.7686 |
| 0.4454 | 87.0 | 28971 | 0.3777 | 0.8811 |
| 0.4459 | 88.0 | 29304 | 1.0111 | 0.7445 |
| 0.441 | 89.0 | 29637 | 0.9389 | 0.7426 |
| 0.441 | 90.0 | 29970 | 1.0830 | 0.7328 |
| 0.4396 | 91.0 | 30303 | 0.4384 | 0.8569 |
| 0.4381 | 92.0 | 30636 | 0.7627 | 0.795 |
| 0.4381 | 93.0 | 30969 | 0.8045 | 0.7615 |
| 0.439 | 94.0 | 31302 | 0.6230 | 0.8071 |
| 0.4435 | 95.0 | 31635 | 0.6560 | 0.8117 |
| 0.4435 | 96.0 | 31968 | 0.4749 | 0.8503 |
| 0.4428 | 97.0 | 32301 | 0.4037 | 0.8691 |
| 0.4353 | 98.0 | 32634 | 0.7115 | 0.7903 |
| 0.4353 | 99.0 | 32967 | 0.6069 | 0.8124 |
| 0.4433 | 100.0 | 33300 | 0.4749 | 0.8446 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
|
QuantStack/HunyuanImage-2.1-Refiner-GGUF
|
QuantStack
| 2025-09-21T04:00:29Z | 5,010 | 2 | null |
[
"gguf",
"base_model:tencent/HunyuanImage-2.1",
"base_model:quantized:tencent/HunyuanImage-2.1",
"region:us"
] | null | 2025-09-10T15:22:33Z |
---
base_model:
- tencent/HunyuanImage-2.1
---
|
debisoft/Taxi-v3-5x5-noRain
|
debisoft
| 2025-09-21T03:53:20Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-21T01:47:22Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3-5x5-noRain
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gymnasium as gym

# load_from_hub is the helper provided in the Hugging Face Deep RL course utilities.
model = load_from_hub(repo_id="debisoft/Taxi-v3-5x5-noRain", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
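Once loaded, rolling out the greedy policy takes a few lines per episode (a sketch assuming the Deep RL course's usual `model["qtable"]` layout and the Gymnasium API):
```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    # Greedy action: pick the highest-valued entry of this state's Q-row.
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```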
|
zhiyuan5986/CHA-LoRA-pretrain-lorar-128-llama3.1-gradient32-time20250919101635-localrank0
|
zhiyuan5986
| 2025-09-21T03:19:25Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"region:us"
] | null | 2025-09-21T03:18:00Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
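In the absence of an official snippet, a minimal PEFT loading sketch (base model taken from this card's metadata; the adapter id is assumed from where this card is hosted):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
model = PeftModel.from_pretrained(
    base,
    "zhiyuan5986/CHA-LoRA-pretrain-lorar-128-llama3.1-gradient32-time20250919101635-localrank0",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
```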
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
hitoshura25/webauthn-security-sequential_20250920_211325_stage2_codefix
|
hitoshura25
| 2025-09-21T02:55:38Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"security",
"vulnerability-analysis",
"webauthn",
"mlx-converted",
"license:apache-2.0",
"region:us"
] | null | 2025-09-21T02:55:35Z |
---
base_model: allenai/OLMo-2-1B
base_model_relation: adapter
library_name: peft
peft_type: LORA
tags:
- security
- vulnerability-analysis
- webauthn
- mlx-converted
license: apache-2.0
---
# WebAuthn Security LoRA Adapter
This LoRA adapter specializes the base model for WebAuthn security vulnerability analysis.
**Converted from MLX format to HuggingFace PEFT format for compatibility.**
## Model Details
- **Base Model**: allenai/OLMo-2-1B
- **Adapter Type**: LoRA (Low-Rank Adaptation)
- **Target Modules**: q_proj, v_proj, k_proj, o_proj, gate_proj, up_proj, down_proj
- **LoRA Rank**: 8
- **LoRA Alpha**: 20.0
- **LoRA Dropout**: 0.0
## Training Details
- **Training Framework**: MLX-LM (converted to PEFT format)
- **Training Data**: WebAuthn security vulnerabilities
- **Iterations**: 800
- **Learning Rate**: 1e-06
- **Optimizer**: adamw
- **Fine-tune Type**: lora
## Usage
Load this adapter with the PEFT library:
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load configuration and model
config = PeftConfig.from_pretrained("path/to/this/adapter")
base_model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base_model, "path/to/this/adapter")
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
# Use for inference
inputs = tokenizer("Analyze this WebAuthn vulnerability:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
## Conversion Notes
This adapter was originally trained using MLX-LM and converted to HuggingFace PEFT format using an evidence-based conversion pipeline that:
1. Converts MLX parameter naming (`lora_a/lora_b`) to PEFT format (`lora_A.weight/lora_B.weight`) (see the sketch after this list)
2. Adds proper `base_model.model.` prefixes to parameter names
3. Generates PEFT-compatible configuration with required fields
4. Maintains full compatibility with HuggingFace ecosystem
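For illustration, the key renaming in steps 1 and 2 might look like the following simplified sketch (not the project's actual conversion script; real MLX tensor names may differ):
```python
def mlx_key_to_peft(key: str) -> str:
    """Map an MLX-LM LoRA tensor name to the PEFT naming convention (sketch)."""
    key = key.replace("lora_a", "lora_A.weight").replace("lora_b", "lora_B.weight")
    return f"base_model.model.{key}"

# Hypothetical example input:
print(mlx_key_to_peft("layers.0.self_attn.q_proj.lora_a"))
# -> base_model.model.layers.0.self_attn.q_proj.lora_A.weight
```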
## Performance
This adapter enhances the base model's capability for:
- WebAuthn security vulnerability analysis
- Code fix generation for security issues
- Security-aware code recommendations
## License
Apache 2.0
|
ShourenWSR/HT-ht-analysis-Qwen-no-think-only
|
ShourenWSR
| 2025-09-21T02:41:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B",
"base_model:finetune:Qwen/Qwen2.5-7B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T02:39:05Z |
---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-7B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: Qwen_no_think_only
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen_no_think_only
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) on the ht-analysis_no_think_only dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 12
- total_train_batch_size: 24
- total_eval_batch_size: 16
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.5.1+cu124
- Datasets 2.19.1
- Tokenizers 0.21.1
|
nikilr/zephyr_skillft_pap
|
nikilr
| 2025-09-21T02:13:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T02:12:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
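No official snippet is provided; a minimal text-generation sketch (repo id assumed from this card's location):
```python
from transformers import pipeline

# Repo id assumed from where this card is hosted.
generator = pipeline("text-generation", model="nikilr/zephyr_skillft_pap", device="cuda")
print(generator("Hello!", max_new_tokens=64)[0]["generated_text"])
```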
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
renderartist/saturday-morning-flux
|
renderartist
| 2025-09-21T01:42:15Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2025-09-21T01:26:13Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/SaturdayMorning_00123_.png
text: >-
saturd4ym0rning The image is a digital cartoon drawing featuring a
middle-aged man with a gray mustache and large, round eyes, seated on a
wooden bench. He wears a brown cowboy hat, a purple suit, a white shirt, and
a red tie. Next to him partially seated on his lap is a small, brown dog
with large, round eyes, wearing a red collar. The background is a simple
pink wall with a wooden door on the right side, slightly ajar. The overall
style is reminiscent of classic American cartoons.
- output:
url: images/SaturdayMorning_00106_.png
text: >-
saturd4ym0rning cartoon drawing of a cheerleader jumping mid arm legs bent
back poms out in the air, big smile with teeth, stylized toon.
- output:
url: images/SaturdayMorning_00105_.png
text: >-
saturd4ym0rning cartoon drawing of a fat man with crown balding, wearing a
blue suit, yellow tie. The man's belly protrudes and overhangs. Stylized
toon. Side profile 3/4 turn angle
- output:
url: images/SaturdayMorning_00155_.png
text: >-
saturd4ym0rning cartoon darwing of a classic witch from a 1960s cartoon, the
witch character is stereotypical with a broom and black dress.
- output:
url: images/SaturdayMorning_00164_.png
text: >-
saturd4ym0rning cartoon drawing of young woman seated on an upholstered blue
chair, she's reading a red book.
- output:
url: images/SaturdayMorning_00115_.png
text: saturd4ym0rning cartoon drawing of a cowboy riding a brown horse in a desert
- output:
url: images/SaturdayMorning_00109_.png
text: >-
saturd4ym0rning cartoon drawing of a cheerleader jumping mid arm legs bent
back poms out in the air, big smile with teeth, stylized toon.
- output:
url: images/SaturdayMorning_00096_.png
text: >-
saturd4ym0rning cartoon drawing of a fat man with crown balding, wearing a
blue suit, yellow tie. The man's belly protrudes and overhangs on his belt.
Stylized toon. Side profile 3/4 turn angle
- output:
url: images/SaturdayMorning_00240_.png
text: >-
saturd4ym0rning cartoon pig character, in front of a wood fence on a farm.
Behind the pig is a barn, closeup view.
- output:
url: images/SaturdayMorning_00135_.png
text: >-
saturd4ym0rning This is a digital cartoon drawing of a plump, middle-aged
man with a bald head and a mustache, wearing a light blue button-up shirt
and gray pants, holding a yellow basket full of plastic easter eggs. He
stands in a lush, green forest with tall, purple trees, surrounded by bushes
and purple irises. The man's expression is neutral, and the background is
detailed with varying shades of green and purple, creating a tranquil,
forested setting.
- output:
url: images/SaturdayMorning_00095_.png
text: >-
saturd4ym0rning drawing of a frog with it's tongue out, the frog looks
humurous, silly flaccid tongue. Dopey expression on his face, he's sitting
on a lily pad in a pond surrounded by tall grass and trees, clouds overhead.
- output:
url: images/SaturdayMorning_00139_.png
text: >-
saturd4ym0rning cartoon drawing of a koala hanging from a vine, he looks
sassy.
- output:
url: images/SaturdayMorning_00118_.png
text: saturd4ym0rning cartoon drawing aligator sitting on a log with sharp teeth
- output:
url: images/SaturdayMorning_00226_.png
text: >-
an evil angry villian from a cartoon show, kids network style. The villian
is a crazy henchman with a mustache, top hat, and a cane. The man has a
furrowed exaggerated brow. Seated in front of a control panel with 4 tv
monitors showing various views of a metro city. Beside him on the ground is
a matching doberman. They're both looking towards the viewer.
- output:
url: images/SaturdayMorning_00187_.png
text: >-
saturd4ym0rning illustration of a an humorous lunch lady caricature with a
grouchy angry furrowed brows grimace face, styled like a children's tv
network american toon style. The lunch lady is wearing a hairnet hoop
earrings and an apron, under the apron she's wearing a baby blue dress.
She's heavyset and holding a creamy yellow tray of mashed potatoes and a
chicken strips.
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: saturd4ym0rning, cartoon
license: creativeml-openrail-m
---
# Saturday Morning Flux
<Gallery />
## Model description
Presenting Saturday Morning Flux, a Flux LoRA that captures the energetic charm and clean aesthetic of modern American animation styles.
This LoRA is perfect for creating dynamic, expressive characters with a polished, modern feel. It's an ideal tool for generating characters that fit into a variety of projects, from personal illustrations to concept art. Whether you need a hero or a sidekick, this LoRA produces characters that are full of life and ready for fun. The idea was to create a strong toon LoRA that could be used along with all of the new image edit models to produce novel views of the same character.
Workflow examples are attached to the images in the gallery, just drag and drop the image into ComfyUI.
This LoRA was trained in Kohya using the Lion optimizer, stopped at 3,500 steps, on ~70 AI-generated images that were captioned with Joy Caption.
v1 - Initial training run. Adjust the strength between 0.4-0.8 for the best results. I used res_multistep and bongtangent for most of these; feel free to explore and change whatever you don't like in your own workflow.
Hoping to have a WAN video model that complements this style soon; expect a Qwen Image model as well.
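For diffusers users, loading the LoRA on top of FLUX.1-dev is a short sketch (the strength setting and exact weight filename are assumptions; check the Files tab):
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Repo id from this card; diffusers picks up the LoRA safetensors inside it.
pipe.load_lora_weights("renderartist/saturday-morning-flux")

image = pipe(
    "saturd4ym0rning cartoon drawing of a cowboy riding a brown horse in a desert",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("saturday_morning.png")
```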
## Trigger words
You should use `saturd4ym0rning` and `cartoon` to trigger the image generation.
## Download model
[Download](/renderartist/saturday-morning-flux/tree/main) them in the Files & versions tab.
|
panda19904/Qwen3-0.6B-Gensyn-Swarm-extinct_wise_caterpillar
|
panda19904
| 2025-09-21T01:39:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am extinct_wise_caterpillar",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T01:39:08Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am extinct_wise_caterpillar
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
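No official snippet is provided; a minimal sketch with plain `transformers` (repo id assumed from this card's location):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "panda19904/Qwen3-0.6B-Gensyn-Swarm-extinct_wise_caterpillar"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("Hello!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```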
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
schonsense/70B_llama3_1_Genre_slerp
|
schonsense
| 2025-09-21T01:19:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:meta-llama/Llama-3.1-70B",
"base_model:merge:meta-llama/Llama-3.1-70B",
"base_model:nbeerbower/Llama3.1-Gutenberg-Doppel-70B",
"base_model:merge:nbeerbower/Llama3.1-Gutenberg-Doppel-70B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T00:24:44Z |
---
base_model:
- meta-llama/Llama-3.1-70B
- nbeerbower/Llama3.1-Gutenberg-Doppel-70B
library_name: transformers
tags:
- mergekit
- merge
---
# llama3_1_genre_slerp
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the NuSLERP merge method using [meta-llama/Llama-3.1-70B](https://huggingface.co/meta-llama/Llama-3.1-70B) as a base.
### Models Merged
The following models were included in the merge:
* D:\mergekit\_My_YAMLS\genre_ties
* [nbeerbower/Llama3.1-Gutenberg-Doppel-70B](https://huggingface.co/nbeerbower/Llama3.1-Gutenberg-Doppel-70B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: nuslerp
models:
- model: "D:\\mergekit\\_My_YAMLS\\genre_ties"
parameters:
weight: 0.905
- model: nbeerbower/Llama3.1-Gutenberg-Doppel-70B
parameters:
weight: 0.095
base_model: meta-llama/Llama-3.1-70B
parameters:
normalize: False
int8_mask: true
dtype: float32
out_dtype: bfloat16
tokenizer:
source: union
pad_to_multiple_of: 8
```
|
memphiskol/blockassist
|
memphiskol
| 2025-09-21T01:18:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly hardy flea",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-21T01:09:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly hardy flea
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
schonsense/70B_llama3_1_Genre_ties
|
schonsense
| 2025-09-21T00:23:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:meta-llama/Llama-3.1-70B",
"base_model:merge:meta-llama/Llama-3.1-70B",
"base_model:schonsense/70B_ero_horror",
"base_model:merge:schonsense/70B_ero_horror",
"base_model:schonsense/70B_llama3_1_Base_GW",
"base_model:merge:schonsense/70B_llama3_1_Base_GW",
"base_model:schonsense/70B_llama3_1_Base_IKM",
"base_model:merge:schonsense/70B_llama3_1_Base_IKM",
"base_model:schonsense/70B_llama3_1_Base_SunVorGhast",
"base_model:merge:schonsense/70B_llama3_1_Base_SunVorGhast",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T22:49:14Z |
---
base_model:
- schonsense/70B_ero_horror
- schonsense/70B_llama3_1_Base_IKM
- meta-llama/Llama-3.1-70B
- schonsense/70B_llama3_1_Base_GW
- schonsense/70B_llama3_1_Base_SunVorGhast
library_name: transformers
tags:
- mergekit
- merge
---
# genre_ties
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [meta-llama/Llama-3.1-70B](https://huggingface.co/meta-llama/Llama-3.1-70B) as a base.
### Models Merged
The following models were included in the merge:
* [schonsense/70B_ero_horror](https://huggingface.co/schonsense/70B_ero_horror)
* [schonsense/70B_llama3_1_Base_IKM](https://huggingface.co/schonsense/70B_llama3_1_Base_IKM)
* D:\mergekit\_My_YAMLS\llama_3_1_ero_nearstock
* [schonsense/70B_llama3_1_Base_GW](https://huggingface.co/schonsense/70B_llama3_1_Base_GW)
* [schonsense/70B_llama3_1_Base_SunVorGhast](https://huggingface.co/schonsense/70B_llama3_1_Base_SunVorGhast)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: ties
models:
- model: schonsense/70B_llama3_1_Base_IKM
parameters:
density: 1
weight: 1
- model: schonsense/70B_llama3_1_Base_SunVorGhast
parameters:
density: 1
weight:
- filter: self_attn
value: [0, 0.1, 0.5, 0.9, 0.4, 0.2, 0]
- filter: mlp
value: [0, 0.9, 0.8, 0.4, 0.1, 0.01, 0]
- filter: embed_tokens
value: 0.8
- filter: lm_head
value: 0.6
- value: 0.1
- model: schonsense/70B_llama3_1_Base_GW
parameters:
density: 1
weight:
- filter: self_attn
value: [0, 0.1, 0.5, 0.9, 0.4, 0.2, 0]
- filter: mlp
value: [0, 0.9, 0.8, 0.4, 0.1, 0.01, 0]
- filter: embed_tokens
value: 0.8
- filter: lm_head
value: 0.6
- value: 0.1
- model: "D:\\mergekit\\_My_YAMLS\\llama_3_1_ero_nearstock"
parameters:
density: 1
weight:
- filter: self_attn
value: [0, 0.1, 0.5, 0.9, 0.4, 0.2, 0]
- filter: mlp
value: [0, 0.9, 0.8, 0.4, 0.1, 0.01, 0]
- filter: embed_tokens
value: 0.8
- filter: lm_head
value: 0.6
- value: 0.1
- model: schonsense/70B_ero_horror
parameters:
density: 1
weight: [0, 0.5, 0]
- model: meta-llama/Llama-3.1-70B
base_model: meta-llama/Llama-3.1-70B
parameters:
normalize: true
int8_mask: true
lambda: 1.1
dtype: float32
out_dtype: bfloat16
tokenizer:
source: union
pad_to_multiple_of: 8
```
|
Katymerk/blockassist
|
Katymerk
| 2025-09-20T23:14:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quiet lightfooted tamarin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-20T11:28:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quiet lightfooted tamarin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
alesiaivanova/Qwen-3b-GRPO-1-sub-2-sub-3-sub-compute_tradeoff_50-float-1024-170_25-float-1280
|
alesiaivanova
| 2025-09-20T23:03:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"endpoints_compatible",
"region:us"
] | null | 2025-09-20T22:59:03Z |
---
library_name: transformers
model_name: Qwen-3b-GRPO-1-sub-2-sub-3-sub-compute_tradeoff_50-float-1024-170_25-float-1280
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen-3b-GRPO-1-sub-2-sub-3-sub-compute_tradeoff_50-float-1024-170_25-float-1280
This model is a fine-tuned version of an unspecified base model.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="alesiaivanova/Qwen-3b-GRPO-1-sub-2-sub-3-sub-compute_tradeoff_50-float-1024-170_25-float-1280", device="cuda")  # repo id assumed from this model card; the template left it as "None"
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alesyaivanova/long-horizon-reasoning/runs/7zi6a21q)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.3
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
nilc-nlp/fasttext-skip-gram-300d
|
nilc-nlp
| 2025-09-20T22:57:02Z | 0 | 0 |
safetensors
|
[
"safetensors",
"word-embeddings",
"static",
"portuguese",
"fasttext",
"skip-gram",
"300d",
"feature-extraction",
"pt",
"arxiv:1708.06025",
"license:cc-by-4.0",
"region:us"
] |
feature-extraction
| 2025-09-20T21:56:43Z |
---
language: pt
tags:
- word-embeddings
- static
- portuguese
- fasttext
- skip-gram
- 300d
license: cc-by-4.0
library_name: safetensors
pipeline_tag: feature-extraction
---
# NILC Portuguese Word Embeddings — FastText Skip-Gram 300d
This repository contains the **FastText Skip-Gram 300d** model in **safetensors** format.
## About
NILC-Embeddings is a repository for storing and sharing **word embeddings** for the Portuguese language. The goal is to provide ready-to-use vector resources for **Natural Language Processing (NLP)** and **Machine Learning** tasks.
The embeddings were trained on a large Portuguese corpus (Brazilian + European), composed of 17 corpora (~1.39B tokens). Training was carried out with the following algorithms: **Word2Vec**, **FastText**, **Wang2Vec**, and **GloVe**.
---
## 📂 Files
- `embeddings.safetensors` → embedding matrix (`[vocab_size, 300]`)
- `vocab.txt` → vocabulary (one token per line, aligned with rows)
---
## 🚀 Usage
```python
from huggingface_hub import hf_hub_download
from safetensors.numpy import load_file
path = hf_hub_download(repo_id="nilc-nlp/fasttext-skip-gram-300d",
filename="embeddings.safetensors")
data = load_file(path)
vectors = data["embeddings"]
vocab_path = hf_hub_download(repo_id="nilc-nlp/fasttext-skip-gram-300d",
filename="vocab.txt")
with open(vocab_path) as f:
vocab = [w.strip() for w in f]
print(vectors.shape)
```
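To look up individual word vectors, align the vocabulary with the matrix rows (a small sketch building on the snippet above; the two example words are assumed to be in the vocabulary):
```python
import numpy as np

word2idx = {w: i for i, w in enumerate(vocab)}

def vector(word: str) -> np.ndarray:
    return vectors[word2idx[word]]

# Cosine similarity between two (assumed in-vocabulary) Portuguese words.
a, b = vector("rei"), vector("rainha")
print(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))))
```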
Or in PyTorch:
```python
from safetensors.torch import load_file
tensors = load_file("embeddings.safetensors")  # or pass the path returned by hf_hub_download above
vectors = tensors["embeddings"] # torch.Tensor
```
---
## 📊 Corpus
The embeddings were trained on a combination of 17 corpora (~1.39B tokens):
| Corpus | Tokens | Types | Genre | Description |
|--------|--------|-------|-------|-------------|
| LX-Corpus [Rodrigues et al. 2016] | 714,286,638 | 2,605,393 | Mixed genres | Large collection of texts from 19 sources, mostly European Portuguese |
| Wikipedia | 219,293,003 | 1,758,191 | Encyclopedic | Wikipedia dump (2016-10-20) |
| GoogleNews | 160,396,456 | 664,320 | Informative | News crawled from Google News |
| SubIMDB-PT | 129,975,149 | 500,302 | Spoken | Movie subtitles from IMDb |
| G1 | 105,341,070 | 392,635 | Informative | News from G1 portal (2014–2015) |
| PLN-Br [Bruckschen et al. 2008] | 31,196,395 | 259,762 | Informative | Corpus of PLN-BR project (1994–2005) |
| Domínio Público | 23,750,521 | 381,697 | Prose | 138,268 literary works |
| Lacio-Web [Aluísio et al. 2003] | 8,962,718 | 196,077 | Mixed | Literary, informative, scientific, law, didactic texts |
| Literatura Brasileira | 1,299,008 | 66,706 | Prose | Classical Brazilian fiction e-books |
| Mundo Estranho | 1,047,108 | 55,000 | Informative | Texts from Mundo Estranho magazine |
| CHC | 941,032 | 36,522 | Informative | Texts from Ciência Hoje das Crianças |
| FAPESP | 499,008 | 31,746 | Science communication | Texts from Pesquisa FAPESP magazine |
| Textbooks | 96,209 | 11,597 | Didactic | Elementary school textbooks |
| Folhinha | 73,575 | 9,207 | Informative | Children’s news from Folhinha (Folha de São Paulo) |
| NILC subcorpus | 32,868 | 4,064 | Informative | Children’s texts (3rd–4th grade) |
| Para Seu Filho Ler | 21,224 | 3,942 | Informative | Children’s news from Zero Hora |
| SARESP | 13,308 | 3,293 | Didactic | School evaluation texts |
| **Total** | **1,395,926,282** | **3,827,725** | — | — |
---
## 📖 Paper
**Portuguese Word Embeddings: Evaluating on Word Analogies and Natural Language Tasks**
Hartmann, N. et al. (2017), STIL 2017.
[ArXiv Paper](https://arxiv.org/abs/1708.06025)
### BibTeX
```bibtex
@inproceedings{hartmann-etal-2017-portuguese,
title = {{P}ortuguese Word Embeddings: Evaluating on Word Analogies and Natural Language Tasks},
author = {Hartmann, Nathan and Fonseca, Erick and Shulby, Christopher and Treviso, Marcos and Silva, J{\'e}ssica and Alu{\'i}sio, Sandra},
year = 2017,
month = oct,
booktitle = {Proceedings of the 11th {B}razilian Symposium in Information and Human Language Technology},
publisher = {Sociedade Brasileira de Computa{\c{c}}{\~a}o},
address = {Uberl{\^a}ndia, Brazil},
pages = {122--131},
url = {https://aclanthology.org/W17-6615/},
editor = {Paetzold, Gustavo Henrique and Pinheiro, Vl{\'a}dia}
}
```
---
## 📜 License
Creative Commons Attribution 4.0 International (CC BY 4.0)
|
yanxg/FLUX.1-Kontext-dev-trimmed-L
|
yanxg
| 2025-09-20T22:53:55Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-09-20T22:50:42Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Rakesh7n/Llama3.1_8B_Indian_law_finetuned
|
Rakesh7n
| 2025-09-20T22:35:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-20T22:35:05Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Rakesh7n
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
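A minimal inference sketch, assuming the fine-tuned weights load directly with Unsloth (the prompt, sequence length, and generation settings below are illustrative):
```python
from unsloth import FastLanguageModel

# Illustrative settings; adjust max_seq_length to your use case
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Rakesh7n/Llama3.1_8B_Indian_law_finetuned",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's optimized inference mode

inputs = tokenizer(
    "Summarize the key elements of a valid contract under Indian law.",
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```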
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758407407
|
schooncestiaa
| 2025-09-20T22:31:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-20T22:31:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Nick976786/Qwen3-0.6B-Gensyn-Swarm-mighty_rangy_kangaroo
|
Nick976786
| 2025-09-20T22:27:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am mighty_rangy_kangaroo",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T21:09:08Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am mighty_rangy_kangaroo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
quelmap/Lightning-4b
|
quelmap
| 2025-09-20T22:25:54Z | 28 | 7 |
transformers
|
[
"transformers",
"safetensors",
"text-generation",
"conversational",
"en",
"base_model:Qwen/Qwen3-4B-Thinking-2507",
"base_model:finetune:Qwen/Qwen3-4B-Thinking-2507",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-15T10:53:16Z |
---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-4B-Thinking-2507
pipeline_tag: text-generation
library_name: transformers
---
# Lightning-4b - Your Local data analysis agent
[](https://github.com/quelmap-inc/quelmap)
[](https://opensource.org/licenses/Apache-2.0)

## Overview
Lightning-4b is a language model specifically designed and trained for data analysis tasks on local devices. With just a laptop (fully tested on an M4 MacBook Air with 16GB RAM), you can process data without ever sending it to a major LLM provider.
### What it can do
- Data visualization
- Table joins
- t-tests
- Unlimited rows, 30+ tables analyzed simultaneously
### What it cannot do
- Business reasoning or management decision-making advice
- Multi-turn analysis
To use this model, install [quelmap](https://github.com/quelmap-inc/quelmap) on your device.
Quelmap is an open-source data analysis assistant with essential features like data upload and a built-in Python sandbox.
For installation instructions, see the [Quick Start](https://quelmap.com/quickstart).

### Performance
This model was trained specifically for use with [quelmap](https://github.com/quelmap-inc/quelmap).
It was evaluated using a sample database and 122 analysis queries, and achieved performance surpassing models with **50x more parameters**.
For details about the model and its training process, see the [Lightning-4b Details](https://quelmap.com/lightning-4b) page.

### Running the model on your machine
You can easily install Lightning-4b and quelmap by following the [Quick Start](https://quelmap.com/quickstart).
Lightning-4b has multiple quantization versions depending on your hardware.
It runs smoothly on laptops, and on higher-spec machines it can handle more tables (30+ tables) and longer chat histories.
Example specs and model versions:
- Laptop (e.g. MacBook Air 16GB) - 4bit Quantization + 10,240 Context Window
```
ollama pull hf.co/quelmap/Lightning-4b-GGUF-short-ctx:Q4_K_M
```
- Gaming Laptop - 4bit Quantization + 40,960 Context Window
```
ollama pull hf.co/quelmap/Lightning-4b-GGUF:Q4_K_M
```
- Powerful PC with GPU - No Quantization + 40,960 Context Window
```
ollama pull hf.co/quelmap/Lightning-4b-GGUF:F16
```
For more details, please refer to the [Lightning-4b Details](https://quelmap.com/lightning-4b) page.
|
Wililasmareor/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-peckish_playful_giraffe
|
Wililasmareor
| 2025-09-20T22:24:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am peckish_playful_giraffe",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T22:23:51Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am peckish_playful_giraffe
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Wilunelzalonel/Qwen3-0.6B-Gensyn-Swarm-vigilant_slimy_fish
|
Wilunelzalonel
| 2025-09-20T22:22:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am vigilant_slimy_fish",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T22:22:38Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am vigilant_slimy_fish
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Wilolorsorarix/Qwen3-0.6B-Gensyn-Swarm-aquatic_iridescent_spider
|
Wilolorsorarix
| 2025-09-20T22:22:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am aquatic_iridescent_spider",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T22:22:11Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am aquatic_iridescent_spider
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MA9/ticket-bot-lora-inference
|
MA9
| 2025-09-20T21:53:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-20T21:52:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vjprav33n/gemma3_gs1_lang_translation
|
vjprav33n
| 2025-09-20T21:42:20Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-4b-it",
"base_model:finetune:google/gemma-3-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-09-20T21:25:02Z |
---
base_model: google/gemma-3-4b-it
library_name: transformers
model_name: gemma3_gs1_lang_translation
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma3_gs1_lang_translation
This model is a fine-tuned version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vjprav33n/gemma3_gs1_lang_translation", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
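For reference, a minimal sketch of a typical TRL SFT run — the dataset and hyperparameters below are illustrative placeholders, not the ones actually used for this model:
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Illustrative dataset; the actual training data is not documented here
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="google/gemma-3-4b-it",           # base model per the card above
    train_dataset=dataset,
    args=SFTConfig(output_dir="gemma3_gs1_lang_translation"),
)
trainer.train()
```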
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.6.0
- Datasets: 3.3.2
- Tokenizers: 0.22.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Nick976786/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-downy_lively_peacock
|
Nick976786
| 2025-09-20T21:11:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am downy_lively_peacock",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T21:10:13Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am downy_lively_peacock
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ulioxfenunon/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-robust_enormous_mallard
|
Ulioxfenunon
| 2025-09-20T21:06:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am robust_enormous_mallard",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T21:06:42Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am robust_enormous_mallard
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
olegshulyakov/DeepSeek-R1-0528-Qwen3-8B-GGUF
|
olegshulyakov
| 2025-09-20T20:59:43Z | 356 | 1 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-03T04:00:12Z |
---
license: mit
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
---
# DeepSeek-R1-0528-Qwen3-8B
**Model creator:** [deepseek-ai](https://huggingface.co/deepseek-ai)<br/>
**Original model**: [deepseek-ai/DeepSeek-R1-0528-Qwen3-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B)<br/>
**GGUF quantization:** provided by [olegshulyakov](https://huggingface.co/olegshulyakov) using `llama.cpp`<br/>
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
## Use with Ollama
```bash
ollama run "hf.co/olegshulyakov/DeepSeek-R1-0528-Qwen3-8B-GGUF:Q5_K_XL"
```
## Use with LM Studio
```bash
lms load "olegshulyakov/DeepSeek-R1-0528-Qwen3-8B-GGUF"
```
## Use with llama.cpp CLI
```bash
llama-cli -hf olegshulyakov/DeepSeek-R1-0528-Qwen3-8B-GGUF:Q5_K_XL -p "The meaning to life and the universe is"
```
## Use with llama.cpp Server
```bash
llama-server -hf olegshulyakov/DeepSeek-R1-0528-Qwen3-8B-GGUF:Q5_K_XL -ngl 99 -c 0
```
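A Python-side sketch using the llama-cpp-python bindings. The quant filename pattern is an assumption — pick a quant that actually exists in the repo's file list:
```python
from llama_cpp import Llama

# Downloads the matching quant from the Hub on first use
llm = Llama.from_pretrained(
    repo_id="olegshulyakov/DeepSeek-R1-0528-Qwen3-8B-GGUF",
    filename="*Q5_K_XL*",  # glob pattern; assumes a Q5_K_XL quant is present
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain chain-of-thought reasoning in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```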
|
takara-ai/qwen_rwkv_projection
|
takara-ai
| 2025-09-20T20:56:10Z | 0 | 0 | null |
[
"safetensors",
"en",
"dataset:takara-ai/micropajama",
"license:mit",
"region:us"
] | null | 2025-09-20T20:36:04Z |
---
license: mit
datasets:
- takara-ai/micropajama
language:
- en
---
<img src="https://takara.ai/images/logo-24/TakaraAi.svg" width="200" alt="Takara.ai Logo" />
From the Frontier Research Team at takara.ai, we present a linear projection model that maps Qwen embeddings to RWKV embeddings for enhanced cross-model compatibility.
## Model Details
- **Input Dimensions**: 4096 (Qwen embeddings)
- **Output Dimensions**: 768 (RWKV embeddings)
- **Architecture**: Linear layer (no bias)
- **Training**: Cosine similarity loss on L2-normalized pairs
- **Dataset**: takara-ai/micropajama_embedded_concat
## Usage
### Quick Start
```python
import torch
from huggingface_hub import PyTorchModelHubMixin
# Define the model class (copy this exactly)
class QwenRwkvProjection(torch.nn.Module, PyTorchModelHubMixin,
library_name="takara-ai",
tags=["embedding", "projection", "qwen", "rwkv"],
license="mit"):
def __init__(self, din=4096, dout=768):
super().__init__()
self.linear = torch.nn.Linear(din, dout, bias=False)
def forward(self, x):
return self.linear(x)
# Load from Hub
model = QwenRwkvProjection.from_pretrained("takara-ai/qwen_rwkv_projection")
model.eval()
# Project embeddings (don't forget to normalize!)
normalized_qwen_embeddings = torch.nn.functional.normalize(your_qwen_embeddings, p=2, dim=-1, eps=1e-8)
projected_embeddings = model(normalized_qwen_embeddings)
```
### Important Notes
- **Dimensions**: Input must be (batch_size, 4096), output will be (batch_size, 768)
- **Bias**: Model uses no bias term (trained on normalized pairs)
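A quick shape check continuing from the snippet above, using a random stand-in for real Qwen embeddings:
```python
import torch

# Stand-in batch of Qwen embeddings (batch_size=2, dim=4096)
fake_qwen = torch.randn(2, 4096)
normalized = torch.nn.functional.normalize(fake_qwen, p=2, dim=-1, eps=1e-8)

with torch.no_grad():
    projected = model(normalized)  # `model` loaded in the Quick Start above

print(projected.shape)  # expected: torch.Size([2, 768])
```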
|
asadullah797/wav2vec2-multitask
|
asadullah797
| 2025-09-20T20:32:18Z | 0 | 0 | null |
[
"wav2vec2-multitask",
"code",
"audio-classification",
"en",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:mit",
"region:us"
] |
audio-classification
| 2025-09-20T15:17:35Z |
---
license: mit
language:
- en
base_model:
- facebook/wav2vec2-base
pipeline_tag: audio-classification
tags:
- code
---
# Wav2Vec2-MultiTask
This is a fine-tuned **Wav2Vec2.0** model for **multi-task learning**:
- Phoneme recognition
- Emotion classification
- Speaker identification
## Usage
```python
import numpy as np
from transformers import AutoModel, AutoConfig, AutoProcessor

model = AutoModel.from_pretrained(
    "asadullah797/wav2vec2-multitask",
    trust_remote_code=True
)
config = AutoConfig.from_pretrained(
    "asadullah797/wav2vec2-multitask",
    trust_remote_code=True
)
processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base")

# The processor expects raw audio sampled at 16 kHz, not text
waveform = np.zeros(16000, dtype=np.float32)  # replace with your audio array
inputs = processor(waveform, return_tensors="pt", sampling_rate=16000)

# phoneme recognition (the other heads above are emotion and speaker)
logits = model(**inputs, task="phoneme")
```
|
haihp02/70f57f76-f8bb-4925-bf23-9dbd8daf1060
|
haihp02
| 2025-09-20T19:52:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-20T19:48:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
WetOnTheWater/Flux-Cross-LORA
|
WetOnTheWater
| 2025-09-20T19:25:52Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-09-20T19:25:28Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/WhatsApp Image 2025-09-17 at 22.55.46_f575e6c4.jpg
text: '-'
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: LCRS1
---
# Flux-Cross-RUN_01
<Gallery />
## Trigger words
You should use `LCRS1` to trigger the image generation.
## Download model
[Download](/WetOnTheWater/Flux-Cross-LORA/tree/main) them in the Files & versions tab.
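## Quick inference sketch
This card ships no inference code; below is a minimal, hedged sketch that mirrors the diffusers pattern used by other FLUX LoRAs on the Hub. The `weight_name` is an assumption — check the Files & versions tab for the actual filename.
```py
# Hedged sketch: load this LoRA into FLUX.1-dev with diffusers.
# weight_name below is an assumption, not confirmed by this card.
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights('WetOnTheWater/Flux-Cross-LORA', weight_name='lora.safetensors')
image = pipeline('LCRS1').images[0]  # `LCRS1` is the trigger word from this card
```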
|
stepdc/stack_cube_3_cameras_smolvla_2_l20
|
stepdc
| 2025-09-20T18:28:35Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:stepdc/stack_cube_3_cameras",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-20T18:28:14Z |
---
base_model: lerobot/smolvla_base
datasets: stepdc/stack_cube_3_cameras
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- lerobot
- robotics
- smolvla
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is a short version of how to train and run inference/evaluation:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758392571
|
schooncestiaa
| 2025-09-20T18:24:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-20T18:23:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sd-concepts-library/crypto-punk-nft
|
sd-concepts-library
| 2025-09-20T18:12:59Z | 0 | 5 | null |
[
"license:mit",
"region:us"
] | null | 2023-04-07T16:52:20Z |
---
license: mit
---
### Crypto_punk_nft on Stable Diffusion
This is the `<crypto-punk>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:










|
zjhhhh/qwen2.5_3B_Instruct_fixed_0.01_step_312_final
|
zjhhhh
| 2025-09-20T17:56:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T17:55:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vuitton/STS_KAT14
|
vuitton
| 2025-09-20T17:54:07Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-19T17:03:17Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
igomsmiranda/qwen2.5-7b_instruct-math7b_linear_50
|
igomsmiranda
| 2025-09-20T17:51:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:merge:Qwen/Qwen2.5-7B-Instruct",
"base_model:Qwen/Qwen2.5-Math-7B-Instruct",
"base_model:merge:Qwen/Qwen2.5-Math-7B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T13:40:51Z |
---
base_model:
- Qwen/Qwen2.5-7B-Instruct
- Qwen/Qwen2.5-Math-7B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Benchmarks
| Tasks |Version| Filter |n-shot| Metric | |Value | |Stderr|
|-----------------|-------|----------------|-----:|-----------|---|-----:|---|------|
|tinyBenchmarks | N/A| | | | | | | |
| - tinyArc | 0|none | 25|acc_norm |↑ |0.5361|± | N/A|
| - tinyGSM8k | 0|flexible-extract| 5|exact_match|↑ |0.6066|± | N/A|
| | |strict-match | 5|exact_match|↑ |0.5813|± | N/A|
| - tinyHellaswag | 0|none | 10|acc_norm |↑ |0.6234|± | N/A|
| - tinyMMLU | 0|none | 0|acc_norm |↑ |0.5490|± | N/A|
| - tinyTruthfulQA| 0|none | 0|acc |↑ |0.5333|± | N/A|
| - tinyWinogrande| 0|none | 5|acc_norm |↑ |0.6196|± | N/A|
### Merge Method
This model was merged using the [Linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)
* [Qwen/Qwen2.5-Math-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Qwen/Qwen2.5-7B-Instruct
parameters:
weight: 0.5
- model: Qwen/Qwen2.5-Math-7B-Instruct
parameters:
weight: 0.5
merge_method: linear
dtype: float16
```
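To try the merged checkpoint, here is a minimal sketch using transformers; the prompt and generation settings are placeholders, not part of the merge recipe.
```py
# Hedged sketch: run the merged model like any Qwen2.5 instruct checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = 'igomsmiranda/qwen2.5-7b_instruct-math7b_linear_50'
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map='auto')

messages = [{'role': 'user', 'content': 'What is 17 * 24?'}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors='pt'
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```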
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758389486
|
schooncestiaa
| 2025-09-20T17:32:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-20T17:32:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
44827throughthevault/Solace
|
44827throughthevault
| 2025-09-20T17:23:15Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-09-20T17:23:15Z |
---
license: other
license_name: other
license_link: LICENSE
---
|
abhijithmallya/embeddinggemma-300m-Q4_0-GGUF
|
abhijithmallya
| 2025-09-20T17:22:18Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"gguf",
"sentence-similarity",
"feature-extraction",
"text-embeddings-inference",
"llama-cpp",
"gguf-my-repo",
"base_model:google/embeddinggemma-300m",
"base_model:quantized:google/embeddinggemma-300m",
"license:gemma",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-20T17:22:12Z |
---
license: gemma
pipeline_tag: sentence-similarity
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- text-embeddings-inference
- llama-cpp
- gguf-my-repo
extra_gated_heading: Access EmbeddingGemma on Hugging Face
extra_gated_prompt: To access EmbeddingGemma on Hugging Face, you’re required to review
and agree to Google’s usage license. To do this, please ensure you’re logged in
to Hugging Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/embeddinggemma-300m
---
# abhijithmallya/embeddinggemma-300m-Q4_0-GGUF
This model was converted to GGUF format from [`google/embeddinggemma-300m`](https://huggingface.co/google/embeddinggemma-300m) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/google/embeddinggemma-300m) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo abhijithmallya/embeddinggemma-300m-Q4_0-GGUF --hf-file embeddinggemma-300m-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo abhijithmallya/embeddinggemma-300m-Q4_0-GGUF --hf-file embeddinggemma-300m-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo abhijithmallya/embeddinggemma-300m-Q4_0-GGUF --hf-file embeddinggemma-300m-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo abhijithmallya/embeddinggemma-300m-Q4_0-GGUF --hf-file embeddinggemma-300m-q4_0.gguf -c 2048
```
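### Embeddings endpoint
Since this is an embedding model, you will usually want vectors rather than generated text. As a hedged sketch: recent llama.cpp builds let you start the server with embeddings enabled (the flag and endpoint names below are assumptions — check `llama-server --help` for your build), e.g. `llama-server --hf-repo abhijithmallya/embeddinggemma-300m-Q4_0-GGUF --hf-file embeddinggemma-300m-q4_0.gguf --embeddings`, then query it:
```py
# Hedged sketch: request an embedding from a llama-server started with --embeddings.
# Endpoint path assumes the OpenAI-compatible API exposed by recent llama.cpp builds.
import json
import urllib.request

payload = json.dumps({
    'model': 'embeddinggemma-300m-q4_0',  # informational; the server loads one model
    'input': 'The meaning to life and the universe is',
}).encode()

req = urllib.request.Request(
    'http://localhost:8080/v1/embeddings',
    data=payload,
    headers={'Content-Type': 'application/json'},
)
with urllib.request.urlopen(req) as resp:
    embedding = json.load(resp)['data'][0]['embedding']
print(len(embedding))  # vector dimensionality
```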
|
mradermacher/Scripturient-DT-LLaMa-70B-GGUF
|
mradermacher
| 2025-09-20T16:50:34Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:TareksTesting/Scripturient-DT-LLaMa-70B",
"base_model:quantized:TareksTesting/Scripturient-DT-LLaMa-70B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-20T15:42:52Z |
---
base_model: TareksTesting/Scripturient-DT-LLaMa-70B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/TareksTesting/Scripturient-DT-LLaMa-70B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Scripturient-DT-LLaMa-70B-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Scripturient-DT-LLaMa-70B-GGUF/resolve/main/Scripturient-DT-LLaMa-70B.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Scripturient-DT-LLaMa-70B-GGUF/resolve/main/Scripturient-DT-LLaMa-70B.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Scripturient-DT-LLaMa-70B-GGUF/resolve/main/Scripturient-DT-LLaMa-70B.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Scripturient-DT-LLaMa-70B-GGUF/resolve/main/Scripturient-DT-LLaMa-70B.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Scripturient-DT-LLaMa-70B-GGUF/resolve/main/Scripturient-DT-LLaMa-70B.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Scripturient-DT-LLaMa-70B-GGUF/resolve/main/Scripturient-DT-LLaMa-70B.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Scripturient-DT-LLaMa-70B-GGUF/resolve/main/Scripturient-DT-LLaMa-70B.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Scripturient-DT-LLaMa-70B-GGUF/resolve/main/Scripturient-DT-LLaMa-70B.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Scripturient-DT-LLaMa-70B-GGUF/resolve/main/Scripturient-DT-LLaMa-70B.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Scripturient-DT-LLaMa-70B-GGUF/resolve/main/Scripturient-DT-LLaMa-70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Scripturient-DT-LLaMa-70B-GGUF/resolve/main/Scripturient-DT-LLaMa-70B.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Scripturient-DT-LLaMa-70B-GGUF/resolve/main/Scripturient-DT-LLaMa-70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Scripturient-DT-LLaMa-70B-GGUF/resolve/main/Scripturient-DT-LLaMa-70B.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
no-name-research/multilingual-bert-placer-ner-classifier
|
no-name-research
| 2025-09-20T16:41:55Z | 0 | 0 | null |
[
"safetensors",
"bert",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2025-09-20T16:41:27Z |
---
license: cc-by-nc-sa-4.0
---
|
Ba2han/mistral-finetune-2k1
|
Ba2han
| 2025-09-20T16:28:00Z | 0 | 0 |
transformers
|
[
"transformers",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-20T16:27:56Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rabeeqasem/pyramids
|
rabeeqasem
| 2025-09-20T16:18:51Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2025-09-20T16:09:28Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: rabeeqasem/pyramids
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
im-tsr/distilbert-finetuned-youtube_sentiment_analysis
|
im-tsr
| 2025-09-20T16:11:02Z | 98 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"sentiment-analysis",
"youtube-comments",
"fine-tuned",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-06T18:03:04Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- sentiment-analysis
- youtube-comments
- text-classification
- distilbert
- fine-tuned
metrics:
- f1
- accuracy
- precision
- recall
model-index:
- name: distilbert-finetuned-youtube_sentiment_analysis
results: []
---
# Finetuned DistilBERT model card
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on a custom YouTube comments dataset for sentiment analysis.
## Model description
The model is based on DistilBERT, a distilled version of BERT that is smaller, faster, and requires fewer computational resources while maintaining 97% of BERT's performance. This specific model has been fine-tuned to perform sentiment analysis on YouTube comments, classifying them into three categories: POSITIVE, NEUTRAL, and NEGATIVE.
The model is designed to understand the nuanced language used in social media comments, including slang, emojis, and informal speech patterns typically found in YouTube comment sections.
### Model architecture
- **Base model**: DistilBERT (distilbert-base-uncased)
- **Task type**: Text Classification (Sentiment Analysis)
- **Number of labels**: 3 (POSITIVE, NEUTRAL, NEGATIVE)
- **Label mapping**: {"NEUTRAL": 0, "POSITIVE": 1, "NEGATIVE": 2}
## Training and evaluation data
The model was trained on a custom dataset of YouTube comments with sentiment labels:
- **Dataset**: [im-tsr/comments-sentiments](https://huggingface.co/datasets/im-tsr/comments-sentiments)
- **Training set size**: 720,977 samples
- **Test set size**: 36,558 samples
- **Maximum sequence length**: 128 tokens
## Training procedure
The model was fine-tuned using the Hugging Face Transformers library with PyTorch.
### Training hyperparameters
The following hyperparameters were used during training:
- **Learning rate**: 5e-05
- **Train batch size**: 64
- **Eval batch size**: 64
- **Weight decay**: 0.01
- **Optimizer**: AdamW
- **Number of epochs**: 3
- **Maximum sequence length**: 128
### Evaluation results
The model was evaluated on a test set of YouTube comments. The evaluation metrics are:
- **Accuracy**: 0.730100
- **F1 Score**: 0.730025
### Using Hugging Face Transformers
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("im-tsr/distilbert-finetuned-youtube_sentiment_analysis")
model = AutoModelForSequenceClassification.from_pretrained("im-tsr/distilbert-finetuned-youtube_sentiment_analysis")

def hf_predict_sentiment(text):
    # Tokenize the text
    tokens = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=128)
    # Set the model to evaluation mode
    model.eval()
    with torch.no_grad():
        tokens = {key: val.to(model.device) for key, val in tokens.items()}
        output = model(**tokens)
    prediction = torch.argmax(output.logits, dim=1).item()
    label_map = {0: "NEUTRAL", 1: "POSITIVE", 2: "NEGATIVE"}
    return label_map[prediction]

hf_predict_sentiment("I love this product! It's absolutely wonderful.")
# 'POSITIVE'
```
## Intended uses
This model is intended for analyzing sentiment in YouTube comments and similar social media text. It can be used for:
- Monitoring sentiment in YouTube comment sections
- Content moderation assistance
- Social media sentiment analysis
- User feedback analysis
- Tracking audience reaction to videos or content
### Limitations
- The model is specifically trained on YouTube comments, which may have different characteristics from other text sources
- Performance may vary for comments in languages other than English
- The model may not handle sarcasm, irony, or cultural context effectively
- Limited to three sentiment categories which may not capture the full spectrum of emotions
- May not perform optimally on very short texts or texts with multiple conflicting sentiments
## Links
- [Dataset on Hugging Face](https://huggingface.co/datasets/im-tsr/comments-sentiments)
- [Demo on Hugging Face Spaces](https://huggingface.co/spaces/im-tsr/sentiment-analysis)
## Framework versions
- Transformers 4.53
- PyTorch 2.8.0
- Datasets 3.6.0
|
okkp12/eli2
|
okkp12
| 2025-09-20T15:26:14Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-20T15:04:04Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: eli
---
# Eli2
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `eli` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "eli",
    "lora_weights": "https://huggingface.co/okkp12/eli2/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('okkp12/eli2', weight_name='lora.safetensors')
image = pipeline('eli').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/okkp12/eli2/discussions) to add images that show off what you’ve made with this LoRA.
|
lkhl/VideoLLaMA3-7B-Image-HF
|
lkhl
| 2025-09-20T13:44:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"video_llama_3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T13:16:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
james73duff/JamesDuff-Replicate
|
james73duff
| 2025-09-20T13:42:02Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-20T13:14:40Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: James
---
# Jamesduff Replicate
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `James` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "James",
    "lora_weights": "https://huggingface.co/james73duff/JamesDuff-Replicate/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('james73duff/JamesDuff-Replicate', weight_name='lora.safetensors')
image = pipeline('James').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2001
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/james73duff/JamesDuff-Replicate/discussions) to add images that show off what you’ve made with this LoRA.
|
Gapeleon/kaniTTS_Elise
|
Gapeleon
| 2025-09-20T13:18:43Z | 0 | 1 | null |
[
"safetensors",
"lfm2",
"en",
"dataset:MrDragonFox/Elise",
"base_model:nineninesix/kani-tts-450m-0.1-pt",
"base_model:finetune:nineninesix/kani-tts-450m-0.1-pt",
"license:cc-by-4.0",
"region:us"
] | null | 2025-09-20T13:06:00Z |
---
license: cc-by-4.0
datasets:
- MrDragonFox/Elise
language:
- en
base_model:
- nineninesix/kani-tts-450m-0.1-pt
---
A quick test run of training
[nineninesix/kani-tts-450m-0.1-pt](https://huggingface.co/nineninesix/kani-tts-450m-0.1-pt) on [MrDragonFox/Elise](https://huggingface.co/datasets/MrDragonFox/Elise)
## Sample 1
"Hey there, my name is Elise <giggles>, and I'm a text to speech model. Do I sound like a person?"
<audio controls><source src="https://huggingface.co/Gapeleon/kaniTTS_Elise/resolve/main/elise_1.wav" type="audio/wav"></audio>
## Sample 2
"Got it. $300,000. I can definitely help you get a very good price for your property by selecting a realtor."
<audio controls><source src="https://huggingface.co/Gapeleon/kaniTTS_Elise/resolve/main/elise_2.wav" type="audio/wav"></audio>
|
mradermacher/grok-2-GGUF
|
mradermacher
| 2025-09-20T12:53:29Z | 0 | 0 |
transformers
|
[
"transformers",
"en",
"base_model:xai-org/grok-2",
"base_model:finetune:xai-org/grok-2",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T03:29:33Z |
---
base_model: xai-org/grok-2
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/xai-org/grok-2
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#grok-2-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/grok-2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
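If you prefer a script over shell concatenation, here is a minimal sketch (not part of the original workflow) that joins the split parts of one quant after download; the quant name below is a placeholder.
```py
# Hedged sketch: join split GGUF parts (e.g. grok-2.Q4_K_S.gguf.part1of4 ...)
# into a single file without loading them into RAM.
import glob
import re
import shutil

parts = sorted(glob.glob('grok-2.Q4_K_S.gguf.part*'))
assert parts, 'download the .part files first'
out_name = re.sub(r'\.part\d+of\d+$', '', parts[0])

with open(out_name, 'wb') as out:
    for part in parts:
        with open(part, 'rb') as src:
            shutil.copyfileobj(src, out)  # stream-copy each part in order
```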
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [PART 1](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q2_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q2_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q2_K.gguf.part3of3) | Q2_K | 100.2 | |
| [PART 1](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q3_K_S.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q3_K_S.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q3_K_S.gguf.part3of3) | Q3_K_S | 118.1 | |
| [PART 1](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q3_K_M.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q3_K_M.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q3_K_M.gguf.part3of3) | Q3_K_M | 130.2 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q3_K_L.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q3_K_L.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q3_K_L.gguf.part3of3) | Q3_K_L | 139.2 | |
| [PART 1](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.IQ4_XS.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.IQ4_XS.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.IQ4_XS.gguf.part3of3) | IQ4_XS | 146.5 | |
| [PART 1](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q4_K_S.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q4_K_S.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q4_K_S.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q4_K_S.gguf.part4of4) | Q4_K_S | 154.4 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q4_K_M.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q4_K_M.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q4_K_M.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q4_K_M.gguf.part4of4) | Q4_K_M | 164.2 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q5_K_S.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q5_K_S.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q5_K_S.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q5_K_S.gguf.part4of4) | Q5_K_S | 186.0 | |
| [PART 1](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q5_K_M.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q5_K_M.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q5_K_M.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q5_K_M.gguf.part4of4) | Q5_K_M | 191.7 | |
| [P1](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q6_K.gguf.part1of5) [P2](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q6_K.gguf.part2of5) [P3](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q6_K.gguf.part3of5) [P4](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q6_K.gguf.part4of5) [P5](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q6_K.gguf.part5of5) | Q6_K | 221.5 | very good quality |
| [P1](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q8_0.gguf.part1of6) [P2](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q8_0.gguf.part2of6) [P3](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q8_0.gguf.part3of6) [P4](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q8_0.gguf.part4of6) [P5](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q8_0.gguf.part5of6) [P6](https://huggingface.co/mradermacher/grok-2-GGUF/resolve/main/grok-2.Q8_0.gguf.part6of6) | Q8_0 | 286.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
AQ-MedAI/Diver-Retriever-4B
|
AQ-MedAI
| 2025-09-20T12:31:22Z | 2,164 | 17 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"medical",
"code",
"math",
"reasoning",
"general",
"text-ranking",
"zh",
"en",
"dataset:Raderspace/MATH_qCoT_LLMquery_questionasquery_lexicalquery",
"dataset:reasonir/reasonir-data",
"dataset:truehealth/medqa",
"dataset:AQ-MedAI/PRGB-ZH",
"arxiv:2508.07995",
"base_model:Qwen/Qwen3-Embedding-4B",
"base_model:finetune:Qwen/Qwen3-Embedding-4B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-ranking
| 2025-08-22T03:33:29Z |
---
license: apache-2.0
tags:
- medical
- code
- math
- reasoning
- general
datasets:
- Raderspace/MATH_qCoT_LLMquery_questionasquery_lexicalquery
- reasonir/reasonir-data
- truehealth/medqa
- AQ-MedAI/PRGB-ZH
metrics:
- accuracy
- recall
base_model:
- Qwen/Qwen3-Embedding-4B
pipeline_tag: text-ranking
language:
- zh
- en
library_name: transformers
---
# Diver-Retriever-4B
## Highlights
DIVER-Retriever-4B is a reasoning-intensive retriever built for the kind of tasks targeted by ReasonIR and RaDeR.
We combined training data from mathematics, coding, and healthcare, matched samples by difficulty level, and constructed negative samples specific to each field. As a result, the model performs strongly on the BRIGHT leaderboard
as well as the MTEB medical benchmark.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Model type:** Text Embedding
- **Language(s) (NLP):** Bilingual (Chinese & English)
- **Context Length:** 40k
- **Number of Parameters:** 4B
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our GitHub (https://github.com/AQ-MedAI/Diver).
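A minimal retrieval sketch (assuming the checkpoint follows the standard Qwen3-Embedding usage via sentence-transformers; any query-prompt formatting required by the training recipe is documented in the GitHub repo and is not reproduced here):
```py
# Hedged sketch: embed a query and candidate passages, rank by similarity.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('AQ-MedAI/Diver-Retriever-4B')

query = 'What are first-line treatments for type 2 diabetes?'
docs = [
    'Metformin is generally recommended as initial pharmacologic therapy for type 2 diabetes.',
    'Quicksort partitions an array around a pivot element.',
]
q_emb = model.encode([query])
d_emb = model.encode(docs)
print(model.similarity(q_emb, d_emb))  # cosine similarity by default in sentence-transformers >= 3.0
```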
## Evaluation
<table>
<thead>
<tr>
<th>Method</th>
<th style="text-align:right">Avg.</th>
<th style="text-align:right">Bio.</th>
<th style="text-align:right">Earth.</th>
<th style="text-align:right">Econ.</th>
<th style="text-align:right">Psy.</th>
<th style="text-align:right">Rob.</th>
<th style="text-align:right">Stack.</th>
<th style="text-align:right">Sus.</th>
<th style="text-align:right">Leet.</th>
<th style="text-align:right">Pony</th>
<th style="text-align:right">AoPS</th>
<th style="text-align:right">TheoQ.</th>
<th style="text-align:right">TheoT.</th>
</tr>
</thead>
<tbody>
<tr>
<td colspan=12 style="text-align:center"><strong>Evaluate Retriever with Original Query</strong></td>
</tr>
<tr>
<td>BM25</td>
<td style="text-align:right">14.5</td>
<td style="text-align:right">18.9</td>
<td style="text-align:right">27.2</td>
<td style="text-align:right">14.9</td>
<td style="text-align:right">12.5</td>
<td style="text-align:right">13.6</td>
<td style="text-align:right">18.4</td>
<td style="text-align:right">15.0</td>
<td style="text-align:right">24.4</td>
<td style="text-align:right">7.9</td>
<td style="text-align:right">6.2</td>
<td style="text-align:right">10.4</td>
<td style="text-align:right">4.9</td>
</tr>
<tr>
<td>SBERT</td>
<td style="text-align:right">14.9</td>
<td style="text-align:right">15.1</td>
<td style="text-align:right">20.4</td>
<td style="text-align:right">16.6</td>
<td style="text-align:right">22.7</td>
<td style="text-align:right">8.2</td>
<td style="text-align:right">11.0</td>
<td style="text-align:right">15.3</td>
<td style="text-align:right">26.4</td>
<td style="text-align:right">7.0</td>
<td style="text-align:right">5.3</td>
<td style="text-align:right">20.0</td>
<td style="text-align:right">10.8</td>
</tr>
<tr>
<td>gte-Qwen1.5-7B</td>
<td style="text-align:right">22.5</td>
<td style="text-align:right">30.6</td>
<td style="text-align:right">36.4</td>
<td style="text-align:right">17.8</td>
<td style="text-align:right">24.6</td>
<td style="text-align:right">13.2</td>
<td style="text-align:right">22.2</td>
<td style="text-align:right">14.8</td>
<td style="text-align:right">25.5</td>
<td style="text-align:right">9.9</td>
<td style="text-align:right">14.4</td>
<td style="text-align:right">27.8</td>
<td style="text-align:right">32.9</td>
</tr>
<tr>
<td>Qwen3-4B</td>
<td style="text-align:right">5.6</td>
<td style="text-align:right">3.5</td>
<td style="text-align:right">8.0</td>
<td style="text-align:right">2.3</td>
<td style="text-align:right">2.0</td>
<td style="text-align:right">1.6</td>
<td style="text-align:right">1.0</td>
<td style="text-align:right">4.4</td>
<td style="text-align:right">2.1</td>
<td style="text-align:right">0.1</td>
<td style="text-align:right">4.9</td>
<td style="text-align:right">18.0</td>
<td style="text-align:right">19.2</td>
</tr>
<tr>
<td>OpenAI</td>
<td style="text-align:right">17.9</td>
<td style="text-align:right">23.3</td>
<td style="text-align:right">26.7</td>
<td style="text-align:right">19.5</td>
<td style="text-align:right">27.6</td>
<td style="text-align:right">12.8</td>
<td style="text-align:right">14.3</td>
<td style="text-align:right">20.5</td>
<td style="text-align:right">23.6</td>
<td style="text-align:right">2.4</td>
<td style="text-align:right">8.5</td>
<td style="text-align:right">23.5</td>
<td style="text-align:right">11.7</td>
</tr>
<tr>
<td>Google</td>
<td style="text-align:right">20.0</td>
<td style="text-align:right">22.7</td>
<td style="text-align:right">34.8</td>
<td style="text-align:right">19.6</td>
<td style="text-align:right">27.8</td>
<td style="text-align:right">15.7</td>
<td style="text-align:right">20.1</td>
<td style="text-align:right">17.1</td>
<td style="text-align:right">29.6</td>
<td style="text-align:right">3.6</td>
<td style="text-align:right">9.3</td>
<td style="text-align:right">23.8</td>
<td style="text-align:right">15.9</td>
</tr>
<tr>
<td>ReasonIR-8B</td>
<td style="text-align:right">24.4</td>
<td style="text-align:right">26.2</td>
<td style="text-align:right">31.4</td>
<td style="text-align:right">23.3</td>
<td style="text-align:right">30.0</td>
<td style="text-align:right">18.0</td>
<td style="text-align:right"><strong>23.9</strong></td>
<td style="text-align:right">20.5</td>
<td style="text-align:right">35.0</td>
<td style="text-align:right">10.5</td>
<td style="text-align:right"><strong>14.7</strong></td>
<td style="text-align:right">31.9</td>
<td style="text-align:right">27.2</td>
</tr>
<tr>
<td>RaDeR-7B</td>
<td style="text-align:right">25.5</td>
<td style="text-align:right">34.6</td>
<td style="text-align:right">38.9</td>
<td style="text-align:right">22.1</td>
<td style="text-align:right">33.0</td>
<td style="text-align:right">14.8</td>
<td style="text-align:right">22.5</td>
<td style="text-align:right">23.7</td>
<td style="text-align:right">37.3</td>
<td style="text-align:right">5.0</td>
<td style="text-align:right">10.2</td>
<td style="text-align:right">28.4</td>
<td style="text-align:right">35.1</td>
</tr>
<tr>
<td>Seed1.5-Embedding</td>
<td style="text-align:right">27.2</td>
<td style="text-align:right">34.8</td>
<td style="text-align:right"><strong>46.9</strong></td>
<td style="text-align:right"><strong>23.4</strong></td>
<td style="text-align:right">31.6</td>
<td style="text-align:right">19.1</td>
<td style="text-align:right">25.4</td>
<td style="text-align:right">21.0</td>
<td style="text-align:right"><strong>43.2</strong></td>
<td style="text-align:right">4.9</td>
<td style="text-align:right">12.2</td>
<td style="text-align:right">33.3</td>
<td style="text-align:right">30.5</td>
</tr>
<tr>
<td>DIVER-Retriever</td>
<td style="text-align:right"><strong>28.9</strong></td>
<td style="text-align:right"><strong>41.8</strong></td>
<td style="text-align:right">43.7</td>
<td style="text-align:right">21.7</td>
<td style="text-align:right"><strong>35.3</strong></td>
<td style="text-align:right"><strong>21.0</strong></td>
<td style="text-align:right">21.2</td>
<td style="text-align:right"><strong>25.1</strong></td>
<td style="text-align:right">37.6</td>
<td style="text-align:right"><strong>13.2</strong></td>
<td style="text-align:right">10.7</td>
<td style="text-align:right"><strong>38.4</strong></td>
<td style="text-align:right"><strong>37.3</strong></td>
</tr>
<tr>
<td colspan=14 style="text-align:center"><strong>Evaluate Retriever with GPT-4 REASON-query</strong></td>
</tr>
<tr>
<td>BM25</td>
<td style="text-align:right">27.0</td>
<td style="text-align:right"><strong>53.6</strong></td>
<td style="text-align:right"><strong>54.1</strong></td>
<td style="text-align:right">24.3</td>
<td style="text-align:right">38.7</td>
<td style="text-align:right">18.9</td>
<td style="text-align:right">27.7</td>
<td style="text-align:right">26.3</td>
<td style="text-align:right">19.3</td>
<td style="text-align:right">17.6</td>
<td style="text-align:right">3.9</td>
<td style="text-align:right">19.2</td>
<td style="text-align:right">20.8</td>
</tr>
<tr>
<td>SBERT</td>
<td style="text-align:right">17.8</td>
<td style="text-align:right">18.5</td>
<td style="text-align:right">26.3</td>
<td style="text-align:right">17.5</td>
<td style="text-align:right">27.2</td>
<td style="text-align:right">8.8</td>
<td style="text-align:right">11.8</td>
<td style="text-align:right">17.5</td>
<td style="text-align:right">24.3</td>
<td style="text-align:right">10.3</td>
<td style="text-align:right">5.0</td>
<td style="text-align:right">22.3</td>
<td style="text-align:right">23.5</td>
</tr>
<tr>
<td>gte-Qwen1.5-7B</td>
<td style="text-align:right">24.8</td>
<td style="text-align:right">35.5</td>
<td style="text-align:right">43.1</td>
<td style="text-align:right">24.3</td>
<td style="text-align:right">34.3</td>
<td style="text-align:right">15.4</td>
<td style="text-align:right">22.9</td>
<td style="text-align:right">23.9</td>
<td style="text-align:right">25.4</td>
<td style="text-align:right">5.2</td>
<td style="text-align:right">4.6</td>
<td style="text-align:right">28.7</td>
<td style="text-align:right">34.6</td>
</tr>
<tr>
<td>Qwen3-4B</td>
<td style="text-align:right">5.5</td>
<td style="text-align:right">1.3</td>
<td style="text-align:right">17.3</td>
<td style="text-align:right">2.5</td>
<td style="text-align:right">6.2</td>
<td style="text-align:right">1.0</td>
<td style="text-align:right">4.8</td>
<td style="text-align:right">4.5</td>
<td style="text-align:right">3.0</td>
<td style="text-align:right">5.9</td>
<td style="text-align:right">0.0</td>
<td style="text-align:right">7.2</td>
<td style="text-align:right">12.5</td>
</tr>
<tr>
<td>OpenAI</td>
<td style="text-align:right">23.3</td>
<td style="text-align:right">35.2</td>
<td style="text-align:right">40.1</td>
<td style="text-align:right">25.1</td>
<td style="text-align:right">38.0</td>
<td style="text-align:right">13.6</td>
<td style="text-align:right">18.2</td>
<td style="text-align:right">24.2</td>
<td style="text-align:right">24.5</td>
<td style="text-align:right">6.5</td>
<td style="text-align:right">7.7</td>
<td style="text-align:right">22.9</td>
<td style="text-align:right">23.8</td>
</tr>
<tr>
<td>Google</td>
<td style="text-align:right">26.2</td>
<td style="text-align:right">36.4</td>
<td style="text-align:right">45.6</td>
<td style="text-align:right">25.6</td>
<td style="text-align:right">38.2</td>
<td style="text-align:right">18.7</td>
<td style="text-align:right"><strong>29.5</strong></td>
<td style="text-align:right">17.9</td>
<td style="text-align:right">31.1</td>
<td style="text-align:right">3.7</td>
<td style="text-align:right">10.0</td>
<td style="text-align:right">27.8</td>
<td style="text-align:right">30.4</td>
</tr>
<tr>
<td>ReasonIR-8B</td>
<td style="text-align:right">29.9</td>
<td style="text-align:right">43.6</td>
<td style="text-align:right">42.9</td>
<td style="text-align:right"><strong>32.7</strong></td>
<td style="text-align:right">38.8</td>
<td style="text-align:right">20.9</td>
<td style="text-align:right">25.8</td>
<td style="text-align:right"><strong>27.5</strong></td>
<td style="text-align:right">31.5</td>
<td style="text-align:right"><strong>19.6</strong></td>
<td style="text-align:right">7.4</td>
<td style="text-align:right">33.1</td>
<td style="text-align:right">35.7</td>
</tr>
<tr>
<td>RaDeR-7B</td>
<td style="text-align:right">29.2</td>
<td style="text-align:right">36.1</td>
<td style="text-align:right">42.9</td>
<td style="text-align:right">25.2</td>
<td style="text-align:right">37.9</td>
<td style="text-align:right">16.6</td>
<td style="text-align:right">27.4</td>
<td style="text-align:right">25.0</td>
<td style="text-align:right"><strong>34.8</strong></td>
<td style="text-align:right">11.9</td>
<td style="text-align:right"><strong>12.0</strong></td>
<td style="text-align:right">37.7</td>
<td style="text-align:right"><strong>43.4</strong></td>
</tr>
<tr>
<td>DIVER-Retriever</td>
<td style="text-align:right"><strong>32.1</strong></td>
<td style="text-align:right">51.9</td>
<td style="text-align:right">53.5</td>
<td style="text-align:right">29.5</td>
<td style="text-align:right"><strong>41.2</strong></td>
<td style="text-align:right"><strong>21.4</strong></td>
<td style="text-align:right">27.5</td>
<td style="text-align:right">26.1</td>
<td style="text-align:right">33.5</td>
<td style="text-align:right">11.7</td>
<td style="text-align:right">9.5</td>
<td style="text-align:right"><strong>39.3</strong></td>
<td style="text-align:right">39.7</td>
</tr>
<tr>
<td colspan=14 style="text-align:center"><strong>Evaluate Retriever with DIVER-QExpand query</strong></td>
</tr>
<tr>
<td>ReasonIR-8B</td>
<td style="text-align:right">32.6</td>
<td style="text-align:right">49.4</td>
<td style="text-align:right">44.7</td>
<td style="text-align:right">32.4</td>
<td style="text-align:right">44.0</td>
<td style="text-align:right">26.6</td>
<td style="text-align:right">31.8</td>
<td style="text-align:right">29.0</td>
<td style="text-align:right">32.3</td>
<td style="text-align:right">12.8</td>
<td style="text-align:right">9.1</td>
<td style="text-align:right"><strong>40.7</strong></td>
<td style="text-align:right">38.4</td>
</tr>
<tr>
<td>+BM25 (Hybrid)</td>
<td style="text-align:right">35.7</td>
<td style="text-align:right">56.8</td>
<td style="text-align:right">53.5</td>
<td style="text-align:right"><strong>33.0</strong></td>
<td style="text-align:right"><strong>48.5</strong></td>
<td style="text-align:right"><strong>29.4</strong></td>
<td style="text-align:right"><strong>34.2</strong></td>
<td style="text-align:right"><strong>32.0</strong></td>
<td style="text-align:right"><strong>35.2</strong></td>
<td style="text-align:right">16.8</td>
<td style="text-align:right">12.9</td>
<td style="text-align:right">39.3</td>
<td style="text-align:right">36.8</td>
</tr>
<tr>
<td>DIVER-Retriever</td>
<td style="text-align:right"><strong>33.9</strong></td>
<td style="text-align:right">54.5</td>
<td style="text-align:right">52.7</td>
<td style="text-align:right">28.8</td>
<td style="text-align:right">44.9</td>
<td style="text-align:right">25.1</td>
<td style="text-align:right">27.4</td>
<td style="text-align:right">29.5</td>
<td style="text-align:right">34.5</td>
<td style="text-align:right">10.0</td>
<td style="text-align:right">14.5</td>
<td style="text-align:right"><strong>40.7</strong></td>
<td style="text-align:right">44.7</td>
</tr>
<tr>
<td>+BM25 (Hybrid)</td>
<td style="text-align:right"><strong>37.2</strong></td>
<td style="text-align:right"><strong>60.0</strong></td>
<td style="text-align:right"><strong>55.9</strong></td>
<td style="text-align:right">31.8</td>
<td style="text-align:right">47.9</td>
<td style="text-align:right">27.1</td>
<td style="text-align:right">33.9</td>
<td style="text-align:right">31.9</td>
<td style="text-align:right">35.1</td>
<td style="text-align:right"><strong>23.1</strong></td>
<td style="text-align:right"><strong>16.8</strong></td>
<td style="text-align:right">36.9</td>
<td style="text-align:right"><strong>46.6</strong></td>
</tr>
</tbody>
</table>
## Usage
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Inference
#### Sentence Transformers Usage
```python
# Requires transformers>=4.51.0
# Requires sentence-transformers>=2.7.0
from sentence_transformers import SentenceTransformer

# Load the model
model = SentenceTransformer("AQ-MedAI/Diver-Retriever-4B")

# The queries and documents to embed
queries = [
    "What is the capital of China?",
    "Explain gravity",
]
documents = [
    "The capital of China is Beijing.",
    "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun.",
]

# Encode the queries and documents. Note that queries benefit from using a prompt
# Here we use the prompt called "query" stored under `model.prompts`, but you can
# also pass your own prompt via the `prompt` argument
query_embeddings = model.encode(queries, prompt_name="query")
document_embeddings = model.encode(documents)

# Compute the (cosine) similarity between the query and document embeddings
similarity = model.similarity(query_embeddings, document_embeddings)
print(similarity)
```
#### Transformers Usage
```python
# Requires transformers>=4.51.0
import torch
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel


def last_token_pool(last_hidden_states: Tensor,
                    attention_mask: Tensor) -> Tensor:
    left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
    if left_padding:
        return last_hidden_states[:, -1]
    else:
        sequence_lengths = attention_mask.sum(dim=1) - 1
        batch_size = last_hidden_states.shape[0]
        return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]


def get_detailed_instruct(task_description: str, query: str) -> str:
    return f'Instruct: {task_description}\nQuery:{query}'


# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
    get_detailed_instruct(task, 'What is the capital of China?'),
    get_detailed_instruct(task, 'Explain gravity')
]
# No need to add instructions for retrieval documents
documents = [
    "The capital of China is Beijing.",
    "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun."
]
input_texts = queries + documents

tokenizer = AutoTokenizer.from_pretrained('AQ-MedAI/Diver-Retriever-4B', padding_side='left')
model = AutoModel.from_pretrained('AQ-MedAI/Diver-Retriever-4B')
max_length = 8192

# Tokenize the input texts
batch_dict = tokenizer(
    input_texts,
    padding=True,
    truncation=True,
    max_length=max_length,
    return_tensors="pt",
)
batch_dict.to(model.device)
outputs = model(**batch_dict)
embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask'])

# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T)
print(scores.tolist())
# [[0.9319270849227905, 0.5878604054450989], [0.639923095703125, 0.7950234413146973]]
```
### Finetuning
We recommend using [swift](https://github.com/modelscope/ms-swift) to fine-tune DIVER-Retriever-4B with the InfoNCE loss.
Before starting training, please ensure your environment is properly configured.
```bash
pip install ms-swift -U
# Or install from source
pip install git+https://github.com/modelscope/ms-swift.git
pip install transformers -U
# Optional packages
pip install deepspeed # multi-GPU training
pip install liger-kernel # save GPU memory resources
pip install flash-attn --no-build-isolation
```
#### Training Command
Using the InfoNCE loss as an example, the complete training command is as follows:
```bash
nproc_per_node=8
NPROC_PER_NODE=$nproc_per_node \
swift sft \
--model DIVER/DIVER-Retriever-4B \
--task_type embedding \
--model_type qwen3_emb \
--train_type full \
--dataset your_dataset \
--split_dataset_ratio 0.05 \
--eval_strategy steps \
--output_dir output \
--eval_steps 20 \
--num_train_epochs 5 \
--save_steps 20 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--gradient_accumulation_steps 4 \
--learning_rate 6e-6 \
--loss_type infonce \
--label_names labels \
--dataloader_drop_last true \
--deepspeed zero3
```
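For reference, `your_dataset` can point to a JSONL file of InfoNCE triples. The sketch below writes one hypothetical record; the field names (`query`, `response`, `rejected_response`) are assumptions based on ms-swift's embedding-dataset convention, so verify them against the documentation of your installed ms-swift version.
```python
import json

# Hypothetical InfoNCE records: one positive passage per query plus a list of
# hard negatives. Field names are assumptions -- check the ms-swift docs.
records = [
    {
        "query": "What is the capital of China?",
        "response": "The capital of China is Beijing.",
        "rejected_response": ["Gravity is a force that attracts two bodies."],
    },
]

with open("your_dataset.jsonl", "w", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```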
## Citation
<!-- If a paper or blog post is introducing the model, the APA and BibTeX information for that should go in this section. -->
If you find our work helpful, feel free to cite it.
```bibtex
@misc{long2025divermultistageapproachreasoningintensive,
title={DIVER: A Multi-Stage Approach for Reasoning-intensive Information Retrieval},
author={Meixiu Long and Duolin Sun and Dan Yang and Junjie Wang and Yue Shen and Jian Wang and Peng Wei and Jinjie Gu and Jiahai Wang},
year={2025},
eprint={2508.07995},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2508.07995},
}
```
|
KathirKs/qwen-2.5-0.5b
|
KathirKs
| 2025-09-20T12:15:28Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"license:apache-2.0",
"region:us"
] | null | 2025-09-20T12:13:09Z |
---
license: apache-2.0
---
|
adaptive-classifier/product-category
|
adaptive-classifier
| 2025-09-20T12:09:38Z | 37 | 0 | null |
[
"safetensors",
"adaptive-classifier",
"text-classification",
"continuous-learning",
"multilingual",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2025-07-22T15:43:22Z |
---
language: multilingual
tags:
- adaptive-classifier
- text-classification
- continuous-learning
license: apache-2.0
---
# Adaptive Classifier
This model is an instance of an [adaptive-classifier](https://github.com/codelion/adaptive-classifier) that allows for continuous learning and dynamic class addition.
You can install it with `pip install adaptive-classifier`.
## Model Details
- Base Model: answerdotai/ModernBERT-base
- Number of Classes: 4
- Total Examples: 20
- Embedding Dimension: 768
## Class Distribution
```
books: 5 examples (25.0%)
clothing: 5 examples (25.0%)
electronics: 5 examples (25.0%)
home_garden: 5 examples (25.0%)
```
## Usage
```python
from adaptive_classifier import AdaptiveClassifier
# Load the model
classifier = AdaptiveClassifier.from_pretrained("adaptive-classifier/product-category")
# Make predictions
text = "Your text here"
predictions = classifier.predict(text)
print(predictions) # List of (label, confidence) tuples
# Add new examples
texts = ["Example 1", "Example 2"]
labels = ["class1", "class2"]
classifier.add_examples(texts, labels)
```
## Training Details
- Training Steps: 11
- Examples per Class: See distribution above
- Prototype Memory: Active
- Neural Adaptation: Active
## Limitations
This model:
- Requires at least 3 examples per class
- Has a maximum of 150 examples per class
- Updates prototypes every 10 examples
## Citation
```bibtex
@software{adaptive_classifier,
title = {Adaptive Classifier: Dynamic Text Classification with Continuous Learning},
author = {Sharma, Asankhaya},
year = {2025},
publisher = {GitHub},
url = {https://github.com/codelion/adaptive-classifier}
}
```
|
EyeJack/paddle-ocr-endpoint
|
EyeJack
| 2025-09-20T11:54:26Z | 0 | 0 | null |
[
"endpoints_compatible",
"region:us"
] | null | 2025-03-23T11:06:48Z |
# PaddleOCR Inference Endpoint
A Hugging Face-compatible inference endpoint for PaddleOCR, capable of performing OCR on images in multiple languages.
## Features
- Fast and efficient OCR API using PaddleOCR
- Support for multiple languages: Chinese, English, French, German, Korean, Japanese
- Confidence threshold filtering
- Optional return of annotated images with detected text regions
- Model caching for improved performance
## Files
- `app.py`: Local testing application for the endpoint
- `handler.py`: The main handler that processes OCR requests
- `app_gradio.py`: The original Gradio-based demo application (kept for reference)
- `HF_ENDPOINT.md`: Detailed documentation for using the endpoint
## Supported Languages
- Chinese (`ch`)
- English (`en`)
- French (`fr`)
- German (`german`)
- Korean (`korean`)
- Japanese (`japan`)
## Local Testing
To test the endpoint locally:
```bash
python app.py --img_path ./example_imgs/example.jpg --lang en --confidence 0.5 --return_image True
```
This will:
1. Process the specified image with PaddleOCR
2. Print the recognized text and confidence scores
3. Save the results to `test_result.json`
4. Save the annotated image to `test_result.jpg` (if return_image is True)
## Hugging Face Deployment
See `HF_ENDPOINT.md` for detailed instructions on deploying and using this as a Hugging Face Inference Endpoint.
## Example Usage
```python
import requests
import base64
from PIL import Image
import io
# Load image
image = Image.open("example.jpg")
buffered = io.BytesIO()
image.save(buffered, format="JPEG")
encoded_image = base64.b64encode(buffered.getvalue()).decode('utf-8')
# API endpoint (when deployed to Hugging Face)
API_URL = "https://your-endpoint-url.huggingface.cloud"
headers = {
"Authorization": f"Bearer {API_TOKEN}",
"Content-Type": "application/json"
}
# Request data
data = {
"inputs": encoded_image,
"parameters": {
"lang": "en",
"confidence": 0.5,
"return_image": False
}
}
# Send request
response = requests.post(API_URL, headers=headers, json=data)
result = response.json()
# Process results
for item in result["result"]:
print(f"Text: {item['text']}, Confidence: {item['score']}")
```
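If `return_image` is set to `True`, the response also carries an annotated image. Continuing from the snippet above, the fragment below decodes and saves it; the `"image"` response key is an assumption, so check `HF_ENDPOINT.md` for the authoritative schema.
```python
# Decode and save the annotated image when return_image=True
# ("image" as the response key is an assumption -- see HF_ENDPOINT.md).
if result.get("image"):
    with open("annotated_result.jpg", "wb") as f:
        f.write(base64.b64decode(result["image"]))
```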
## Original Demo
The original Gradio-based demo can still be run with:
```bash
python app_gradio.py
```
---
title: Paddle Ocr Demo
emoji: 🦀
colorFrom: indigo
colorTo: gray
sdk: gradio
sdk_version: 5.8.0
app_file: app_gradio.py
pinned: false
license: mit
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
OrangeCrystalFox/Qwen3-0.6B-Gensyn-Swarm-lethal_jagged_owl
|
OrangeCrystalFox
| 2025-09-20T11:38:36Z | 19 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am lethal_jagged_owl",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-27T01:53:15Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am lethal_jagged_owl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Bhavana-3core/resume
|
Bhavana-3core
| 2025-09-20T11:33:36Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-20T11:33:36Z |
---
license: apache-2.0
---
|
ilkerduman/Qwen3-0.6B-Gensyn-Swarm-silent_wise_kangaroo
|
ilkerduman
| 2025-09-20T10:56:05Z | 13 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am silent_wise_kangaroo",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T19:32:23Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am silent_wise_kangaroo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DaVinciCode/doctra-docres-mbd
|
DaVinciCode
| 2025-09-20T10:25:20Z | 0 | 0 |
pytorch
|
[
"pytorch",
"document-image-restoration",
"mask-background-detection",
"preprocessing",
"dtsprompt",
"doctra",
"image-segmentation",
"license:mit",
"region:us"
] |
image-segmentation
| 2025-09-20T09:46:40Z |
---
license: mit
library_name: pytorch
pipeline_tag: image-segmentation
tags:
- document-image-restoration
- mask-background-detection
- preprocessing
- dtsprompt
- doctra
model-index:
- name: DocRes MBD (mbd.pkl)
results: []
---
# DocRes MBD Weights (mbd.pkl)
These are the **MBD weights** (`mbd.pkl`) used by the **DocRes** model (CVPR 2024), rehosted for use in the [Doctra](https://github.com/AdemBoukhris457/Doctra) library.
---
## 📖 Source
- Original repository: [ZZZHANG-jx/DocRes](https://github.com/ZZZHANG-jx/DocRes)
- Paper: *DocRes: Dynamic Task-Specific Prompt for Generalist Document Image Restoration* (CVPR 2024)
---
## ⚖️ License
MIT License (see LICENSE file).
Weights are redistributed under the same terms, with attribution to the original authors.
---
## ✅ Intended Use
The `mbd.pkl` weights are used for **Mask and Background Detection (MBD)**, a critical component of DocRes for:
- Generating document masks
- Producing background priors
- Supporting the Dynamic Task-Specific Prompt (DTSPrompt) mechanism
These weights are required to prepare task-specific prompts for the main `docres.pkl` model.
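As a quick smoke test, the checkpoint can be fetched and inspected as below. This is a minimal sketch that assumes `mbd.pkl` is a torch-serialized checkpoint; the network definition that consumes it lives in the original DocRes repository.
```python
import torch
from huggingface_hub import hf_hub_download

# Download mbd.pkl from this repo and load it on CPU. Assumes a
# torch-serialized checkpoint; the MBD model class itself comes from
# the original ZZZHANG-jx/DocRes repository.
ckpt_path = hf_hub_download(repo_id="DaVinciCode/doctra-docres-mbd", filename="mbd.pkl")
checkpoint = torch.load(ckpt_path, map_location="cpu")
print(type(checkpoint))
```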
---
## ⚠️ Limitations
- Designed specifically for document mask/background detection.
- Performance depends on the quality of scanned/photographed inputs.
- Should be used in combination with the main DocRes weights (`docres.pkl`) for full restoration capability.
---
|
raphael2028/Qwen2.5-7B-Instruct-Gensyn-Swarm-flexible_freckled_eagle
|
raphael2028
| 2025-09-20T10:00:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am flexible_freckled_eagle",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T09:56:12Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am flexible_freckled_eagle
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Abolfazl79/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-tiny_snappy_manatee
|
Abolfazl79
| 2025-09-20T09:54:41Z | 114 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am tiny_snappy_manatee",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T10:09:19Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am tiny_snappy_manatee
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
FelixMartins/a2c-PandaReachDense-v3
|
FelixMartins
| 2025-09-20T09:39:32Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-20T09:36:33Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.25 +/- 0.10
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list for the exact name):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the trained agent from the Hub and load it (filename is an assumption)
checkpoint = load_from_hub("FelixMartins/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
decryptellix/Llama-3.1-8B-TK-CP-LoRA-test
|
decryptellix
| 2025-09-20T07:50:13Z | 8 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:decryptellix/Llama-3.1-8B-CP",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"arxiv:1910.09700",
"base_model:decryptellix/Llama-3.1-8B-CP",
"region:us"
] |
text-generation
| 2025-09-20T04:58:02Z |
---
base_model: decryptellix/Llama-3.1-8B-CP
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:decryptellix/Llama-3.1-8B-CP
- lora
- sft
- transformers
- trl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
mradermacher/Picaro-24b-2506-636-i1-GGUF
|
mradermacher
| 2025-09-20T07:42:52Z | 79 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:AliCat2/Picaro-24b-2506-636",
"base_model:quantized:AliCat2/Picaro-24b-2506-636",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-09-19T15:27:52Z |
---
base_model: AliCat2/Picaro-24b-2506-636
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/AliCat2/Picaro-24b-2506-636
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Picaro-24b-2506-636-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Picaro-24b-2506-636-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Picaro-24b-2506-636-i1-GGUF/resolve/main/Picaro-24b-2506-636.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Picaro-24b-2506-636-i1-GGUF/resolve/main/Picaro-24b-2506-636.i1-IQ1_S.gguf) | i1-IQ1_S | 5.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Picaro-24b-2506-636-i1-GGUF/resolve/main/Picaro-24b-2506-636.i1-IQ1_M.gguf) | i1-IQ1_M | 5.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Picaro-24b-2506-636-i1-GGUF/resolve/main/Picaro-24b-2506-636.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Picaro-24b-2506-636-i1-GGUF/resolve/main/Picaro-24b-2506-636.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/Picaro-24b-2506-636-i1-GGUF/resolve/main/Picaro-24b-2506-636.i1-IQ2_S.gguf) | i1-IQ2_S | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/Picaro-24b-2506-636-i1-GGUF/resolve/main/Picaro-24b-2506-636.i1-IQ2_M.gguf) | i1-IQ2_M | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/Picaro-24b-2506-636-i1-GGUF/resolve/main/Picaro-24b-2506-636.i1-Q2_K_S.gguf) | i1-Q2_K_S | 8.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Picaro-24b-2506-636-i1-GGUF/resolve/main/Picaro-24b-2506-636.i1-Q2_K.gguf) | i1-Q2_K | 9.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Picaro-24b-2506-636-i1-GGUF/resolve/main/Picaro-24b-2506-636.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Picaro-24b-2506-636-i1-GGUF/resolve/main/Picaro-24b-2506-636.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/Picaro-24b-2506-636-i1-GGUF/resolve/main/Picaro-24b-2506-636.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Picaro-24b-2506-636-i1-GGUF/resolve/main/Picaro-24b-2506-636.i1-IQ3_S.gguf) | i1-IQ3_S | 10.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Picaro-24b-2506-636-i1-GGUF/resolve/main/Picaro-24b-2506-636.i1-IQ3_M.gguf) | i1-IQ3_M | 10.8 | |
| [GGUF](https://huggingface.co/mradermacher/Picaro-24b-2506-636-i1-GGUF/resolve/main/Picaro-24b-2506-636.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Picaro-24b-2506-636-i1-GGUF/resolve/main/Picaro-24b-2506-636.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Picaro-24b-2506-636-i1-GGUF/resolve/main/Picaro-24b-2506-636.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/Picaro-24b-2506-636-i1-GGUF/resolve/main/Picaro-24b-2506-636.i1-Q4_0.gguf) | i1-Q4_0 | 13.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Picaro-24b-2506-636-i1-GGUF/resolve/main/Picaro-24b-2506-636.i1-Q4_K_S.gguf) | i1-Q4_K_S | 13.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Picaro-24b-2506-636-i1-GGUF/resolve/main/Picaro-24b-2506-636.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Picaro-24b-2506-636-i1-GGUF/resolve/main/Picaro-24b-2506-636.i1-Q4_1.gguf) | i1-Q4_1 | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/Picaro-24b-2506-636-i1-GGUF/resolve/main/Picaro-24b-2506-636.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.4 | |
| [GGUF](https://huggingface.co/mradermacher/Picaro-24b-2506-636-i1-GGUF/resolve/main/Picaro-24b-2506-636.i1-Q5_K_M.gguf) | i1-Q5_K_M | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/Picaro-24b-2506-636-i1-GGUF/resolve/main/Picaro-24b-2506-636.i1-Q6_K.gguf) | i1-Q6_K | 19.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
UnifiedHorusRA/Wan_2.2_5b_t2v_Big_sagging_butts_wide_hips_big_sagging_breasts
|
UnifiedHorusRA
| 2025-09-20T07:09:58Z | 21 | 0 | null |
[
"custom",
"region:us"
] | null | 2025-09-04T05:29:12Z |
<!-- CIVITAI_MODEL_ID: 1853378 -->
<!-- TITLE_BLOCK_START -->
# Wan 2.2 5b t2v Big sagging butts, wide hips, big sagging breasts
**Creator**: [shadmar138](https://civitai.com/user/shadmar138)
**Civitai Model Page**: [https://civitai.com/models/1853378](https://civitai.com/models/1853378)
<!-- TITLE_BLOCK_END -->
<!-- VERSIONS_TABLE_START -->
## Versions Included
| Preview | Version Name | Folder on Hugging Face | Civitai Link |
|---|---|---|---|
| <img src="https://huggingface.co/UnifiedHorusRA/Wan_2.2_5b_t2v_Big_sagging_butts_wide_hips_big_sagging_breasts/resolve/main/v1.0/previews/93388417.jpg" width="150" alt="Preview for v1.0"> | v1.0 | [`v1.0`](https://huggingface.co/UnifiedHorusRA/Wan_2.2_5b_t2v_Big_sagging_butts_wide_hips_big_sagging_breasts/tree/main/v1.0) | [Link](https://civitai.com/models/1853378?modelVersionId=2097470) |
<!-- VERSIONS_TABLE_END -->
|
vemanarandhi1999/finetuned-gpt-2-sentiment-classification
|
vemanarandhi1999
| 2025-09-20T06:07:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T06:06:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
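The code section above is still a placeholder. As a hedged starting point, and assuming this repo holds a standard GPT-2 checkpoint (per its `gpt2`/`text-generation` tags), loading with `transformers` might look like:

```python
# Hedged sketch: assumes a standard GPT-2 checkpoint in this repo.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="vemanarandhi1999/finetuned-gpt-2-sentiment-classification",
)
print(generator("This movie was", max_new_tokens=20)[0]["generated_text"])
```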
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vibhorag101/llama-2-13b-chat-hf-phr_mental_therapy
|
vibhorag101
| 2025-09-20T05:48:49Z | 1,829 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"en",
"dataset:vibhorag101/phr_mental_therapy_dataset",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-17T05:14:36Z |
---
license: mit
datasets:
- vibhorag101/phr_mental_therapy_dataset
# - jerryjalapeno/nart-100k-synthetic
language:
- en
pipeline_tag: text-generation
---
# Model Card
<!-- Provide a quick summary of what the model is/does. -->
- This model is a finetune of the **llama-2-13b-chat-hf** model on a therapy dataset.
- The model aims to provide basic therapeutic support to users and help improve their mental health until they can seek professional help.
- The model was tuned to encourage cheerful responses; the system prompt used is given below.
## Model Details
### Training Hardware
- RTX A5000 24GB
- 48-core Intel Xeon
- 128 GB RAM
### Model Hyperparameters
- The finetuning was done with this [training script](https://github.com/phr-winter23/phr-mental-chat/blob/main/finetuneModel/finetuneScriptLLaMA-2.ipynb); a hedged configuration sketch based on the hyperparameters below follows the list.
- The ShareGPT-format dataset was converted to the Llama-2 training format using this [script](https://github.com/phr-winter23/phr-mental-chat/blob/main/finetuneModel/llamaDataMaker.ipynb).
- num_train_epochs = 2
- per_device_train_batch_size = 2
- per_device_eval_batch_size = 2
- gradient_accumulation_steps = 1
- max_seq_length = 4096
- lora_r = 64
- lora_alpha = 16
- lora_dropout = 0.1
- use_4bit = True
- bnb_4bit_compute_dtype = "float16"
- bnb_4bit_quant_type = "nf4"
- use_nested_quant = False
- fp16 = False
- bf16 = True
- Data Sample: 1000 (80:20 split)
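As referenced above, here is a minimal sketch of how these hyperparameters might map onto the usual QLoRA configuration objects from `transformers` and `peft`. This is illustrative only; the linked training script is the authoritative source.

```python
# Hedged sketch: maps the listed hyperparameters onto common QLoRA config
# objects; consult the linked training script for the exact setup.
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # use_4bit = True
    bnb_4bit_compute_dtype=torch.float16,  # bnb_4bit_compute_dtype = "float16"
    bnb_4bit_quant_type="nf4",             # bnb_4bit_quant_type = "nf4"
    bnb_4bit_use_double_quant=False,       # use_nested_quant = False
)

peft_config = LoraConfig(
    r=64,              # lora_r
    lora_alpha=16,     # lora_alpha
    lora_dropout=0.1,  # lora_dropout
    task_type="CAUSAL_LM",
)
```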
### Model System Prompt
> You are a helpful and joyous mental therapy assistant. Always answer as helpfully and cheerfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
>
> If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
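For illustration (this snippet is not from the original card), the system prompt can be applied through the tokenizer's chat template, assuming the repo's tokenizer ships the standard Llama-2 chat template:

```python
# Hedged sketch: wraps the system prompt in the Llama-2 chat format.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vibhorag101/llama-2-13b-chat-hf-phr_mental_therapy")
messages = [
    {"role": "system", "content": "You are a helpful and joyous mental therapy assistant. ..."},
    {"role": "user", "content": "I have been feeling anxious lately."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
```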
#### Model Training Data

### Model Benchmarks
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vibhorag101__llama-2-13b-chat-hf-phr_mental_therapy)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 42.5 |
| ARC (25-shot) | 38.82 |
| HellaSwag (10-shot) | 72.76 |
| MMLU (5-shot) | 23.12 |
| TruthfulQA (0-shot) | 46.92 |
| Winogrande (5-shot) | 65.59 |
| GSM8K (5-shot) | 7.81 |
|
aamijar/MaskLLM-Llama-2-7b-hf-lora-r8-sst2
|
aamijar
| 2025-09-20T05:22:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-20T05:22:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
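The code section above is still a placeholder. Judging only from the repo name (Llama-2-7b-hf, LoRA r=8, SST-2), this repo likely holds a LoRA adapter; a hedged PEFT-style load might look like:

```python
# Hedged sketch: assumes a standard PEFT adapter layout on top of
# meta-llama/Llama-2-7b-hf; both assumptions should be verified.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "aamijar/MaskLLM-Llama-2-7b-hf-lora-r8-sst2")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```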
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
XCarleX/Apex-l40s
|
XCarleX
| 2025-09-20T05:08:06Z | 0 | 0 | null |
[
"text-classification",
"license:agpl-3.0",
"region:us"
] |
text-classification
| 2025-09-19T23:49:46Z |
---
license: agpl-3.0
pipeline_tag: text-classification
---
|
handawon/lama3_hdw_unslot_data
|
handawon
| 2025-09-20T04:47:43Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-20T04:45:28Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** handawon
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
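Since the upload is in GGUF format, one hedged way to run it locally is with `llama-cpp-python`; the filename pattern below is an assumption, so check the repo's file list:

```python
# Hedged sketch: downloads and runs the GGUF via llama-cpp-python; the
# filename glob is an assumption about what the repo contains.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="handawon/lama3_hdw_unslot_data",
    filename="*.gguf",  # pick the actual quant file from the repo
)
print(llm("Hello, ", max_tokens=32)["choices"][0]["text"])
```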
|
amethyst9/1501315
|
amethyst9
| 2025-09-20T02:39:03Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-20T02:39:01Z |
[View on Civ Archive](https://civarchive.com/models/1416624?modelVersionId=1601181)
|
amethyst9/730388
|
amethyst9
| 2025-09-20T02:37:07Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-20T02:37:04Z |
[View on Civ Archive](https://civarchive.com/models/150889?modelVersionId=816770)
|
amethyst9/526903
|
amethyst9
| 2025-09-20T02:36:36Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-20T02:36:32Z |
[View on Civ Archive](https://civarchive.com/models/549914?modelVersionId=611849)
|
ultratopaz/1033328
|
ultratopaz
| 2025-09-20T02:36:00Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-20T02:35:58Z |
[View on Civ Archive](https://civarchive.com/models/603182?modelVersionId=1128689)
|
seraphimzzzz/112905
|
seraphimzzzz
| 2025-09-20T02:32:10Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-20T02:32:06Z |
[View on Civ Archive](https://civarchive.com/models/137401?modelVersionId=151673)
|
amethyst9/1072379
|
amethyst9
| 2025-09-20T02:26:05Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-20T02:26:03Z |
[View on Civ Archive](https://civarchive.com/models/1040315?modelVersionId=1167094)
|
amethyst9/1423007
|
amethyst9
| 2025-09-20T02:24:33Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-20T02:24:31Z |
[View on Civ Archive](https://civarchive.com/models/1348320?modelVersionId=1522910)
|
thaddeusk/sesame-csm-elise-gguf
|
thaddeusk
| 2025-09-20T02:20:05Z | 0 | 0 | null |
[
"gguf",
"csm",
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T22:53:18Z |
---
license: apache-2.0
---
Quantized keanteng/sesame-csm-elise for csm.rs.

Created with:

```
python scripts/quantize.py \
  --model-path /path/to/sesame/csm-1b/model.safetensors \
  --output-path ./q8.gguf \
  --qtype q8_0
```
|
nikilr/Llama3.1-8B-clustertax50
|
nikilr
| 2025-09-20T02:12:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T02:11:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
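The code section above is still a placeholder. Assuming a standard Llama-architecture causal LM (per the repo's `llama`/`text-generation` tags), a hedged load might look like:

```python
# Hedged sketch: assumes a standard causal LM; dtype/device settings are
# illustrative, not taken from the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nikilr/Llama3.1-8B-clustertax50"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```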
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|