Dataset schema (column: type, observed range):

- `modelId`: string (length 5 to 139)
- `author`: string (length 2 to 42)
- `last_modified`: timestamp[us, tz=UTC] (2020-02-15 11:33:14 to 2025-09-04 00:37:20)
- `downloads`: int64 (0 to 223M)
- `likes`: int64 (0 to 11.7k)
- `library_name`: string (537 classes)
- `tags`: list (length 1 to 4.05k)
- `pipeline_tag`: string (55 classes)
- `createdAt`: timestamp[us, tz=UTC] (2022-03-02 23:29:04 to 2025-09-04 00:37:04)
- `card`: string (length 11 to 1.01M)

modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
apishi/blockassist-bc-foraging_foraging_duck_1754748044
|
apishi
| 2025-08-09T14:02:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"foraging foraging duck",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T14:02:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- foraging foraging duck
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
roeker/blockassist-bc-quick_wiry_owl_1754747841
|
roeker
| 2025-08-09T13:58:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T13:58:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pom0/blockassist-bc-scurrying_playful_crab_1754747621
|
pom0
| 2025-08-09T13:53:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scurrying playful crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T13:53:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scurrying playful crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
frisk038/frisk_test
|
frisk038
| 2025-08-09T13:53:41Z | 28 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-09T12:21:14Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Frisk_Test
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/frisk038/frisk_test/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('frisk038/frisk_test', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/frisk038/frisk_test/discussions) to add images that show off what you’ve made with this LoRA.
|
Jboadu/test-model-2-pretrain
|
Jboadu
| 2025-08-09T13:52:36Z | 36 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"axolotl",
"generated_from_trainer",
"conversational",
"dataset:representation_variation_GAIA_Raw_Training_Data.jsonl",
"dataset:text_chunks_GAIA_Raw_Training_Data.jsonl",
"dataset:inferred_facts_GAIA_Raw_Training_Data.jsonl",
"base_model:Jboadu/test-model-1-pretrain",
"base_model:finetune:Jboadu/test-model-1-pretrain",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-09T13:20:56Z |
---
library_name: transformers
license: apache-2.0
base_model: Jboadu/test-model-1-pretrain
tags:
- axolotl
- generated_from_trainer
datasets:
- representation_variation_GAIA_Raw_Training_Data.jsonl
- text_chunks_GAIA_Raw_Training_Data.jsonl
- inferred_facts_GAIA_Raw_Training_Data.jsonl
model-index:
- name: test-model-2-pretrain
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.12.0`
```yaml
base_model: Jboadu/test-model-1-pretrain
tokenizer_type: AutoTokenizer
model_type: AutoModelForCausalLM
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: representation_variation_GAIA_Raw_Training_Data.jsonl
type: completion
- path: text_chunks_GAIA_Raw_Training_Data.jsonl
type: completion
- path: inferred_facts_GAIA_Raw_Training_Data.jsonl
type: completion
dataset_prepared_path: last_run_prepared
output_dir: ./model-output
seed: 1337
sequence_len: 5000
sample_packing: true
pad_to_sequence_len: false
shuffle_merged_datasets: true
gradient_accumulation_steps: 75
micro_batch_size: 2
eval_batch_size: 4
num_epochs: 4
optimizer: paged_adamw_8bit
lr_scheduler: constant
learning_rate: 2.0e-05
noisy_embedding_alpha: 5
weight_decay: 0
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
gradient_checkpointing: true
logging_steps: 1
xformers_attention: false
flash_attention: true
chat_template: chatml
auto_resume_from_checkpoints: false
warmup_ratio: 0.1
evals_per_epoch: 1
val_set_size: 0.04
saves_per_epoch: 1
eval_sample_packing: false
save_total_limit: 2
special_tokens:
pad_token: <unk>
use_liger_kernel: true
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
liger_layer_norm: true
liger_fused_linear_cross_entropy: true
sequence_length: 10000
wandb_project: test-project
wandb_entity: ''
wandb_watch: ''
wandb_run_id: ''
wandb_log_model: ''
hub_model_id: Jboadu/test-model-2-pretrain
hub_strategy: all_checkpoints
```
</details><br>
# test-model-2-pretrain
This model is a fine-tuned version of [Jboadu/test-model-1-pretrain](https://huggingface.co/Jboadu/test-model-1-pretrain) on the representation_variation_GAIA_Raw_Training_Data.jsonl, the text_chunks_GAIA_Raw_Training_Data.jsonl and the inferred_facts_GAIA_Raw_Training_Data.jsonl datasets.
It achieves the following results on the evaluation set:
- Loss: 0.9761
- Memory/max Mem Active(gib): 31.49
- Memory/max Mem Allocated(gib): 31.49
- Memory/device Mem Reserved(gib): 33.08
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 4
- seed: 1337
- gradient_accumulation_steps: 75
- total_train_batch_size: 150
- optimizer: paged_adamw_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: constant
- training_steps: 8
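The total train batch size above is derived rather than set directly. A quick check using only values from this list (a single-GPU world size of 1 is an assumption, since the card does not state the device count):

```python
micro_batch_size = 2
gradient_accumulation_steps = 75
world_size = 1  # assumed: one GPU; not stated in the card

# effective (total) train batch size per optimizer step
total_train_batch_size = micro_batch_size * gradient_accumulation_steps * world_size
print(total_train_batch_size)  # 150, matching the value reported above
```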
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mem Active(gib) | Mem Allocated(gib) | Mem Reserved(gib) |
|:-------------:|:------:|:----:|:---------------:|:---------------:|:------------------:|:-----------------:|
| No log | 0 | 0 | 1.6467 | 19.79 | 19.79 | 24.59 |
| 3.0113 | 0.8021 | 2 | 1.8388 | 31.49 | 31.49 | 33.08 |
| 1.5032 | 1.4011 | 4 | 1.4474 | 31.49 | 31.49 | 33.08 |
| 1.1777 | 2.0 | 6 | 1.1725 | 31.49 | 31.49 | 33.08 |
| 0.9505 | 2.8021 | 8 | 0.9761 | 31.49 | 31.49 | 33.08 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.7.1+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4
|
fatepurriyaz/blockassist-bc-aquatic_pawing_pig_1754747418
|
fatepurriyaz
| 2025-08-09T13:51:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"aquatic pawing pig",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T13:51:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- aquatic pawing pig
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
koita/blockassist-bc-scampering_gentle_turkey_1754747213
|
koita
| 2025-08-09T13:48:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scampering gentle turkey",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T13:48:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scampering gentle turkey
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dorian2B/Vera-1.0-Nano-Preview-GGUF
|
Dorian2B
| 2025-08-09T13:44:47Z | 383 | 1 | null |
[
"gguf",
"French",
"LLM",
"2.6B",
"Nano",
"Compact",
"llama-cpp",
"text-generation",
"fr",
"en",
"es",
"it",
"pl",
"base_model:Dorian2B/Vera-1.0-Nano-Preview",
"base_model:quantized:Dorian2B/Vera-1.0-Nano-Preview",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-07-10T16:04:08Z |
---
license: apache-2.0
language:
- fr
- en
- es
- it
- pl
pipeline_tag: text-generation
tags:
- French
- LLM
- 2.6B
- Nano
- Compact
- llama-cpp
base_model:
- Dorian2B/Vera-1.0-Nano-Preview
---
<!DOCTYPE html>
<html lang="fr">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Vera 1.0 (Nano) - Preview</title>
<style>
/* Animations for the liquid background */
@keyframes liquid-bg {
0% { transform: translate(0, 0) scale(1); }
50% { transform: translate(10%, -10%) scale(1.2); }
100% { transform: translate(0, 0) scale(1); }
}
/* Reset margins and paddings, and set box-sizing */
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
/* Page body styles */
body {
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif;
background: #0D1117; /* Dark background color */
min-height: 100vh; /* Minimum height of 100% of the viewport */
color: #e0e0e0; /* Light text color */
line-height: 1.6; /* Line height for better readability */
position: relative;
overflow-x: hidden; /* Prevents horizontal scrolling */
}
/* Pseudo-element for the liquid background effect */
body::before {
content: '';
position: fixed;
top: 20%;
left: 20%;
width: 60%;
height: 60%;
background: radial-gradient(circle at center, rgba(56, 182, 255, 0.15) 0%, transparent 60%); /* Blue radial gradient */
filter: blur(100px); /* Strong blur effect */
z-index: -1; /* Places the element behind the content */
animation: liquid-bg 25s infinite ease-in-out; /* Background animation */
}
/* Main content container */
.container {
max-width: 750px; /* Maximum width */
margin: 0 auto; /* Horizontal centering */
padding: 40px 20px; /* Inner spacing */
}
/* Header section */
.header {
background: rgba(22, 27, 34, 0.5); /* Semi-transparent background */
backdrop-filter: blur(20px); /* Blur effect behind the element */
-webkit-backdrop-filter: blur(20px);
border: 1px solid rgba(255, 255, 255, 0.1); /* Light border */
border-radius: 20px; /* Rounded corners */
padding: 30px 25px; /* Inner spacing */
text-align: center; /* Centered text */
margin-bottom: 30px; /* Bottom margin */
box-shadow: 0 8px 32px rgba(0, 0, 0, 0.2); /* Drop shadow */
}
/* Main title */
.header h1 {
font-size: 2.8em; /* Font size */
background: linear-gradient(135deg, #58a6ff, #9370db); /* Color gradient for the text */
-webkit-background-clip: text; /* Clips the background to the text */
-webkit-text-fill-color: transparent; /* Makes the text transparent so the gradient shows through */
background-clip: text;
margin-bottom: 10px; /* Bottom margin */
font-weight: 700; /* Font weight */
}
/* Subtitle */
.subtitle {
color: #b0b0b0; /* Text color */
font-size: 1.2em; /* Font size */
font-style: italic; /* Italic text */
margin-bottom: 20px; /* Bottom margin */
}
/* Badge container */
.badges {
display: flex; /* Uses flexbox */
flex-wrap: wrap; /* Lets items wrap to the next line */
gap: 10px; /* Spacing between badges */
justify-content: center; /* Centers the badges */
}
/* Badge styles */
.badge {
background: rgba(56, 139, 253, 0.15); /* Semi-transparent background */
border: 1px solid rgba(56, 139, 253, 0.3); /* Border */
color: #58a6ff; /* Text color */
padding: 6px 14px; /* Inner spacing */
border-radius: 30px; /* Strongly rounded corners */
font-size: 0.85em; /* Font size */
font-weight: 500; /* Font weight */
}
/* Card styles (content sections) */
.card {
background: rgba(22, 27, 34, 0.5); /* Semi-transparent background */
backdrop-filter: blur(20px); /* Blur effect */
-webkit-backdrop-filter: blur(20px);
border: 1px solid rgba(255, 255, 255, 0.1); /* Border */
border-radius: 16px; /* Rounded corners */
padding: 25px; /* Inner spacing */
margin-bottom: 20px; /* Bottom margin */
box-shadow: 0 4px 20px rgba(0, 0, 0, 0.1); /* Drop shadow */
transition: box-shadow 0.3s ease; /* Smooth shadow transition */
}
/* Card hover effect */
.card:hover {
box-shadow: 0 8px 30px rgba(0, 0, 0, 0.2); /* Stronger shadow */
}
/* Card titles */
.card h2 {
color: #58a6ff; /* Text color */
font-size: 1.5em; /* Font size */
margin-bottom: 15px; /* Bottom margin */
display: flex; /* Uses flexbox to align the emoji and the text */
align-items: center; /* Vertical alignment */
gap: 10px; /* Spacing between the emoji and the text */
border-bottom: 1px solid rgba(255, 255, 255, 0.1); /* Separator line */
padding-bottom: 10px; /* Spacing below the line */
}
/* Paragraphs inside cards */
.card p {
margin-bottom: 12px; /* Bottom margin */
color: #c0c0c0; /* Text color */
text-align: left; /* Left-aligned text */
}
/* Unordered lists inside cards */
.card ul {
margin: 15px 0; /* Vertical margins */
padding-left: 20px; /* Left indent */
}
/* List items */
.card li {
margin-bottom: 8px; /* Bottom margin */
color: #c0c0c0; /* Text color */
}
/* Features container */
.features {
display: grid; /* Uses CSS Grid */
gap: 12px; /* Spacing between items */
margin: 15px 0; /* Vertical margins */
}
/* Individual feature styles */
.feature {
background: rgba(255, 255, 255, 0.05); /* Semi-transparent background */
padding: 15px; /* Inner spacing */
border-radius: 10px; /* Rounded corners */
border-left: 4px solid #58a6ff; /* Colored left border */
}
/* Bold text inside features */
.feature strong {
color: #58a6ff; /* Text color */
display: block; /* Block-level element */
margin-bottom: 6px; /* Bottom margin */
font-size: 1.1em; /* Font size */
}
/* Stats container */
.stats {
display: grid; /* Uses CSS Grid */
grid-template-columns: repeat(auto-fit, minmax(120px, 1fr)); /* Responsive columns */
gap: 15px; /* Spacing between items */
margin: 20px 0; /* Vertical margins */
}
/* Individual stat styles */
.stat {
text-align: center; /* Centered text */
background: rgba(255, 255, 255, 0.03); /* Semi-transparent background */
padding: 15px; /* Inner spacing */
border-radius: 12px; /* Rounded corners */
border: 1px solid rgba(255, 255, 255, 0.08); /* Border */
}
/* Stat number */
.stat-number {
font-size: 1.8em; /* Font size */
font-weight: bold; /* Bold */
color: #58a6ff; /* Text color */
display: block; /* Block-level element */
}
/* Stat label */
.stat-label {
font-size: 0.9em; /* Font size */
color: #b0b0b0; /* Text color */
margin-top: 4px; /* Top margin */
}
/* Comparison-table container (allows horizontal scrolling) */
.comparison-table {
overflow-x: auto; /* Enables horizontal scrolling when needed */
margin: 15px 0; /* Vertical margins */
}
/* Table styles */
.comparison-table table {
width: 100%; /* Full width */
border-collapse: collapse; /* Removes spacing between cells */
background: transparent; /* Transparent background */
border-radius: 10px; /* Rounded corners */
overflow: hidden; /* Hides overflowing content */
min-width: 600px; /* Minimum width to force scrolling on small screens */
}
/* Table headers and cells */
.comparison-table th,
.comparison-table td {
padding: 14px; /* Inner spacing */
text-align: left; /* Left-aligned text */
border-bottom: 1px solid rgba(255, 255, 255, 0.1); /* Bottom border */
vertical-align: middle; /* Middle vertical alignment */
}
/* Table headers */
.comparison-table th {
background: rgba(56, 139, 253, 0.1); /* Semi-transparent background */
color: #58a6ff; /* Text color */
font-weight: 600; /* Font weight */
}
/* Table cells */
.comparison-table td {
color: #c0c0c0; /* Text color */
}
/* Limitations section */
.limitations {
background: rgba(255, 110, 199, 0.05); /* Semi-transparent pink background */
border: 1px solid rgba(255, 110, 199, 0.2); /* Pink border */
border-radius: 10px; /* Rounded corners */
padding: 15px; /* Inner spacing */
margin: 15px 0; /* Vertical margins */
}
/* Limitations title */
.limitations h3 {
color: #ff6ec7; /* Pink text color */
margin-bottom: 10px; /* Bottom margin */
}
/* Roadmap item */
.roadmap-item {
margin: 10px 0; /* Vertical margins */
padding: 10px 15px; /* Inner spacing */
border-left: 3px solid #58a6ff; /* Colored left border */
background: rgba(255, 255, 255, 0.03); /* Semi-transparent background */
}
/* Style for completed roadmap items */
.roadmap-item.completed {
border-left-color: #f7b731; /* Orange left border */
}
/* Style for in-progress roadmap items */
.roadmap-item.in-progress {
border-left-color: #3fb950; /* Green left border */
}
/* Call-to-action section */
.cta-section {
text-align: center; /* Centered text */
padding: 25px; /* Inner spacing */
}
/* Call-to-action and download buttons */
.cta-button, .download-btn {
background: linear-gradient(135deg, #3895ff, #7a5cff); /* Gradient background */
color: #fff; /* White text color */
padding: 12px 30px; /* Inner spacing */
border: none; /* No border */
border-radius: 30px; /* Strongly rounded corners */
font-size: 1.1em; /* Font size */
font-weight: 600; /* Font weight */
cursor: pointer; /* Pointer cursor */
text-decoration: none; /* No underline */
display: inline-block; /* Inline-block display */
box-shadow: 0 4px 20px rgba(56, 182, 255, 0.2); /* Drop shadow */
transition: transform 0.2s ease, box-shadow 0.2s ease; /* Smooth transitions */
}
/* Button hover effect */
.cta-button:hover, .download-btn:hover {
transform: scale(1.05) translateY(-2px); /* Enlarges and shifts slightly upward */
box-shadow: 0 6px 25px rgba(56, 182, 255, 0.3); /* Stronger shadow */
}
/* Download-button-specific styles */
.download-btn {
padding: 8px 18px; /* Reduced inner spacing */
font-size: 0.9em; /* Reduced font size */
}
/* Footer */
.footer {
text-align: center; /* Centered text */
color: rgba(255, 255, 255, 0.6); /* Semi-transparent text color */
margin-top: 30px; /* Top margin */
font-size: 0.9em; /* Font size */
padding: 20px; /* Inner spacing */
border-top: 1px solid rgba(255, 255, 255, 0.1); /* Top separator line */
}
/* Emoji styles */
.emoji {
font-size: 1.2em; /* Font size */
margin-right: 5px; /* Right margin */
}
/* Styles for h3 headings */
h3 {
color: #9370db; /* Purple text color */
margin: 20px 0 10px 0; /* Margins */
}
/* Media queries for responsiveness on small screens */
@media (max-width: 480px) {
.container {
padding: 20px 15px; /* Reduced inner spacing */
}
.header h1 {
font-size: 2.2em; /* Reduced font size */
}
.stats {
grid-template-columns: 1fr 1fr; /* Two columns */
}
.comparison-table {
font-size: 0.9em; /* Reduced font size */
}
}
</style>
</head>
<body>
<div class="container">
<div class="header">
<h1>Vera 1.0 (Nano) - Preview</h1>
<p class="subtitle">Votre assistant IA personnel intelligent, conçu pour vous accompagner au quotidien</p>
<div class="badges">
<span class="badge">Apache 2.0</span>
<span class="badge">2.6B paramètres</span>
<span class="badge">8K contexte</span>
<span class="badge">FR | EN | IT | ES | PL</span>
</div>
</div>
<div class="card">
<h2><span class="emoji">🤖</span> Introduction</h2>
<p>Vera 1.0 (Nano) is a compact, optimized version of our personal AI assistant, designed to deliver impressive performance with a light footprint. With 2.6 billion parameters and an 8K-token context, Vera Nano is ideal for use on devices with more limited resources while retaining natural, relevant conversational ability.</p>
<p>What makes Vera (Nano) unique is its ability to understand the context of your requests and provide personalized, relevant answers even in a smaller format. Whether you need help writing an email, understanding a simple concept, or simply chatting about a variety of topics, Vera Nano naturally adapts to your communication style and preferences.</p>
<p>The Vera (Nano) model is based in part on the Gemma 2 2B architecture, enriched with optimizations specific to personal assistance. Vera Nano excels at conversation in French and English, with extended support for Italian, Spanish, and Polish. This natural multilingualism makes Vera Nano a valuable assistant for international use.</p>
</div>
<div class="card">
<h2><span class="emoji">🚀</span> Caractéristiques techniques</h2>
<ul>
<li><strong>Paramètres :</strong> 2.6 milliards (légèreté et efficacité)</li>
<li><strong>Architecture :</strong> Basée sur Gemma 2 2B optimisée (stabilité et compatibilité)</li>
<li><strong>Contexte :</strong> 8 192 tokens (conversations de longueur moyenne et suivi contextuel)</li>
<li><strong>Langues principales :</strong> Français, Anglais (expression naturelle et nuancée)</li>
<li><strong>Langues secondaires :</strong> Italien, Espagnol, Polonais (support international étendu)</li>
<li><strong>Formats :</strong> SafeTensors, GGUF (flexibilité de déploiement)</li>
</ul>
<p>La capacité de Vera (Nano) à traiter 8 192 tokens permet de maintenir des conversations cohérentes et de traiter des requêtes de longueur raisonnable sans perdre le fil. Cette fonctionnalité est particulièrement utile pour les interactions quotidiennes et les analyses de texte de taille modérée.</p>
</div>
<div class="card">
<h2><span class="emoji">💻</span> Installation et déploiement</h2>
<p>L'intégration de Vera (Nano) dans votre environnement est conçue pour être rapide et efficace. Le modèle est fait pour tourner de manière entièrement hors ligne.</p>
<h3>Options de quantization GGUF</h3>
<div class="comparison-table">
<table>
<thead>
<tr>
<th>Quantization</th>
<th>Recommended use</th>
<th>Required memory</th>
<th>Quality</th>
<th>Download</th>
</tr>
</thead>
<tbody>
<tr>
<td>Q8_0</td>
<td>Optimal use</td>
<td>~3-4 GB</td>
<td>Maximum</td>
<td><a href="https://huggingface.co/Dorian2B/Vera-1.0-Nano-Preview-GGUF/resolve/main/vera-1.0-nano-preview-q8_0.gguf?download=true" class="download-btn" target="_blank" rel="noopener noreferrer">Download</a></td>
</tr>
<tr>
<td>Q6_K</td>
<td>Everyday use</td>
<td>~2.5-3 GB</td>
<td>Excellent</td>
<td><a href="https://huggingface.co/Dorian2B/Vera-1.0-Nano-Preview-GGUF/resolve/main/vera-1.0-nano-preview-q6_k.gguf?download=true" class="download-btn" target="_blank" rel="noopener noreferrer">Download</a></td>
</tr>
<tr>
<td>Q4_K_M</td>
<td>Limited configurations</td>
<td>~2-2.5 GB</td>
<td>Very good</td>
<td><a href="https://huggingface.co/Dorian2B/Vera-1.0-Nano-Preview-GGUF/resolve/main/vera-1.0-nano-preview-q4_k_m.gguf?download=true" class="download-btn" target="_blank" rel="noopener noreferrer">Download</a></td>
</tr>
</tbody>
</table>
</div>
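<p>For a quick local test, a minimal command-line sketch, assuming a <code>llama.cpp</code> build with <code>llama-cli</code> on your PATH (the file name matches the Q4_K_M link above; the prompt is illustrative):</p>
<pre><code>llama-cli -m vera-1.0-nano-preview-q4_k_m.gguf -c 8192 --temp 0.7 --top-p 0.9 -p "Bonjour Vera, présente-toi."</code></pre>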
</div>
<div class="card">
<h2><span class="emoji">⚙️</span> Configuration recommandée</h2>
<p>Pour une expérience optimale avec Vera (Nano), nous recommandons une configuration matérielle adaptée aux modèles de 2.6 milliards de paramètres :</p>
<h3>Matériel optimal</h3>
<ul>
<li><strong>RAM système :</strong> 8 GB pour une fluidité maximale, 4 GB minimum</li>
<li><strong>VRAM GPU :</strong> 4-6 GB (GTX 1650, RTX 3050, ou équivalent AMD)</li>
<li><strong>Processeur :</strong> CPU moderne (Intel i5 ou AMD Ryzen 5)</li>
<li><strong>Stockage :</strong> 5+ GB d'espace libre sur SSD pour les performances optimales</li>
</ul>
<h3>Paramètres recommandés</h3>
<ul>
<li><strong>Température :</strong> 0.7 (équilibre optimal entre créativité et cohérence)</li>
<li><strong>Top-p :</strong> 0.9 pour maintenir la diversité des réponses</li>
</ul>
</div>
<div class="card">
<h2><span class="emoji">⚠️</span> Limitations actuelles</h2>
<div class="limitations">
<h3>Points d'amélioration (Version Preview)</h3>
<p><strong>Programmation avancée :</strong> Vera (Nano) peut assister sur des tâches de programmation de base, mais ses capacités sont limitées pour les algorithmes complexes ou les architectures logicielles avancées.</p>
<p><strong>Domaines ultra-spécialisés :</strong> Pour des domaines très techniques (recherche médicale avancée, finance quantitative, ingénierie aérospatiale), Vera (Nano) n'aura pas la même profondeur d'expertise que des modèles plus grands.</p>
<p><strong>Évolutions récentes :</strong> Comme tous les modèles IA, Vera (Nano) a une limite temporelle dans ses connaissances des événements les plus récents et des développements technologiques de dernière minute.</p>
<p><strong>Raisonnement mathématique complexe :</strong> Bien que capable de résoudre des problèmes mathématiques standards, Vera (Nano) peut montrer des limites sur des démonstrations mathématiques très avancées ou des calculs nécessitant une précision absolue.</p>
</div>
</div>
<div class="cta-section">
<a href="https://huggingface.co/Dorian2B/Vera-1.0-Nano-Preview" class="cta-button">
<span class="emoji">🚀</span> Découvrir Vera (Nano) sur HuggingFace
</a>
</div>
<div class="footer">
<p><strong>Developed by:</strong> Dorian Dominici</p>
<p><strong>Version:</strong> 1.0 (Nano) - Preview • <strong>License:</strong> Apache 2.0 • <strong>Year:</strong> 2025</p>
<br>
<p><em>Vera 1.0 (Nano) - Your intelligent, lightweight personal AI assistant, designed to support you every day.</em></p>
</div>
</div>
</body>
</html>
|
TheDenk/wan2.2-t2v-a14b-controlnet-depth-v1
|
TheDenk
| 2025-08-09T13:44:28Z | 21 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"video",
"video-generation",
"video-to-video",
"controlnet",
"wan2.2",
"en",
"license:apache-2.0",
"region:us"
] |
video-to-video
| 2025-08-08T15:36:00Z |
---
license: apache-2.0
language:
- en
tags:
- video
- video-generation
- video-to-video
- controlnet
- diffusers
- wan2.2
---
# Controlnet for Wan2.2 A14B (depth)
This repo contains the controlnet module for Wan2.2. See the <a href="https://github.com/TheDenk/wan2.2-controlnet">Github code</a>.
Same approach as the controlnet for [Wan2.1](https://github.com/TheDenk/wan2.1-dilated-controlnet).
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/63fde49f6315a264aba6a7ed/baX-ZcaUbUh18KzYo6bMv.mp4"></video>
### For ComfyUI
Use the cool [ComfyUI-WanVideoWrapper](https://github.com/kijai/ComfyUI-WanVideoWrapper).
<img width="600" height="480" alt="t2i_workflow" src="https://github.com/user-attachments/assets/4788c2d0-8ff0-405e-9b6d-0e0b1347865b" />
The latest version of `diffusers` is required.
You can install it like this:
```bash
pip install git+https://github.com/huggingface/diffusers.git
```
### Inference examples
#### Simple inference with cli
```bash
python -m inference.cli_demo \
--video_path "resources/bubble.mp4" \
--prompt "Close-up shot with soft lighting, focusing sharply on the lower half of a young woman's face. Her lips are slightly parted as she blows an enormous bubblegum bubble. The bubble is semi-transparent, shimmering gently under the light, and surprisingly contains a miniature aquarium inside, where two orange-and-white goldfish slowly swim, their fins delicately fluttering as if in an aquatic universe. The background is a pure light blue color." \
--controlnet_type "depth" \
--base_model_path Wan-AI/Wan2.2-T2V-A14B \
--controlnet_model_path TheDenk/wan2.2-t2v-a14b-controlnet-depth-v1
```
#### Minimal code example
```python
import os
os.environ['CUDA_VISIBLE_DEVICES'] = "0"
os.environ["TOKENIZERS_PARALLELISM"] = "false"
import torch
from diffusers.utils import load_video, export_to_video
from diffusers import AutoencoderKLWan, UniPCMultistepScheduler
from controlnet_aux import MidasDetector
from wan_controlnet import WanControlnet
from wan_transformer import CustomWanTransformer3DModel
from wan_t2v_controlnet_pipeline import WanTextToVideoControlnetPipeline
base_model_path = "Wan-AI/Wan2.2-T2V-A14B"
controlnet_model_path = "TheDenk/wan2.2-t2v-a14b-controlnet-depth-v1"
vae = AutoencoderKLWan.from_pretrained(base_model_path, subfolder="vae", torch_dtype=torch.float32)
transformer = CustomWanTransformer3DModel.from_pretrained(base_model_path, subfolder="transformer", torch_dtype=torch.bfloat16)
controlnet = WanControlnet.from_pretrained(controlnet_model_path, torch_dtype=torch.bfloat16)
pipe = WanTextToVideoControlnetPipeline.from_pretrained(
pretrained_model_name_or_path=base_model_path,
controlnet=controlnet,
transformer=transformer,
vae=vae,
torch_dtype=torch.bfloat16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, flow_shift=12.0)
pipe.enable_model_cpu_offload()
controlnet_processor = MidasDetector.from_pretrained('lllyasviel/Annotators')
img_h = 704  # supported heights: 704 or 480
img_w = 1280  # supported widths: 1280 or 832
num_frames = 121  # supported frame counts: 121, 81, or 49
video_path = 'bubble.mp4'
video_frames = load_video(video_path)[:num_frames]
video_frames = [x.resize((img_w, img_h)) for x in video_frames]
controlnet_frames = [controlnet_processor(x) for x in video_frames]
prompt = "Close-up shot with soft lighting, focusing sharply on the lower half of a young woman's face. Her lips are slightly parted as she blows an enormous bubblegum bubble. The bubble is semi-transparent, shimmering gently under the light, and surprisingly contains a miniature aquarium inside, where two orange-and-white goldfish slowly swim, their fins delicately fluttering as if in an aquatic universe. The background is a pure light blue color."
negative_prompt = "bad quality, worst quality"
output = pipe(
prompt=prompt,
negative_prompt=negative_prompt,
height=img_h,
width=img_w,
num_frames=num_frames,
guidance_scale=5,
generator=torch.Generator(device="cuda").manual_seed(42),
output_type="pil",
controlnet_frames=controlnet_frames,
controlnet_guidance_start=0.0,
controlnet_guidance_end=0.8,
controlnet_weight=0.8,
teacache_treshold=0.6,
).frames[0]
export_to_video(output, "output.mp4", fps=16)
```
## Acknowledgements
Original code and models [Wan2.2](https://github.com/Wan-Video/Wan2.2).
## Citations
```
@misc{TheDenk,
title={Wan2.2 Controlnet},
author={Karachev Denis},
url={https://github.com/TheDenk/wan2.2-controlnet},
publisher={Github},
year={2025}
}
```
## Contacts
<p>Issues should be raised directly in the repository. For professional support and recommendations, please contact <a>[email protected]</a>.</p>
|
a1024053774/q-FrozenLake-v1-4x4-noSlippery
|
a1024053774
| 2025-08-09T13:43:53Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-09T13:43:27Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # assumption: the gymnasium API used in the Hugging Face Deep RL Course

# load_from_hub is the helper function defined in the Deep RL Course notebook
model = load_from_hub(repo_id="a1024053774/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
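Once loaded, acting greedily with respect to the stored Q-table looks like this (a sketch; the `"qtable"` key follows the Deep RL Course convention and is an assumption here):

```python
import numpy as np

state, info = env.reset()
action = int(np.argmax(model["qtable"][state]))  # pick the highest-value action for this state
state, reward, terminated, truncated, info = env.step(action)
```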
|
AlekseyCalvin/Phenakistiscopes_alt_Wan22_5B_LoRA
|
AlekseyCalvin
| 2025-08-09T13:35:32Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-09T13:33:41Z |
---
license: apache-2.0
---
|
Fdex/LunarLander-v3
|
Fdex
| 2025-08-09T13:32:47Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v3",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-09T13:32:42Z |
---
tags:
- LunarLander-v3
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v3
type: LunarLander-v3
metrics:
- type: mean_reward
value: -141.69 +/- 60.25
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v3
This is a trained model of a PPO agent playing LunarLander-v3.
# Hyperparameters
```python
{'exp_name': 'ppo',
'seed': 1,
'torch_deterministic': True,
'cuda': True,
'track': False,
'wandb_project_name': 'cleanRL',
'wandb_entity': None,
'capture_video': False,
'env_id': 'LunarLander-v3',
'total_timesteps': 50000,
'learning_rate': 0.00025,
'num_envs': 4,
'num_steps': 128,
'anneal_lr': True,
'gae': True,
'gamma': 0.99,
'gae_lambda': 0.95,
'num_minibatches': 4,
'update_epochs': 4,
'norm_adv': True,
'clip_coef': 0.2,
'clip_vloss': True,
'ent_coef': 0.01,
'vf_coef': 0.5,
'max_grad_norm': 0.5,
'target_kl': None,
'repo_id': 'Fdex/LunarLander-v3',
'batch_size': 512,
'minibatch_size': 128}
```
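The derived sizes at the bottom follow from the rollout settings. A quick check using only values from the dictionary above:

```python
num_envs, num_steps, num_minibatches = 4, 128, 4

batch_size = num_envs * num_steps               # 4 * 128 = 512
minibatch_size = batch_size // num_minibatches  # 512 // 4 = 128
print(batch_size, minibatch_size)               # matches 'batch_size' and 'minibatch_size' above
```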
|
PixelForgeStudios/TunaTalk-XS-tr-ExtraSmall
|
PixelForgeStudios
| 2025-08-09T13:29:15Z | 2 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-08-09T12:58:43Z |
---
license: apache-2.0
---
TunaTalk-Sentiment
This is a Turkish text generation model, fine-tuned on a sentiment analysis dataset.
Model Description
The goal of this project was to train a language model on a dataset of positive and negative review texts, producing a model familiar with the structure of sentiment-heavy content such as product and movie reviews.
Language: Turkish (tr)
Parameters: 124.44 million
Limitations and Disclaimer
Due to the nature of the dataset it was trained on, this model may contain biases. The accuracy or appropriateness of the generated text is not guaranteed. Please consider these limitations when using the model.
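A minimal generation sketch, assuming the repo's GPT-2-style weights load with the standard 🤗 pipeline (the prompt is illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="PixelForgeStudios/TunaTalk-XS-tr-ExtraSmall")
print(generator("Bu film gerçekten", max_new_tokens=40)[0]["generated_text"])
```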
|
Kei-Sanada/task-13-Qwen-Qwen2.5-0.5B-Instruct
|
Kei-Sanada
| 2025-08-09T13:28:59Z | 83 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct",
"region:us"
] | null | 2025-08-09T12:47:06Z |
---
base_model: Qwen/Qwen2.5-0.5B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
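A minimal loading sketch, assuming this repo holds a standard PEFT adapter for the base model declared in the metadata above:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
model = PeftModel.from_pretrained(base, "Kei-Sanada/task-13-Qwen-Qwen2.5-0.5B-Instruct")  # attach the adapter
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
```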
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
kznmp3/blockassist-bc-lively_raging_hippo_1754746079
|
kznmp3
| 2025-08-09T13:28:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lively raging hippo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T13:28:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lively raging hippo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yemreckr/blockassist-bc-lazy_howling_tiger_1754745980
|
yemreckr
| 2025-08-09T13:27:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lazy howling tiger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T13:27:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lazy howling tiger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ocaklisemih/gpt-oss-20b-turkish-astrology-it
|
ocaklisemih
| 2025-08-09T13:25:41Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"region:us"
] |
text-generation
| 2025-08-09T12:57:08Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** ocaklisemih
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gpt-oss-20b-unsloth-bnb-4bit
This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
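A minimal inference sketch, assuming the uploaded weights load through the standard 🤗 pipeline (an untested assumption for a checkpoint of this size; the prompt and parameters are illustrative):

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="ocaklisemih/gpt-oss-20b-turkish-astrology-it", device_map="auto")
print(pipe("Koç burcu için bugün", max_new_tokens=60)[0]["generated_text"])
```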
|
techyygarry/gjnxlora
|
techyygarry
| 2025-08-09T13:24:52Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-09T13:23:26Z |
---
license: apache-2.0
---
|
qdf6688/sd-class-butterflies-32-222
|
qdf6688
| 2025-08-09T13:22:30Z | 6 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional -image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2025-08-09T13:17:50Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional -image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the Diffusion Models Class
This model is a diffusion model for unconditional image generation of cute butterflies.
## Usage
```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained('qdf6688/sd-class-butterflies-32-222')
image = pipeline().images[0]
image
```
|
roeker/blockassist-bc-quick_wiry_owl_1754745567
|
roeker
| 2025-08-09T13:21:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T13:20:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dreamygeek/blockassist-bc-swift_amphibious_alpaca_1754743543
|
dreamygeek
| 2025-08-09T13:16:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"swift amphibious alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T13:16:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- swift amphibious alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
roeker/blockassist-bc-quick_wiry_owl_1754745260
|
roeker
| 2025-08-09T13:15:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T13:15:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akdja55ka/aksks
|
akdja55ka
| 2025-08-09T13:13:34Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-09T13:13:34Z |
---
license: apache-2.0
---
|
seuncoded/blockassist-bc-armored_insectivorous_sardine_1754743647
|
seuncoded
| 2025-08-09T13:12:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored insectivorous sardine",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T13:11:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored insectivorous sardine
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
koita/blockassist-bc-scampering_gentle_turkey_1754744963
|
koita
| 2025-08-09T13:11:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scampering gentle turkey",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T13:10:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scampering gentle turkey
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
roeker/blockassist-bc-quick_wiry_owl_1754744952
|
roeker
| 2025-08-09T13:10:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T13:09:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
xnftraff/blockassist-bc-sprightly_freckled_deer_1754744006
|
xnftraff
| 2025-08-09T13:10:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sprightly freckled deer",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T13:09:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sprightly freckled deer
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jasminekitty328/flan-t5-intentconan-lora_3000
|
jasminekitty328
| 2025-08-09T13:07:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-09T13:07:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
skyxyz/blockassist-bc-clawed_swift_ibis_1754744589
|
skyxyz
| 2025-08-09T13:03:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"clawed swift ibis",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T13:03:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- clawed swift ibis
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
turnipzealy/blockassist-bc-armored_strong_macaw_1754742262
|
turnipzealy
| 2025-08-09T13:03:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored strong macaw",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T13:02:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored strong macaw
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
andr0m4da/blockassist-bc-grazing_hunting_boar_1754744477
|
andr0m4da
| 2025-08-09T13:02:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"grazing hunting boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T13:02:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- grazing hunting boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nev8r/bert-finetuned-ner
|
nev8r
| 2025-08-09T13:01:25Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-08-09T12:31:27Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0607
- Precision: 0.9336
- Recall: 0.9505
- F1: 0.9420
- Accuracy: 0.9864
## Model description
More information needed
## Intended uses & limitations
More information needed
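Absent further documentation, a minimal inference sketch (assuming standard 🤗 token-classification usage; the entity label set is not documented here):

```python
from transformers import pipeline

# Illustrative usage only; the label set depends on the (unspecified) training data.
ner = pipeline(
    "token-classification",
    model="nev8r/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```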
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0774 | 1.0 | 1756 | 0.0631 | 0.9059 | 0.9362 | 0.9208 | 0.9820 |
| 0.0343 | 2.0 | 3512 | 0.0680 | 0.9362 | 0.9456 | 0.9409 | 0.9854 |
| 0.0209 | 3.0 | 5268 | 0.0607 | 0.9336 | 0.9505 | 0.9420 | 0.9864 |
### Framework versions
- Transformers 4.54.1
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.4
|
mradermacher/PaperPrediction-ALL-1.7B-GGUF
|
mradermacher
| 2025-08-09T13:00:06Z | 760 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:PaperPred/PaperPrediction-ALL-1.7B",
"base_model:quantized:PaperPred/PaperPrediction-ALL-1.7B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-09T12:37:32Z |
---
base_model: PaperPred/PaperPrediction-ALL-1.7B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/PaperPred/PaperPrediction-ALL-1.7B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#PaperPrediction-ALL-1.7B-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
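As a minimal sketch (the filename comes from the table below; any GGUF-compatible runtime works, llama.cpp is shown here as one option):

```bash
# Fetch one quant (Q4_K_M is the "fast, recommended" middle ground) and run it with llama.cpp.
huggingface-cli download mradermacher/PaperPrediction-ALL-1.7B-GGUF \
  PaperPrediction-ALL-1.7B.Q4_K_M.gguf --local-dir .
./llama-cli -m PaperPrediction-ALL-1.7B.Q4_K_M.gguf -p "Hello" -n 64
```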
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-ALL-1.7B-GGUF/resolve/main/PaperPrediction-ALL-1.7B.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-ALL-1.7B-GGUF/resolve/main/PaperPrediction-ALL-1.7B.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-ALL-1.7B-GGUF/resolve/main/PaperPrediction-ALL-1.7B.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-ALL-1.7B-GGUF/resolve/main/PaperPrediction-ALL-1.7B.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-ALL-1.7B-GGUF/resolve/main/PaperPrediction-ALL-1.7B.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-ALL-1.7B-GGUF/resolve/main/PaperPrediction-ALL-1.7B.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-ALL-1.7B-GGUF/resolve/main/PaperPrediction-ALL-1.7B.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-ALL-1.7B-GGUF/resolve/main/PaperPrediction-ALL-1.7B.Q5_K_S.gguf) | Q5_K_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-ALL-1.7B-GGUF/resolve/main/PaperPrediction-ALL-1.7B.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-ALL-1.7B-GGUF/resolve/main/PaperPrediction-ALL-1.7B.Q6_K.gguf) | Q6_K | 1.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-ALL-1.7B-GGUF/resolve/main/PaperPrediction-ALL-1.7B.Q8_0.gguf) | Q8_0 | 1.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/PaperPrediction-ALL-1.7B-GGUF/resolve/main/PaperPrediction-ALL-1.7B.f16.gguf) | f16 | 3.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
thedobro/blockassist-bc-sedate_shiny_mantis_1754744254
|
thedobro
| 2025-08-09T12:59:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sedate shiny mantis",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T12:59:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sedate shiny mantis
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
roeker/blockassist-bc-quick_wiry_owl_1754744292
|
roeker
| 2025-08-09T12:59:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T12:59:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
duonglam99/blockassist-bc-scampering_shrewd_nightingale_1754741700
|
duonglam99
| 2025-08-09T12:59:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scampering shrewd nightingale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T12:58:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scampering shrewd nightingale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
FrancoInfante/FIVE
|
FrancoInfante
| 2025-08-09T12:58:56Z | 0 | 0 | null |
[
"es",
"en",
"dataset:NousResearch/Hermes-3-Dataset",
"dataset:fka/awesome-chatgpt-prompts",
"license:apache-2.0",
"region:us"
] | null | 2025-08-09T12:52:00Z |
---
license: apache-2.0
datasets:
- NousResearch/Hermes-3-Dataset
- fka/awesome-chatgpt-prompts
language:
- es
- en
metrics:
- accuracy
---
|
Whwjkdwj/6jwjbsvabw
|
Whwjkdwj
| 2025-08-09T12:55:50Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-09T12:55:49Z |
---
license: apache-2.0
---
|
tootshine/omega_GvjJh36
|
tootshine
| 2025-08-09T12:53:59Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-09T12:53:59Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
loweegee/ppo-SnowballTarget
|
loweegee
| 2025-08-09T12:53:48Z | 14 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2025-08-09T12:53:45Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: loweegee/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
hsge/0.5B_Llama_8B_HHH
|
hsge
| 2025-08-09T12:52:53Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-09T12:52:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
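In the absence of documented usage, a generic causal-LM sketch (the repo tags suggest a qwen2 text-generation model; the chat template and intended prompts are assumptions):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative only; intended usage is undocumented in this card.
tok = AutoTokenizer.from_pretrained("hsge/0.5B_Llama_8B_HHH")
model = AutoModelForCausalLM.from_pretrained("hsge/0.5B_Llama_8B_HHH")
inputs = tok("Hello, how are you?", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```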
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
roeker/blockassist-bc-quick_wiry_owl_1754743559
|
roeker
| 2025-08-09T12:47:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T12:46:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zheng6677/my_policy2
|
zheng6677
| 2025-08-09T12:46:47Z | 2 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:zheng6677/record-test2",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-09T12:45:15Z |
---
datasets: zheng6677/record-test2
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- robotics
- lerobot
- act
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
akratz/unsloth.Q8_0
|
akratz
| 2025-08-09T12:46:27Z | 96 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-09T11:36:44Z |
---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Model Card: Qwen2.5-3B (GRPO-GSM8K) — Reasoning Format
## Model summary
- Name: Qwen2.5-3B (GRPO-GSM8K)
- Base model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
- Developer/Maintainer: akratz
- License: Apache-2.0
- Languages: English
- Status: Experimental; trained to emit explicit reasoning traces
- Primary change: Reinforcement learning with GRPO on GSM8K to induce chain-of-thought style outputs
## What's in the box
This is a small Qwen 2.5 3B-parameter instruct model fine-tuned with GRPO following Unsloth’s RL tutorial to produce responses in a two-block format:
```
> Tom’s cat is 8 years old. His rabbit is half the age of his cat. His dog is three times as old as his rabbit. How old is the dog?
<reasoning>
To find out how old Tom's dog is, we need to follow these steps:
1. Determine the age of Tom's rabbit.
2. Use the rabbit's age to find out the dog's age.
First, we know that Tom's cat is 格格 (which I assume was supposed to be '8' years old).
Next, the rabbit is half the age of the cat. Therefore, the rabbit's age is:
\[ \text{Rabbit's age} = \frac{1}{2} \times \text{Cat's age} = \frac{1}{2} \times 8 = 4 \text{ years old} \]
Now, the dog is three times as old as the rabbit. So, the dog's age is:
\[ \text{Dog's age} = 3 \times \text{Rabbit's age} = 3 \times 4 = 12 \text{ years old} \]
Thus, the dog is 12 years old.
...
<answer>
12
</answer>
```
The goal was to validate the full RL pipeline end-to-end, not to achieve state-of-the-art accuracy. The model does emit structured reasoning, but overall quality is low compared with larger or better-trained models.
## Intended use
- Intended: Educational and reproducibility demos of GRPO on a small model; sandbox experiments on formatting, reward shaping, and inference tooling.
- Not intended: Production use; safety-critical domains; factual or mathematical reliability; privacy-sensitive data.
## Limitations and risks
- Trained only on GSM8K signals; narrow domain coverage.
- Reasoning traces can be verbose, inconsistent, or incorrect.
- Hallucinations and arithmetic mistakes are common.
- Safety and refusal behaviors were not a focus of training.
- Reward overfitting to the target format and short-term hacks is likely.
## Training data
- Supervision signal: GSM8K math word problems (for rewards/formatting).
- Data scope: Math grade-school style problems only.
- Provenance: Public dataset (standard GSM8K release). No private data added.
## Training procedure
- Method: GRPO (Group Relative Policy Optimization) via Hugging Face TRL + Unsloth.
- Starting weights: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
- Objective: Encourage the required output format and correct final answers on GSM8K.
- Notes: This run was sized and configured to “make it work,” not tuned for peak performance. Hyperparameters were kept close to the Unsloth tutorial defaults.
## Inference
### llama.cpp example
Invoke with a system prompt that enforces the format:
```
~/work/llama.cpp/build/bin/llama-cli \
-m ~/work/llama.cpp/models/unsloth.Q8_0.gguf \
-sys "Respond in the following format:
<reasoning>
...
</reasoning>
<answer>
...
</answer>"
```
### Output contract
- Always emits <reasoning>...</reasoning> followed by <answer>...</answer>.
- Consumers should parse only the <answer> block as the final answer (see the parsing sketch below).
- The <reasoning> block is untrusted and may contain errors.
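A minimal sketch of that parsing (function name and handling of missing blocks are illustrative choices):

```python
import re

def extract_answer(completion: str) -> str | None:
    """Return the contents of the last <answer>...</answer> block, or None if absent."""
    blocks = re.findall(r"<answer>\s*(.*?)\s*</answer>", completion, flags=re.DOTALL)
    return blocks[-1] if blocks else None
```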
### Practical tips
- Use deterministic decoding for evaluation (temp≈0.0–0.2, small top_p).
- Set a stop sequence on </answer> to avoid trailing chatter.
- For non-math tasks, expect degraded quality; re-prompt with very explicit instructions if you must try.
## Evaluation
- Benchmarking: No formal leaderboard-quality evaluation was performed.
- Sanity checks: The model produces the requested two-block format on GSM8K-style prompts and can reach correct answers on simple instances.
- Known behavior: Accuracy is inconsistent; failure modes include arithmetic slips and brittle parsing.
## Compatibility
- Quantization used for deployment: unsloth.Q8_0.gguf tested with llama.cpp.
- Libraries: Works with standard GGUF-compatible runtimes. Training used TRL + Unsloth.
## Responsible use
- Do not deploy in any domain where mistakes cause harm.
- Do not rely on the <reasoning> text for truth; treat it as a by-product of training.
- Filter or redact sensitive inputs. Add your own safety layers if experimenting in the wild.
## How to reproduce
- Tutorial followed: Unsloth “Train your own reasoning model with GRPO” (the GSM8K walkthrough).
- Replicate the pipeline, then enforce the same output format in the system prompt at inference time.
## Versioning
- v0.1 (2025-08-09): Initial GRPO finetune from unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit on GSM8K; emits reasoning/answer blocks; experimental quality.
## Citation
If you use this model in experiments, please cite:
- Qwen2.5 model family (Qwen team)
- TRL library (Hugging Face)
- Unsloth library and GRPO tutorial
- GSM8K dataset
@misc{akratz_qwen25_3b_grpo_gsm8k_2025,
title = {Qwen2.5-3B (GRPO-GSM8K) — Reasoning Format},
author = {akratz},
year = {2025},
note = {Fine-tuned from unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit using GRPO on GSM8K; Apache-2.0}
}
## Card metadata (for Hub front-matter)
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- reasoning
- grpo
license: apache-2.0
language:
- en
|
MrKK1277/mygpt
|
MrKK1277
| 2025-08-09T12:43:46Z | 0 | 0 | null |
[
"base_model:openai/gpt-oss-120b",
"base_model:finetune:openai/gpt-oss-120b",
"license:llama2",
"region:us"
] | null | 2025-08-09T12:42:21Z |
---
license: llama2
base_model:
- openai/gpt-oss-120b
---
|
BeDream/tuning-lora-tinyllama-1.1b
|
BeDream
| 2025-08-09T12:43:14Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"lora",
"transformers",
"text-generation",
"conversational",
"id",
"dataset:indonlp/cendol_collection_v2",
"arxiv:1910.09700",
"arxiv:2401.02385",
"arxiv:2404.06138",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-07T01:43:07Z |
---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0
- lora
- transformers
license: apache-2.0
datasets:
- indonlp/cendol_collection_v2
language:
- id
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
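Pending documented usage, a minimal loading sketch based on the adapter metadata above (PEFT LoRA on TinyLlama-1.1B-Chat):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative only: load the base model, then attach this LoRA adapter.
base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
model = PeftModel.from_pretrained(base, "BeDream/tuning-lora-tinyllama-1.1b")
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
```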
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
**BibTeX:**
```
@misc{zhang2024tinyllama,
title={TinyLlama: An Open-Source Small Language Model},
author={Peiyuan Zhang and Guangtao Zeng and Tianduo Wang and Wei Lu},
year={2024},
eprint={2401.02385},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@misc{cahyawijaya-etal-2024-cendol,
title={Cendol: Open Instruction-tuned Generative Large Language Models for Indonesian Languages},
author={Samuel Cahyawijaya and Holy Lovenia and Fajri Koto and Rifki Afina Putri and Emmanuel Dave and Jhonson Lee and Nuur Shadieq and Wawan Cenggoro and Salsabil Maulana Akbar and Muhammad Ihza Mahendra and Dea Annisayanti Putri and Bryan Wilie and Genta Indra Winata and Alham Fikri Aji and Ayu Purwarianti and Pascale Fung},
year={2024},
eprint={2404.06138},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@inproceedings{cahyawijaya-etal-2023-nusacrowd,
title = "{N}usa{C}rowd: Open Source Initiative for {I}ndonesian {NLP} Resources",
author = "Cahyawijaya, Samuel and
Lovenia, Holy and
Aji, Alham Fikri and
Winata, Genta and
Wilie, Bryan and
Koto, Fajri and
Mahendra, Rahmad and
Wibisono, Christian and
Romadhony, Ade and
Vincentio, Karissa and
Santoso, Jennifer and
Moeljadi, David and
Wirawan, Cahya and
Hudi, Frederikus and
Wicaksono, Muhammad Satrio and
Parmonangan, Ivan and
Alfina, Ika and
Putra, Ilham Firdausi and
Rahmadani, Samsul and
Oenang, Yulianti and
Septiandri, Ali and
Jaya, James and
Dhole, Kaustubh and
Suryani, Arie and
Putri, Rifki Afina and
Su, Dan and
Stevens, Keith and
Nityasya, Made Nindyatama and
Adilazuarda, Muhammad and
Hadiwijaya, Ryan and
Diandaru, Ryandito and
Yu, Tiezheng and
Ghifari, Vito and
Dai, Wenliang and
Xu, Yan and
Damapuspita, Dyah and
Wibowo, Haryo and
Tho, Cuk and
Karo Karo, Ichwanul and
Fatyanosa, Tirana and
Ji, Ziwei and
Neubig, Graham and
Baldwin, Timothy and
Ruder, Sebastian and
Fung, Pascale and
Sujaini, Herry and
Sakti, Sakriani and
Purwarianti, Ayu",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2023",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-acl.868",
doi = "10.18653/v1/2023.findings-acl.868",
pages = "13745--13818"
}
```
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
andr0m4da/blockassist-bc-grazing_hunting_boar_1754743060
|
andr0m4da
| 2025-08-09T12:40:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"grazing hunting boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T12:40:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- grazing hunting boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ess125/Anna
|
Ess125
| 2025-08-09T12:40:04Z | 10 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-03T18:51:20Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Annab
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Anna Baye
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `Annab` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
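For diffusers users, a possible loading sketch (the weight filename is an assumption; check the repo file list):

```python
import torch
from diffusers import FluxPipeline

# Illustrative sketch; weight_name is assumed from the usual Fluxgym export layout.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Ess125/Anna", weight_name="anna.safetensors")  # filename: assumption
image = pipe("Annab, portrait photo", num_inference_steps=28).images[0]
image.save("anna.png")
```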
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1754743054
|
kapalbalap
| 2025-08-09T12:38:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T12:38:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Axion-4B-v0.09-GGUF
|
mradermacher
| 2025-08-09T12:37:42Z | 1,057 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3",
"en",
"base_model:oscar128372/Axion-4B-v0.09",
"base_model:quantized:oscar128372/Axion-4B-v0.09",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-09T12:14:32Z |
---
base_model: oscar128372/Axion-4B-v0.09
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/oscar128372/Axion-4B-v0.09
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Axion-4B-v0.09-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Axion-4B-v0.09-GGUF/resolve/main/Axion-4B-v0.09.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Axion-4B-v0.09-GGUF/resolve/main/Axion-4B-v0.09.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Axion-4B-v0.09-GGUF/resolve/main/Axion-4B-v0.09.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Axion-4B-v0.09-GGUF/resolve/main/Axion-4B-v0.09.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Axion-4B-v0.09-GGUF/resolve/main/Axion-4B-v0.09.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Axion-4B-v0.09-GGUF/resolve/main/Axion-4B-v0.09.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Axion-4B-v0.09-GGUF/resolve/main/Axion-4B-v0.09.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Axion-4B-v0.09-GGUF/resolve/main/Axion-4B-v0.09.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Axion-4B-v0.09-GGUF/resolve/main/Axion-4B-v0.09.Q5_K_M.gguf) | Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Axion-4B-v0.09-GGUF/resolve/main/Axion-4B-v0.09.Q6_K.gguf) | Q6_K | 3.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Axion-4B-v0.09-GGUF/resolve/main/Axion-4B-v0.09.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Axion-4B-v0.09-GGUF/resolve/main/Axion-4B-v0.09.f16.gguf) | f16 | 8.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Omartificial-Intelligence-Space/gpt-oss-math-ar
|
Omartificial-Intelligence-Space
| 2025-08-09T12:34:12Z | 12 | 1 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"ar",
"en",
"dataset:Omartificial-Intelligence-Space/Arabic-gsm8k",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"license:apache-2.0",
"region:us"
] | null | 2025-08-09T12:08:15Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
library_name: peft
tags:
- base_model:adapter:unsloth/gpt-oss-20b-unsloth-bnb-4bit
- lora
- sft
- transformers
- trl
- unsloth
license: apache-2.0
datasets:
- Omartificial-Intelligence-Space/Arabic-gsm8k
language:
- ar
- en
---
# GPT OSS MATH AR

Arabic step-by-step math solver fine-tuned from **gpt-oss-20B** using **LoRA (PEFT)** on curated Arabic GSM8K-style problems. The model is instructed to reason **in Arabic** and explain each solution step clearly before giving the final answer.
- **Base model:** `unsloth/gpt-oss-20b-unsloth-bnb-4bit`
- **Parameter-efficient fine-tuning:** LoRA (PEFT) via Unsloth + TRL SFT
- **Primary objective:** Arabic chain-of-thought style arithmetic / word-problem reasoning (grade-school to early middle-school range)
- **License:** Apache-2.0
- **Maintainer:** **Omer Nacar** (Omartificial-Intelligence-Space)
---
# Model summary
- **Name:** `Omartificial-Intelligence-Space/gpt-oss-math-ar`
- **Size:** 20B (adapter on top of the base)
- **Languages:** Arabic (primary), English (instructions/support)
- **Capabilities:** Step-by-step solutions to math word problems, showing intermediate calculations in Arabic, ending with a concise final result line.
- **Intended use:** Educational assistance, practice solutions, and Arabic math reasoning research.
> ⚠️ **Note on reasoning:** The model is optimized to *write out* reasoning steps in Arabic. For sensitive use cases (exams, grading, or high-stakes evaluation), always verify outputs.
---
# Example usage (Transformers + Unsloth)
```python
from unsloth import FastLanguageModel
from transformers import TextStreamer
import torch
max_seq_length = 1024
dtype = None # auto-detect
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="Omartificial-Intelligence-Space/gpt-oss-math-ar",
dtype=dtype,
max_seq_length=max_seq_length,
load_in_4bit=True,
full_finetuning=False,
)
messages = [
{"role": "system", "content": "reasoning language: Arabic\n\nYou are an Arabic AI math questions solver that solves math problems step-by-step and explian in Arabic language only."},
{"role": "user", "content": "بطات جانيت تضع 16 بيضة في اليوم. فهي تأكل ثلاث منها على الفطور كل صباح وتخبز الكعك لأصدقائها كل يوم بأربع منها. إنها تبيع ما تبقى منها في سوق المزارعين كل يوم مقابل دولارين لكل بيضة بطازجة. كم تجني من الدولار كل يوم في سوق المزارعين؟"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt = True,
    return_tensors = "pt",
    return_dict = True,
    reasoning_effort = "low",
).to(model.device)

_ = model.generate(**inputs, max_new_tokens = 256, streamer = TextStreamer(tokenizer))
```
**Prompting tip:** Keep the **system** message as above so the model stays in Arabic and explains step-by-step.
---
# Training data
- **Primary dataset (fine-tuning):** `Omartificial-Intelligence-Space/Arabic-gsm8k-v2`
Curated Arabic word problems with gold step-by-step solutions.
- **Evaluation set (unseen):** `Omartificial-Intelligence-Space/Arabic-gsm8k` (`main_test` split)
### Sample training examples
```
[ { "content": "reasoning language: Arabic\n\nYou are an Arabic AI math questions solver that solves math problems step-by-step and explian in Arabic language only.", "role": "system" },
{ "content": "باعت نتاليا مشابك إلى 48 من أصدقائها في شهر أبريل، ثم باعت نصف ذلك العدد من المشابك في شهر مايو. كم عدد المشابك التي باعتها نتاليا في شهري أبريل ومايو معًا؟", "role": "user" },
{ "content": "باعت ناتاليا 48 ÷ 2 = 24 مشبكًا في مايو.\nباعت ناتاليا 48 + 24 = 72 مشبكًا في أبريل ومايو مجتمعين.\n#### 72", "role": "assistant" } ]
```
```
[ { "content": "reasoning language: Arabic\n\nYou are an Arabic AI math questions solver that solves math problems step-by-step and explian in Arabic language only.", "role": "system" },
{ "content": "تقرأ جولي كتاباً مؤلفاً من 120 صفحة. بالأمس، تمكنت من قراءة 12 صفحة، واليوم قرأت ضعف عدد الصفحات التي قرأتها بالأمس. إذا أرادت قراءة نصف ما تبقى من صفحات غداً، كم صفحة عليها أن تقرأ؟", "role": "user" },
{ "content": "قرأت مايلا اليوم 12 × 2 = 24 صفحة.\nلذلك فإن إجمالي ما قرأته منذ الأمس هو 12 + 24 = 36 صفحة.\nتبقى 120 - 36 = 84 صفحة ليتم قراءتها.\nوبما أنها تريد قراءة نصف الصفحات المتبقية غدًا، فيجب عليها أن تقرأ 84/2 = 42 صفحة.\n#### 42", "role": "assistant" } ]
```
---
# Training procedure
**Frameworks:** Unsloth, Transformers, TRL (SFT)
**Method:** Supervised fine-tuning with LoRA adapters
### LoRA & optimization (Unsloth)
```python
model = FastLanguageModel.get_peft_model(
model,
r = 16,
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj"],
lora_alpha = 16,
lora_dropout = 0,
bias = "none",
use_gradient_checkpointing = "unsloth",
random_state = 3407,
use_rslora = False,
loftq_config = None,
)
```
### SFT configuration (TRL)
```python
from trl import SFTConfig, SFTTrainer
trainer = SFTTrainer(
model = model,
tokenizer = tokenizer,
train_dataset = dataset,
args = SFTConfig(
per_device_train_batch_size = 16,
gradient_accumulation_steps = 1,
warmup_steps = 100,
num_train_epochs = 3,
learning_rate = 2e-4,
logging_steps = 100,
optim = "adamw_8bit",
weight_decay = 0.01,
lr_scheduler_type = "linear",
seed = 3407,
output_dir = "outputs",
report_to = "none",
),
)
```
**Hardware:** Colab A100 40GB
**Seed:** 3407
---
**Recommended generation (starting point):**
- `max_new_tokens`: 128–384 for typical word problems
- `temperature`: 0.1–0.5 (lower for deterministic math)
- `top_p`: 0.8–0.95
- `repetition_penalty`: ~1.05 (optional)
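Applied to the loading snippet above, those settings might look like this (values are illustrative, within the ranges given):

```python
_ = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.2,        # low temperature for near-deterministic math; drop do_sample for greedy decoding
    top_p=0.9,
    repetition_penalty=1.05,
    streamer=TextStreamer(tokenizer),
)
```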
---
# Prompting guide (Arabic)
- Keep the **system** instruction fixed to enforce Arabic step-by-step reasoning.
- Provide one math word problem per turn.
- Expect answers in this shape:
- Short steps showing operations
- A final line like: `#### <النتيجة>`
**Example:**
```
[system] reasoning language: Arabic
You are an Arabic AI math questions solver that solves math problems step-by-step and explian in Arabic language only.
[user] لدى متجر 75 قطعة حلوى. باع 18 قطـعة في الصباح و 23 في المساء. كم تبقى؟
```
---
# Evaluation
- **Unseen test set:** `Omartificial-Intelligence-Space/Arabic-gsm8k` (`main_test`)
- **Current status:** qualitative checks on arithmetic and simple word-problems; formal benchmark numbers can be added once computed.
- **Suggested protocol:** exact-match on the final `#### <number>` line; optional step-accuracy analysis for intermediate calculations.
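A sketch of that exact-match check (the regex and normalization are illustrative choices):

```python
import re

def final_number(text: str) -> str | None:
    """Pull the number from a GSM8K-style final line such as '#### 72'."""
    m = re.search(r"####\s*([-+]?[\d][\d,\.]*)", text)
    return m.group(1).replace(",", "") if m else None

def exact_match(prediction: str, reference: str) -> bool:
    pred, ref = final_number(prediction), final_number(reference)
    return pred is not None and pred == ref
```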
---
# Intended use & limitations
**Intended use**
- Educational demos, tutoring aids, and research on Arabic mathematical reasoning.
- Generating step-by-step worked examples for practice problems.
**Limitations**
- May hallucinate or miscompute under distribution shift or very long contexts.
- Not a substitute for professional instruction or grading.
- Arabic is primary; performance in other languages is not targeted.
**Safety & responsible use**
- Verify outputs before use in assessment settings.
- Avoid using the model to complete academic work where external assistance is prohibited.
---
# Model card contacts & citation
**Author/Maintainer:** **Omer Nacar** — Omartificial-Intelligence-Space
**Model page:** https://huggingface.co/Omartificial-Intelligence-Space/gpt-oss-math-ar
**Please cite:**
```
@misc{gpt_oss_math_ar_oi_space,
title = {gpt-oss-math-ar: Arabic Step-by-Step Math Reasoning Adapter for gpt-oss-20B},
author = {Omer Nacar},
year = {2025},
howpublished = {\url{https://huggingface.co/Omartificial-Intelligence-Space/gpt-oss-math-ar}}
}
```
Also cite the base and tooling:
- Unsloth, TRL, and Hugging Face Transformers
- Base model: `unsloth/gpt-oss-20b-unsloth-bnb-4bit`
- Datasets: `Omartificial-Intelligence-Space/Arabic-gsm8k` and `Arabic-gsm8k-v2`
---
# License
This adapter is released under **Apache-2.0**. Users must also comply with the licenses and terms of the **base model** and any datasets used.
---
# Changelog
- Initial public release of `gpt-oss-math-ar` (adapter on gpt-oss-20B) with Arabic step-by-step math reasoning and example inference code.
|
csikasote/whisper-medium-bigcgen-combined-5hrs-62
|
csikasote
| 2025-08-09T12:33:01Z | 12 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:bigcgen",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-09T10:30:16Z |
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
datasets:
- bigcgen
metrics:
- wer
model-index:
- name: whisper-medium-bigcgen-combined-5hrs-62
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: bigcgen
type: bigcgen
metrics:
- name: Wer
type: wer
value: 0.5565156468939748
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-bigcgen-combined-5hrs-62
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the bigcgen dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7209
- Wer: 0.5565
## Model description
More information needed
## Intended uses & limitations
More information needed
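Pending further documentation, a minimal transcription sketch (the file path is a placeholder):

```python
from transformers import pipeline

# Illustrative usage for this fine-tuned Whisper checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="csikasote/whisper-medium-bigcgen-combined-5hrs-62",
)
print(asr("audio_sample.wav")["text"])
```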
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 62
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 1.0959 | 0.6079 | 200 | 0.9655 | 0.6610 |
| 0.6562 | 1.2158 | 400 | 0.8052 | 0.5699 |
| 0.6063 | 1.8237 | 600 | 0.7209 | 0.5565 |
| 0.402 | 2.4316 | 800 | 0.7347 | 0.5566 |
| 0.3066 | 3.0395 | 1000 | 0.7320 | 0.5467 |
| 0.2262 | 3.6474 | 1200 | 0.7329 | 0.5896 |
| 0.117 | 4.2553 | 1400 | 0.7819 | 0.5211 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
MMumtazSakho/Facencial-CoT-Lora
|
MMumtazSakho
| 2025-08-09T12:31:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-09T12:31:18Z |
---
base_model: unsloth/deepseek-r1-0528-qwen3-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MMumtazSakho
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-0528-qwen3-8b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
oleksiipsaruk/mistral-ar15-instruct
|
oleksiipsaruk
| 2025-08-09T12:29:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-06T16:02:02Z |
---
base_model: unsloth/mistral-7b-instruct-v0.3
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** oleksiipsaruk
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.3
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ABHISHEKMONU2001/qwen2.5_3B_-finetuned-Inmate_FAQ
|
ABHISHEKMONU2001
| 2025-08-09T12:24:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-09T12:24:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
skyxyz/blockassist-bc-clawed_swift_ibis_1754742122
|
skyxyz
| 2025-08-09T12:24:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"clawed swift ibis",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T12:23:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- clawed swift ibis
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/india-wiki-hin-GGUF
|
mradermacher
| 2025-08-09T12:21:02Z | 819 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:XformAI-india/india-wiki-hin",
"base_model:quantized:XformAI-india/india-wiki-hin",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-09T12:18:35Z |
---
base_model: XformAI-india/india-wiki-hin
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/XformAI-india/india-wiki-hin
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#india-wiki-hin-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
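For a quick start, here is a minimal Python sketch (an assumption, using the standard `huggingface_hub` API) that fetches a single quant; pick any filename from the table below:

```python
# Minimal sketch: download one quant into the local Hugging Face cache.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/india-wiki-hin-GGUF",
    filename="india-wiki-hin.Q4_K_S.gguf",  # any entry from the table below
)
print(path)  # point llama.cpp (or another GGUF runtime) at this file
```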
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-GGUF/resolve/main/india-wiki-hin.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-GGUF/resolve/main/india-wiki-hin.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-GGUF/resolve/main/india-wiki-hin.Q3_K_M.gguf) | Q3_K_M | 0.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-GGUF/resolve/main/india-wiki-hin.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-GGUF/resolve/main/india-wiki-hin.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-GGUF/resolve/main/india-wiki-hin.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-GGUF/resolve/main/india-wiki-hin.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-GGUF/resolve/main/india-wiki-hin.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-GGUF/resolve/main/india-wiki-hin.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-GGUF/resolve/main/india-wiki-hin.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-GGUF/resolve/main/india-wiki-hin.Q8_0.gguf) | Q8_0 | 0.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/india-wiki-hin-GGUF/resolve/main/india-wiki-hin.f16.gguf) | f16 | 1.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
JFernandoGRE/defensivetokens_llama31_8b
|
JFernandoGRE
| 2025-08-09T12:15:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-09T12:15:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rbelanec/train_record_1754594729
|
rbelanec
| 2025-08-09T12:14:53Z | 20 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"prompt-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-08-07T19:27:04Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prompt-tuning
- generated_from_trainer
model-index:
- name: train_record_1754594729
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_record_1754594729
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the record dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3460
- Num Input Tokens Seen: 464483424
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:------:|:---------------:|:-----------------:|
| 0.5711 | 0.5 | 15621 | 0.6434 | 23227520 |
| 0.4926 | 1.0 | 31242 | 0.5021 | 46454112 |
| 0.4776 | 1.5 | 46863 | 0.4279 | 69694624 |
| 0.2224 | 2.0 | 62484 | 0.3974 | 92908288 |
| 0.2756 | 2.5 | 78105 | 0.3910 | 116099296 |
| 0.315 | 3.0 | 93726 | 0.3748 | 139351808 |
| 0.3339 | 3.5 | 109347 | 0.3696 | 162566976 |
| 0.3642 | 4.0 | 124968 | 0.3616 | 185790304 |
| 0.2647 | 4.5 | 140589 | 0.3546 | 208997696 |
| 0.461 | 5.0 | 156210 | 0.3507 | 232243968 |
| 0.2471 | 5.5 | 171831 | 0.3520 | 255458112 |
| 0.389 | 6.0 | 187452 | 0.3460 | 278686752 |
| 0.3541 | 6.5 | 203073 | 0.3478 | 301925344 |
| 0.2941 | 7.0 | 218694 | 0.3463 | 325137568 |
| 0.4292 | 7.5 | 234315 | 0.3475 | 348361920 |
| 0.3843 | 8.0 | 249936 | 0.3474 | 371592704 |
| 0.2701 | 8.5 | 265557 | 0.3475 | 394838368 |
| 0.3541 | 9.0 | 281178 | 0.3473 | 418033696 |
| 0.5188 | 9.5 | 296799 | 0.3472 | 441282560 |
| 0.3273 | 10.0 | 312420 | 0.3472 | 464483424 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
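A minimal inference sketch (assuming this repo hosts the prompt-tuning adapter and that you have access to the gated base model listed above):

```python
# Minimal sketch: apply the prompt-tuning adapter on top of the base model.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "rbelanec/train_record_1754594729")
model.eval()
```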
|
mradermacher/qwen3-4B-LinkedArt-GGUF
|
mradermacher
| 2025-08-09T12:14:44Z | 550 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:yale-cultural-heritage/qwen3-4B-LinkedArt",
"base_model:quantized:yale-cultural-heritage/qwen3-4B-LinkedArt",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-09T12:00:41Z |
---
base_model: yale-cultural-heritage/qwen3-4B-LinkedArt
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/yale-cultural-heritage/qwen3-4B-LinkedArt
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#qwen3-4B-LinkedArt-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/qwen3-4B-LinkedArt-GGUF/resolve/main/qwen3-4B-LinkedArt.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/qwen3-4B-LinkedArt-GGUF/resolve/main/qwen3-4B-LinkedArt.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/qwen3-4B-LinkedArt-GGUF/resolve/main/qwen3-4B-LinkedArt.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/qwen3-4B-LinkedArt-GGUF/resolve/main/qwen3-4B-LinkedArt.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/qwen3-4B-LinkedArt-GGUF/resolve/main/qwen3-4B-LinkedArt.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/qwen3-4B-LinkedArt-GGUF/resolve/main/qwen3-4B-LinkedArt.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/qwen3-4B-LinkedArt-GGUF/resolve/main/qwen3-4B-LinkedArt.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/qwen3-4B-LinkedArt-GGUF/resolve/main/qwen3-4B-LinkedArt.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/qwen3-4B-LinkedArt-GGUF/resolve/main/qwen3-4B-LinkedArt.Q5_K_M.gguf) | Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/qwen3-4B-LinkedArt-GGUF/resolve/main/qwen3-4B-LinkedArt.Q6_K.gguf) | Q6_K | 3.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/qwen3-4B-LinkedArt-GGUF/resolve/main/qwen3-4B-LinkedArt.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/qwen3-4B-LinkedArt-GGUF/resolve/main/qwen3-4B-LinkedArt.f16.gguf) | f16 | 8.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
kznmp3/blockassist-bc-lively_raging_hippo_1754741016
|
kznmp3
| 2025-08-09T12:08:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lively raging hippo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T12:03:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lively raging hippo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
giladgd/gpt-oss-120b-GGUF
|
giladgd
| 2025-08-09T12:03:07Z | 64 | 0 |
node-llama-cpp
|
[
"node-llama-cpp",
"gguf",
"llama.cpp",
"text-generation",
"base_model:openai/gpt-oss-120b",
"base_model:quantized:openai/gpt-oss-120b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-09T10:45:19Z |
---
license: apache-2.0
pipeline_tag: text-generation
library_name: node-llama-cpp
tags:
- node-llama-cpp
- llama.cpp
base_model: openai/gpt-oss-120b
quantized_by: giladgd
---
# gpt-oss-120b-GGUF
> [!NOTE]
> Read [our guide](https://node-llama-cpp.withcat.ai/blog/v3.12-gpt-oss) on using `gpt-oss` to learn how to adjust its responses
<p align="center">
<img alt="gpt-oss-120b" src="https://raw.githubusercontent.com/openai/gpt-oss/main/docs/gpt-oss-120b.svg">
</p>
# Highlights
* **Permissive Apache 2.0 license:** Build freely without copyleft restrictions or patent risk—ideal for experimentation, customization, and commercial deployment.
* **Configurable reasoning effort:** Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs.
* **Full chain-of-thought:** Gain complete access to the model’s reasoning process, facilitating easier debugging and increased trust in outputs. It’s not intended to be shown to end users.
* **Fine-tunable:** Fully customize models to your specific use case through parameter fine-tuning.
* **Agentic capabilities:** Use the models’ native capabilities for function calling, [web browsing](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#browser), [Python code execution](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#python), and Structured Outputs.
* **Native MXFP4 quantization:** The models are trained with native MXFP4 precision for the MoE layer, making `gpt-oss-120b` run on a single 80GB GPU (like NVIDIA H100 or AMD MI300X) and the `gpt-oss-20b` model run within 16GB of memory.
> [!NOTE]
> Refer to the [original model card](https://huggingface.co/openai/gpt-oss-120b) for more details on the model
# Quants
| Link | [URI](https://node-llama-cpp.withcat.ai/cli/pull) | Size |
|:-----|:--------------------------------------------------|-----:|
| [GGUF](https://huggingface.co/giladgd/gpt-oss-120b-GGUF/resolve/main/gpt-oss-120b.MXFP4.gguf) | `hf:giladgd/gpt-oss-120b-GGUF/gpt-oss-120b.MXFP4-00001-of-00002.gguf` | 63.4GB |
| [GGUF](https://huggingface.co/giladgd/gpt-oss-120b-GGUF/resolve/main/gpt-oss-120b.F16.gguf) | `hf:giladgd/gpt-oss-120b-GGUF/gpt-oss-120b.F16-00001-of-00002.gguf` | 65.4GB |
> [!TIP]
> Download a quant using `node-llama-cpp` ([more info](https://node-llama-cpp.withcat.ai/cli/pull)):
> ```bash
> npx -y node-llama-cpp pull <URI>
> ```
# Usage
## Use with [`node-llama-cpp`](https://node-llama-cpp.withcat.ai) (recommended)
### CLI
Chat with the model:
```bash
npx -y node-llama-cpp chat hf:giladgd/gpt-oss-120b-GGUF/gpt-oss-120b.MXFP4-00001-of-00002.gguf
```
> [!NOTE]
> Ensure that you have `node.js` installed first:
> ```bash
> brew install nodejs
> ```
### Code
Use it in your node.js project:
```bash
npm install node-llama-cpp
```
```typescript
import {getLlama, resolveModelFile, LlamaChatSession} from "node-llama-cpp";
const modelUri = "hf:giladgd/gpt-oss-120b-GGUF/gpt-oss-120b.MXFP4-00001-of-00002.gguf";
const llama = await getLlama();
const model = await llama.loadModel({
modelPath: await resolveModelFile(modelUri)
});
const context = await model.createContext();
const session = new LlamaChatSession({
contextSequence: context.getSequence()
});
const q1 = "Hi there, how are you?";
console.log("User: " + q1);
const a1 = await session.prompt(q1);
console.log("AI: " + a1);
```
> [!TIP]
> Read the [getting started guide](https://node-llama-cpp.withcat.ai/guide/) to quickly scaffold a new `node-llama-cpp` project
#### Customize inference options
Set [Harmony](https://cookbook.openai.com/articles/openai-harmony) options using [`HarmonyChatWrapper`](https://node-llama-cpp.withcat.ai/api/classes/HarmonyChatWrapper):
```typescript
import {
getLlama, resolveModelFile, LlamaChatSession, HarmonyChatWrapper,
defineChatSessionFunction
} from "node-llama-cpp";
const modelUri = "hf:giladgd/gpt-oss-120b-GGUF/gpt-oss-120b.MXFP4-00001-of-00002.gguf";
const llama = await getLlama();
const model = await llama.loadModel({
modelPath: await resolveModelFile(modelUri)
});
const context = await model.createContext();
const session = new LlamaChatSession({
contextSequence: context.getSequence(),
chatWrapper: new HarmonyChatWrapper({
modelIdentity: "You are ChatGPT, a large language model trained by OpenAI.",
reasoningEffort: "high"
})
});
const functions = {
getCurrentWeather: defineChatSessionFunction({
description: "Gets the current weather in the provided location.",
params: {
type: "object",
properties: {
location: {
type: "string",
description: "The city and state, e.g. San Francisco, CA"
},
format: {
enum: ["celsius", "fahrenheit"]
}
}
},
handler({location, format}) {
console.log(`Getting current weather for "${location}" in ${format}`);
return {
// simulate a weather API response
temperature: format === "celsius" ? 20 : 68,
format
};
}
})
};
const q1 = "What is the weather like in SF?";
console.log("User: " + q1);
const a1 = await session.prompt(q1, {functions});
console.log("AI: " + a1);
```
## Use with [llama.cpp](https://github.com/ggml-org/llama.cpp)
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
### CLI
```bash
llama-cli --hf-repo giladgd/gpt-oss-120b-GGUF --hf-file gpt-oss-120b.MXFP4-00001-of-00002.gguf -p "The meaning to life and the universe is"
```
### Server
```bash
llama-server --hf-repo giladgd/gpt-oss-120b-GGUF --hf-file gpt-oss-120b.MXFP4-00001-of-00002.gguf -c 2048
```
|
dimireme/blockassist-bc-vigilant_leaping_hare_1754738842
|
dimireme
| 2025-08-09T12:03:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant leaping hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T12:02:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant leaping hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Qwen2-7B-S2R-PRL-GGUF
|
mradermacher
| 2025-08-09T12:00:06Z | 1,101 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:S2R-data/Qwen2-7B-S2R-PRL",
"base_model:quantized:S2R-data/Qwen2-7B-S2R-PRL",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-09T11:30:10Z |
---
base_model: S2R-data/Qwen2-7B-S2R-PRL
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/S2R-data/Qwen2-7B-S2R-PRL
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen2-7B-S2R-PRL-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-S2R-PRL-GGUF/resolve/main/Qwen2-7B-S2R-PRL.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-S2R-PRL-GGUF/resolve/main/Qwen2-7B-S2R-PRL.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-S2R-PRL-GGUF/resolve/main/Qwen2-7B-S2R-PRL.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-S2R-PRL-GGUF/resolve/main/Qwen2-7B-S2R-PRL.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-S2R-PRL-GGUF/resolve/main/Qwen2-7B-S2R-PRL.IQ4_XS.gguf) | IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-S2R-PRL-GGUF/resolve/main/Qwen2-7B-S2R-PRL.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-S2R-PRL-GGUF/resolve/main/Qwen2-7B-S2R-PRL.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-S2R-PRL-GGUF/resolve/main/Qwen2-7B-S2R-PRL.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-S2R-PRL-GGUF/resolve/main/Qwen2-7B-S2R-PRL.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-S2R-PRL-GGUF/resolve/main/Qwen2-7B-S2R-PRL.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-S2R-PRL-GGUF/resolve/main/Qwen2-7B-S2R-PRL.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-S2R-PRL-GGUF/resolve/main/Qwen2-7B-S2R-PRL.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ncgc/ethics_statichh-pythia-1.4b-sft-bf16_bottom100_lr0.024
|
ncgc
| 2025-08-09T11:59:06Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-09T11:53:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
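In the meantime, a minimal text-generation sketch (assuming the checkpoint loads as a standard GPT-NeoX causal LM; the prompt format is a guess based on the HH-style repo name):

```python
# Minimal sketch, assuming a standard GPT-NeoX causal-LM checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ncgc/ethics_statichh-pythia-1.4b-sft-bf16_bottom100_lr0.024"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)

prompt = "Human: How do I politely decline an invitation?\n\nAssistant:"  # assumed HH-style format
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```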
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
andr0m4da/blockassist-bc-grazing_hunting_boar_1754740564
|
andr0m4da
| 2025-08-09T11:57:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"grazing hunting boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T11:57:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- grazing hunting boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
asynctales/Qwen2.5-Coder-3B-Instruct-Q6_K-GGUF
|
asynctales
| 2025-08-09T11:56:09Z | 96 | 0 |
transformers
|
[
"transformers",
"gguf",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-Coder-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-3B-Instruct",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-09T10:45:11Z |
---
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct/blob/main/LICENSE
language:
- en
base_model: Qwen/Qwen2.5-Coder-3B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
- llama-cpp
- gguf-my-repo
---
# asynctales/Qwen2.5-Coder-3B-Instruct-Q6_K-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-Coder-3B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct) for more details on the model.
An example Windows `llama-server` launch (adjust the paths, port, and temperature to taste):
```bat
path\to\llama-server.exe -m path\to\qwen2.5-coder-3b-instruct-q6_k.gguf -ngl 99 -fa -ub 1024 -b 1024 --ctx-size 0 --cache-reuse 256 -np 2 --port [port] --temp 0.5
```
|
asynctales/DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M-GGUF
|
asynctales
| 2025-08-09T11:54:13Z | 191 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-09T11:31:04Z |
---
license: mit
library_name: transformers
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
tags:
- llama-cpp
- gguf-my-repo
---
# asynctales/DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M-GGUF
This model was converted to GGUF format from [`deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) for more details on the model.
```bat
path\to\llama-server.exe -m path\to\deepseek-r1-distill-qwen-1.5b-q4_k_m.gguf --jinja --reasoning-format deepseek -ngl 99 -fa -sm row --temp 0.6 --top-k 20 --top-p 0.95 --min-p 0 --presence-penalty 1.5 -c 40960 -n 32768 --no-context-shift --port [port]
```
|
warnerearlene6162/blockassist-bc-keen_extinct_pig_1754736677
|
warnerearlene6162
| 2025-08-09T11:51:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen extinct pig",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T11:50:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen extinct pig
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
asynctales/Qwen2.5-Coder-0.5B-Instruct-Q4_K_M-GGUF
|
asynctales
| 2025-08-09T11:50:10Z | 101 | 0 |
transformers
|
[
"transformers",
"gguf",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-Coder-0.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-0.5B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-09T11:36:16Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B-Instruct/blob/main/LICENSE
language:
- en
base_model: Qwen/Qwen2.5-Coder-0.5B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
- llama-cpp
- gguf-my-repo
---
# asynctales/Qwen2.5-Coder-0.5B-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-Coder-0.5B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B-Instruct) for more details on the model.
```bat
path\to\llama-server.exe -m path\to\qwen2.5-coder-0.5b-instruct-q4_k_m.gguf -ngl 99 -fa -ub 1024 -b 1024 --ctx-size 0 --cache-reuse 256 -np 2 --port [port]
```
|
PMon/mib-circuits-warmstart-edge-pruning-z-attribution
|
PMon
| 2025-08-09T11:50:02Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-09T11:32:50Z |
---
license: apache-2.0
---
|
skyxyz/blockassist-bc-clawed_swift_ibis_1754739946
|
skyxyz
| 2025-08-09T11:47:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"clawed swift ibis",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T11:46:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- clawed swift ibis
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
prasanthntu/Qwen2.5-3B-Reasoning-GGUF
|
prasanthntu
| 2025-08-09T11:46:56Z | 39 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-09T11:46:23Z |
---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** prasanthntu
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/gpt-oss-20b-uncensored-bf16-GGUF
|
mradermacher
| 2025-08-09T11:45:37Z | 3,977 | 3 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:huizimao/gpt-oss-20b-uncensored-bf16",
"base_model:quantized:huizimao/gpt-oss-20b-uncensored-bf16",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-09T10:06:03Z |
---
base_model: huizimao/gpt-oss-20b-uncensored-bf16
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/huizimao/gpt-oss-20b-uncensored-bf16
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#gpt-oss-20b-uncensored-bf16-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/gpt-oss-20b-uncensored-bf16-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-uncensored-bf16-GGUF/resolve/main/gpt-oss-20b-uncensored-bf16.Q3_K_S.gguf) | Q3_K_S | 12.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-uncensored-bf16-GGUF/resolve/main/gpt-oss-20b-uncensored-bf16.Q2_K.gguf) | Q2_K | 12.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-uncensored-bf16-GGUF/resolve/main/gpt-oss-20b-uncensored-bf16.IQ4_XS.gguf) | IQ4_XS | 12.3 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-uncensored-bf16-GGUF/resolve/main/gpt-oss-20b-uncensored-bf16.Q3_K_M.gguf) | Q3_K_M | 13.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-uncensored-bf16-GGUF/resolve/main/gpt-oss-20b-uncensored-bf16.Q3_K_L.gguf) | Q3_K_L | 13.4 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-uncensored-bf16-GGUF/resolve/main/gpt-oss-20b-uncensored-bf16.Q4_K_S.gguf) | Q4_K_S | 14.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-uncensored-bf16-GGUF/resolve/main/gpt-oss-20b-uncensored-bf16.Q4_K_M.gguf) | Q4_K_M | 15.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-uncensored-bf16-GGUF/resolve/main/gpt-oss-20b-uncensored-bf16.Q5_K_S.gguf) | Q5_K_S | 16.0 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-uncensored-bf16-GGUF/resolve/main/gpt-oss-20b-uncensored-bf16.Q5_K_M.gguf) | Q5_K_M | 17.0 | |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-uncensored-bf16-GGUF/resolve/main/gpt-oss-20b-uncensored-bf16.Q6_K.gguf) | Q6_K | 22.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/gpt-oss-20b-uncensored-bf16-GGUF/resolve/main/gpt-oss-20b-uncensored-bf16.Q8_0.gguf) | Q8_0 | 22.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
tcpipuk/rednote-hilab-dots.ocr-GGUF
|
tcpipuk
| 2025-08-09T11:45:18Z | 3,996 | 1 |
gguf
|
[
"gguf",
"document-parse",
"f16",
"formula",
"image-to-text",
"layout",
"ocr",
"table",
"base_model:rednote-hilab/dots.ocr",
"base_model:quantized:rednote-hilab/dots.ocr",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-09T08:59:00Z |
---
license: mit
library_name: gguf
base_model: rednote-hilab/dots.ocr
tags:
- document-parse
- f16
- formula
- gguf
- image-to-text
- layout
- ocr
- table
---
# rednote-hilab-dots.ocr-GGUF
GGUF quantisations of [rednote-hilab/dots.ocr](https://huggingface.co/rednote-hilab/dots.ocr) using
[Bartowski](https://huggingface.co/bartowski)'s method. Created with [llm-gguf-tools](https://git.tomfos.tr/tom/llm-gguf-tools)
which replicates Bartowski's quantisation profiles.
| Variant | Configuration | Status |
|---|---|---|
| **Q3_K_M** | Q6:E Q5:Av Q4:Fd Q3:O/Aqk/Fgu | ⚠️ Skipped |
| **Q3_K_L** | Q6:E Q5:Av Q4:O/Fd Q3:Aqk/Fgu | ❌ Failed |
| **Q3_K_XL** | Q5:Av Q4:E/O/Fd Q3:Aqk/Fgu | ❌ Failed |
| **Q4_0** | Q4_0 all layers | 🔄 Processing... |
| **Q4_K_M** | Q6:E/Av/Fd Q4:O/Aqk/Fgu | ❌ Failed |
| **Q4_K_L** | Q6:Av/Fd Q4:E/O/Aqk/Fgu | ❌ Failed |
| **Q5_0** | Q5_0 all layers | ⏳ Queued |
| **Q5_K_M** | Q6:E/Av/Fd Q5:O/Aqk/Fgu | ❌ Failed |
| **Q5_K_L** | Q6:Av/Fd Q5:O/Aqk/Fgu Q4:E | ❌ Failed |
| **Q6_0** | Q6_0 all layers | ⏳ Queued |
| **Q6_K** | Q8:E/Av Q6:O/Aqk/F | ❌ Failed |
| **Q6_K_L** | Q6:E/A/F Q4:O | ❌ Failed |
| **Q8_0** | Q8_0 all layers | ⏳ Queued |
| **Q8_K** | Q8_K all layers | ❌ Failed |
| **F16** | Full precision GGUF (F16/F32 mixed) | ⬆️ Uploading... (6.6GB) |
**Key:** `E` = Embeddings, `O` = Output, `A` = Attention, `F` = FFN
⚠️ **Note:** This model uses the `dotsocr` architecture, which is not yet supported by llama.cpp for quantisation. If quantisations failed, this is why - llama.cpp cannot quantise architectures it doesn't recognise. The F16 GGUF file is provided as a full-precision fallback (requires ~2x model size in VRAM). For `dotsocr` support, check with your inference software or wait for llama.cpp updates.
See [Bartowski Analysis](https://git.tomfos.tr/tom/llm-gguf-tools/src/branch/main/docs/bartowski_analysis.md) for detailed quantisation strategies and [Documentation](https://git.tomfos.tr/tom/llm-gguf-tools/src/branch/main/docs/) for more on the tools and methods I use.
## Original Model Card
---
<div align="center">
<p align="center">
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/logo.png" width="300"/>
<p>
<h1 align="center">
dots.ocr: Multilingual Document Layout Parsing in a Single Vision-Language Model
</h1>
[](https://github.com/rednote-hilab/dots.ocr/blob/master/assets/blog.md)
[](https://huggingface.co/rednote-hilab/dots.ocr)
<div align="center">
<a href="https://dotsocr.xiaohongshu.com" target="_blank" rel="noopener noreferrer"><strong>🖥️ Live Demo</strong></a> |
<a href="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/wechat.jpg" target="_blank" rel="noopener noreferrer"><strong>💬 WeChat</strong></a> |
<a href="https://www.xiaohongshu.com/user/profile/683ffe42000000001d021a4c" target="_blank" rel="noopener noreferrer"><strong>📕 rednote</strong></a>
</div>
</div>
## Introduction
**dots.ocr** is a powerful, multilingual document parser that unifies layout detection and content recognition within a single vision-language model while maintaining good reading order. Despite its compact 1.7B-parameter LLM foundation, it achieves state-of-the-art (SOTA) performance.
1. **Powerful Performance:** **dots.ocr** achieves SOTA performance for text, tables, and reading order on [OmniDocBench](https://github.com/opendatalab/OmniDocBench), while delivering formula recognition results comparable to much larger models like Doubao-1.5 and gemini2.5-pro.
2. **Multilingual Support:** **dots.ocr** demonstrates robust parsing capabilities for low-resource languages, achieving decisive advantages across both layout detection and content recognition on our in-house multilingual document benchmark.
3. **Unified and Simple Architecture:** By leveraging a single vision-language model, **dots.ocr** offers a significantly more streamlined architecture than conventional methods that rely on complex, multi-model pipelines. Switching between tasks is accomplished simply by altering the input prompt, proving that a VLM can achieve competitive detection results compared to traditional detection models like DocLayout-YOLO.
4. **Efficient and Fast Performance:** Built upon a compact 1.7B LLM, **dots.ocr** provides faster inference speeds than many other high-performing models based on larger foundations.
### Performance Comparison: dots.ocr vs. Competing Models
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/chart.png" border="0" />
> **Notes:**
>
> - The EN and ZH metrics are the end-to-end evaluation results on [OmniDocBench](https://github.com/opendatalab/OmniDocBench), and the Multilingual metric is the end-to-end evaluation result on dots.ocr-bench.
## News
- ```2025.07.30``` 🚀 We release [dots.ocr](https://github.com/rednote-hilab/dots.ocr), a multilingual document parsing model based on a 1.7B LLM, with SOTA performance.
## Benchmark Results
### 1. OmniDocBench
#### The end-to-end evaluation results of different tasks
<table>
<thead>
<tr>
<th rowspan="2"><strong>Model<br>Type</strong></th>
<th rowspan="2"><strong>Methods</strong></th>
<th colspan="2"><strong>Overall<sup>Edit</sup>↓</strong></th>
<th colspan="2"><strong>Text<sup>Edit</sup>↓</strong></th>
<th colspan="2"><strong>Formula<sup>Edit</sup>↓</strong></th>
<th colspan="2"><strong>Table<sup>TEDS</sup>↑</strong></th>
<th colspan="2"><strong>Table<sup>Edit</sup>↓</strong></th>
<th colspan="2"><strong>Read Order<sup>Edit</sup>↓</strong></th>
</tr>
<tr>
<th><em>EN</em></th>
<th><em>ZH</em></th>
<th><em>EN</em></th>
<th><em>ZH</em></th>
<th><em>EN</em></th>
<th><em>ZH</em></th>
<th><em>EN</em></th>
<th><em>ZH</em></th>
<th><em>EN</em></th>
<th><em>ZH</em></th>
<th><em>EN</em></th>
<th><em>ZH</em></th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="8"><strong>Pipeline<br>Tools</strong></td>
<td>MinerU</td>
<td>0.150</td>
<td>0.357</td>
<td>0.061</td>
<td>0.215</td>
<td>0.278</td>
<td>0.577</td>
<td>78.6</td>
<td>62.1</td>
<td>0.180</td>
<td>0.344</td>
<td>0.079</td>
<td>0.292</td>
</tr>
<tr>
<td>Marker</td>
<td>0.336</td>
<td>0.556</td>
<td>0.080</td>
<td>0.315</td>
<td>0.530</td>
<td>0.883</td>
<td>67.6</td>
<td>49.2</td>
<td>0.619</td>
<td>0.685</td>
<td>0.114</td>
<td>0.340</td>
</tr>
<tr>
<td>Mathpix</td>
<td>0.191</td>
<td>0.365</td>
<td>0.105</td>
<td>0.384</td>
<td>0.306</td>
<td>0.454</td>
<td>77.0</td>
<td>67.1</td>
<td>0.243</td>
<td>0.320</td>
<td>0.108</td>
<td>0.304</td>
</tr>
<tr>
<td>Docling</td>
<td>0.589</td>
<td>0.909</td>
<td>0.416</td>
<td>0.987</td>
<td>0.999</td>
<td>1</td>
<td>61.3</td>
<td>25.0</td>
<td>0.627</td>
<td>0.810</td>
<td>0.313</td>
<td>0.837</td>
</tr>
<tr>
<td>Pix2Text</td>
<td>0.320</td>
<td>0.528</td>
<td>0.138</td>
<td>0.356</td>
<td>0.276</td>
<td>0.611</td>
<td>73.6</td>
<td>66.2</td>
<td>0.584</td>
<td>0.645</td>
<td>0.281</td>
<td>0.499</td>
</tr>
<tr>
<td>Unstructured</td>
<td>0.586</td>
<td>0.716</td>
<td>0.198</td>
<td>0.481</td>
<td>0.999</td>
<td>1</td>
<td>0</td>
<td>0.06</td>
<td>1</td>
<td>0.998</td>
<td>0.145</td>
<td>0.387</td>
</tr>
<tr>
<td>OpenParse</td>
<td>0.646</td>
<td>0.814</td>
<td>0.681</td>
<td>0.974</td>
<td>0.996</td>
<td>1</td>
<td>64.8</td>
<td>27.5</td>
<td>0.284</td>
<td>0.639</td>
<td>0.595</td>
<td>0.641</td>
</tr>
<tr>
<td>PPStruct-V3</td>
<td>0.145</td>
<td>0.206</td>
<td>0.058</td>
<td>0.088</td>
<td>0.295</td>
<td>0.535</td>
<td>-</td>
<td>-</td>
<td>0.159</td>
<td>0.109</td>
<td>0.069</td>
<td>0.091</td>
</tr>
<tr>
<td rowspan="9"><strong>Expert<br>VLMs</strong></td>
<td>GOT-OCR</td>
<td>0.287</td>
<td>0.411</td>
<td>0.189</td>
<td>0.315</td>
<td>0.360</td>
<td>0.528</td>
<td>53.2</td>
<td>47.2</td>
<td>0.459</td>
<td>0.520</td>
<td>0.141</td>
<td>0.280</td>
</tr>
<tr>
<td>Nougat</td>
<td>0.452</td>
<td>0.973</td>
<td>0.365</td>
<td>0.998</td>
<td>0.488</td>
<td>0.941</td>
<td>39.9</td>
<td>0</td>
<td>0.572</td>
<td>1.000</td>
<td>0.382</td>
<td>0.954</td>
</tr>
<tr>
<td>Mistral OCR</td>
<td>0.268</td>
<td>0.439</td>
<td>0.072</td>
<td>0.325</td>
<td>0.318</td>
<td>0.495</td>
<td>75.8</td>
<td>63.6</td>
<td>0.600</td>
<td>0.650</td>
<td>0.083</td>
<td>0.284</td>
</tr>
<tr>
<td>OLMOCR-sglang</td>
<td>0.326</td>
<td>0.469</td>
<td>0.097</td>
<td>0.293</td>
<td>0.455</td>
<td>0.655</td>
<td>68.1</td>
<td>61.3</td>
<td>0.608</td>
<td>0.652</td>
<td>0.145</td>
<td>0.277</td>
</tr>
<tr>
<td>SmolDocling-256M</td>
<td>0.493</td>
<td>0.816</td>
<td>0.262</td>
<td>0.838</td>
<td>0.753</td>
<td>0.997</td>
<td>44.9</td>
<td>16.5</td>
<td>0.729</td>
<td>0.907</td>
<td>0.227</td>
<td>0.522</td>
</tr>
<tr>
<td>Dolphin</td>
<td>0.206</td>
<td>0.306</td>
<td>0.107</td>
<td>0.197</td>
<td>0.447</td>
<td>0.580</td>
<td>77.3</td>
<td>67.2</td>
<td>0.180</td>
<td>0.285</td>
<td>0.091</td>
<td>0.162</td>
</tr>
<tr>
<td>MinerU 2</td>
<td>0.139</td>
<td>0.240</td>
<td>0.047</td>
<td>0.109</td>
<td>0.297</td>
<td>0.536</td>
<td>82.5</td>
<td>79.0</td>
<td>0.141</td>
<td>0.195</td>
<td>0.069</td>
<td>0.118</td>
</tr>
<tr>
<td>OCRFlux</td>
<td>0.195</td>
<td>0.281</td>
<td>0.064</td>
<td>0.183</td>
<td>0.379</td>
<td>0.613</td>
<td>71.6</td>
<td>81.3</td>
<td>0.253</td>
<td>0.139</td>
<td>0.086</td>
<td>0.187</td>
</tr>
<tr>
<td>MonkeyOCR-pro-3B</td>
<td>0.138</td>
<td>0.206</td>
<td>0.067</td>
<td>0.107</td>
<td><strong>0.246</strong></td>
<td>0.421</td>
<td>81.5</td>
<td>87.5</td>
<td>0.139</td>
<td>0.111</td>
<td>0.100</td>
<td>0.185</td>
</tr>
<tr>
<td rowspan="5"><strong>General<br>VLMs</strong></td>
<td>GPT4o</td>
<td>0.233</td>
<td>0.399</td>
<td>0.144</td>
<td>0.409</td>
<td>0.425</td>
<td>0.606</td>
<td>72.0</td>
<td>62.9</td>
<td>0.234</td>
<td>0.329</td>
<td>0.128</td>
<td>0.251</td>
</tr>
<tr>
<td>Qwen2-VL-72B</td>
<td>0.252</td>
<td>0.327</td>
<td>0.096</td>
<td>0.218</td>
<td>0.404</td>
<td>0.487</td>
<td>76.8</td>
<td>76.4</td>
<td>0.387</td>
<td>0.408</td>
<td>0.119</td>
<td>0.193</td>
</tr>
<tr>
<td>Qwen2.5-VL-72B</td>
<td>0.214</td>
<td>0.261</td>
<td>0.092</td>
<td>0.18</td>
<td>0.315</td>
<td>0.434</td>
<td>82.9</td>
<td>83.9</td>
<td>0.341</td>
<td>0.262</td>
<td>0.106</td>
<td>0.168</td>
</tr>
<tr>
<td>Gemini2.5-Pro</td>
<td>0.148</td>
<td>0.212</td>
<td>0.055</td>
<td>0.168</td>
<td>0.356</td>
<td>0.439</td>
<td>85.8</td>
<td>86.4</td>
<td>0.13</td>
<td>0.119</td>
<td>0.049</td>
<td>0.121</td>
</tr>
<tr>
<td>doubao-1-5-thinking-vision-pro-250428</td>
<td>0.140</td>
<td>0.162</td>
<td>0.043</td>
<td>0.085</td>
<td>0.295</td>
<td><strong>0.384</strong></td>
<td>83.3</td>
<td><strong>89.3</strong></td>
<td>0.165</td>
<td><strong>0.085</strong></td>
<td>0.058</td>
<td>0.094</td>
</tr>
<tr>
<td rowspan="1"><strong>Expert VLMs</strong></td>
<td><strong>dots.ocr</strong></td>
<td><strong>0.125</strong></td>
<td><strong>0.160</strong></td>
<td><strong>0.032</strong></td>
<td><strong>0.066</strong></td>
<td>0.329</td>
<td>0.416</td>
<td><strong>88.6</strong></td>
<td>89.0</td>
<td><strong>0.099</strong></td>
<td>0.092</td>
<td><strong>0.040</strong></td>
<td><strong>0.067</strong></td>
</tr>
</tbody>
</table>
#### The end-to-end text recognition performance across 9 PDF page types
<table>
<thead>
<tr>
<th><strong>Model<br>Type</strong></th>
<th><strong>Models</strong></th>
<th><strong>Book</strong></th>
<th><strong>Slides</strong></th>
<th><strong>Financial<br>Report</strong></th>
<th><strong>Textbook</strong></th>
<th><strong>Exam<br>Paper</strong></th>
<th><strong>Magazine</strong></th>
<th><strong>Academic<br>Papers</strong></th>
<th><strong>Notes</strong></th>
<th><strong>Newspaper</strong></th>
<th><strong>Overall</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="3"><strong>Pipeline<br>Tools</strong></td>
<td>MinerU</td>
<td>0.055</td>
<td>0.124</td>
<td><u>0.033</u></td>
<td>0.102</td>
<td>0.159</td>
<td><strong>0.072</strong></td>
<td><u>0.025</u></td>
<td>0.984</td>
<td>0.171</td>
<td>0.206</td>
</tr>
<tr>
<td>Marker</td>
<td>0.074</td>
<td>0.340</td>
<td>0.089</td>
<td>0.319</td>
<td>0.452</td>
<td>0.153</td>
<td>0.059</td>
<td>0.651</td>
<td>0.192</td>
<td>0.274</td>
</tr>
<tr>
<td>Mathpix</td>
<td>0.131</td>
<td>0.220</td>
<td>0.202</td>
<td>0.216</td>
<td>0.278</td>
<td>0.147</td>
<td>0.091</td>
<td>0.634</td>
<td>0.690</td>
<td>0.300</td>
</tr>
<tr>
<td rowspan="5"><strong>Expert<br>VLMs</strong></td>
<td>GOT-OCR</td>
<td>0.111</td>
<td>0.222</td>
<td>0.067</td>
<td>0.132</td>
<td>0.204</td>
<td>0.198</td>
<td>0.179</td>
<td>0.388</td>
<td>0.771</td>
<td>0.267</td>
</tr>
<tr>
<td>Nougat</td>
<td>0.734</td>
<td>0.958</td>
<td>1.000</td>
<td>0.820</td>
<td>0.930</td>
<td>0.830</td>
<td>0.214</td>
<td>0.991</td>
<td>0.871</td>
<td>0.806</td>
</tr>
<tr>
<td>Dolphin</td>
<td>0.091</td>
<td>0.131</td>
<td>0.057</td>
<td>0.146</td>
<td>0.231</td>
<td>0.121</td>
<td>0.074</td>
<td>0.363</td>
<td>0.307</td>
<td>0.177</td>
</tr>
<tr>
<td>OCRFlux</td>
<td>0.068</td>
<td>0.125</td>
<td>0.092</td>
<td>0.102</td>
<td>0.119</td>
<td>0.083</td>
<td>0.047</td>
<td>0.223</td>
<td>0.536</td>
<td>0.149</td>
</tr>
<tr>
<td>MonkeyOCR-pro-3B</td>
<td>0.084</td>
<td>0.129</td>
<td>0.060</td>
<td>0.090</td>
<td>0.107</td>
<td>0.073</td>
<td>0.050</td>
<td>0.171</td>
<td>0.107</td>
<td>0.100</td>
</tr>
<tr>
<td rowspan="4"><strong>General<br>VLMs</strong></td>
<td>GPT4o</td>
<td>0.157</td>
<td>0.163</td>
<td>0.348</td>
<td>0.187</td>
<td>0.281</td>
<td>0.173</td>
<td>0.146</td>
<td>0.607</td>
<td>0.751</td>
<td>0.316</td>
</tr>
<tr>
<td>Qwen2.5-VL-7B</td>
<td>0.148</td>
<td>0.053</td>
<td>0.111</td>
<td>0.137</td>
<td>0.189</td>
<td>0.117</td>
<td>0.134</td>
<td>0.204</td>
<td>0.706</td>
<td>0.205</td>
</tr>
<tr>
<td>InternVL3-8B</td>
<td>0.163</td>
<td>0.056</td>
<td>0.107</td>
<td>0.109</td>
<td>0.129</td>
<td>0.100</td>
<td>0.159</td>
<td>0.150</td>
<td>0.681</td>
<td>0.188</td>
</tr>
<tr>
<td>doubao-1-5-thinking-vision-pro-250428</td>
<td>0.048</td>
<td>0.048</td>
<td>0.024</td>
<td><strong>0.062</strong></td>
<td>0.085</td>
<td>0.051</td>
<td>0.039</td>
<td><strong>0.096</strong></td>
<td>0.181</td>
<td>0.073</td>
</tr>
<tr>
<td rowspan="1"><strong>Expert VLMs</strong></td>
<td><strong>dots.ocr</strong></td>
<td><strong>0.031</strong></td>
<td><strong>0.047</strong></td>
<td><strong>0.011</strong></td>
<td>0.082</td>
<td><strong>0.079</strong></td>
<td><strong>0.028</strong></td>
<td><strong>0.029</strong></td>
<td>0.109</td>
<td><strong>0.056</strong></td>
<td><strong>0.055</strong></td>
</tr>
</tbody>
</table>
> **Notes:**
>
> - The metrics are from [MonkeyOCR](https://github.com/Yuliang-Liu/MonkeyOCR), [OmniDocBench](https://github.com/opendatalab/OmniDocBench), and our own internal evaluations.
> - We delete the Page-header and Page-footer cells in the result markdown.
> - We use the tikz_preprocess pipeline to upsample the images to 200 DPI.
### 2. **dots.ocr-bench**
This is an in-house benchmark containing 1,493 PDF images covering 100 languages.
#### The end-to-end evaluation results of different tasks
<table>
<thead>
<tr>
<th rowspan="1"><strong>Methods</strong></th>
<th colspan="1"><strong>Overall<sup>Edit</sup>↓</strong></th>
<th colspan="1"><strong>Text<sup>Edit</sup>↓</strong></th>
<th colspan="1"><strong>Formula<sup>Edit</sup>↓</strong></th>
<th colspan="1"><strong>Table<sup>TEDS</sup>↑</strong></th>
<th colspan="1"><strong>Table<sup>Edit</sup>↓</strong></th>
<th colspan="1"><strong>Read Order<sup>Edit</sup>↓</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>MonkeyOCR-3B</td>
<td>0.483</td>
<td>0.445</td>
<td>0.627</td>
<td>50.93</td>
<td>0.452</td>
<td>0.409</td>
</tr>
<tr>
<td>doubao-1-5-thinking-vision-pro-250428</td>
<td>0.291</td>
<td>0.226</td>
<td>0.440</td>
<td>71.2</td>
<td>0.260</td>
<td>0.238</td>
</tr>
<tr>
<td>doubao-1-6</td>
<td>0.299</td>
<td>0.270</td>
<td>0.417</td>
<td>71.0</td>
<td>0.258</td>
<td>0.253</td>
</tr>
<tr>
<td>Gemini2.5-Pro</td>
<td>0.251</td>
<td>0.163</td>
<td>0.402</td>
<td>77.1</td>
<td>0.236</td>
<td>0.202</td>
</tr>
<tr>
<td><strong>dots.ocr</strong> </td>
<td><strong>0.177</strong></td>
<td><strong>0.075</strong></td>
<td><strong>0.297</strong></td>
<td><strong>79.2</strong></td>
<td><strong>0.186</strong></td>
<td><strong>0.152</strong></td>
</tr>
</tbody>
</table>
> **Notes:**
>
> - We use the same metric calculation pipeline of [OmniDocBench](https://github.com/opendatalab/OmniDocBench).
> - We delete the Page-header and Page-footer cells in the result markdown.
#### Layout Detection
<table>
<thead>
<tr>
<th rowspan="2"><strong>Method</strong></th>
<th colspan="5" style="text-align: center;"><strong>F1@IoU=.50:.05:.95↑</strong></th>
<th colspan="5" style="text-align: center;"><strong>F1@IoU=.50↑</strong></th>
</tr>
<tr>
<th>Overall</th>
<th>Text</th>
<th>Formula</th>
<th>Table</th>
<th>Picture</th>
<th>Overall</th>
<th>Text</th>
<th>Formula</th>
<th>Table</th>
<th>Picture</th>
</tr>
</thead>
<tbody>
<tr>
<td>DocLayout-YOLO-DocStructBench</td>
<td>0.733</td>
<td>0.694</td>
<td>0.480</td>
<td>0.803</td>
<td>0.619</td>
<td>0.806</td>
<td>0.779</td>
<td>0.620</td>
<td>0.858</td>
<td>0.678</td>
</tr>
<tr>
<td>dots.ocr-parse all</td>
<td>0.831</td>
<td>0.801</td>
<td>0.654</td>
<td>0.838</td>
<td>0.748</td>
<td>0.922</td>
<td>0.909</td>
<td>0.770</td>
<td>0.888</td>
<td>0.831</td>
</tr>
<tr>
<td> <strong>dots.ocr-detection only</strong> </td>
<td><strong>0.845</strong></td>
<td><strong>0.816</strong></td>
<td><strong>0.716</strong></td>
<td><strong>0.875</strong></td>
<td><strong>0.765</strong></td>
<td><strong>0.930</strong></td>
<td><strong>0.917</strong></td>
<td><strong>0.832</strong></td>
<td><strong>0.918</strong></td>
<td><strong>0.843</strong></td>
</tr>
</tbody>
</table>
> **Notes:**
>
> - `prompt_layout_all_en` for **parse all**, `prompt_layout_only_en` for **detection only**; see the [prompts](https://github.com/rednote-hilab/dots.ocr/blob/master/dots_ocr/utils/prompts.py) for details.
### 3. olmOCR-bench
<table>
<thead>
<tr>
<th>Model</th>
<th>ArXiv</th>
<th>Old Scans<br>Math</th>
<th>Tables</th>
<th>Old Scans</th>
<th>Headers and<br>Footers</th>
<th>Multi<br>column</th>
<th>Long Tiny<br>Text</th>
<th>Base</th>
<th>Overall</th>
</tr>
</thead>
<tbody>
<tr>
<td>GOT OCR</td>
<td>52.7</td>
<td>52.0</td>
<td>0.2</td>
<td>22.1</td>
<td>93.6</td>
<td>42.0</td>
<td>29.9</td>
<td>94.0</td>
<td>48.3 ± 1.1</td>
</tr>
<tr>
<td>Marker</td>
<td>76.0</td>
<td>57.9</td>
<td>57.6</td>
<td>27.8</td>
<td>84.9</td>
<td>72.9</td>
<td>84.6</td>
<td>99.1</td>
<td>70.1 ± 1.1</td>
</tr>
<tr>
<td>MinerU</td>
<td>75.4</td>
<td>47.4</td>
<td>60.9</td>
<td>17.3</td>
<td><strong>96.6</strong></td>
<td>59.0</td>
<td>39.1</td>
<td>96.6</td>
<td>61.5 ± 1.1</td>
</tr>
<tr>
<td>Mistral OCR</td>
<td>77.2</td>
<td>67.5</td>
<td>60.6</td>
<td>29.3</td>
<td>93.6</td>
<td>71.3</td>
<td>77.1</td>
<td>99.4</td>
<td>72.0 ± 1.1</td>
</tr>
<tr>
<td>Nanonets OCR</td>
<td>67.0</td>
<td>68.6</td>
<td>77.7</td>
<td>39.5</td>
<td>40.7</td>
<td>69.9</td>
<td>53.4</td>
<td>99.3</td>
<td>64.5 ± 1.1</td>
</tr>
<tr>
<td>GPT-4o<br>(No Anchor)</td>
<td>51.5</td>
<td><strong>75.5</strong></td>
<td>69.1</td>
<td>40.9</td>
<td>94.2</td>
<td>68.9</td>
<td>54.1</td>
<td>96.7</td>
<td>68.9 ± 1.1</td>
</tr>
<tr>
<td>GPT-4o<br>(Anchored)</td>
<td>53.5</td>
<td>74.5</td>
<td>70.0</td>
<td>40.7</td>
<td>93.8</td>
<td>69.3</td>
<td>60.6</td>
<td>96.8</td>
<td>69.9 ± 1.1</td>
</tr>
<tr>
<td>Gemini Flash 2<br>(No Anchor)</td>
<td>32.1</td>
<td>56.3</td>
<td>61.4</td>
<td>27.8</td>
<td>48.0</td>
<td>58.7</td>
<td><strong>84.4</strong></td>
<td>94.0</td>
<td>57.8 ± 1.1</td>
</tr>
<tr>
<td>Gemini Flash 2<br>(Anchored)</td>
<td>54.5</td>
<td>56.1</td>
<td>72.1</td>
<td>34.2</td>
<td>64.7</td>
<td>61.5</td>
<td>71.5</td>
<td>95.6</td>
<td>63.8 ± 1.2</td>
</tr>
<tr>
<td>Qwen 2 VL<br>(No Anchor)</td>
<td>19.7</td>
<td>31.7</td>
<td>24.2</td>
<td>17.1</td>
<td>88.9</td>
<td>8.3</td>
<td>6.8</td>
<td>55.5</td>
<td>31.5 ± 0.9</td>
</tr>
<tr>
<td>Qwen 2.5 VL<br>(No Anchor)</td>
<td>63.1</td>
<td>65.7</td>
<td>67.3</td>
<td>38.6</td>
<td>73.6</td>
<td>68.3</td>
<td>49.1</td>
<td>98.3</td>
<td>65.5 ± 1.2</td>
</tr>
<tr>
<td>olmOCR v0.1.75<br>(No Anchor)</td>
<td>71.5</td>
<td>71.4</td>
<td>71.4</td>
<td><strong>42.8</strong></td>
<td>94.1</td>
<td>77.7</td>
<td>71.0</td>
<td>97.8</td>
<td>74.7 ± 1.1</td>
</tr>
<tr>
<td>olmOCR v0.1.75<br>(Anchored)</td>
<td>74.9</td>
<td>71.2</td>
<td>71.0</td>
<td>42.2</td>
<td>94.5</td>
<td>78.3</td>
<td>73.3</td>
<td>98.3</td>
<td>75.5 ± 1.0</td>
</tr>
<tr>
<td>MonkeyOCR-pro-3B</td>
<td><strong>83.8</strong></td>
<td>68.8</td>
<td>74.6</td>
<td>36.1</td>
<td>91.2</td>
<td>76.6</td>
<td>80.1</td>
<td>95.3</td>
<td>75.8 ± 1.0</td>
</tr>
<tr>
<td><strong>dots.ocr</strong></td>
<td>82.1</td>
<td>64.2</td>
<td><strong>88.3</strong></td>
<td>40.9</td>
<td>94.1</td>
<td><strong>82.4</strong></td>
<td>81.2</td>
<td><strong>99.5</strong></td>
<td><strong>79.1 ± 1.0</strong></td>
</tr>
</tbody>
</table>
> **Note:**
>
> - The metrics are from [MonkeyOCR](https://github.com/Yuliang-Liu/MonkeyOCR),
[olmocr](https://github.com/allenai/olmocr), and our own internal evaluations.
> - We delete the Page-header and Page-footer cells in the result markdown.
# Quick Start
## 1. Installation
### Install dots.ocr
```shell
conda create -n dots_ocr python=3.12
conda activate dots_ocr
git clone https://github.com/rednote-hilab/dots.ocr.git
cd dots.ocr
# Install pytorch, see https://pytorch.org/get-started/previous-versions/ for your cuda version
pip install torch==2.7.0 torchvision==0.22.0 torchaudio==2.7.0 --index-url https://download.pytorch.org/whl/cu128
pip install -e .
```
If you have trouble with the installation, try our [Docker Image](https://hub.docker.com/r/rednotehilab/dots.ocr) for an easier setup, and follow these steps:
```shell
git clone https://github.com/rednote-hilab/dots.ocr.git
cd dots.ocr
pip install -e .
```
### Download Model Weights
>
> 💡**Note:** Please use a directory name without periods (e.g., `DotsOCR` instead of `dots.ocr`) for the model save path. This is a temporary workaround pending our integration with Transformers.
```shell
python3 tools/download_model.py
```
## 2. Deployment
### vLLM inference
We highly recommend using vLLM for deployment and inference. All of our evaluation results are based on vLLM version 0.9.1.
The [Docker Image](https://hub.docker.com/r/rednotehilab/dots.ocr) is based on the official vllm image. You can also follow [Dockerfile](https://github.com/rednote-hilab/dots.ocr/blob/master/docker/Dockerfile) to build the deployment environment by yourself.
```shell
# You need to register the model with vLLM first
python3 tools/download_model.py
export hf_model_path=./weights/DotsOCR  # Path to your downloaded model weights. Use a directory name without periods (e.g., DotsOCR instead of dots.ocr); this is a temporary workaround pending our integration with Transformers.
export PYTHONPATH=$(dirname "$hf_model_path"):$PYTHONPATH
sed -i '/^from vllm\.entrypoints\.cli\.main import main$/a\
from DotsOCR import modeling_dots_ocr_vllm' `which vllm` # If you downloaded model weights by yourself, please replace `DotsOCR` by your model saved directory name, and remember to use a directory name without periods (e.g., `DotsOCR` instead of `dots.ocr`)
# launch vllm server
CUDA_VISIBLE_DEVICES=0 vllm serve ${hf_model_path} --tensor-parallel-size 1 --gpu-memory-utilization 0.95 --chat-template-content-format string --served-model-name model --trust-remote-code
# If you get a ModuleNotFoundError: No module named 'DotsOCR', please check the note above on the saved model directory name.
# vllm api demo
python3 ./demo/demo_vllm.py --prompt_mode prompt_layout_all_en
```
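Once the server is running, you can also query it directly through vLLM's OpenAI-compatible API. The snippet below is an illustrative sketch rather than code from this repo: it assumes the server launched above is listening on the default port 8000 with `--served-model-name model`, and that `demo/demo_image1.jpg` exists locally.
```python
import base64
from openai import OpenAI  # pip install openai

# The vLLM server above exposes an OpenAI-compatible endpoint on port 8000.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Send the image as a base64 data URL together with a layout prompt.
with open("demo/demo_image1.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="model",  # matches --served-model-name above
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            {"type": "text",
             "text": "Please output the layout information from the PDF image."},
        ],
    }],
    max_tokens=4096,
)
print(response.choices[0].message.content)
```
For the exact prompts used in our evaluations, prefer `demo/demo_vllm.py`.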
### Hugging Face inference
```shell
python3 demo/demo_hf.py
```
<details>
<summary><b>Hugging Face inference details</b></summary>
```python
import torch
from transformers import AutoModelForCausalLM, AutoProcessor, AutoTokenizer
from qwen_vl_utils import process_vision_info
from dots_ocr.utils import dict_promptmode_to_prompt
model_path = "./weights/DotsOCR"
model = AutoModelForCausalLM.from_pretrained(
model_path,
attn_implementation="flash_attention_2",
torch_dtype=torch.bfloat16,
device_map="auto",
trust_remote_code=True
)
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)
image_path = "demo/demo_image1.jpg"
prompt = """Please output the layout information from the PDF image, including each layout element's bbox, its category, and the corresponding text content within the bbox.
1. Bbox format: [x1, y1, x2, y2]
2. Layout Categories: The possible categories are ['Caption', 'Footnote', 'Formula', 'List-item', 'Page-footer', 'Page-header', 'Picture', 'Section-header', 'Table', 'Text', 'Title'].
3. Text Extraction & Formatting Rules:
- Picture: For the 'Picture' category, the text field should be omitted.
- Formula: Format its text as LaTeX.
- Table: Format its text as HTML.
- All Others (Text, Title, etc.): Format their text as Markdown.
4. Constraints:
- The output text must be the original text from the image, with no translation.
- All layout elements must be sorted according to human reading order.
5. Final Output: The entire output must be a single JSON object.
"""
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": image_path
},
{"type": "text", "text": prompt}
]
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=24000)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
## 3. Document Parse
**Based on the vLLM server**, you can parse an image or a PDF file with the following commands:
```bash
# Parse all layout info, both detection and recognition
# Parse a single image
python3 dots_ocr/parser.py demo/demo_image1.jpg
# Parse a single PDF
python3 dots_ocr/parser.py demo/demo_pdf1.pdf --num_threads 64  # use a larger num_threads for PDFs with many pages
# Layout detection only
python3 dots_ocr/parser.py demo/demo_image1.jpg --prompt prompt_layout_only_en
# Parse text only, except Page-header and Page-footer
python3 dots_ocr/parser.py demo/demo_image1.jpg --prompt prompt_ocr
# Parse layout info by bbox
python3 dots_ocr/parser.py demo/demo_image1.jpg --prompt prompt_grounding_ocr --bbox 163 241 1536 705
```
<details>
<summary><b>Output Results</b></summary>
1. **Structured Layout Data** (`demo_image1.json`): A JSON file containing the detected layout elements, including their bounding boxes, categories, and extracted text.
2. **Processed Markdown File** (`demo_image1.md`): A Markdown file generated from the concatenated text of all detected cells.
   - An additional version, `demo_image1_nohf.md`, is also provided, which excludes page headers and footers for compatibility with benchmarks like OmniDocBench and olmOCR-bench.
3. **Layout Visualization** (`demo_image1.jpg`): The original image with the detected layout bounding boxes drawn on it.
</details>
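As an illustration of how the structured output can be consumed, here is a small sketch (not part of the repo) that assumes the JSON schema described in the prompt above — a list of objects with `bbox`, `category`, and `text` fields:
```python
import json

# Load the structured layout produced by the parser (schema assumed from the
# prompt above: a list of {"bbox": [x1, y1, x2, y2], "category": ..., "text": ...}).
with open("demo_image1.json", "r", encoding="utf-8") as f:
    cells = json.load(f)

# Drop headers and footers, as done for the benchmark-compatible markdown output.
body = [c for c in cells if c["category"] not in ("Page-header", "Page-footer")]

# Concatenate the remaining text in reading order (cells are already sorted).
markdown = "\n\n".join(c["text"] for c in body if "text" in c)
print(markdown)
```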
## 4. Demo
You can run the demo with the following command, or try it directly in the [live demo](https://dotsocr.xiaohongshu.com/):
```bash
python demo/demo_gradio.py
```
We also provide a demo for grounding OCR:
```bash
python demo/demo_gradio_annotion.py
```
### Example for formula document
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/formula1.png" alt="formula1.png" border="0" />
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/formula2.png" alt="formula2.png" border="0" />
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/formula3.png" alt="formula3.png" border="0" />
### Example for table document
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/table1.png" alt="table1.png" border="0" />
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/table2.png" alt="table2.png" border="0" />
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/table3.png" alt="table3.png" border="0" />
### Example for multilingual document
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/Tibetan.png" alt="Tibetan.png" border="0" />
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/tradition_zh.png" alt="tradition_zh.png" border="0" />
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/nl.png" alt="nl.png" border="0" />
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/kannada.png" alt="kannada.png" border="0" />
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/russian.png" alt="russian.png" border="0" />
### Example for reading order
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/reading_order.png" alt="reading_order.png" border="0" />
### Example for grounding OCR
<img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/showcase/grounding.png" alt="grounding.png" border="0" />
## Acknowledgments
We would like to thank [Qwen2.5-VL](https://github.com/QwenLM/Qwen2.5-VL), [aimv2](https://github.com/apple/ml-aim), [MonkeyOCR](https://github.com/Yuliang-Liu/MonkeyOCR),
[OmniDocBench](https://github.com/opendatalab/OmniDocBench), [PyMuPDF](https://github.com/pymupdf/PyMuPDF), for providing code and models.
We also thank [DocLayNet](https://github.com/DS4SD/DocLayNet), [M6Doc](https://github.com/HCIILAB/M6Doc), [CDLA](https://github.com/buptlihang/CDLA), [D4LA](https://github.com/AlibabaResearch/AdvancedLiterateMachinery) for providing valuable datasets.
## Limitation & Future Work
- **Complex Document Elements:**
- **Table & Formula**: dots.ocr does not yet handle highly complex tables and formulas perfectly.
- **Picture**: Pictures in documents are currently not parsed.
- **Parsing Failures:** The model may fail to parse under certain conditions:
- When the character-to-pixel ratio is excessively high. Try enlarging the image or increasing the PDF parsing DPI (a setting of 200 is recommended). However, please note that the model performs optimally on images with a resolution under 11289600 pixels.
- Continuous special characters, such as ellipses (`...`) and underscores (`_`), may cause the prediction output to repeat endlessly. In such scenarios, consider using alternative prompts like `prompt_layout_only_en`, `prompt_ocr`, or `prompt_grounding_ocr` ([details here](https://github.com/rednote-hilab/dots.ocr/blob/master/dots_ocr/utils/prompts.py)).
- **Performance Bottleneck:** Despite its 1.7B parameter LLM foundation, **dots.ocr** is not yet optimized for high-throughput processing of large PDF volumes.
We are committed to achieving more accurate table and formula parsing, as well as enhancing the model's OCR capabilities for broader generalization, all while aiming for **a more powerful, more efficient model**. Furthermore, we are actively considering the development of **a more general-purpose perception model** based on Vision-Language Models (VLMs), which would integrate general detection, image captioning, and OCR tasks into a unified framework. **Parsing the content of the pictures in the documents** is also a key priority for our future work.
We believe that collaboration is the key to tackling these exciting challenges. If you are passionate about advancing the frontiers of document intelligence and are interested in contributing to these future endeavors, we would love to hear from you. Please reach out to us via email at: [[email protected]].
|
mynaked/blockassist-bc-soft_sniffing_beaver_1754739680
|
mynaked
| 2025-08-09T11:44:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"soft sniffing beaver",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T11:43:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- soft sniffing beaver
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nightmedia/Huihui-Qwen3-30B-A3B-Instruct-2507-abliterated-q8-hi-mlx
|
nightmedia
| 2025-08-09T11:43:39Z | 70 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3_moe",
"abliterated",
"uncensored",
"text-generation",
"conversational",
"en",
"base_model:huihui-ai/Huihui-Qwen3-30B-A3B-Instruct-2507-abliterated",
"base_model:quantized:huihui-ai/Huihui-Qwen3-30B-A3B-Instruct-2507-abliterated",
"license:apache-2.0",
"8-bit",
"region:us"
] |
text-generation
| 2025-08-09T03:32:54Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507/blob/main/LICENSE
language:
- en
base_model: huihui-ai/Huihui-Qwen3-30B-A3B-Instruct-2507-abliterated
pipeline_tag: text-generation
library_name: mlx
tags:
- abliterated
- uncensored
- mlx
---
# Huihui-Qwen3-30B-A3B-Instruct-2507-abliterated-q8-hi-mlx
This model [Huihui-Qwen3-30B-A3B-Instruct-2507-abliterated-q8-hi-mlx](https://huggingface.co/Huihui-Qwen3-30B-A3B-Instruct-2507-abliterated-q8-hi-mlx) was
converted to MLX format from [huihui-ai/Huihui-Qwen3-30B-A3B-Instruct-2507-abliterated](https://huggingface.co/huihui-ai/Huihui-Qwen3-30B-A3B-Instruct-2507-abliterated)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("Huihui-Qwen3-30B-A3B-Instruct-2507-abliterated-q8-hi-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
nightmedia/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated-q6-hi-mlx
|
nightmedia
| 2025-08-09T11:40:57Z | 25 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3_moe",
"abliterated",
"uncensored",
"text-generation",
"conversational",
"en",
"base_model:huihui-ai/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated",
"base_model:quantized:huihui-ai/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated",
"license:apache-2.0",
"6-bit",
"region:us"
] |
text-generation
| 2025-08-09T03:11:57Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507/blob/main/LICENSE
language:
- en
base_model: huihui-ai/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated
pipeline_tag: text-generation
library_name: mlx
tags:
- abliterated
- uncensored
- mlx
---
# Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated-q6-hi-mlx
This model [Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated-q6-hi-mlx](https://huggingface.co/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated-q6-hi-mlx) was
converted to MLX format from [huihui-ai/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated](https://huggingface.co/huihui-ai/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated-q6-hi-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
nightmedia/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated-q8-hi-mlx
|
nightmedia
| 2025-08-09T11:40:46Z | 21 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3_moe",
"abliterated",
"uncensored",
"text-generation",
"conversational",
"en",
"base_model:huihui-ai/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated",
"base_model:quantized:huihui-ai/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated",
"license:apache-2.0",
"8-bit",
"region:us"
] |
text-generation
| 2025-08-09T03:49:17Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507/blob/main/LICENSE
language:
- en
base_model: huihui-ai/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated
pipeline_tag: text-generation
library_name: mlx
tags:
- abliterated
- uncensored
- mlx
---
# Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated-q8-hi-mlx
This model [Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated-q8-hi-mlx](https://huggingface.co/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated-q8-hi-mlx) was
converted to MLX format from [huihui-ai/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated](https://huggingface.co/huihui-ai/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated-q8-hi-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
nuxxor/blockassist-bc-knobby_scaly_flea_1754739193
|
nuxxor
| 2025-08-09T11:35:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"knobby scaly flea",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T11:34:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- knobby scaly flea
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
reuben256/contractNLI-v2
|
reuben256
| 2025-08-09T11:32:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"unsloth",
"trl",
"endpoints_compatible",
"region:us"
] | null | 2025-08-09T07:41:42Z |
---
base_model: unsloth/qwen3-14b-unsloth-bnb-4bit
library_name: transformers
model_name: contractNLI-v2
tags:
- generated_from_trainer
- sft
- unsloth
- trl
licence: license
---
# Model Card for contractNLI-v2
This model is a fine-tuned version of [unsloth/qwen3-14b-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen3-14b-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="reuben256/contractNLI-v2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
bapi2025/blockassist-bc-lanky_silky_duck_1754736962
|
bapi2025
| 2025-08-09T11:30:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lanky silky duck",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T11:21:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lanky silky duck
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
TAUR-dev/M-hardcoded_search_function__low_lr-sft
|
TAUR-dev
| 2025-08-09T11:26:12Z | 5 | 0 | null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2025-08-09T11:25:04Z |
# M-hardcoded_search_function__low_lr-sft
This model was created as part of the **hardcoded_search_function__low_lr** experiment using the SkillFactory experiment management system.
## Model Details
- **Training Method**: LLaMAFactory SFT (Supervised Fine-Tuning)
- **Stage Name**: sft
- **Experiment**: hardcoded_search_function__low_lr
## Training Configuration
{"model_name_or_path": "Qwen/Qwen2.5-1.5B-Instruct", "trust_remote_code": true, "stage": "sft", "do_train": true, "finetuning_type": "full", "deepspeed": "/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory/examples/deepspeed/ds_z2_config.json", "dataset": "TAUR_dev__D_SFT_C_hardcoded_search_function__low_lr_sft_data__sft_train", "template": "qwen", "cutoff_len": 16384, "max_samples": 1000000, "overwrite_cache": true, "preprocessing_num_workers": 1, "dataloader_num_workers": 0, "disable_tqdm": false, "output_dir": "/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/hardcoded_search_function/llamafactory/checkpoints", "logging_steps": 10, "save_steps": 100000, "plot_loss": true, "overwrite_output_dir": true, "per_device_train_batch_size": 1, "gradient_accumulation_steps": 1, "learning_rate": 1e-06, "num_train_epochs": 1, "lr_scheduler_type": "cosine", "warmup_ratio": 0.05, "weight_decay": 0.0001, "adam_beta1": 0.9, "adam_beta2": 0.95, "bf16": true, "ddp_timeout": 180000000, "gradient_checkpointing": true, "save_only_model": true, "enable_masked_ranges": false, "sf_tracker_dataset_id": "TAUR-dev/D-ExpTracker__hardcoded_search_function__low_lr__v1", "sf_eval_before_training": false, "sf_wandb_project": "hardcoded_search_function__low_lr_sft", "sf_eval_steps": null, "run_name": "hardcoded_search_function__low_lr_sft"}
## Experiment Tracking
🔗 **View complete experiment details**: [Experiment Tracker Dataset](https://huggingface.co/datasets/TAUR-dev/D-ExpTracker__hardcoded_search_function__low_lr__v1)
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TAUR-dev/M-hardcoded_search_function__low_lr-sft")
model = AutoModelForCausalLM.from_pretrained("TAUR-dev/M-hardcoded_search_function__low_lr-sft")
```
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1754738266
|
kapalbalap
| 2025-08-09T11:25:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T11:24:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rmdhirr/gemma-sft-instr-one-2700
|
rmdhirr
| 2025-08-09T11:24:55Z | 3 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-3-12b-pt",
"base_model:adapter:google/gemma-3-12b-pt",
"region:us"
] | null | 2025-08-09T11:20:27Z |
---
base_model: google/gemma-3-12b-pt
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
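Since this section is empty, the following is only a hypothetical sketch inferred from the card metadata (base model `google/gemma-3-12b-pt`, PEFT adapter in this repository); verify the adapter configuration before relying on it.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "google/gemma-3-12b-pt"                 # from the card metadata
adapter_id = "rmdhirr/gemma-sft-instr-one-2700"   # this repository (assumed)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the PEFT adapter

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```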
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
sukatune/CodeVa-7B-Instruct-v8
|
sukatune
| 2025-08-09T11:21:44Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-09T11:21:44Z |
---
license: apache-2.0
---
|
roeker/blockassist-bc-quick_wiry_owl_1754737509
|
roeker
| 2025-08-09T11:06:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T11:05:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Johnson1Chan/v1
|
Johnson1Chan
| 2025-08-09T11:05:29Z | 19 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-05-09T06:15:25Z |
---
library_name: transformers
language:
- en
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper mediconnect
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mediconnect
type: mozilla-foundation/common_voice_11_0
args: 'config: hi, split: test'
metrics:
- name: Wer
type: wer
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper mediconnect
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mediconnect dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Wer: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
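As a usage illustration (not from the original card), the fine-tuned checkpoint should load with the standard ASR pipeline; `sample.wav` below is a placeholder path.
```python
from transformers import pipeline

# Hypothetical usage sketch; "sample.wav" is a placeholder audio file.
asr = pipeline("automatic-speech-recognition", model="Johnson1Chan/v1")
print(asr("sample.wav")["text"])
```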
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:---:|
| 0.0009 | 200.0 | 200 | 0.0005 | 0.0 |
| 0.0001 | 400.0 | 400 | 0.0001 | 0.0 |
| 0.0001 | 600.0 | 600 | 0.0000 | 0.0 |
| 0.0 | 800.0 | 800 | 0.0000 | 0.0 |
| 0.0 | 1000.0 | 1000 | 0.0000 | 0.0 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.4
|
nightmedia/Huihui-Qwen3-30B-A3B-Instruct-2507-abliterated-q6-hi-mlx
|
nightmedia
| 2025-08-09T11:05:28Z | 22 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3_moe",
"abliterated",
"uncensored",
"text-generation",
"conversational",
"en",
"base_model:huihui-ai/Huihui-Qwen3-30B-A3B-Instruct-2507-abliterated",
"base_model:quantized:huihui-ai/Huihui-Qwen3-30B-A3B-Instruct-2507-abliterated",
"license:apache-2.0",
"6-bit",
"region:us"
] |
text-generation
| 2025-08-09T03:07:46Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507/blob/main/LICENSE
language:
- en
base_model: huihui-ai/Huihui-Qwen3-30B-A3B-Instruct-2507-abliterated
pipeline_tag: text-generation
library_name: mlx
tags:
- abliterated
- uncensored
- mlx
---
# Huihui-Qwen3-30B-A3B-Instruct-2507-abliterated-q6-hi-mlx
This model [Huihui-Qwen3-30B-A3B-Instruct-2507-abliterated-q6-hi-mlx](https://huggingface.co/Huihui-Qwen3-30B-A3B-Instruct-2507-abliterated-q6-hi-mlx) was
converted to MLX format from [huihui-ai/Huihui-Qwen3-30B-A3B-Instruct-2507-abliterated](https://huggingface.co/huihui-ai/Huihui-Qwen3-30B-A3B-Instruct-2507-abliterated)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("Huihui-Qwen3-30B-A3B-Instruct-2507-abliterated-q6-hi-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
loganpowell/llama-3-8B-function-calling-lora-v0.1-F16-GGUF
|
loganpowell
| 2025-08-09T11:04:30Z | 9 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"llama-cpp",
"gguf-my-lora",
"en",
"base_model:zenja/llama-3-8B-function-calling-lora-v0.1",
"base_model:quantized:zenja/llama-3-8B-function-calling-lora-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-09T11:04:27Z |
---
base_model: zenja/llama-3-8B-function-calling-lora-v0.1
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- llama-cpp
- gguf-my-lora
---
# loganpowell/llama-3-8B-function-calling-lora-v0.1-F16-GGUF
This LoRA adapter was converted to GGUF format from [`zenja/llama-3-8B-function-calling-lora-v0.1`](https://huggingface.co/zenja/llama-3-8B-function-calling-lora-v0.1) via ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/zenja/llama-3-8B-function-calling-lora-v0.1) for more details.
## Use with llama.cpp
```bash
# with cli
llama-cli -m base_model.gguf --lora llama-3-8B-function-calling-lora-v0.1-f16.gguf (...other args)
# with server
llama-server -m base_model.gguf --lora llama-3-8B-function-calling-lora-v0.1-f16.gguf (...other args)
```
To know more about LoRA usage with llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
|
ijustabi/blockassist-bc-lethal_nimble_cockroach_1754737330
|
ijustabi
| 2025-08-09T11:03:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lethal nimble cockroach",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T11:03:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lethal nimble cockroach
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Yi3852/MuFun-Base
|
Yi3852
| 2025-08-09T11:00:20Z | 10 | 0 | null |
[
"safetensors",
"tinyllava",
"audio-text-to-text",
"custom_code",
"en",
"zh",
"arxiv:2508.01178",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:finetune:Qwen/Qwen3-8B-Base",
"license:apache-2.0",
"region:us"
] |
audio-text-to-text
| 2025-08-04T08:37:20Z |
---
license: apache-2.0
pipeline_tag: audio-text-to-text
language:
- en
- zh
base_model:
- Qwen/Qwen3-8B-Base
- openai/whisper-large-v3
---
The MuFun model proposed in [Advancing the Foundation Model for Music Understanding](https://arxiv.org/abs/2508.01178).
Training code: https://github.com/laitselec/MuFun
## Usage
Some audio-processing packages such as mutagen and torchaudio need to be installed.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
hf_path = 'Yi3852/MuFun-Base'
tokenizer = AutoTokenizer.from_pretrained(hf_path, use_fast=False)
device='cuda'
model = AutoModelForCausalLM.from_pretrained(hf_path, trust_remote_code=True, torch_dtype="bfloat16")
model.to(device)
# single audio
# during inference the audio (converted to a sequence of embeddings) is placed at the position of the <audio> tag in the prompt
aud="/path/to/your/song.mp3"
inp="\n<audio>Can you listen to this song and tell me its lyrics?"
res=model.chat(prompt=inp, audio_files=aud, tokenizer=tokenizer)
print(res)
# multiple audios
# for multiple songs, each will be placed in the corresponding <audio> tag in the prompt
aud=["/path/to/your/song1.mp3", '/path/to/your/song2.mp3']
inp="\n<audio> This is song1. <audio> This is song2. Which song do you like more? Tell me the reason."
res=model.chat(prompt=inp, audio_files=aud, tokenizer=tokenizer)
print(res)
# analyze only a specific segment of audio using the segs parameter
# format is [start_time, end_time](in seconds), for multiple audios segs can be passed like [[0,30],[60,90]], [None,[0,30.0]]
aud="/path/to/your/song.mp3"
inp="\n<audio>How is the rhythm of this music clip?"
res=model.chat(prompt=inp, audio_files=aud, segs=[0,30.0], tokenizer=tokenizer)
print(res)
# setting audio_files=None works, but using this as a text-only model is not recommended
```
## Citation
```bibtex
@misc{jiang2025advancingfoundationmodelmusic,
title={Advancing the Foundation Model for Music Understanding},
author={Yi Jiang and Wei Wang and Xianwen Guo and Huiyun Liu and Hanrui Wang and Youri Xu and Haoqi Gu and Zhongqian Xie and Chuanjiang Luo},
year={2025},
eprint={2508.01178},
archivePrefix={arXiv},
primaryClass={cs.SD},
url={https://arxiv.org/abs/2508.01178},
}
```
|
koloni/blockassist-bc-deadly_graceful_stingray_1754732802
|
koloni
| 2025-08-09T10:57:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T10:57:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rmdhirr/gemma-sft-instr-one-2400
|
rmdhirr
| 2025-08-09T10:55:22Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-3-12b-pt",
"base_model:adapter:google/gemma-3-12b-pt",
"region:us"
] | null | 2025-08-09T10:50:59Z |
---
base_model: google/gemma-3-12b-pt
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
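As with the companion 2700-step checkpoint, this section is empty, so the following is only a hypothetical sketch inferred from the card metadata (base model `google/gemma-3-12b-pt`, adapter `rmdhirr/gemma-sft-instr-one-2400`):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumed identifiers from the card metadata; verify before use.
base = AutoModelForCausalLM.from_pretrained("google/gemma-3-12b-pt", device_map="auto")
model = PeftModel.from_pretrained(base, "rmdhirr/gemma-sft-instr-one-2400")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-12b-pt")
```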
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
0xAgo/blockassist-bc-agile_tough_camel_1754735855
|
0xAgo
| 2025-08-09T10:49:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"agile tough camel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T10:49:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- agile tough camel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
uname0x96/blockassist-bc-rough_scavenging_narwhal_1754736253
|
uname0x96
| 2025-08-09T10:46:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rough scavenging narwhal",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T10:46:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rough scavenging narwhal
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Qwen2-7B-S2R-ORL-GGUF
|
mradermacher
| 2025-08-09T10:44:21Z | 1,064 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:S2R-data/Qwen2-7B-S2R-ORL",
"base_model:quantized:S2R-data/Qwen2-7B-S2R-ORL",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-09T10:02:57Z |
---
base_model: S2R-data/Qwen2-7B-S2R-ORL
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/S2R-data/Qwen2-7B-S2R-ORL
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen2-7B-S2R-ORL-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
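Beyond llama.cpp's CLI, one illustrative option is the llama-cpp-python bindings; the quant filename below is one of the files from the table that follows, and the snippet is a sketch rather than an endorsed workflow.
```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Illustrative sketch: load one of the static quants listed in the table below.
llm = Llama(model_path="Qwen2-7B-S2R-ORL.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "hello"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```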
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-S2R-ORL-GGUF/resolve/main/Qwen2-7B-S2R-ORL.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-S2R-ORL-GGUF/resolve/main/Qwen2-7B-S2R-ORL.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-S2R-ORL-GGUF/resolve/main/Qwen2-7B-S2R-ORL.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-S2R-ORL-GGUF/resolve/main/Qwen2-7B-S2R-ORL.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-S2R-ORL-GGUF/resolve/main/Qwen2-7B-S2R-ORL.IQ4_XS.gguf) | IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-S2R-ORL-GGUF/resolve/main/Qwen2-7B-S2R-ORL.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-S2R-ORL-GGUF/resolve/main/Qwen2-7B-S2R-ORL.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-S2R-ORL-GGUF/resolve/main/Qwen2-7B-S2R-ORL.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-S2R-ORL-GGUF/resolve/main/Qwen2-7B-S2R-ORL.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-S2R-ORL-GGUF/resolve/main/Qwen2-7B-S2R-ORL.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-S2R-ORL-GGUF/resolve/main/Qwen2-7B-S2R-ORL.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-S2R-ORL-GGUF/resolve/main/Qwen2-7B-S2R-ORL.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
roeker/blockassist-bc-quick_wiry_owl_1754736133
|
roeker
| 2025-08-09T10:43:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-09T10:43:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
qyuan/Qwen2-VL_exp45
|
qyuan
| 2025-08-09T10:43:07Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_vl",
"image-to-text",
"generated_from_trainer",
"arxiv:2402.03300",
"base_model:qyuan/Qwen2-VL_exp32_2_based_7b",
"base_model:finetune:qyuan/Qwen2-VL_exp32_2_based_7b",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-09T10:40:03Z |
---
base_model: qyuan/Qwen2-VL_exp32_2_based_7b
library_name: transformers
model_name: Qwen2-VL_exp45
tags:
- generated_from_trainer
licence: license
---
# Model Card for Qwen2-VL_exp45
This model is a fine-tuned version of [qyuan/Qwen2-VL_exp32_2_based_7b](https://huggingface.co/qyuan/Qwen2-VL_exp32_2_based_7b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="qyuan/Qwen2-VL_exp45", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/wandbuser54-xidian-university/huggingface/runs/a2paz0rs)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.49.0
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Frywind/qwen-7b-0722-asft
|
Frywind
| 2025-08-09T10:42:50Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"llama-factory",
"full",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-08T16:42:05Z |
|