modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
bilalzafar/CBDC-BERT
|
bilalzafar
| 2025-08-25T14:28:31Z | 24 | 0 | null |
[
"safetensors",
"bert",
"BERT",
"Finance",
"CBDC",
"Central Bank",
"Central Bank Speeches",
"Central Bank Digital Currency",
"NLP",
"Finance-NLP",
"BIS",
"CB-BERT",
"text-classification",
"en",
"base_model:bilalzafar/CentralBank-BERT",
"base_model:finetune:bilalzafar/CentralBank-BERT",
"license:mit",
"region:us"
] |
text-classification
| 2025-07-13T21:56:48Z |
---
license: mit
language:
- en
metrics:
- accuracy
- f1
base_model:
- bilalzafar/cb-bert-mlm
pipeline_tag: text-classification
tags:
- BERT
- Finance
- CBDC
- Central Bank
- Central Bank Speeches
- Central Bank Digital Currency
- NLP
- Finance-NLP
- BIS
- CB-BERT
---
# CBDC-BERT: Identifying Central Bank Digital Currency Discourse in Policy Speeches
**CBDC-BERT** is a sentence classification model fine-tuned to detect **CBDC-related** statements in English-language central bank speeches. It is built on the **domain-adapted** checkpoint [`bilalzafar/CentralBank-BERT`](https://huggingface.co/bilalzafar/CentralBank-BERT), a BERT model pre-trained on over **2 million sentences** from BIS central bank speeches (1996–2024), and uses a **WordPiece tokenizer** with a maximum input length of **128 tokens**.
The model performs **binary classification**:
* `0` = Non-CBDC
* `1` = CBDC
**Training data:** The dataset was sourced from the **BIS Central Bank Speeches** corpus (1996–2024). A **balanced subset of 11,000 sentences** was manually labeled for CBDC relevance. The **CBDC** class contains **5,390 sentences**, and the **Non-CBDC** class contains **5,610 sentences**. The data was split **80/20** (8,800 training / 2,200 test) with stratification by label.
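The card does not publish its preprocessing code; below is a minimal sketch of the split described above, with illustrative sentences and assumed variable names:

```python
from sklearn.model_selection import train_test_split

# Illustrative stand-ins for the 11,000 manually labeled sentences
# (1 = CBDC, 0 = Non-CBDC); the texts and random seed are assumptions.
texts = ["A retail CBDC is under active study.", "Inflation remained subdued."] * 5
labels = [1, 0] * 5

# 80/20 split with stratification by label, as described above
# (8,800 train / 2,200 test on the real data).
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.20, stratify=labels, random_state=42
)
```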
**Intended use:** **CBDC-BERT** is intended for research on CBDC discourse across time and jurisdictions, for pre-filtering or flagging CBDC-related sentences in large central-bank speech corpora, and as an input to dashboards, indices, or downstream NLP pipelines used in central banking and finance.
## Training Details
- **Base checkpoint:** [`bilalzafar/CentralBank-BERT`](https://huggingface.co/bilalzafar/CentralBank-BERT)
- **Architecture:** `BertForSequenceClassification` (binary head randomly initialized)
- **Tokenizer:** from base checkpoint, `max_length=128`
- **Library:** Transformers (`Trainer`)
- **Epochs:** 3
- **Batch size:** 8 (per device)
- **Optimizer:** AdamW (Transformers default)
- **Learning rate:** 5e-5 (Transformers default)
- **Loss:** CrossEntropyLoss
- **Evaluation:** per epoch; best model by F1
- **Hardware:** Google Colab GPU
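A minimal fine-tuning sketch consistent with the settings listed above (recent `transformers`; the tiny inline dataset and variable names are illustrative, not the card's actual training script):

```python
from datasets import Dataset
from sklearn.metrics import f1_score
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          DataCollatorWithPadding, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bilalzafar/CentralBank-BERT")
model = AutoModelForSequenceClassification.from_pretrained(
    "bilalzafar/CentralBank-BERT", num_labels=2)  # binary head, randomly initialized

# Tiny illustrative dataset; the real run used the 8,800/2,200 split above.
raw = Dataset.from_dict({
    "text": ["A retail CBDC raises design questions.", "GDP growth was revised up."] * 8,
    "label": [1, 0] * 8,
})
ds = raw.map(lambda b: tokenizer(b["text"], truncation=True, max_length=128), batched=True)
split = ds.train_test_split(test_size=0.2, seed=42)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return {"f1": f1_score(labels, logits.argmax(axis=-1))}

args = TrainingArguments(
    output_dir="cbdc-bert",
    num_train_epochs=3,
    per_device_train_batch_size=8,
    learning_rate=5e-5,            # Transformers default
    eval_strategy="epoch",         # evaluate each epoch...
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="f1",    # ...and keep the best checkpoint by F1
)

trainer = Trainer(
    model=model, args=args,
    train_dataset=split["train"], eval_dataset=split["test"],
    data_collator=DataCollatorWithPadding(tokenizer),
    compute_metrics=compute_metrics,
)
trainer.train()
```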
## Performance & Robustness
On the full test split (n=2,200), the model achieves **Accuracy = 0.9950** and **F1 (binary) = 0.9949**. In a separate confusion-matrix run on valid rows (n=2,175), it records **TP=1,065**, **FP=4**, **FN=1**, **TN=1,105**, yielding **Accuracy = 0.9977**, **Precision (CBDC) = 0.9963**, **Recall (CBDC) = 0.9991**, **ROC-AUC = 1.0000**, and a **Brier score = 0.0024**; the class balance is **Non-CBDC = 1,109** and **CBDC = 1,066**. Compared with TF-IDF baselines (**Logistic Regression (0.97)**, **Naive Bayes (0.92)**, **Random Forest (0.98)**, and **XGBoost (0.99)**), CBDC-BERT **matches or exceeds** these results while delivering **near-perfect ROC-AUC** and **well-calibrated probabilities** (low Brier score). Robustness checks on **edge-case**, **noise-injected**, **syntactically altered**, and **paraphrased ("translated-like")** inputs each show **8/10 correct (80%)**, and sentence-length bias is low (**ρ ≈ 0.1222**).
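The confusion-matrix figures above are internally consistent and can be checked by hand; a quick plain-Python sketch:

```python
# Counts reported above for the valid-rows run (n = 2,175)
tp, fp, fn, tn = 1065, 4, 1, 1105

n = tp + fp + fn + tn              # 2,175 = 1,109 Non-CBDC + 1,066 CBDC
accuracy = (tp + tn) / n           # (1065 + 1105) / 2175 ≈ 0.9977
precision = tp / (tp + fp)         # 1065 / 1069 ≈ 0.9963
recall = tp / (tp + fn)            # 1065 / 1066 ≈ 0.9991

print(f"acc={accuracy:.4f} precision={precision:.4f} recall={recall:.4f}")
```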
---
## Other CBDC Models
This model is part of the **CentralBank-BERT / CBDC model family**, a suite of domain-adapted classifiers for analyzing central-bank communication.
| **Model** | **Purpose** | **Intended Use** | **Link** |
| ------------------------------- | ------------------------------------------------------------------- | ------------------------------------------------------------------- | ---------------------------------------------------------------------- |
| **bilalzafar/CentralBank-BERT** | Domain-adaptive masked LM trained on BIS speeches (1996โ2024). | Base encoder for CBDC downstream tasks; fill-mask tasks. | [CentralBank-BERT](https://huggingface.co/bilalzafar/CentralBank-BERT) |
| **bilalzafar/CBDC-BERT** | Binary classifier: CBDC vs. Non-CBDC. | Flagging CBDC-related discourse in large corpora. | [CBDC-BERT](https://huggingface.co/bilalzafar/CBDC-BERT) |
| **bilalzafar/CBDC-Stance** | 3-class stance model (Pro, Wait-and-See, Anti). | Research on policy stances and discourse monitoring. | [CBDC-Stance](https://huggingface.co/bilalzafar/CBDC-Stance) |
| **bilalzafar/CBDC-Sentiment** | 3-class sentiment model (Positive, Neutral, Negative). | Tone analysis in central bank communications. | [CBDC-Sentiment](https://huggingface.co/bilalzafar/CBDC-Sentiment) |
| **bilalzafar/CBDC-Type** | Classifies Retail, Wholesale, General CBDC mentions. | Distinguishing policy focus (retail vs wholesale). | [CBDC-Type](https://huggingface.co/bilalzafar/CBDC-Type) |
| **bilalzafar/CBDC-Discourse** | 3-class discourse classifier (Feature, Process, Risk-Benefit). | Structured categorization of CBDC communications. | [CBDC-Discourse](https://huggingface.co/bilalzafar/CBDC-Discourse) |
| **bilalzafar/CentralBank-NER** | Named Entity Recognition (NER) model for central banking discourse. | Identifying institutions, persons, and policy entities in speeches. | [CentralBank-NER](https://huggingface.co/bilalzafar/CentralBank-NER) |
## Repository and Replication Package
All **training pipelines, preprocessing scripts, evaluation notebooks, and result outputs** are available in the companion GitHub repository:
**[https://github.com/bilalezafar/CentralBank-BERT](https://github.com/bilalezafar/CentralBank-BERT)**
---
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("bilalzafar/CBDC-BERT")
model = AutoModelForSequenceClassification.from_pretrained("bilalzafar/CBDC-BERT")
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
label_map = {"LABEL_0": "Non-CBDC", "LABEL_1": "CBDC"}
# Example sentence
text = "The central bank is exploring the issuance of a retail digital currency."
result = classifier(text)[0]
print(f"Prediction: {label_map[result['label']]} | Confidence: {result['score']:.4f}")
# Example output:
# Prediction: CBDC | Confidence: 0.9993
```
---
## Citation
If you use this model, please cite as:
**Zafar, M. B. (2025). *CentralBank-BERT: Machine Learning Evidence on Central Bank Digital Currency Discourse*. SSRN. [https://papers.ssrn.com/abstract=5404456](https://papers.ssrn.com/abstract=5404456)**
```bibtex
@article{zafar2025centralbankbert,
title={CentralBank-BERT: Machine Learning Evidence on Central Bank Digital Currency Discourse},
author={Zafar, Muhammad Bilal},
year={2025},
journal={SSRN Electronic Journal},
url={https://papers.ssrn.com/abstract=5404456}
}
|
eshanroy5678/blockassist-bc-untamed_dextrous_dingo_1756131620
|
eshanroy5678
| 2025-08-25T14:28:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed dextrous dingo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T14:24:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed dextrous dingo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
motza0025/blockassist-bc-scampering_scaly_salmon_1756130427
|
motza0025
| 2025-08-25T14:26:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scampering scaly salmon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T14:26:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scampering scaly salmon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gensynme/blockassist-bc-tiny_fierce_bee_1756132001
|
gensynme
| 2025-08-25T14:26:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tiny fierce bee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T14:26:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tiny fierce bee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
MaestroDev19/CyberGemma-3-4b-v2
|
MaestroDev19
| 2025-08-25T14:26:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-25T14:26:26Z |
---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MaestroDev19
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
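A minimal inference sketch, assuming the uploaded checkpoint loads back through Unsloth the same way its base model does (the prompt and generation settings are illustrative):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="MaestroDev19/CyberGemma-3-4b-v2",  # repo id from this page
    max_seq_length=2048,       # assumption: not stated on this card
    load_in_4bit=True,         # mirrors the bnb-4bit base it was tuned from
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference mode

messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```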
|
Lucifer82982/blockassist-bc-bipedal_leggy_rhino_1756131942
|
Lucifer82982
| 2025-08-25T14:26:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bipedal leggy rhino",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T14:26:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bipedal leggy rhino
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ijustabi/blockassist-bc-lethal_nimble_cockroach_1756131936
|
ijustabi
| 2025-08-25T14:26:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lethal nimble cockroach",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T14:25:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lethal nimble cockroach
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Hopelesslyhype/ailan_merged_final-q6.gguf
|
Hopelesslyhype
| 2025-08-25T14:25:40Z | 0 | 0 | null |
[
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-25T14:05:44Z |
---
license: apache-2.0
---
|
mushroomfleet/4bstr4ct-style
|
mushroomfleet
| 2025-08-25T14:25:31Z | 0 | 0 | null |
[
"text-to-image",
"base_model:Wan-AI/Wan2.2-T2V-A14B",
"base_model:finetune:Wan-AI/Wan2.2-T2V-A14B",
"region:us"
] |
text-to-image
| 2025-08-24T23:02:36Z |
---
base_model:
- Wan-AI/Wan2.2-T2V-A14B
pipeline_tag: text-to-image
---
[License: MIT](https://opensource.org/licenses/MIT)
[Base model: Wan-AI/Wan2.2-T2V-A14B](https://huggingface.co/Wan-AI/Wan2.2-T2V-A14B)
[Pipeline: text-to-image](https://huggingface.co/models?pipeline_tag=text-to-image)
# 4BSTR4CT-cosmic LoRA
A specialized LoRA model for generating cosmic architectural surrealism with swirling atmospheric effects and vibrant abstract styling.
## Repository Structure
```
├── 4BSTR4CT-style-v0.safetensors
├── captions/
│   └── [training caption text files]
├── ENH/
│   ├── [prompt enhancer system prompts]
│   └── [user alteration templates]
├── images/
│   └── [sample images from ollama workflows]
└── workflows/
    └── [ComfyUI workflow files]
```
## LoRA Model
**4BSTR4CT-cosmic** is a cutting-edge LoRA specializing in cosmic architectural surrealism that transforms ordinary scenes into breathtaking otherworldly vistas. This model excels at creating fantastical cityscapes floating among swirling, vibrant clouds with intricate layered atmospheric effects. The aesthetic combines futuristic architecture with celestial elements, featuring ornate domes, towering spires, and complex mechanical structures set against cosmic backdrops filled with planets, stars, and nebulae.
The model's signature style emphasizes dynamic swirling patterns, flowing cloud formations, and rich color gradients that transition seamlessly from warm oranges and pinks to cool blues and teals. Each generated image captures a dreamlike quality where architectural grandeur meets cosmic wonder, creating compositions that feel both mystical and technologically advanced.
## Additional Resources
- **ENH/**: Contains specialized prompt enhancement templates designed to maximize the 4BSTR4CT aesthetic, including atmospheric descriptors and architectural terminology
- **captions/**: Training data insights showcasing the model's expertise in cosmic surrealism and architectural fantasy
- **workflows/**: Optimized ComfyUI workflows for generating consistent cosmic architectural scenes with proper atmospheric effects
- **images/**: Sample gallery demonstrating the full range of the model's capabilities across different cosmic scenarios
## Technical Specifications
- **Training Model:** WAN 14B 2.1
- **Inference Model:** WAN 14B 2.2 (A14B) low noise
- **Trigger Word:** `4BSTR4CT`
- **Optimal Strength:** 0.7-1.0
- **Resolution:** 1024x1024 recommended
- **Steps:** 20-30 for optimal detail
- **Guidance Scale:** 7.5-12
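The supported workflows ship as ComfyUI graphs (see `workflows/`), but for orientation, here is a rough diffusers-based sketch. The `Wan-AI/Wan2.2-T2V-A14B-Diffusers` repo id and LoRA compatibility with `load_lora_weights` are assumptions, so treat this as an approximation rather than the supported path:

```python
import torch
from diffusers import DiffusionPipeline

# Assumption: a diffusers-format export of the Wan2.2 T2V base exists;
# the card's own workflows target ComfyUI.
pipe = DiffusionPipeline.from_pretrained(
    "Wan-AI/Wan2.2-T2V-A14B-Diffusers", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights(
    "mushroomfleet/4bstr4ct-style",
    weight_name="4BSTR4CT-style-v0.safetensors",  # filename from the repo tree above
)

# Trigger word first; steps and guidance follow the recommendations above.
out = pipe(
    prompt="4BSTR4CT style cosmic cityscape with swirling clouds and floating architecture",
    num_inference_steps=30,
    guidance_scale=7.5,
)
```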
## Getting Started
### Basic Usage:
```
4BSTR4CT style cosmic cityscape with swirling clouds and floating architecture
```
### Advanced Prompting:
```
4BSTR4CT style futuristic palace with ornate domes rising above vibrant swirling clouds, cosmic backdrop with planets and stars, intricate layered atmospheric effects, dramatic lighting, otherworldly architecture
```
### Negative Prompts:
```
flat, static, realistic photography, plain sky, simple clouds, modern buildings, earth-like architecture
```
## Best Practices
1. **Always include the trigger word** `4BSTR4CT` at the beginning of your prompt
2. **Layer atmospheric elements**: Combine "swirling clouds", "atmospheric effects", and "cosmic backdrop" for best results
3. **Architectural details**: Use terms like "ornate domes", "towering spires", "futuristic structures" to enhance architectural elements
4. **Color guidance**: Specify color palettes like "vibrant blues and oranges" or "warm sunset hues" for targeted results
5. **Depth and movement**: Include "layered patterns", "dynamic swirls", and "flowing forms" for characteristic movement
6. **Scale emphasis**: Use "majestic", "towering", "expansive" to achieve the model's signature grandeur
## Style Characteristics
### **Cosmic Surrealism Era**
Drawing inspiration from digital cosmic art and sci-fi concept design, this style represents a contemporary fusion of architectural fantasy and space art aesthetics.
### **Atmospheric Mastery**
- **Swirling Cloud Systems**: Intricate, layered cloud formations with organic flow patterns
- **Cosmic Integration**: Seamless blending of earthly architecture with celestial elements
- **Color Harmonics**: Rich gradients transitioning from warm to cool tones
- **Dynamic Movement**: Flowing, ribbon-like patterns that create depth and motion
### **Architectural Fantasy**
- **Ornate Structures**: Complex domes, spires, and mechanical details
- **Floating Cities**: Gravity-defying architecture suspended in cosmic space
- **Futuristic Elements**: Advanced technological integration with classical forms
- **Scale Grandeur**: Monumental proportions that inspire awe and wonder
### **Visual Signature**
The 4BSTR4CT style is immediately recognizable through its combination of:
- Vibrant, saturated color palettes with smooth gradients
- Intricate layering that creates incredible depth
- Organic flow patterns mixed with geometric precision
- Dreamlike atmosphere that blurs reality and imagination
- Perfect balance between architectural detail and cosmic vastness
*Perfect for creating otherworldly concept art, sci-fi environments, cosmic fantasy scenes, and any artwork requiring a blend of architectural grandeur with cosmic wonder.*
|
mushroomfleet/ch40s-style
|
mushroomfleet
| 2025-08-25T14:25:06Z | 0 | 0 | null |
[
"text-to-image",
"base_model:Wan-AI/Wan2.2-T2V-A14B",
"base_model:finetune:Wan-AI/Wan2.2-T2V-A14B",
"region:us"
] |
text-to-image
| 2025-08-24T23:03:07Z |
---
base_model:
- Wan-AI/Wan2.2-T2V-A14B
pipeline_tag: text-to-image
---
[License: MIT](https://opensource.org/licenses/MIT)
[Base model: Wan-AI/Wan2.2-T2V-A14B](https://huggingface.co/Wan-AI/Wan2.2-T2V-A14B)
[Pipeline: text-to-image](https://huggingface.co/models?pipeline_tag=text-to-image)
# CH40S-cosmic-v1
Transform your AI art into surreal cosmic odysseys with intricate otherworldly landscapes that blend organic and architectural elements.
## Repository Structure
```
├── CH40S-style-v0.safetensors
├── captions/
│   └── [training caption text files]
├── ENH/
│   ├── [prompt enhancer system prompts]
│   └── [user alteration templates]
├── images/
│   └── [sample images from ollama workflows]
└── workflows/
    └── [ComfyUI workflow files]
```
## LoRA Model
This specialized LoRA transforms ordinary prompts into **surreal cosmic landscapes** featuring the distinctive **CH40S style**. The model excels at generating:
- **Otherworldly cityscapes** suspended between realms
- **Cosmic phenomena** with swirling nebulae and celestial bodies
- **Dual-realm compositions** bridging terrestrial and cosmic elements
- **Organic-mechanical fusion** with root-like technological structures
- **Intricate architectural forms** in impossible geometries
- **Ethereal atmospheres** with dramatic light-shadow contrasts
The CH40S style specializes in creating **multi-layered fantasy environments** where futuristic cities merge seamlessly with cosmic landscapes, connected by intricate networks of organic pathways and celestial phenomena.
## Additional Resources
### `/captions/`
Original training descriptions showcasing surreal cosmic landscapes, otherworldly cityscapes, and the signature CH40S aesthetic with detailed scene compositions.
### `/ENH/`
Advanced prompt enhancement templates designed specifically for cosmic landscape generation, featuring atmospheric descriptors and architectural terminology.
### `/images/`
Curated sample outputs demonstrating the LoRA's capability across different cosmic scenarios, from ethereal celestial cities to complex multi-realm compositions.
### `/workflows/`
Optimized ComfyUI workflows for generating CH40S-style cosmic landscapes with recommended settings and node configurations.
## Technical Specifications
- **Training Model**: WAN 14B 2.1
- **Inference Model**: WAN 14B 2.2 (A14B) low noise
- **Training Focus**: Surreal cosmic landscapes, otherworldly architecture, organic-mechanical fusion
- **Trigger Word**: `CH40S style`
- **Optimal Weight Range**: 0.7-1.2
- **Resolution**: Best results at 1024x1024 and higher
## Getting Started
### Basic Usage
```
"A futuristic city floating in space, CH40S style"
```
### Enhanced Prompting
```
"Intricate cosmic landscape with sprawling otherworldly cityscape, swirling nebulae and celestial bodies, organic root-like structures connecting dual realms, dramatic light contrasts, CH40S style"
```
### Advanced Composition
```
"Surreal divided landscape, upper realm with ethereal cosmic city and floating planets, lower realm with fiery organic networks, connected by luminous pathways, intricate architectural details, dreamlike atmosphere, CH40S style"
```
## Best Practices
### Optimal Settings
- **CFG Scale**: 7-12 for best detail retention
- **Steps**: 25-35 for complex compositions
- **Sampler**: DPM++ 2M Karras or Euler A
- **LoRA Weight**: 0.8-1.1 for authentic CH40S aesthetic
### Prompt Enhancement Tips
- Include **architectural descriptors**: "sprawling cityscape", "towering spires", "intricate bridges"
- Add **cosmic elements**: "swirling nebulae", "celestial bodies", "cosmic phenomena"
- Specify **organic components**: "root-like networks", "organic pathways", "twisted tendrils"
- Emphasize **atmospheric qualities**: "otherworldly", "ethereal", "dreamlike", "surreal"
### Composition Guidelines
- **Dual-realm scenes** work exceptionally well (upper/lower world divisions)
- **Horizontal compositions** emphasize the signature cityscape elements
- **Central focal points** with radiating organic networks create depth
- **Contrast lighting** enhances the dramatic cosmic atmosphere
## Style Characteristics
### Visual Aesthetics
- **Surreal Realism**: Fantastical elements rendered with intricate detail
- **Cosmic Grandeur**: Vast scales with intimate architectural complexity
- **Organic-Tech Fusion**: Natural forms seamlessly integrated with futuristic structures
- **Atmospheric Depth**: Multiple visual layers creating immersive environments
### Color Palettes
- **Cosmic Blues**: Deep space backgrounds with ethereal highlights
- **Earth Tones**: Warm browns and oranges for organic elements
- **Fiery Accents**: Dramatic reds for energy and depth contrast
- **Ethereal Highlights**: Luminous whites and golds for celestial phenomena
### Signature Elements
- **Multi-layered Landscapes**: Complex vertical compositions spanning multiple realms
- **Interconnected Pathways**: Organic networks linking disparate elements
- **Celestial Integration**: Planets and cosmic phenomena as compositional anchors
- **Architectural Impossibility**: Structures that defy conventional physics
- **Atmospheric Drama**: Dynamic cloud formations and ethereal lighting
### Emotional Resonance
The CH40S style evokes **wonder and contemplation**, creating spaces that feel simultaneously **alien and familiar**. These cosmic landscapes invite exploration while maintaining an **ethereal, dreamlike quality** that transcends traditional landscape art.
---
*Ready to explore infinite cosmic realms? This LoRA transforms every prompt into an otherworldly journey through surreal landscapes where imagination meets the cosmos.*
|
amgmbkiev/blockassist-bc-skilled_mighty_grouse_1756130807
|
amgmbkiev
| 2025-08-25T14:24:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"skilled mighty grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T14:24:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- skilled mighty grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mushroomfleet/p0st3r-style
|
mushroomfleet
| 2025-08-25T14:24:16Z | 0 | 0 | null |
[
"text-to-image",
"base_model:Wan-AI/Wan2.2-T2V-A14B",
"base_model:finetune:Wan-AI/Wan2.2-T2V-A14B",
"region:us"
] |
text-to-image
| 2025-08-24T23:04:00Z |
---
base_model:
- Wan-AI/Wan2.2-T2V-A14B
pipeline_tag: text-to-image
---
[License: MIT](https://opensource.org/licenses/MIT)
[Base model: Wan-AI/Wan2.2-T2V-A14B](https://huggingface.co/Wan-AI/Wan2.2-T2V-A14B)
[Pipeline: text-to-image](https://huggingface.co/models?pipeline_tag=text-to-image)
# P0ST3R-style-v1 LoRA
**A vibrant poster-style LoRA that creates whimsical, advertisement-inspired artwork featuring dynamic portraits with fantastical elements, butterflies, halos, and explosive rainbow backgrounds.**
## Repository Structure
```
├── P0ST3R-style-v1.safetensors
├── captions/
│   └── [training caption text files]
├── ENH/
│   ├── [prompt enhancer system prompts]
│   └── [user alteration templates]
├── images/
│   └── [sample images from ollama workflows]
└── workflows/
    └── [ComfyUI workflow files]
```
## LoRA Model: P0ST3R-style-v1
This specialized LoRA transforms portraits into vibrant, poster-style masterpieces that blend contemporary digital art with whimsical fantasy elements. The model excels at creating dynamic compositions featuring:
**Core Visual Elements:**
- **Explosive Rainbow Backgrounds**: Vivid, multi-colored backdrops with geometric patterns
- **Fantastical Character Enhancement**: Glowing halos, butterfly companions, and ethereal lighting
- **Advertisement Aesthetic**: Bold, eye-catching layouts reminiscent of modern digital advertising
- **Neo-Expressionist Flair**: Vibrant color palettes with emotional impact and dynamic brushwork
**Character Specialization:**
- Portrait subjects with various expressions (surprised, joyful, contemplative)
- Period costume integration (18th-century wigs meets futuristic elements)
- Dynamic poses and gestures that command attention
- Seamless blend of realistic portraiture with fantastical embellishments
## Additional Resources
### `/captions` Folder
Contains the comprehensive training dataset captions showcasing the model's range across different character types, expressions, and fantastical scenarios. These captions demonstrate the LoRA's ability to generate everything from serene, halo-adorned figures to exuberant characters surrounded by butterflies and geometric patterns.
### `/ENH` Folder
**Prompt Enhancement System**: Curated system prompts and user templates designed to maximize the poster-style aesthetic. Includes specific guidance for achieving optimal rainbow backgrounds, butterfly integration, and the signature whimsical advertisement look.
### `/images` Folder
**Reference Gallery**: Sample outputs from ollama workflows showcasing the full spectrum of the model's capabilities, from subtle portrait enhancements to full fantastical transformations.
### `/workflows` Folder
**ComfyUI Integration**: Pre-configured workflow files optimized for poster-style generation, including specific node configurations for achieving the signature vibrant backgrounds and fantastical element integration.
## Technical Specifications
- **Training Model**: WAN 14B 2.1
- **Inference Model**: WAN 14B 2.2 (A14B) low noise
- **License**: MIT
- **Base Model**: Wan-AI/Wan2.2-T2V-A14B
- **Pipeline Tag**: text-to-image
- **Optimization**: Trained for maximum color vibrancy and fantastical element integration
## Getting Started
### Basic Usage
```
A poster of a [person] with [expression], surrounded by butterflies, with a rainbow background, in a whimsical poster style
```
### Advanced Prompting
```
A vibrant digital illustration of a [character description] with a glowing halo, standing in front of a colorful geometric background with butterflies, neon lights, and fantastical elements, poster-style advertisement aesthetic
```
### Recommended Settings
- **CFG Scale**: 7-9 for optimal color saturation
- **Sampling Steps**: 25-35 for detailed fantastical elements
- **Resolution**: 768x768 or 512x768 for portrait compositions
- **Sampler**: DPM++ 2M Karras for smooth color gradients
## Best Practices
### Maximizing Poster Aesthetic
- Include "poster-style", "advertisement", or "digital illustration" in prompts
- Specify background elements: "rainbow background", "colorful geometric shapes", "neon lights"
- Add fantastical elements: "butterflies", "glowing halo", "whimsical"
### Color Optimization
- Use descriptors like "vibrant", "bright", "colorful", "neon"
- Specify color combinations: "pink, blue, yellow, and green hues"
- Mention lighting: "soft lighting", "glowing", "dynamic lighting"
### Character Enhancement
- Specify expressions: "surprised", "smiling", "joyful", "contemplative"
- Include costume details: "black shirt", "white robe", "period wig"
- Add dynamic poses: "arms outstretched", "looking to the side", "dynamic pose"
## Style Characteristics
### **Era & Time Period**
Contemporary digital art meets 18th-century baroque portraiture, creating a unique temporal fusion that feels both timeless and futuristic.
### **Primary Themes**
- **Whimsical Advertisement Aesthetics**: Bold, commercial-inspired layouts
- **Fantasy Portrait Enhancement**: Everyday people transformed into ethereal beings
- **Neo-Pop Expressionism**: Emotional color work with commercial sensibilities
- **Digital Surrealism**: Fantastical elements seamlessly integrated into realistic portraits
### **Visual Signature**
- **Color Palette**: Explosive rainbows, neon highlights, and saturated primaries
- **Composition**: Dynamic, advertisement-inspired layouts with strong focal points
- **Lighting**: Soft, ethereal glows combined with bold, dramatic contrasts
- **Texture**: Smooth digital finish with painterly expressionist touches
### **Atmospheric Qualities**
The P0ST3R-style aesthetic creates an atmosphere of **joyful surrealism** - where the mundane transforms into the magical through vibrant color work and whimsical fantastical elements. Each generation feels like a celebration of both human expression and digital artistry, perfect for creating memorable, eye-catching artwork that bridges the gap between fine art and commercial design.
---
*Transform your portraits into vibrant poster masterpieces with P0ST3R-style-v1!*
|
mushroomfleet/4rt4rt-style
|
mushroomfleet
| 2025-08-25T14:23:53Z | 0 | 0 | null |
[
"text-to-image",
"base_model:Wan-AI/Wan2.2-T2V-A14B",
"base_model:finetune:Wan-AI/Wan2.2-T2V-A14B",
"region:us"
] |
text-to-image
| 2025-08-24T23:04:30Z |
---
base_model:
- Wan-AI/Wan2.2-T2V-A14B
pipeline_tag: text-to-image
---
[License: MIT](https://opensource.org/licenses/MIT)
[Base model: Wan-AI/Wan2.2-T2V-A14B](https://huggingface.co/Wan-AI/Wan2.2-T2V-A14B)
[Pipeline: text-to-image](https://huggingface.co/models?pipeline_tag=text-to-image)
# 4RT4RT-Gallery LoRA
A specialized LoRA for generating opulent gallery scenes and classical art collections in the distinctive 4rt4rt style.
## Repository Structure
```
├── 4rt4rt-style-v0c.safetensors
├── captions/
│   └── [training caption text files]
├── ENH/
│   ├── [prompt enhancer system prompts]
│   └── [user alteration templates]
├── images/
│   └── [sample images from ollama workflows]
└── workflows/
    └── [ComfyUI workflow files]
```
## LoRA Model
This LoRA specializes in creating luxurious gallery scenes and classical art collections rendered in the distinctive **4rt4rt style**. Drawing inspiration from European baroque and renaissance gallery traditions, it excels at generating:
- **Opulent Gallery Interiors**: Grand rooms with densely packed artwork covering every wall surface
- **Classical Art Collections**: Sophisticated arrangements of paintings, sculptures, and busts
- **Period Architecture**: Ornate ceilings, decorative moldings, and elaborate architectural details
- **Cultural Sophistication**: Elegantly dressed figures engaged in artistic appreciation and scholarly discourse
- **Historical Grandeur**: 18th-century European salon culture and Wunderkammer aesthetics
The model captures the essence of classical art patronage and collecting culture, creating scenes that blend historical accuracy with artistic magnificence in the unique 4rt4rt visual language.
## Additional Resources
### `/captions` Folder
Contains the original training captions featuring detailed descriptions of gallery scenes, architectural elements, and period figures that define the 4rt4rt aesthetic.
### `/ENH` Folder
Houses prompt enhancement tools and templates specifically calibrated for gallery and classical art generation, helping users achieve optimal results when working with historical and architectural subjects.
### `/workflows` Folder
Includes specialized ComfyUI workflows optimized for creating complex multi-element gallery scenes with proper lighting, perspective, and classical composition principles.
## Technical Specifications
- **Training Model**: WAN 14B 2.1
- **Inference Model**: WAN 14B 2.2 (A14B) low noise
- **License**: MIT
- **Base Model**: Wan-AI/Wan2.2-T2V-A14B
- **Pipeline Tag**: text-to-image
- **Trigger Word**: `4rt4rt style`
## Getting Started
### Basic Usage
```
A grand art gallery filled with classical paintings in 4rt4rt style
```
### Enhanced Prompting
```
An opulent 18th-century gallery room with densely packed paintings covering the walls, elegant figures in period attire examining artwork, ornate gold frames, classical sculptures on pedestals, chandelier lighting, in 4rt4rt style
```
### Advanced Scene Building
```
A lavish European collector's salon featuring baroque paintings, marble busts of historical figures, elegantly dressed connoisseurs in animated discussion, rich red walls with gold-framed artwork, ornate architectural details, natural lighting from tall windows, in detailed 4rt4rt style
```
## Best Practices
- **Use architectural descriptors** like "ornate," "baroque," "classical," and "opulent" for enhanced results
- **Include period details** such as "18th-century attire," "gold frames," and "marble sculptures"
- **Specify lighting conditions** like "chandelier," "natural light," or "gallery lighting"
- **Add human elements** with terms like "connoisseurs," "collectors," or "elegantly dressed figures"
- **Layer complexity** by mentioning multiple art forms: paintings, sculptures, busts, and decorative objects
## Style Characteristics
### **Era & Atmosphere**
- **Time Period**: 18th-century European salon culture, Renaissance collecting traditions
- **Mood**: Intellectual sophistication, cultural refinement, scholarly appreciation
- **Setting**: Grand galleries, private collections, architectural showcases
### **Visual Elements**
- **Composition**: Dense wall coverage with carefully arranged artworks
- **Architecture**: Ornate ceilings, decorative moldings, classical columns and arches
- **Color Palette**: Rich golds, deep reds, marble whites, and warm gallery lighting
- **Details**: Intricate frames, period clothing, classical sculptures, architectural elements
### **Character Focus**
- **Figures**: Elegantly dressed patrons, collectors, and connoisseurs
- **Poses**: Animated discussion, scholarly examination, artistic appreciation
- **Attire**: 18th-century European fashion, formal period clothing
### **Artistic Style**
- **Medium**: Detailed classical painting tradition with baroque influences
- **Technique**: Realistic representation with rich color saturation
- **Perspective**: Complex architectural spaces with multiple focal points
- **Aesthetic**: Cultural grandeur meets historical accuracy in the distinctive 4rt4rt visual language
---
*Experience the golden age of art collecting and gallery culture with this specialized LoRA, designed to transport viewers into the opulent world of classical European patronage and cultural sophistication.*
|
sposso22/dummy-tokenizer
|
sposso22
| 2025-08-25T14:22:56Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-25T14:22:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fujiantiiazhraa/blockassist-bc-marine_robust_bee_1756130279
|
fujiantiiazhraa
| 2025-08-25T14:22:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"marine robust bee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T14:22:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- marine robust bee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
alpcaferoglu/Qwen2.5-Coder-3B-Instruct_bd_cs_t2s_r32_a32_e2_bs2_gas4_lr0.0001_fs6f_cvdt_sftreason
|
alpcaferoglu
| 2025-08-25T14:22:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-25T03:58:03Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ijustabi/blockassist-bc-lethal_nimble_cockroach_1756131683
|
ijustabi
| 2025-08-25T14:21:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lethal nimble cockroach",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T14:21:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lethal nimble cockroach
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gensynme/blockassist-bc-iridescent_aquatic_parrot_1756131649
|
gensynme
| 2025-08-25T14:21:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"iridescent aquatic parrot",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T14:20:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- iridescent aquatic parrot
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
maxibillion1975/blockassist-bc-iridescent_squeaky_sandpiper_1756130082
|
maxibillion1975
| 2025-08-25T14:20:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"iridescent squeaky sandpiper",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T14:20:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- iridescent squeaky sandpiper
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ricodr/blockassist-bc-twitchy_toothy_clam_1756131464
|
ricodr
| 2025-08-25T14:18:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"twitchy toothy clam",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T14:18:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- twitchy toothy clam
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ijustabi/blockassist-bc-lethal_nimble_cockroach_1756131439
|
ijustabi
| 2025-08-25T14:17:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lethal nimble cockroach",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T14:17:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lethal nimble cockroach
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
indoempatnol/blockassist-bc-fishy_wary_swan_1756129817
|
indoempatnol
| 2025-08-25T14:17:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T14:17:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
YYZStart/TriP-LLM
|
YYZStart
| 2025-08-25T14:17:01Z | 0 | 0 | null |
[
"custom",
"arxiv:2508.00047",
"region:us"
] | null | 2025-08-25T13:51:54Z |
# TriP-LLM
This is the official checkpoint release for **TriP-LLM**, a novel framework for unsupervised anomaly detection in multivariate time-series data using pretrained Large Language Models (LLMs).
## Model Description
- **Name**: TriP-LLM
- **Task**: Time-Series Anomaly Detection
- **Framework**: PyTorch
- **Repository**: [GitHub โ YYZStart/TriP-LLM](https://github.com/YYZStart/TriP-LLM)
## Usage
Please refer to our [GitHub repository](https://github.com/YYZStart/TriP-LLM)
for model definitions, training code, and usage examples.
## Citation
If you find this repository useful for your research, please cite our paper:
```bibtex
@misc{TriPLLM,
title={TriP-LLM: A Tri-Branch Patch-wise Large Language Model Framework for Time-Series Anomaly Detection},
author={Yuan-Cheng Yu and Yen-Chieh Ouyang and Chun-An Lin},
year={2025},
eprint={2508.00047},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2508.00047},
}
```
|
indrarg/blockassist-bc-pensive_zealous_hyena_1756131334
|
indrarg
| 2025-08-25T14:16:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pensive zealous hyena",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T14:16:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pensive zealous hyena
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ricodr/blockassist-bc-twitchy_toothy_clam_1756131256
|
ricodr
| 2025-08-25T14:14:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"twitchy toothy clam",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T14:14:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- twitchy toothy clam
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nightmedia/Panacea-MegaScience-Qwen3-1.7B-q4-hi-mlx
|
nightmedia
| 2025-08-25T14:14:46Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"text-generation-inference",
"moe",
"trl",
"biology",
"chemistry",
"medical",
"mega-science",
"text-generation",
"conversational",
"en",
"zh",
"dataset:MegaScience/MegaScience",
"base_model:prithivMLmods/Panacea-MegaScience-Qwen3-1.7B",
"base_model:quantized:prithivMLmods/Panacea-MegaScience-Qwen3-1.7B",
"license:apache-2.0",
"4-bit",
"region:us"
] |
text-generation
| 2025-08-25T13:55:47Z |
---
license: apache-2.0
datasets:
- MegaScience/MegaScience
language:
- en
- zh
base_model: prithivMLmods/Panacea-MegaScience-Qwen3-1.7B
pipeline_tag: text-generation
library_name: mlx
tags:
- text-generation-inference
- moe
- trl
- biology
- chemistry
- medical
- mega-science
- mlx
---
# Panacea-MegaScience-Qwen3-1.7B-q4-hi-mlx
## Top Quantizations

### Recommended high-performers

**q5**
- Why: highest winogrande (0.694 vs. avg. 0.574) and excellent ARC-Easy (avg. ~0.398).
- Strength: best balance of accuracy and robustness across tasks (especially winogrande and ARC-Easy).
- Ideal for: production deployments needing top end-to-end accuracy.

**q6-hi**
- Why: best ARC-Easy (0.398) plus near-best winogrande (0.696).
- Strength: strong precision on ARC tasks with minimal loss in boolq (0.622).
- Ideal for: ARC-focused QA tasks or mixed-training pipelines.

**q4-hi**
- Why: best boolq (0.622, tied with q5/q6) and competitive hellaswag.
- Strength: lightweight optimization for quick inference on boolq/data-centric tasks.

### Performance insights

- Winogrande champion: q5 (0.694), optimal for complex reasoning tasks.
- Consistency: q5 and q6-hi both reach over 90% of the top winogrande score.
- Surprise: bf16 slightly underperforms on winogrande despite high ARC-Easy, making it useful for baseline testing.
- Cost-saver: q4-hi gives the best boolq with minimal setup overhead.

### Recommendation summary

| Use case | Top quant | Key advantage |
|---|---|---|
| Highest winogrande accuracy | q5 | +26% vs. bf16 (0.694 vs. 0.550) |
| ARC-Easy focus | q6-hi | Highest ARC-Easy (0.398) |
| BoolQ-centric workflows | q4-hi | Best boolq (0.622) |
| Balanced end-to-end | q5 | Best holistic median score |

Pro tip: if latency is critical, deploy q5 for accuracy and keep q4-hi as a backup (minimal trade-off in boolq, and q5 wins on winogrande).

### Visual summary

- Winogrande: q5 (0.694)
- ARC-Easy: q6-hi (0.398)
- BoolQ: q4-hi (0.622)
- Consistency: q5 / q6-hi
This model [Panacea-MegaScience-Qwen3-1.7B-q4-hi-mlx](https://huggingface.co/nightmedia/Panacea-MegaScience-Qwen3-1.7B-q4-hi-mlx) was
converted to MLX format from [prithivMLmods/Panacea-MegaScience-Qwen3-1.7B](https://huggingface.co/prithivMLmods/Panacea-MegaScience-Qwen3-1.7B)
using mlx-lm version **0.26.3**.
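The conversion itself is a single `mlx_lm.convert` invocation; a hedged reconstruction is shown below (the `--q-group-size 32` for the "hi" variant is an assumption, not stated on this card):

```bash
# Reconstruction of the conversion step, not the author's exact command.
mlx_lm.convert \
  --hf-path prithivMLmods/Panacea-MegaScience-Qwen3-1.7B \
  -q --q-bits 4 --q-group-size 32 \
  --mlx-path Panacea-MegaScience-Qwen3-1.7B-q4-hi-mlx
```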
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("nightmedia/Panacea-MegaScience-Qwen3-1.7B-q4-hi-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1756129746
|
vwzyrraz7l
| 2025-08-25T14:14:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T14:14:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
chainway9/blockassist-bc-untamed_quick_eel_1756129676
|
chainway9
| 2025-08-25T14:14:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T14:13:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lemonhat/Qwen2.5-7B-Instruct-t1_25k_v2_tag5
|
lemonhat
| 2025-08-25T14:13:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-25T14:01:52Z |
---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: t1_25k_v2_tag5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t1_25k_v2_tag5
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the t1_25k_v2_tag5 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2585
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08; no additional optimizer arguments)
- lr_scheduler_type: cosine
- num_epochs: 1
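For reference, a minimal sketch of how these settings map onto plain `transformers` `TrainingArguments` (illustrative only; the original run used LLaMA-Factory, and the output directory and eval cadence here are assumptions):
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
# The 8-GPU distributed setup (total batch size 8) would come from the
# launcher (e.g. torchrun), not from these arguments.
args = TrainingArguments(
    output_dir="t1_25k_v2_tag5",  # assumed
    learning_rate=5e-6,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="cosine",
    num_train_epochs=1,
    eval_strategy="steps",  # matches the 100-step eval cadence below
    eval_steps=100,
    logging_steps=100,
)
```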
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3469 | 0.0817 | 100 | 0.3211 |
| 0.2675 | 0.1634 | 200 | 0.3055 |
| 0.2778 | 0.2451 | 300 | 0.2926 |
| 0.2742 | 0.3268 | 400 | 0.2844 |
| 0.3024 | 0.4085 | 500 | 0.2788 |
| 0.2813 | 0.4902 | 600 | 0.2755 |
| 0.3025 | 0.5719 | 700 | 0.2691 |
| 0.3087 | 0.6536 | 800 | 0.2677 |
| 0.2335 | 0.7353 | 900 | 0.2613 |
| 0.2558 | 0.8170 | 1000 | 0.2606 |
| 0.2533 | 0.8987 | 1100 | 0.2592 |
| 0.2447 | 0.9804 | 1200 | 0.2585 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Solaneta/blockassist-bc-nocturnal_tame_mole_1756129619
|
Solaneta
| 2025-08-25T14:13:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"nocturnal tame mole",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T14:13:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- nocturnal tame mole
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
indrarg/blockassist-bc-pensive_zealous_hyena_1756131075
|
indrarg
| 2025-08-25T14:12:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pensive zealous hyena",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T14:11:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pensive zealous hyena
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1756129447
|
lisaozill03
| 2025-08-25T14:10:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T14:10:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
indrarg/blockassist-bc-pensive_zealous_hyena_1756130849
|
indrarg
| 2025-08-25T14:08:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pensive zealous hyena",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T14:08:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pensive zealous hyena
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
CineAI/Yolo-models
|
CineAI
| 2025-08-25T14:07:56Z | 0 | 0 | null |
[
"onnx",
"license:apache-2.0",
"region:us"
] | null | 2025-08-13T14:07:29Z |
---
license: apache-2.0
---
|
indrarg/blockassist-bc-pensive_zealous_hyena_1756130611
|
indrarg
| 2025-08-25T14:04:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pensive zealous hyena",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T14:04:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pensive zealous hyena
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jasonhuang3/bpo-qwen-2-5-7b-math-ep2-our_6_alpha_0.3_lora_28k
|
jasonhuang3
| 2025-08-25T14:03:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"dpo",
"trl",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"endpoints_compatible",
"region:us"
] | null | 2025-08-23T17:40:21Z |
---
base_model: Qwen/Qwen2.5-Math-7B
library_name: transformers
model_name: bpo-qwen-2-5-7b-math-ep2-our_6_alpha_0.3_lora_28k
tags:
- generated_from_trainer
- dpo
- trl
licence: license
---
# Model Card for bpo-qwen-2-5-7b-math-ep2-our_6_alpha_0.3_lora_28k
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jasonhuang3/bpo-qwen-2-5-7b-math-ep2-our_6_alpha_0.3_lora_28k", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jasonhuang3-school/huggingface/runs/8bcaz9hg)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.19.1
- Transformers: 4.53.1
- Pytorch: 2.4.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ovedrive/qwen-image-edit-4bit
|
ovedrive
| 2025-08-25T14:02:48Z | 1,398 | 13 |
diffusers
|
[
"diffusers",
"safetensors",
"image-to-image",
"en",
"zh",
"arxiv:2508.02324",
"base_model:Qwen/Qwen-Image-Edit",
"base_model:quantized:Qwen/Qwen-Image-Edit",
"license:apache-2.0",
"diffusers:QwenImageEditPipeline",
"region:us"
] |
image-to-image
| 2025-08-19T01:14:27Z |
---
license: apache-2.0
language:
- en
- zh
library_name: diffusers
pipeline_tag: image-to-image
quantized_by: A Dujari
base_model:
- Qwen/Qwen-Image-Edit
base_model_relation: quantized
---
This is an NF4-quantized version of Qwen-Image-Edit that runs on GPUs with 20GB of VRAM; it can also run on lower VRAM, such as 16GB.
Other NF4 quantizations made the mistake of blindly quantizing all layers in the transformer.
This one does not: some layers are retained at full precision to ensure quality output.
You can use the original Qwen-Image-Edit parameters.
This model is `not yet` available for inference at JustLab.ai.
Model tested: working perfectly even with 10 steps.
Contact: [JustLab.ai](https://justlab.ai) for commercial support.
### Performance on RTX 4090
- 20 steps: about 78 seconds.
- 10 steps: about 40 seconds.
Interestingly, I was under the impression that Qwen-VL could not be quantized, which is why several projects use the full 15GB model. Here it has been quantized too and it seems to be working fine.
Sample script (min. 20GB VRAM):
```python
import os
from PIL import Image
import torch
from diffusers import QwenImageEditPipeline
model_path = "ovedrive/qwen-image-edit-4bit"
pipeline = QwenImageEditPipeline.from_pretrained(model_path, torch_dtype=torch.bfloat16)
print("pipeline loaded") # not true but whatever. do not move to cuda
pipeline.set_progress_bar_config(disable=None)
pipeline.enable_model_cpu_offload() #if you have enough VRAM replace this line with `pipeline.to("cuda")` which is 20GB VRAM
image = Image.open("./example.png").convert("RGB")
prompt = "Remove the lady head with white hair"
inputs = {
"image": image,
"prompt": prompt,
"generator": torch.manual_seed(0),
"true_cfg_scale": 4.0,
"negative_prompt": " ",
"num_inference_steps": 20, # even 10 steps should be enough in many cases
}
with torch.inference_mode():
output = pipeline(**inputs)
output_image = output.images[0]
output_image.save("output_image_edit.png")
print("image saved at", os.path.abspath("output_image_edit.png"))
```
The original Qwen-Image attributions are included verbatim below.
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/qwen_image_edit_logo.png" width="400"/>
</p>
<p align="center">
💜 <a href="https://chat.qwen.ai/"><b>Qwen Chat</b></a>   |   🤗 <a href="https://huggingface.co/Qwen/Qwen-Image-Edit">Hugging Face</a>   |   🤖 <a href="https://modelscope.cn/models/Qwen/Qwen-Image-Edit">ModelScope</a>   |    📑 <a href="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/Qwen_Image.pdf">Tech Report</a>    |    📖 <a href="https://qwenlm.github.io/blog/qwen-image-edit/">Blog</a>   
<br>
🖥️ <a href="https://huggingface.co/spaces/Qwen/Qwen-Image-Edit">Demo</a>   |   💬 <a href="https://github.com/QwenLM/Qwen-Image/blob/main/assets/wechat.png">WeChat (微信)</a>   |   🫨 <a href="https://discord.gg/CV4E9rpNSD">Discord</a>  |    <a href="https://github.com/QwenLM/Qwen-Image">Github</a>  
</p>
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/edit_homepage.jpg" width="1600"/>
</p>
# Introduction
We are excited to introduce Qwen-Image-Edit, the image editing version of Qwen-Image. Built upon our 20B Qwen-Image model, Qwen-Image-Edit successfully extends Qwen-Image's unique text rendering capabilities to image editing tasks, enabling precise text editing. Furthermore, Qwen-Image-Edit simultaneously feeds the input image into Qwen2.5-VL (for visual semantic control) and the VAE Encoder (for visual appearance control), achieving capabilities in both semantic and appearance editing. To experience the latest model, visit [Qwen Chat](https://qwen.ai) and select the "Image Editing" feature.
Key Features:
* **Semantic and Appearance Editing**: Qwen-Image-Edit supports both low-level visual appearance editing (such as adding, removing, or modifying elements, requiring all other regions of the image to remain completely unchanged) and high-level visual semantic editing (such as IP creation, object rotation, and style transfer, allowing overall pixel changes while maintaining semantic consistency).
* **Precise Text Editing**: Qwen-Image-Edit supports bilingual (Chinese and English) text editing, allowing direct addition, deletion, and modification of text in images while preserving the original font, size, and style.
* **Strong Benchmark Performance**: Evaluations on multiple public benchmarks demonstrate that Qwen-Image-Edit achieves state-of-the-art (SOTA) performance in image editing tasks, establishing it as a powerful foundation model for image editing.
## Quick Start
Install the latest version of diffusers
```
pip install git+https://github.com/huggingface/diffusers
```
The following contains a code snippet illustrating how to use the model to generate images based on text prompts:
```python
import os
from PIL import Image
import torch
from diffusers import QwenImageEditPipeline
pipeline = QwenImageEditPipeline.from_pretrained("Qwen/Qwen-Image-Edit")
print("pipeline loaded")
pipeline.to(torch.bfloat16)
pipeline.to("cuda")
pipeline.set_progress_bar_config(disable=None)
image = Image.open("./input.png").convert("RGB")
prompt = "Change the rabbit's color to purple, with a flash light background."
inputs = {
"image": image,
"prompt": prompt,
"generator": torch.manual_seed(0),
"true_cfg_scale": 4.0,
"negative_prompt": " ",
"num_inference_steps": 50,
}
with torch.inference_mode():
output = pipeline(**inputs)
output_image = output.images[0]
output_image.save("output_image_edit.png")
print("image saved at", os.path.abspath("output_image_edit.png"))
```
## Showcase
One of the highlights of Qwen-Image-Edit lies in its powerful capabilities for semantic and appearance editing. Semantic editing refers to modifying image content while preserving the original visual semantics. To intuitively demonstrate this capability, let's take Qwen's mascot, Capybara, as an example:

As can be seen, although most pixels in the edited image differ from those in the input image (the leftmost image), the character consistency of Capybara is perfectly preserved. Qwen-Image-Edit's powerful semantic editing capability enables effortless and diverse creation of original IP content.
Furthermore, on Qwen Chat, we designed a series of editing prompts centered around the 16 MBTI personality types. Leveraging these prompts, we successfully created a set of MBTI-themed emoji packs based on our mascot Capybara, effortlessly expanding the IP's reach and expression.

Moreover, novel view synthesis is another key application scenario in semantic editing. As shown in the two example images below, Qwen-Image-Edit can not only rotate objects by 90 degrees, but also perform a full 180-degree rotation, allowing us to directly see the back side of the object:


Another typical application of semantic editing is style transfer. For instance, given an input portrait, Qwen-Image-Edit can easily transform it into various artistic styles such as Studio Ghibli. This capability holds significant value in applications like virtual avatar creation:

In addition to semantic editing, appearance editing is another common image editing requirement. Appearance editing emphasizes keeping certain regions of the image completely unchanged while adding, removing, or modifying specific elements. The image below illustrates a case where a signboard is added to the scene.
As shown, Qwen-Image-Edit not only successfully inserts the signboard but also generates a corresponding reflection, demonstrating exceptional attention to detail.

Below is another interesting example, demonstrating how to remove fine hair strands and other small objects from an image.

Additionally, the color of a specific letter "n" in the image can be modified to blue, enabling precise editing of particular elements.

Appearance editing also has wide-ranging applications in scenarios such as adjusting a person's background or changing clothing. The three images below demonstrate these practical use cases respectively.


Another standout feature of Qwen-Image-Edit is its accurate text editing capability, which stems from Qwen-Image's deep expertise in text rendering. As shown below, the following two cases vividly demonstrate Qwen-Image-Edit's powerful performance in editing English text:


Qwen-Image-Edit can also directly edit Chinese posters, enabling not only modifications to large headline text but also precise adjustments to even small and intricate text elements.

Finally, let's walk through a concrete image editing example to demonstrate how to use a chained editing approach to progressively correct errors in a calligraphy artwork generated by Qwen-Image:

In this artwork, several Chinese characters contain generation errors. We can leverage Qwen-Image-Edit to correct them step by step. For instance, we can draw bounding boxes on the original image to mark the regions that need correction, instructing Qwen-Image-Edit to fix these specific areas. Here, we want the character "稽" to be correctly written within the red box, and the character "亭" to be accurately rendered in the blue region.

However, in practice, the character "稽" is relatively obscure, and the model fails to correct it in one step. The lower-right component of "稽" should be "旨" rather than "日". At this point, we can further highlight the "日" portion with a red box, instructing Qwen-Image-Edit to fine-tune this detail and replace it with "旨".

Isn't it amazing? With this chained, step-by-step editing approach, we can continuously correct character errors until the desired final result is achieved.
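In code, chained editing is just a loop that feeds each output image back in as the next input. A minimal sketch reusing the Quick Start API above (the prompts and file names are illustrative, not the exact ones used in this walkthrough):
```python
import torch
from PIL import Image
from diffusers import QwenImageEditPipeline

pipeline = QwenImageEditPipeline.from_pretrained("Qwen/Qwen-Image-Edit")
pipeline.to(torch.bfloat16)
pipeline.to("cuda")

# Each step marks a region (e.g. with a drawn box) and asks for a targeted fix.
prompts = [
    'Correctly write the character "稽" inside the red box',
    'Replace the "日" component in the red box with "旨"',
]
current = Image.open("./calligraphy.png").convert("RGB")  # placeholder path
for step_prompt in prompts:
    with torch.inference_mode():
        out = pipeline(
            image=current,
            prompt=step_prompt,
            generator=torch.manual_seed(0),
            true_cfg_scale=4.0,
            negative_prompt=" ",
            num_inference_steps=50,
        )
    current = out.images[0]  # the edited image becomes the next step's input
current.save("calligraphy_fixed.png")
```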





Finally, we have successfully obtained a completely correct calligraphy version of *Lantingji Xu (Orchid Pavilion Preface)*!
In summary, we hope that Qwen-Image-Edit can further advance the field of image generation, truly lower the technical barriers to visual content creation, and inspire even more innovative applications.
## License Agreement
Qwen-Image is licensed under Apache 2.0.
## Citation
We kindly encourage citation of our work if you find it useful.
```bibtex
@misc{wu2025qwenimagetechnicalreport,
title={Qwen-Image Technical Report},
author={Chenfei Wu and Jiahao Li and Jingren Zhou and Junyang Lin and Kaiyuan Gao and Kun Yan and Sheng-ming Yin and Shuai Bai and Xiao Xu and Yilei Chen and Yuxiang Chen and Zecheng Tang and Zekai Zhang and Zhengyi Wang and An Yang and Bowen Yu and Chen Cheng and Dayiheng Liu and Deqing Li and Hang Zhang and Hao Meng and Hu Wei and Jingyuan Ni and Kai Chen and Kuan Cao and Liang Peng and Lin Qu and Minggang Wu and Peng Wang and Shuting Yu and Tingkun Wen and Wensen Feng and Xiaoxiao Xu and Yi Wang and Yichang Zhang and Yongqiang Zhu and Yujia Wu and Yuxuan Cai and Zenan Liu},
year={2025},
eprint={2508.02324},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2508.02324},
}
```
|
apriasmoro/472fc470-7ffb-481a-832e-2b54f6c9fdce
|
apriasmoro
| 2025-08-25T14:00:58Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:jingyeom/seal3.1.6n_7b",
"base_model:adapter:jingyeom/seal3.1.6n_7b",
"region:us"
] | null | 2025-08-25T14:00:33Z |
---
base_model: jingyeom/seal3.1.6n_7b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
koloni/blockassist-bc-deadly_graceful_stingray_1756128864
|
koloni
| 2025-08-25T14:00:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T14:00:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756130372
|
Ferdi3425
| 2025-08-25T14:00:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T14:00:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Mo-alaa/Qwen2.5-0.5b
|
Mo-alaa
| 2025-08-25T14:00:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-25T13:45:50Z |
---
base_model: unsloth/qwen2.5-0.5b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Mo-alaa
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-0.5b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
vserifsaglam/Qwen3-Reranker-4B-4bit-MLX
|
vserifsaglam
| 2025-08-25T13:59:25Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-Reranker-4B",
"base_model:quantized:Qwen/Qwen3-Reranker-4B",
"license:apache-2.0",
"4-bit",
"region:us"
] |
text-generation
| 2025-08-25T13:39:00Z |
---
license: apache-2.0
base_model: Qwen/Qwen3-Reranker-4B
library_name: mlx
pipeline_tag: text-generation
tags:
- mlx
---
# vserifsaglam/Qwen3-Reranker-4B-4bit-MLX
This model [vserifsaglam/Qwen3-Reranker-4B-4bit-MLX](https://huggingface.co/vserifsaglam/Qwen3-Reranker-4B-4bit-MLX) was
converted to MLX format from [Qwen/Qwen3-Reranker-4B](https://huggingface.co/Qwen/Qwen3-Reranker-4B)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("vserifsaglam/Qwen3-Reranker-4B-4bit-MLX")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
Kwokou/Mini-Spyra-v.3.6
|
Kwokou
| 2025-08-25T13:59:14Z | 0 | 0 | null |
[
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
] | null | 2025-08-25T11:37:43Z |
---
license: apache-2.0
---
|
ASethi04/meta-llama-Meta-Llama-3-8B-gsm8k-third-lora-4-0.0001
|
ASethi04
| 2025-08-25T13:59:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:finetune:meta-llama/Meta-Llama-3-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-08-25T13:05:18Z |
---
base_model: meta-llama/Meta-Llama-3-8B
library_name: transformers
model_name: meta-llama-Meta-Llama-3-8B-gsm8k-third-lora-4-0.0001
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for meta-llama-Meta-Llama-3-8B-gsm8k-third-lora-4-0.0001
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ASethi04/meta-llama-Meta-Llama-3-8B-gsm8k-third-lora-4-0.0001", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/n3wddtnj)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
maidacundo/annie-lite-v0.2.10-qwen3-8b
|
maidacundo
| 2025-08-25T13:57:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-25T13:51:42Z |
---
base_model: unsloth/Qwen3-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** maidacundo
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-8B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
eldad-akhaumere/whisper-small-ha-bleu-v1
|
eldad-akhaumere
| 2025-08-25T13:57:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ha",
"dataset:eldad-akhaumere/common_voice_16_0_",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-25T13:17:30Z |
---
library_name: transformers
language:
- ha
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- eldad-akhaumere/common_voice_16_0_
metrics:
- bleu
model-index:
- name: Whisper Small_Ha Bleu - Eldad Akhaumere
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 16.0_
type: eldad-akhaumere/common_voice_16_0_
config: ha
split: None
args: 'config: ha, split: test'
metrics:
- name: Bleu
type: bleu
value: 11.764900582287524
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small_Ha Bleu - Eldad Akhaumere
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 16.0_ dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8567
- Bleu: 11.7649
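A minimal inference sketch (not part of the original card; the audio path is a placeholder for any Hausa speech clip):
```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint for Hausa speech recognition.
asr = pipeline(
    "automatic-speech-recognition",
    model="eldad-akhaumere/whisper-small-ha-bleu-v1",
)
# "sample.wav" is a placeholder; the pipeline resamples audio to 16 kHz internally.
print(asr("sample.wav")["text"])
```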
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.4391 | 0.3185 | 50 | 1.9226 | 7.4601 |
| 0.3422 | 0.6369 | 100 | 1.9320 | 9.7294 |
| 0.3862 | 0.9554 | 150 | 1.8567 | 11.7649 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.8.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.4
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756130085
|
liukevin666
| 2025-08-25T13:55:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T13:55:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
calegpedia/blockassist-bc-stealthy_slimy_rooster_1756128336
|
calegpedia
| 2025-08-25T13:55:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stealthy slimy rooster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T13:55:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy slimy rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756130076
|
Ferdi3425
| 2025-08-25T13:55:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T13:55:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
FatihAtalayy/blockassist-bc-tangled_large_sheep_1756129964
|
FatihAtalayy
| 2025-08-25T13:53:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled large sheep",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T13:53:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled large sheep
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ChillPower/lotr-swordsmith
|
ChillPower
| 2025-08-25T13:52:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-25T13:51:57Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ChillPower
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Stasonelison/blockassist-bc-howling_powerful_aardvark_1756129854
|
Stasonelison
| 2025-08-25T13:51:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"howling powerful aardvark",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T13:51:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- howling powerful aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ricodr/blockassist-bc-twitchy_toothy_clam_1756129846
|
ricodr
| 2025-08-25T13:51:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"twitchy toothy clam",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T13:51:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- twitchy toothy clam
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zarude/blockassist-bc-rabid_timid_rat_1756129837
|
zarude
| 2025-08-25T13:51:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rabid timid rat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T13:51:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rabid timid rat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
douhu881a/blockassist-bc-leaping_rangy_yak_1756129851
|
douhu881a
| 2025-08-25T13:51:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"leaping rangy yak",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T13:51:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- leaping rangy yak
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kugler/gbert-large-AmDi.small-synset-classifier
|
kugler
| 2025-08-25T13:50:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:deepset/gbert-large",
"base_model:finetune:deepset/gbert-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-25T13:49:44Z |
---
library_name: transformers
license: mit
base_model: deepset/gbert-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: gbert_synset_classifier_amdi_small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gbert_synset_classifier_amdi_small
This model is a fine-tuned version of [deepset/gbert-large](https://huggingface.co/deepset/gbert-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6371
- Accuracy: 0.8443
- F1: 0.8414
- Precision: 0.8523
- Recall: 0.8443
- F1 Macro: 0.7742
- Precision Macro: 0.7539
- Recall Macro: 0.8118
- F1 Micro: 0.8443
- Precision Micro: 0.8443
- Recall Micro: 0.8443
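A minimal usage sketch (not part of the original card; the German example sentence is illustrative):
```python
from transformers import pipeline

# Load the fine-tuned GBERT synset classifier.
clf = pipeline(
    "text-classification",
    model="kugler/gbert-large-AmDi.small-synset-classifier",
)
# Returns the predicted synset label and its confidence score.
print(clf("Das ist ein Beispielsatz."))
```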
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | F1 Macro | Precision Macro | Recall Macro | F1 Micro | Precision Micro | Recall Micro |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:--------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|
| 3.1817 | 0.6483 | 100 | 1.7424 | 0.6200 | 0.5455 | 0.5576 | 0.6200 | 0.2894 | 0.3465 | 0.2954 | 0.6200 | 0.6200 | 0.6200 |
| 1.0711 | 1.2966 | 200 | 0.7171 | 0.8140 | 0.7971 | 0.7992 | 0.8140 | 0.5958 | 0.5870 | 0.6238 | 0.8140 | 0.8140 | 0.8140 |
| 0.649 | 1.9449 | 300 | 0.6003 | 0.8275 | 0.8184 | 0.8282 | 0.8275 | 0.6797 | 0.6812 | 0.7138 | 0.8275 | 0.8275 | 0.8275 |
| 0.4903 | 2.5932 | 400 | 0.5668 | 0.8336 | 0.8268 | 0.8375 | 0.8336 | 0.6942 | 0.6869 | 0.7271 | 0.8336 | 0.8336 | 0.8336 |
| 0.4095 | 3.2415 | 500 | 0.5511 | 0.8387 | 0.8351 | 0.8398 | 0.8387 | 0.7224 | 0.7198 | 0.7414 | 0.8387 | 0.8387 | 0.8387 |
| 0.3586 | 3.8898 | 600 | 0.5313 | 0.8415 | 0.8360 | 0.8452 | 0.8415 | 0.7188 | 0.7075 | 0.7481 | 0.8415 | 0.8415 | 0.8415 |
| 0.2813 | 4.5381 | 700 | 0.5442 | 0.8485 | 0.8451 | 0.8502 | 0.8485 | 0.7290 | 0.7355 | 0.7419 | 0.8485 | 0.8485 | 0.8485 |
| 0.2543 | 5.1864 | 800 | 0.5736 | 0.8494 | 0.8461 | 0.8515 | 0.8494 | 0.7812 | 0.7708 | 0.8047 | 0.8494 | 0.8494 | 0.8494 |
| 0.1928 | 5.8347 | 900 | 0.5791 | 0.8448 | 0.8419 | 0.8484 | 0.8448 | 0.7646 | 0.7536 | 0.7899 | 0.8448 | 0.8448 | 0.8448 |
| 0.1645 | 6.4830 | 1000 | 0.6371 | 0.8443 | 0.8414 | 0.8523 | 0.8443 | 0.7742 | 0.7539 | 0.8118 | 0.8443 | 0.8443 | 0.8443 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.20.3
|
douhu881a/blockassist-bc-leaping_rangy_yak_1756129730
|
douhu881a
| 2025-08-25T13:49:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"leaping rangy yak",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T13:49:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- leaping rangy yak
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756129714
|
Ferdi3425
| 2025-08-25T13:49:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T13:49:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Stasonelison/blockassist-bc-howling_powerful_aardvark_1756129650
|
Stasonelison
| 2025-08-25T13:48:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"howling powerful aardvark",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T13:48:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- howling powerful aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lqpl/blockassist-bc-hairy_insectivorous_antelope_1756129393
|
lqpl
| 2025-08-25T13:46:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy insectivorous antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T13:44:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy insectivorous antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
raniero/latest-001
|
raniero
| 2025-08-25T13:46:43Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2025-08-25T13:46:36Z |
# Submission latest-001
- Base model: mistralai/Mistral-7B-Instruct-v0.2
- Repo: raniero/latest-001
- SHA256: `d7264971076d0f3f5e8f91ff52b8d18ec2ccf0f13220908ea42907dd5ec6dc4e`
- Task: latest-001
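A minimal sketch for verifying the published checksum after download (the artifact file name is an assumption; substitute the actual file):
```python
import hashlib

EXPECTED = "d7264971076d0f3f5e8f91ff52b8d18ec2ccf0f13220908ea42907dd5ec6dc4e"

def sha256_of(path: str) -> str:
    """Stream the file in 1 MiB chunks to avoid loading it all into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# "adapter_model.safetensors" is a hypothetical file name.
assert sha256_of("adapter_model.safetensors") == EXPECTED, "checksum mismatch"
```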
|
rpreite/Llama-3.2-1B-Instruct-INT4-W4A16
|
rpreite
| 2025-08-25T13:46:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"compressed-tensors",
"region:us"
] |
text-generation
| 2025-08-25T13:39:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Bisher/swept-cloud-16-lora
|
Bisher
| 2025-08-25T13:46:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-12b-it",
"base_model:finetune:unsloth/gemma-3-12b-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-25T13:44:16Z |
---
base_model: unsloth/gemma-3-12b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Bisher
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-12b-it
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Arnold145/whisper_finetuned
|
Arnold145
| 2025-08-25T12:28:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-25T12:07:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
khangnguyen0/Qwen3-0.6B-Gensyn-Swarm-keen_shaggy_grouse
|
khangnguyen0
| 2025-08-25T12:27:17Z | 81 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am keen_shaggy_grouse",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-24T16:37:08Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am keen_shaggy_grouse
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
pidbu/blockassist-bc-whistling_alert_shrew_1756124614
|
pidbu
| 2025-08-25T12:25:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"whistling alert shrew",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T12:24:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whistling alert shrew
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sweatSmile/HF-SmolLM3-3B-Math-Formulas-4bit
|
sweatSmile
| 2025-08-25T12:25:50Z | 2 | 1 |
peft
|
[
"peft",
"safetensors",
"smollm3",
"maths",
"lora",
"bitsandbytes",
"small_model",
"4_bit",
"en",
"dataset:ddrg/math_formulas",
"base_model:HuggingFaceTB/SmolLM3-3B",
"base_model:adapter:HuggingFaceTB/SmolLM3-3B",
"region:us"
] | null | 2025-08-25T02:09:10Z |
---
datasets:
- ddrg/math_formulas
language:
- en
base_model:
- HuggingFaceTB/SmolLM3-3B
tags:
- maths
- lora
- peft
- bitsandbytes
- small_model
- 4_bit
---
# SmolLM3-3B-Math-Formulas-4bit
## Model Description
**SmolLM3-3B-Math-Formulas-4bit** is a fine-tuned version of [HuggingFaceTB/SmolLM3-3B](https://huggingface.co/HuggingFaceTB/SmolLM3-3B) specialized for mathematical formula understanding and generation. The model has been optimized using 4-bit quantization (NF4) with LoRA adapters for efficient training and inference.
- **Base Model**: HuggingFaceTB/SmolLM3-3B
- **Model Type**: Causal Language Model
- **Quantization**: 4-bit NF4 with double quantization
- **Fine-tuning Method**: QLoRA (Quantized Low-Rank Adaptation)
- **Specialization**: Mathematical formulas and expressions
## Training Details
### Dataset
- **Source**: [ddrg/math_formulas](https://huggingface.co/datasets/ddrg/math_formulas)
- **Size**: 1,000 samples (randomly selected from 2.89M total)
- **Content**: Mathematical formulas, equations, and expressions in LaTeX format
### Training Configuration
- **Training Loss**: 0.589 (final)
- **Epochs**: 6
- **Batch Size**: 8 (per device)
- **Learning Rate**: 2.5e-4 with cosine scheduler
- **Max Sequence Length**: 128 tokens
- **Gradient Accumulation**: 2 steps
- **Optimizer**: AdamW with 0.01 weight decay
- **Precision**: FP16
- **LoRA Configuration** (see the sketch after this list):
- r=4, alpha=8
- Dropout: 0.1
- Target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
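A minimal sketch of the QLoRA setup these hyperparameters describe (an illustration, not the exact training script used for this model; assumes `transformers`, `peft`, and `bitsandbytes` are installed):
```python
# Minimal QLoRA sketch matching the configuration above (illustrative, not the
# author's exact script).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # 4-bit NF4 quantization
    bnb_4bit_use_double_quant=True,        # double quantization, as in the card
    bnb_4bit_compute_dtype=torch.float16,  # FP16 precision
)

model = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceTB/SmolLM3-3B",
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=4,
    lora_alpha=8,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```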
### Hardware & Performance
- **Training Time**: 265 seconds (4.4 minutes)
- **Training Speed**: 5.68 samples/second
- **Total Steps**: 96
- **Memory Efficiency**: 4-bit quantization for reduced VRAM usage
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the model and tokenizer
model_name = "sweatSmile/HF-SmolLM3-3B-Math-Formulas-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto"
)

# Generate mathematical content
prompt = "Explain this mathematical formula:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=150,
        temperature=0.7,
        do_sample=True,
        pad_token_id=tokenizer.eos_token_id
    )

response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Intended Use Cases
- **Mathematical Education**: Explaining mathematical formulas and concepts
- **LaTeX Generation**: Creating properly formatted mathematical expressions
- **Formula Analysis**: Understanding and breaking down complex mathematical equations
- **Mathematical Problem Solving**: Assisting with mathematical computations and derivations
## Limitations
- **Domain Specific**: Optimized primarily for mathematical content
- **Training Data Size**: Fine-tuned on only 1,000 samples
- **Quantization Effects**: 4-bit quantization may introduce minor precision loss
- **Context Length**: Limited to 128 tokens for mathematical expressions
- **Language**: Primarily trained on English mathematical notation
## Performance Metrics
- **Final Training Loss**: 0.589
- **Convergence**: Achieved in 6 epochs (efficient training)
- **Improvement**: 52% loss reduction compared to baseline configuration
- **Efficiency**: 51% faster training compared to initial setup
## Model Architecture
Based on SmolLM3-3B with the following modifications:
- 4-bit NF4 quantization for memory efficiency
- LoRA adapters for parameter-efficient fine-tuning
- Specialized for mathematical formula understanding
## Citation
If you use this model, please cite:
```bibtex
@misc{smollm3-math-formulas-4bit,
title={SmolLM3-3B-Math-Formulas-4bit},
author={sweatSmile},
year={2025},
base_model={HuggingFaceTB/SmolLM3-3B},
dataset={ddrg/math_formulas},
method={QLoRA fine-tuning with 4-bit quantization}
}
```
## License
This model inherits the license from the base SmolLM3-3B model. Please refer to the original model's license for usage terms.
## Acknowledgments
- **Base Model**: HuggingFace Team for SmolLM3-3B
- **Dataset**: Dresden Database Research Group for the math_formulas dataset
- **Training Framework**: Hugging Face Transformers and TRL libraries
- **Quantization**: bitsandbytes library for 4-bit optimization
|
mia-project-2025/T5-base-adapter-natural-questions-shortQA
|
mia-project-2025
| 2025-08-25T12:25:23Z | 0 | 0 | null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-08-24T08:39:31Z |
---
license: apache-2.0
---
|
akunode/blockassist-bc-long_prickly_eel_1756124605
|
akunode
| 2025-08-25T12:24:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"long prickly eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T12:24:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- long prickly eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hutaba-dev/Qwen3-0.6B-Gensyn-Swarm-armored_pesty_mule
|
hutaba-dev
| 2025-08-25T12:23:54Z | 46 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am armored_pesty_mule",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-24T18:51:57Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am armored_pesty_mule
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kznmp3/blockassist-bc-lively_raging_hippo_1756124289
|
kznmp3
| 2025-08-25T12:23:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lively raging hippo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T12:19:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lively raging hippo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756124523
|
Ferdi3425
| 2025-08-25T12:22:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T12:22:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
unitova/blockassist-bc-zealous_sneaky_raven_1756122807
|
unitova
| 2025-08-25T12:21:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T12:21:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zineczku/blockassist-bc-shy_fierce_dog_1756124373
|
zineczku
| 2025-08-25T12:20:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"shy fierce dog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T12:19:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- shy fierce dog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
biswac2021/blockassist-bc-wiry_patterned_clam_1756124336
|
biswac2021
| 2025-08-25T12:19:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry patterned clam",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T12:19:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry patterned clam
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Mostefa-Terbeche/diabetic-retinopathy-combined-vit_b_16-original-20250718-101851
|
Mostefa-Terbeche
| 2025-08-25T12:19:11Z | 0 | 0 | null |
[
"diabetic-retinopathy",
"medical-imaging",
"pytorch",
"computer-vision",
"retinal-imaging",
"dataset:combined",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2025-08-25T09:04:12Z |
---
license: apache-2.0
tags:
- diabetic-retinopathy
- medical-imaging
- pytorch
- computer-vision
- retinal-imaging
datasets:
- combined
metrics:
- accuracy
- quadratic-kappa
- auc
model-index:
- name: combined_vit_b_16_original
results:
- task:
type: image-classification
name: Diabetic Retinopathy Classification
dataset:
type: combined
name: COMBINED
metrics:
- type: accuracy
value: 0.5573101490418737
- type: quadratic-kappa
value: 0.727149685302214
---
# Diabetic Retinopathy Classification Model
## Model Description
This model is trained for diabetic retinopathy classification using the vit_b_16 architecture on the combined dataset with original preprocessing.
## Model Details
- **Architecture**: vit_b_16
- **Dataset**: combined
- **Preprocessing**: original
- **Training Date**: 20250718-101851
- **Task**: 5-class diabetic retinopathy grading (0-4)
- **Directory**: combined_vit_b_16_20250718-101851_new
## Performance
- **Test Accuracy**: 0.5573101490418737
- **Test Quadratic Kappa**: 0.727149685302214
- **Validation Kappa**: 0.727149685302214
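For reference, the quadratic-weighted kappa reported above can be computed with scikit-learn; a minimal sketch with illustrative labels (not the model's actual test outputs):
```python
# Illustrative computation of the quadratic-weighted kappa metric used above.
# The grade vectors here are made up; they are not the model's test outputs.
from sklearn.metrics import cohen_kappa_score

y_true = [0, 0, 1, 2, 2, 3, 4]  # reference DR grades (0-4)
y_pred = [0, 1, 1, 2, 3, 3, 4]  # predicted DR grades

kappa = cohen_kappa_score(y_true, y_pred, weights="quadratic")
print(f"Quadratic weighted kappa: {kappa:.3f}")
```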
## Usage
```python
import torch
from huggingface_hub import hf_hub_download

# Download model
model_path = hf_hub_download(
    repo_id="Mostefa-Terbeche/diabetic-retinopathy-combined-vit_b_16-original-20250718-101851",
    filename="model_best.pt"
)

# Load model
model = torch.load(model_path, map_location='cpu')
```
## Classes
- 0: No DR (No diabetic retinopathy)
- 1: Mild DR (Mild non-proliferative diabetic retinopathy)
- 2: Moderate DR (Moderate non-proliferative diabetic retinopathy)
- 3: Severe DR (Severe non-proliferative diabetic retinopathy)
- 4: Proliferative DR (Proliferative diabetic retinopathy)
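A hypothetical inference sketch mapping outputs to these grades, assuming `model` from the Usage snippet is a full `nn.Module` and that the usual 224x224 ImageNet preprocessing for vit_b_16 applies (adapt if the checkpoint stores a state dict instead):
```python
# Hypothetical inference sketch for mapping outputs to the grades above.
# Assumptions: `model` (from the Usage snippet) is a full nn.Module, and the
# standard 224x224 ImageNet preprocessing for vit_b_16 applies.
import torch
from torchvision import transforms
from PIL import Image

GRADES = ["No DR", "Mild DR", "Moderate DR", "Severe DR", "Proliferative DR"]

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("fundus.jpg").convert("RGB")  # hypothetical input image
batch = preprocess(image).unsqueeze(0)           # shape: (1, 3, 224, 224)

model.eval()
with torch.no_grad():
    logits = model(batch)
    grade = int(logits.argmax(dim=1))
print(f"Predicted grade {grade}: {GRADES[grade]}")
```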
## Citation
If you use this model, please cite the associated research paper or thesis.
|
maheshkommuri/lora
|
maheshkommuri
| 2025-08-25T12:19:10Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-25T12:17:29Z |
---
license: apache-2.0
---
|
maorerock424/loramaorerory24
|
maorerock424
| 2025-08-25T12:18:57Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:mit",
"region:us"
] |
text-to-image
| 2025-08-25T12:18:40Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/1731837397596-t4dh258vd.png
text: '-'
- output:
url: images/1731836027803-jft6hp2w7.png
text: '-'
- output:
url: images/1731837998444-uatvb6n9o.png
text: '-'
- output:
url: images/1731837998444-uatvb6n9o.png
text: '-'
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: maorerory424
license: mit
---
# maor_face_lora
<Gallery />
## Model description
A photo of maorerory24: a 39-year-old Israeli musician with a full beard and black hair, 174 cm tall and 75 kg, with a fit body and face.
## Trigger words
You should use `maorerory424` to trigger the image generation.
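A hypothetical usage sketch with diffusers is shown below; the generation settings are common FLUX.1-dev defaults, not values published with this LoRA, and the prompt is only an example.
```python
# Hypothetical usage sketch: loads this LoRA onto FLUX.1-dev with diffusers.
# Steps/guidance are common FLUX.1-dev defaults, not values from this card.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("maorerock424/loramaorerory24")
pipe.to("cuda")

image = pipe(
    "a photo of maorerory424 playing guitar on stage",  # include the trigger word
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("maor.png")
```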
## Download model
[Download](/maorerock424/loramaorerory24/tree/main) them in the Files & versions tab.
|
OLEGATRON123/blockassist-bc-amphibious_whiskered_opossum_1756124182
|
OLEGATRON123
| 2025-08-25T12:18:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious whiskered opossum",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T12:18:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious whiskered opossum
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Matteo-mls/Qwen2.5-7B-Instruct-abliterated
|
Matteo-mls
| 2025-08-25T12:18:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-25T12:16:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Naman1309/my-codellama-finetuned-test
|
Naman1309
| 2025-08-25T12:18:25Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:codellama/CodeLlama-7b-Instruct-hf",
"lora",
"transformers",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"region:us"
] |
text-generation
| 2025-08-25T12:13:18Z |
---
base_model: codellama/CodeLlama-7b-Instruct-hf
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:codellama/CodeLlama-7b-Instruct-hf
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756124114
|
liukevin666
| 2025-08-25T12:17:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T12:16:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Thireus/DeepSeek-R1-0528-THIREUS-IQ4_KS_R4-SPECIAL_SPLIT
|
Thireus
| 2025-08-25T12:16:50Z | 1 | 0 | null |
[
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-07-11T23:29:51Z |
---
license: mit
---
# DeepSeek-R1-0528
## What is this [HuggingFace repository](https://huggingface.co/Thireus/DeepSeek-R1-0528-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the DeepSeek-R1-0528 model (official repo: https://huggingface.co/deepseek-ai/DeepSeek-R1-0528). These GGUF shards are designed to be used with **Thireus' GGUF Tool Suite** (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. With the Tool Suite, you can generate and download custom quantization "recipes" effortlessly.
- Read more: https://github.com/Thireus/GGUF-Tool-Suite
- Example quant mixes: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
- Create your own recipe: https://colab.research.google.com/github/Thireus/GGUF-Tool-Suite/blob/main/quant_recipe_pipeline.ipynb
- Browse available quant shards: https://huggingface.co/Thireus/collections
*tl;dr: Expand the details section below*
<details>
```
cd ~
# Make sure to install all ik_llama.cpp compilation dependencies...
apt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx
# Obtain ik_llama's Thireus version - Windows builds available at https://github.com/Thireus/ik_llama.cpp/releases
git clone https://github.com/Thireus/ik_llama.cpp
cd ik_llama.cpp
git pull
# Build ik_llama.cpp
cmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048
cmake --build build --config Release -j16
cd ..
# Obtain Thireus' GGUF-Tool-Suite
git clone https://github.com/Thireus/GGUF-Tool-Suite
# Download model quant mix from recipe file:
cd GGUF-Tool-Suite
rm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_assign.py
cp -f models/DeepSeek-R1-0528/download.conf . # Use the download.conf of the chosen model
mkdir -p kitchen && cd kitchen
../quant_downloader.sh ../recipe_examples/ik_llama.cpp_recipes/DeepSeek-R1-0528.THIREUS-1.9364bpw-4.3533ppl.151GB-GGUF_11GB-GPU_140GB-CPU.3c88ec6_9fd615d.recipe
# Other recipe examples can be found at https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
# Launch ik_llama's llama-cli:
ulimit -n 9999 # Lifts "too many open files" limitation on Linux
~/ik_llama.cpp/build/bin/llama-cli \
-m DeepSeek-R1-0528-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \
-mla 3 -fa -amb 512 -fmoe -ctk f16 -c 4096 -ngl 99 \
-ot "blk\.(3|4|5|6)\.ffn_.*=CUDA0" \
-ot "blk\.(7|8|9|10)\.ffn_.*=CUDA1" \
-ot exps=CPU -b 2048 -ub 1024 --warmup-batch --no-mmap --threads 36 \
--main-gpu 0 \
-p '<｜begin▁of▁sentence｜><｜User｜>What is the solution of x+5=-2?<｜Assistant｜><think>\n'
```
</details>
---
## Why does this Tool Suite exist?
1. **Compatibility & Speed**: [unsloth](https://huggingface.co/unsloth)'s dynamic quants may not always work optimally with `ik_llama.cpp`.
2. **Custom Rig Fit**: No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity.
3. **Automated PPL-Optimal Quantization**: To my knowledge, there was no open-source, flexible, automated method to minimize perplexity for any bits-per-weight (bpw) target, so I created one with excellent results!
---
## How does it compare to other GGUFs?
Here's how DeepSeek-R1-0528 quantized with **Thireus' GGUF Tool Suite** stacks up against other quantizers (lower perplexity = better at equal or lower bpw):

> _Note: The `recipe_examples` files illustrate good recipes. The Tool Suite computes the optimal ppl/bpw curve for you: just specify your target RAM, VRAM, and quant types, and `quant_assign.py` finds the best mix._
More perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs
---
## How do I get started?
Check out the [GGUF Tool Suite README](https://github.com/Thireus/GGUF-Tool-Suite); focus on these sections:
1. **Requirements**: Which `ik_llama.cpp` (or `llama.cpp`) version to use and how to compile.
- Windows binaries (no patching needed) at: https://github.com/Thireus/ik_llama.cpp/releases
2. **Download Model Shards**: Use `quant_downloader.sh` to fetch GGUF shards from any recipe.
- Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
3. **Run a Downloaded Model**: Sample usage with `llama-cli`.
4. **Generate a Custom Recipe**: Produce recipes tailored to your VRAM/RAM target usage for optimum perplexity.
---
## Supported Models
Supported models are listed under `models/` in the [Tool Suite Github repo](https://github.com/Thireus/GGUF-Tool-Suite/tree/main/models). Presence of `ppl_results.csv` indicates official support and compatibility with `quant_assign.py`.
---
## Will I release baked dynamic quant GGUFs?
No, because I believe in **tailored quantization** for each user's hardware. If you prefer ready-made shards, you are welcome to merge them via `llama-gguf-split --merge`, or request someone to publish them, or rely on generic GGUF dynamic quants such as [unsloth](https://huggingface.co/unsloth)'s.
Instead, I prefer to share examples of recipes so users can see exactly how they were produced (command included inside these recipe files) and tweak them for their own rigs. The `quant_downloader.sh` script handles automatic fetching and verification of each shard. Note that recipes provided by [Ubergarm](https://huggingface.co/ubergarm) on his model cards are also compatible with `quant_downloader.sh`.
Users who don't trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to `llama-quantize --custom-q` ([see example](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/models/DeepSeek-R1-0528/DeepSeek-R1-0528-THIREUS-ANY-SPECIAL.sh#L482-L486)). Run `llama-quantize --help` to list compatible quants for `quant_assign.py`. This approach is especially useful if you prefer `llama.cpp` over `ik_llama.cpp`.
---
## What's in this repository?
- **00001 GGUF header shard**: Contains metadata (tokens, chat template, tensor count, etc.). This metadata can be explored directly from the HuggingFace web interface after clicking on that shard.
- **Tensor shards**: Each shard holds one tensor; see `tensors.map` for names, quant types, sizes, SHA-256 hash, shard IDs, etc.
- **GPG-signed files**: `tensors.map` and the header shard are signed with the key in [trusted-keys.asc](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/trusted-keys.asc) for tamper detection.
- **Security note**: Some papers about various ways to attack GGUFs and LLMs are available online, such as https://arxiv.org/abs/2505.23786, and there are also more classic security exploits like CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors, or alternatively self-quantize, to avoid potential exploits.
---
## Pro Tips
You can easily download the BF16 model version to quantize your own shards:
```
mkdir kitchen
echo '.*=bf16' > kitchen/bf16.recipe
cd kitchen
../quant_downloader.sh bf16.recipe
```
Enjoy optimized quantization!
|
iresiragusa/uni_anita_20
|
iresiragusa
| 2025-08-25T12:14:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-25T12:04:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Koagonzalo11/tungsten
|
Koagonzalo11
| 2025-08-25T12:09:42Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"finance",
"code",
"text-generation-inference",
"dataset:HuggingFaceFW/fineweb",
"dataset:fka/awesome-chatgpt-prompts",
"base_model:openai/gpt-oss-120b",
"base_model:adapter:openai/gpt-oss-120b",
"license:mit",
"region:us"
] | null | 2025-08-25T11:36:26Z |
---
license: mit
datasets:
- HuggingFaceFW/fineweb
- fka/awesome-chatgpt-prompts
metrics:
- accuracy
base_model:
- openai/gpt-oss-120b
new_version: openai/gpt-oss-120b
library_name: adapter-transformers
tags:
- finance
- code
- text-generation-inference
---
|
bboppp/blockassist-bc-melodic_shiny_coral_1756123740
|
bboppp
| 2025-08-25T12:09:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"melodic shiny coral",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T12:09:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- melodic shiny coral
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
maheshkommuri/grayscale
|
maheshkommuri
| 2025-08-25T12:07:49Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-25T12:03:24Z |
---
license: apache-2.0
---
|
bboppp/blockassist-bc-stinky_chattering_shrew_1756123609
|
bboppp
| 2025-08-25T12:07:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stinky chattering shrew",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T12:06:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinky chattering shrew
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
valiantcat/Qwen-Image-Edit-Remover-General-LoRA
|
valiantcat
| 2025-08-25T12:05:27Z | 7 | 0 |
diffusers
|
[
"diffusers",
"image-generation",
"lora",
"Qwen-Image",
"image-to-image",
"en",
"base_model:Qwen/Qwen-Image-Edit",
"base_model:adapter:Qwen/Qwen-Image-Edit",
"license:apache-2.0",
"region:us"
] |
image-to-image
| 2025-08-25T02:34:03Z |
---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen-Image-Edit
tags:
- image-generation
- lora
- Qwen-Image
pipeline_tag: image-to-image
library_name: diffusers
widget:
- text: >-
移除猫猫
output:
url: result/result1.png
- text: >-
同时移除这个小男孩和自行车
output:
url: result/result2.png
- text: >-
完全移除这个女人
output:
url: result/result3.png
---
# valiantcat Qwen-Image-Edit LoRA
<Gallery />
## Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This is an object-removal model trained on ```Qwen/Qwen-Image-Edit```, suitable for removing objects from e-commerce, character, and product images. For use in ```ComfyUI```.
The greatest advantage of this LoRA is that it preserves the consistency of the original image, leaving everything outside the removed object unchanged.
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">ComfyUI Workflow</h2>
<p>This LoRA works with a modified version of <a href="https://huggingface.co/valiantcat/Qwen-Image-Edit-Remover-General-LoRA/blob/main/Qwen-Edit-LORA.json" style="color: #0366d6; text-decoration: none;">Comfy's Qwen-Image-Edit workflow</a>. The main modification is adding a Qwen-Image-Edit LoRA node connected to the base model.</p>
<p>See the Downloads section above for the modified workflow.</p>
</div>
### Direct Use
```python
from diffusers import QwenImageEditPipeline
import torch
from PIL import Image

# Load the pipeline
pipeline = QwenImageEditPipeline.from_pretrained("Qwen/Qwen-Image-Edit")
pipeline.to(torch.bfloat16)
pipeline.to("cuda")

# Load trained LoRA weights for in-scene editing
pipeline.load_lora_weights("valiantcat/Qwen-Image-Edit-Remover-General-LoRA", weight_name="qwen-edit-remover.safetensors")

# Load input image
image = Image.open("./result/test.png").convert("RGB")

# Define in-scene editing prompt
prompt = "移除猫猫"  # "Remove the cat"

# Generate edited image with enhanced scene understanding
inputs = {
    "image": image,
    "prompt": prompt,
    "generator": torch.manual_seed(12345),
    "true_cfg_scale": 4.0,
    "negative_prompt": " ",
    "num_inference_steps": 50,
}

with torch.inference_mode():
    output = pipeline(**inputs)
    output_image = output.images[0]
    output_image.save("result.png")
```
## Trigger phrase
```从场景中移除XXX``` ("remove XXX from the scene")
There is no fixed trigger word; experiment with different removal prompts to find what works best.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/valiantcat/Qwen-Image-Edit-Remover-General-LoRA)
## Training at Chongqing Valiant Cat
This model was trained by the AI Laboratory of Chongqing Valiant Cat Technology Co., Ltd. (https://vvicat.com/). Business cooperation is welcome.
|
chainway9/blockassist-bc-untamed_quick_eel_1756121906
|
chainway9
| 2025-08-25T12:05:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T12:05:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
khapeshduken/blockassist-bc-pale_sedate_quail_1756123443
|
khapeshduken
| 2025-08-25T12:04:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pale sedate quail",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T12:04:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pale sedate quail
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
starsfriday/Qwen-Image-Edit-Remover-General-LoRA
|
starsfriday
| 2025-08-25T12:04:48Z | 7 | 0 |
diffusers
|
[
"diffusers",
"image-generation",
"lora",
"Qwen-Image",
"image-to-image",
"en",
"base_model:Qwen/Qwen-Image-Edit",
"base_model:adapter:Qwen/Qwen-Image-Edit",
"license:apache-2.0",
"region:us"
] |
image-to-image
| 2025-08-25T02:29:10Z |
---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen-Image-Edit
tags:
- image-generation
- lora
- Qwen-Image
pipeline_tag: image-to-image
library_name: diffusers
widget:
- text: >-
移除猫猫
output:
url: result/result1.png
- text: >-
同时移除这个小男孩和自行车
output:
url: result/result2.png
- text: >-
完全移除这个女人
output:
url: result/result3.png
---
# starsfriday Qwen-Image-Edit LoRA
<Gallery />
## Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This is an object-removal model trained on ```Qwen/Qwen-Image-Edit```, suitable for removing objects from e-commerce, character, and product images. For use in ```ComfyUI```.
The greatest advantage of this LoRA is that it preserves the consistency of the original image, leaving everything outside the removed object unchanged.
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">ComfyUI Workflow</h2>
<p>This LoRA works with a modified version of <a href="https://huggingface.co/starsfriday/Qwen-Image-Edit-Remover-General-LoRA/blob/main/Qwen-Edit-LORA.json" style="color: #0366d6; text-decoration: none;">Comfy's Qwen-Image-Edit workflow</a>. The main modification is adding a Qwen-Image-Edit LoRA node connected to the base model.</p>
<p>See the Downloads section above for the modified workflow.</p>
</div>
### Direct Use
```python
from diffusers import QwenImageEditPipeline
import torch
from PIL import Image

# Load the pipeline
pipeline = QwenImageEditPipeline.from_pretrained("Qwen/Qwen-Image-Edit")
pipeline.to(torch.bfloat16)
pipeline.to("cuda")

# Load trained LoRA weights for in-scene editing
pipeline.load_lora_weights("starsfriday/Qwen-Image-Edit-Remover-General-LoRA", weight_name="qwen-edit-remover.safetensors")

# Load input image
image = Image.open("./result/test.png").convert("RGB")

# Define in-scene editing prompt
prompt = "移除猫猫"  # "Remove the cat"

# Generate edited image with enhanced scene understanding
inputs = {
    "image": image,
    "prompt": prompt,
    "generator": torch.manual_seed(12345),
    "true_cfg_scale": 4.0,
    "negative_prompt": " ",
    "num_inference_steps": 50,
}

with torch.inference_mode():
    output = pipeline(**inputs)
    output_image = output.images[0]
    output_image.save("result.png")
```
## Trigger phrase
```从场景中移除XXX``` ("remove XXX from the scene")
There is no fixed trigger word; experiment with different removal prompts to find what works best.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/starsfriday/Qwen-Image-Edit-Remover-General-LoRA)
## Training at Chongqing Valiant Cat
This model was trained by the AI Laboratory of Chongqing Valiant Cat Technology Co., Ltd. (https://vvicat.com/). Business cooperation is welcome.
|
motza0025/blockassist-bc-graceful_beaked_robin_1756121919
|
motza0025
| 2025-08-25T12:04:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"graceful beaked robin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T12:04:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- graceful beaked robin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1756121980
|
vwzyrraz7l
| 2025-08-25T12:04:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T12:04:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1756121811
|
katanyasekolah
| 2025-08-25T12:04:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silky sprightly cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T12:04:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky sprightly cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
esi777/blockassist-bc-camouflaged_trotting_eel_1756123268
|
esi777
| 2025-08-25T12:02:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"camouflaged trotting eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T12:01:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- camouflaged trotting eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
matherchodhuuu/blockassist-bc-lightfooted_skilled_chameleon_1756123262
|
matherchodhuuu
| 2025-08-25T12:02:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lightfooted skilled chameleon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T12:01:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lightfooted skilled chameleon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cryptoaga/blockassist-bc-rapid_finicky_bison_1756123250
|
cryptoaga
| 2025-08-25T12:01:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rapid finicky bison",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T12:01:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rapid finicky bison
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
amir-ali-ai/amoozeshyar-beta-0.2
|
amir-ali-ai
| 2025-08-25T12:01:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-25T12:00:50Z |
---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** amir-ali-ai
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
This Qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
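A minimal loading sketch for this checkpoint, assuming it exposes standard causal-LM weights and tokenizer files via the `transformers` API (the card itself does not document a usage snippet):

```python
# A minimal sketch, assuming standard transformers-compatible files in the repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "amir-ali-ai/amoozeshyar-beta-0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat-formatted prompt and generate a short reply
messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```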
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1756121499
|
kojeklollipop
| 2025-08-25T12:00:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-25T12:00:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|