
From the Frontier Research Team at Takara.ai, we present Flux.1 Q_4_k, a quantized GGUF model optimized for stable-diffusion.cpp that enables efficient image generation on lower-end hardware. This model was used to create the Kurai Toori Dark Streets dataset.


Features

  • Optimized for lower-end hardware through 4-bit quantization
  • High-quality image generation despite compression
  • Efficient performance with minimal quality degradation
  • Wide-ranging capabilities beyond dark street scenes

Usage

  1. Clone and set up stable-diffusion.cpp:
    git clone https://github.com/leejet/stable-diffusion.cpp.git
    cd stable-diffusion.cpp
    # Follow setup instructions in the stable-diffusion.cpp README
    
  2. Download the GGUF model file from this repository.
  3. Run the model using stable-diffusion.cpp, pointing to the downloaded file (see the fuller Flux example after this list):
    ./sd -m path/to/flux.1-q_4_k.gguf -p "your prompt here"
    
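For Flux-family models, stable-diffusion.cpp loads the diffusion transformer separately from the autoencoder (VAE) and the text encoders (clip_l and t5xxl), which are distributed outside this repository. The commands below are a hedged sketch rather than a verified recipe: the flags follow the stable-diffusion.cpp documentation for FLUX, every file name and path is a placeholder for the files you actually have, and the optional download step assumes the huggingface_hub CLI is installed.

    # Optional: fetch this repository from the command line (assumes huggingface-cli is installed)
    huggingface-cli download takara-ai/Flux1-Schnell-Quantized

    # Sketch of a FLUX.1-schnell run; file names and paths are placeholders,
    # confirm the flags against the stable-diffusion.cpp README before relying on them
    ./sd --diffusion-model path/to/flux.1-q_4_k.gguf \
      --vae path/to/ae.safetensors \
      --clip_l path/to/clip_l.safetensors \
      --t5xxl path/to/t5xxl_fp16.safetensors \
      --cfg-scale 1.0 --steps 4 --sampling-method euler \
      -W 1024 -H 1024 \
      -p "neon-lit city street at night, rain-soaked pavement" \
      -o output.png

Since FLUX.1-schnell is distilled for few-step sampling, a low step count (around 4) with cfg-scale 1.0 is a reasonable starting point.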

Performance Benefits

  • Reduced memory usage compared to full-precision models
  • Faster inference times on consumer hardware
  • Runs on less powerful hardware without significant quality loss
  • Ideal for experimentation and rapid prototyping

Technical Details

This model is a 4-bit (Q4_K) quantized version of the FLUX.1-schnell base model from Black Forest Labs, a text-to-image model with roughly 11.9 billion parameters. The quantization preserves the creative capabilities of the original model while dramatically reducing its memory footprint and computational requirements.
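As a rough, back-of-the-envelope illustration of that reduction (assuming the ~11.9B-parameter figure reported for this model, 16 bits per weight unquantized, and roughly 4.5 bits per weight for Q4_K, which stores 4-bit blocks alongside higher-precision scales), the weight data alone shrinks from roughly 22 GiB to roughly 6 GiB:

    # Illustrative arithmetic only; the actual file size depends on which tensors are quantized
    python3 -c "p = 11.9e9; print(f'fp16 weights: {p*16/8/2**30:.1f} GiB'); print(f'Q4_K weights: {p*4.5/8/2**30:.1f} GiB')"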

Example Use Cases

  • Generating urban nightscapes and cityscapes
  • Creating artistic interpretations for creative projects
  • Rapid prototyping of visual concepts
  • Accessible AI image generation on consumer hardware

For research inquiries and press, please reach out to [email protected]

ไบบ้กžใ‚’ๅค‰้ฉใ™ใ‚‹
