# BigLoveKleinFp8

FP8-quantized version of FLUX.2-klein-base-9B by Black Forest Labs.
## Model Details
- Base Model: black-forest-labs/FLUX.2-klein-base-9B
- Quantization: FP8 (E4M3FN)
- Architecture: Rectified Flow Transformer (9B parameters)
- License: cc-by-nc-4.0
## Description

This is an FP8-quantized variant of the FLUX.2 Klein Base 9B model, aimed at reduced VRAM usage with minimal loss of image quality. FP8 stores one byte per weight instead of two for bf16, roughly halving the memory footprint of the transformer weights compared to the full-precision model.
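As a rough illustration of the savings (a back-of-the-envelope sketch assuming one byte per FP8 weight and two bytes per bf16 weight, ignoring activations, the text encoder, and the VAE):

```python
# Back-of-the-envelope VRAM estimate for the 9B transformer weights.
# Assumes 1 byte/param for FP8 (E4M3FN) and 2 bytes/param for bf16;
# real usage is higher once activations and other components are loaded.
params = 9e9

fp8_gb = params * 1 / 1e9    # FP8: one byte per parameter
bf16_gb = params * 2 / 1e9   # bf16: two bytes per parameter

print(f"FP8:   ~{fp8_gb:.0f} GB")   # ~9 GB
print(f"bf16:  ~{bf16_gb:.0f} GB")  # ~18 GB
print(f"saved: ~{bf16_gb - fp8_gb:.0f} GB")
```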
## Usage
### ComfyUI

Place the model file in your `ComfyUI/models/diffusion_models/` (or `unet/`) folder and select it in the appropriate loader node.
### Diffusers
```python
from diffusers import FluxPipeline
import torch

# Load the pipeline in bfloat16
pipe = FluxPipeline.from_pretrained(
    "Granddyser/BigLoveKleinFp8",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

# Generate an image; low step count with classifier-free guidance disabled
image = pipe(
    prompt="your prompt here",
    num_inference_steps=4,
    guidance_scale=0.0,
).images[0]
image.save("output.png")
```
## License

The FLUX.2-klein-base-9B model is licensed by Black Forest Labs Inc. under the FLUX.2-klein-base-9B Non-Commercial License. Copyright Black Forest Labs Inc.