# 🎬 Wan2.2 Distilled Models

## ⚡ High-Performance Video Generation with 4-Step Inference

A distillation-accelerated version of Wan2.2: dramatically faster inference with excellent quality.

## 🌟 What's Special?

## 📦 Model Catalog

### 🔥 Model Types

### 🎯 Precision Versions
| Precision | Model Identifier | Model Size | Framework | Quality vs Speed |
|---|---|---|---|---|
| 🏆 BF16 | `lightx2v_4step` | ~28.6 GB | LightX2V | ⭐⭐⭐⭐⭐ Highest Quality |
| ⚡ FP8 | `scaled_fp8_e4m3_lightx2v_4step` | ~15 GB | LightX2V | ⭐⭐⭐⭐ Excellent Balance |
| 🎯 INT8 | `int8_lightx2v_4step` | ~15 GB | LightX2V | ⭐⭐⭐⭐ Fast & Efficient |
| 🏷️ FP8 ComfyUI | `scaled_fp8_e4m3_lightx2v_4step_comfyui` | ~15 GB | ComfyUI | ⭐⭐⭐ ComfyUI Ready |
### 📝 Naming Convention

```
# Format: wan2.2_{task}_A14b_{noise_level}_{precision}_lightx2v_4step.safetensors

# I2V examples:
wan2.2_i2v_A14b_high_noise_lightx2v_4step.safetensors                          # I2V High Noise - BF16
wan2.2_i2v_A14b_high_noise_scaled_fp8_e4m3_lightx2v_4step.safetensors          # I2V High Noise - FP8
wan2.2_i2v_A14b_low_noise_int8_lightx2v_4step.safetensors                      # I2V Low Noise - INT8
wan2.2_i2v_A14b_low_noise_scaled_fp8_e4m3_lightx2v_4step_comfyui.safetensors   # I2V Low Noise - FP8 ComfyUI
```
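The pattern above can be sketched as a small shell helper. This is a hypothetical convenience function, not part of LightX2V: an empty precision argument yields the BF16 filename, and the ComfyUI variants (which append `_comfyui` before `.safetensors`) are not handled here.

```shell
# Hypothetical helper that assembles a model filename from the documented
# pattern; an empty precision argument means BF16 (no precision segment).
wan22_filename() {
  local task="$1" noise="$2" precision="$3"
  local name="wan2.2_${task}_A14b_${noise}_noise"
  [ -n "$precision" ] && name="${name}_${precision}"
  echo "${name}_lightx2v_4step.safetensors"
}

wan22_filename i2v high scaled_fp8_e4m3
# -> wan2.2_i2v_A14b_high_noise_scaled_fp8_e4m3_lightx2v_4step.safetensors
```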
💡 Browse All Models: View Full Model Collection →
## 🚀 Usage

### Method 1: LightX2V (Recommended ⭐)

LightX2V is a high-performance inference framework optimized for these models: it runs roughly 2x faster than ComfyUI with better quantization accuracy. Highly recommended!
#### Quick Start

- **Download the model** (using I2V FP8 as an example)

```bash
huggingface-cli download lightx2v/Wan2.2-Distill-Models \
  --local-dir ./models/wan2.2_i2v \
  --include "wan2.2_i2v_A14b_high_noise_scaled_fp8_e4m3_lightx2v_4step.safetensors"

huggingface-cli download lightx2v/Wan2.2-Distill-Models \
  --local-dir ./models/wan2.2_i2v \
  --include "wan2.2_i2v_A14b_low_noise_scaled_fp8_e4m3_lightx2v_4step.safetensors"
```

💡 Tip: for T2V models, follow the same steps but replace `i2v` with `t2v` in the filenames.
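For example, the matching T2V filename can be derived with a shell substitution; the T2V filename is assumed to follow the same naming convention, and the download command is shown commented out since it fetches a multi-gigabyte file:

```shell
# Derive the T2V filename from the I2V one (first-match substitution).
i2v_file="wan2.2_i2v_A14b_high_noise_scaled_fp8_e4m3_lightx2v_4step.safetensors"
t2v_file="${i2v_file/i2v/t2v}"
echo "$t2v_file"

# Then download it the same way:
# huggingface-cli download lightx2v/Wan2.2-Distill-Models \
#   --local-dir ./models/wan2.2_t2v \
#   --include "$t2v_file"
```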
- **Clone the LightX2V repository**

```bash
git clone https://github.com/ModelTC/LightX2V.git
cd LightX2V
```

- **Install dependencies**

```bash
pip install -r requirements.txt
```

Or refer to the Quick Start Documentation to use Docker.
- **Select and modify a configuration file**

Choose an appropriate configuration based on your GPU memory:

- 80 GB+ GPUs (A100/H100)
- 24 GB+ GPUs (RTX 4090)
- **Run inference** (using I2V as an example)

```bash
cd scripts
bash wan22/run_wan22_moe_i2v_distill.sh
```

📝 Note: update the model paths in the script to point to your Wan2.2 model. Also refer to the LightX2V Model Structure Documentation.
#### LightX2V Documentation
- Quick Start Guide: LightX2V Quick Start
- Complete Usage Guide: LightX2V Model Structure Documentation
- Configuration File Instructions: Configuration Files
- Quantized Model Usage: Quantization Documentation
- Parameter Offloading: Offload Documentation
### Method 2: ComfyUI

Please refer to the ComfyUI workflow.
## ⚠️ Important Notes

**Other components:** these models contain only the DiT weights. The following additional components are needed at runtime:

- T5 text encoder
- CLIP vision encoder
- VAE encoder/decoder
- Tokenizer

Please refer to the LightX2V Documentation for instructions on organizing the complete model directory.
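A quick way to catch a missing component before a run is a sanity check like the following. This is a sketch only: it matches directory entries by a name substring, which is an assumption; the authoritative filenames and layout are defined by the LightX2V model structure documentation.

```shell
# Sketch: warn if any expected component appears to be absent from the
# model directory. Substring matching on filenames is an assumption,
# not the actual LightX2V layout check.
check_components() {
  local root="$1" missing=0 comp
  for comp in t5 clip vae tokenizer; do
    if ! ls "$root" | grep -qi "$comp"; then
      echo "missing component: $comp"
      missing=1
    fi
  done
  return "$missing"
}

# Example: check_components ./models/wan2.2_i2v
```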
## 🤝 Community

- GitHub Issues: https://github.com/ModelTC/LightX2V/issues
- HuggingFace: https://huggingface.co/lightx2v/Wan2.2-Distill-Models

If you find this project helpful, please give us a ⭐ on GitHub!