🎬 Wan2.2 Distilled Models

⚡ High-Performance Video Generation with 4-Step Inference

A distillation-accelerated version of Wan2.2: dramatically faster inference with excellent quality





🌟 What's Special?

⚡ Ultra-Fast Generation

  • 4-step inference (vs traditional 50+ steps)
  • Approximately 2x faster with LightX2V than with ComfyUI
  • Near real-time video generation capability

🎯 Flexible Options

  • Dual noise control: High/Low noise variants
  • Multiple precision formats (BF16/FP8/INT8)
  • Full 14B parameter models

💾 Memory Efficient

  • FP8/INT8: ~50% size reduction
  • CPU offload support
  • Optimized for consumer GPUs

🔧 Easy Integration

  • Compatible with LightX2V framework
  • ComfyUI support
  • Simple configuration files

📦 Model Catalog

🎥 Model Types

🖼️ Image-to-Video (I2V) - 14B Parameters

Transform static images into dynamic videos with advanced quality control

  • 🎨 High Noise: More creative, diverse outputs
  • 🎯 Low Noise: More faithful to the input, more stable outputs

πŸ“ Text-to-Video (T2V) - 14B Parameters

Generate videos from text descriptions

  • 🎨 High Noise: More creative, diverse outputs
  • 🎯 Low Noise: More stable and controllable outputs
  • 🚀 Full 14B parameter model

🎯 Precision Versions

| Precision | Model Identifier | Model Size | Framework | Quality vs Speed |
|-----------|------------------|------------|-----------|------------------|
| 🏆 BF16 | lightx2v_4step | ~28.6 GB | LightX2V | ⭐⭐⭐⭐⭐ Highest Quality |
| ⚡ FP8 | scaled_fp8_e4m3_lightx2v_4step | ~15 GB | LightX2V | ⭐⭐⭐⭐ Excellent Balance |
| 🎯 INT8 | int8_lightx2v_4step | ~15 GB | LightX2V | ⭐⭐⭐⭐ Fast & Efficient |
| 🔷 FP8 (ComfyUI) | scaled_fp8_e4m3_lightx2v_4step_comfyui | ~15 GB | ComfyUI | ⭐⭐⭐ ComfyUI Ready |

πŸ“ Naming Convention

# Format: wan2.2_{task}_A14b_{noise_level}_{precision}_lightx2v_4step.safetensors

# I2V Examples:
wan2.2_i2v_A14b_high_noise_lightx2v_4step.safetensors                       # I2V High Noise - BF16
wan2.2_i2v_A14b_high_noise_scaled_fp8_e4m3_lightx2v_4step.safetensors      # I2V High Noise - FP8
wan2.2_i2v_A14b_low_noise_int8_lightx2v_4step.safetensors                  # I2V Low Noise - INT8
wan2.2_i2v_A14b_low_noise_scaled_fp8_e4m3_lightx2v_4step_comfyui.safetensors  # I2V Low Noise - FP8 ComfyUI

💡 Browse All Models: View Full Model Collection →


🚀 Usage

Method 1: LightX2V (Recommended ⭐)

LightX2V is a high-performance inference framework optimized for these models, approximately 2x faster than ComfyUI with better quantization accuracy. Highly recommended!

Quick Start

  1. Download model (using I2V FP8 as example)
huggingface-cli download lightx2v/Wan2.2-Distill-Models \
    --local-dir ./models/wan2.2_i2v \
    --include "wan2.2_i2v_A14b_high_noise_scaled_fp8_e4m3_lightx2v_4step.safetensors"
huggingface-cli download lightx2v/Wan2.2-Distill-Models \
    --local-dir ./models/wan2.2_i2v \
    --include "wan2.2_i2v_A14b_low_noise_scaled_fp8_e4m3_lightx2v_4step.safetensors"

💡 Tip: For T2V models, follow the same steps but replace i2v with t2v in the filenames.
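
For example, the matching T2V download after that substitution would look like this (filenames derived from the naming convention above; confirm the exact names in the model collection):

huggingface-cli download lightx2v/Wan2.2-Distill-Models \
    --local-dir ./models/wan2.2_t2v \
    --include "wan2.2_t2v_A14b_high_noise_scaled_fp8_e4m3_lightx2v_4step.safetensors"
huggingface-cli download lightx2v/Wan2.2-Distill-Models \
    --local-dir ./models/wan2.2_t2v \
    --include "wan2.2_t2v_A14b_low_noise_scaled_fp8_e4m3_lightx2v_4step.safetensors"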

  2. Clone LightX2V repository
git clone https://github.com/ModelTC/LightX2V.git
cd LightX2V
  3. Install dependencies
pip install -r requirements.txt

Or refer to the Quick Start Documentation to use Docker.
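
If you prefer a local setup over Docker, a fresh virtual environment keeps the dependencies isolated (optional, standard Python tooling rather than anything LightX2V-specific):

python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt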

  4. Select and modify configuration file

Choose an appropriate configuration file based on your GPU memory:

  • 80GB+ GPUs (A100/H100)
  • 24GB+ GPUs (RTX 4090)
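
If you are unsure how much VRAM is available, check it before picking a configuration (nvidia-smi ships with the NVIDIA driver):

nvidia-smi --query-gpu=name,memory.total --format=csv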

  5. Run inference (using I2V as example)
cd scripts
bash wan22/run_wan22_moe_i2v_distill.sh

πŸ“ Note: Update model paths in the script to point to your Wan2.2 model. Also refer to LightX2V Model Structure Documentation

LightX2V Documentation


Method 2: ComfyUI

Please refer to the workflow.
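
If you go the ComfyUI route, use the ComfyUI-tagged FP8 checkpoints from the precision table above. For example (the filename comes from the naming convention above; the target directory assumes a typical ComfyUI install and may differ in your setup):

huggingface-cli download lightx2v/Wan2.2-Distill-Models \
    --local-dir ./ComfyUI/models/diffusion_models \
    --include "wan2.2_i2v_A14b_low_noise_scaled_fp8_e4m3_lightx2v_4step_comfyui.safetensors"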

⚠️ Important Notes

Other Components: These models contain only the DiT weights. The following additional components are required at runtime:

  • T5 text encoder
  • CLIP vision encoder
  • VAE encoder/decoder
  • Tokenizer

Please refer to LightX2V Documentation for instructions on organizing the complete model directory.
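
A hypothetical layout is sketched below; the entries in angle brackets are placeholders for the actual component files, and the authoritative structure is described in the LightX2V documentation:

models/wan2.2_i2v/
├── wan2.2_i2v_A14b_high_noise_scaled_fp8_e4m3_lightx2v_4step.safetensors   # DiT, high noise
├── wan2.2_i2v_A14b_low_noise_scaled_fp8_e4m3_lightx2v_4step.safetensors    # DiT, low noise
├── <t5_text_encoder>        # T5 text encoder weights
├── <clip_vision_encoder>    # CLIP vision encoder weights
├── <vae>                    # VAE encoder/decoder weights
└── <tokenizer>/             # tokenizer files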

🤝 Community

If you find this project helpful, please give us a ⭐ on GitHub
