Wan 2.2 Accelerated Inference — Collection of optimized demos for Wan 2.2 14B models, using FP8 quantization, AoT compilation, and community LoRAs for fast, high-quality inference on ZeroGPU • 3 items • Updated Aug 29, 2025 • 12