---
title: HunyuanVideo Foley
emoji: 🎬
colorFrom: blue
colorTo: indigo
sdk: gradio
sdk_version: 5.43.1
app_file: app.py
pinned: false
models:
- tencent/HunyuanVideo-Foley
---
# HunyuanVideo-Foley

**Multimodal Diffusion with Representation Alignment for High-Fidelity Foley Audio Generation**

*Professional-grade AI sound-effect generation for video content creators*
[Project Page](https://szczesnys.github.io/hunyuanvideo-foley) • [GitHub](https://github.com/Tencent-Hunyuan/HunyuanVideo-Foley) • [arXiv](https://arxiv.org/abs/2508.16930) • [Model Weights](https://huggingface.co/tencent/HunyuanVideo-Foley)
---
### Authors

**Sizhe Shan**<sup>1,2\*</sup> • **Qiulin Li**<sup>1,3\*</sup> • **Yutao Cui**<sup>1</sup> • **Miles Yang**<sup>1</sup> • **Yuehai Wang**<sup>2</sup> • **Qun Yang**<sup>3</sup> • **Jin Zhou**<sup>1†</sup> • **Zhao Zhong**<sup>1</sup>

<sup>1</sup>**Tencent Hunyuan** • <sup>2</sup>**Zhejiang University** • <sup>3</sup>**Nanjing University of Aeronautics and Astronautics**

\*Equal contribution • †Project lead
---
### Key Highlights

- **Multi-scenario Sync**: high-quality audio synchronized with complex video scenes
- **Multi-modal Balance**: balanced fusion of visual and textual information
- **48 kHz Hi-Fi Output**: professional-grade audio generation with crystal clarity
---
## Abstract

**Tencent Hunyuan** open-sources **HunyuanVideo-Foley**, an end-to-end video sound-effect generation model: a professional-grade AI tool for video content creators, applicable to short-video creation, film production, advertising, and game development.

### Core Highlights

**Multi-scenario Audio-Visual Synchronization**
Generates high-quality audio that is temporally synchronized and semantically aligned with complex video scenes, enhancing realism and immersion for film/TV and gaming applications.

**Multi-modal Semantic Balance**
Balances visual and textual cues when composing sound-effect elements, avoiding one-sided generation and supporting personalized dubbing requirements.

**High-fidelity Audio Output**
A self-developed 48 kHz audio VAE faithfully reconstructs sound effects, music, and vocals, achieving professional-grade audio quality.

**State-of-the-Art Performance**
*HunyuanVideo-Foley leads across multiple evaluation benchmarks, setting new state-of-the-art results among open-source solutions in audio fidelity, visual-semantic alignment, temporal alignment, and distribution matching.*

*Figure: performance comparison across evaluation metrics.*
---
## Technical Architecture

### Data Pipeline Design

*Figure: data processing pipeline for building high-quality text-video-audio datasets.*

The **TV2A (Text-Video-to-Audio)** task is a complex multimodal generation problem that requires large-scale, high-quality training data. Our data pipeline systematically identifies and excludes unsuitable content, yielding robust and generalizable audio generation capabilities.
### Model Architecture

*Figure: HunyuanVideo-Foley hybrid architecture with multimodal and unimodal transformer blocks.*

**HunyuanVideo-Foley** employs a hybrid architecture:
- **Multimodal transformer blocks**: process the visual and audio streams jointly
- **Unimodal transformer blocks**: refine the audio stream on its own
- **Visual encoding**: a pre-trained visual encoder extracts features from video frames
- **Text processing**: semantic features extracted by a pre-trained text encoder
- **Audio encoding**: latent representations perturbed with Gaussian noise for diffusion training
- **Temporal alignment**: Synchformer-based frame-level synchronization injected via gated modulation
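The block stack above can be sketched at toy scale. This is an illustrative sketch only: the shapes, the mean-pooling stand-in for joint attention, and the fixed 0.5 gate are assumptions for exposition, not the released implementation.

```python
import numpy as np

def multimodal_block(audio, visual):
    # Joint mixing over the concatenated audio and visual token streams
    # (a mean-pooled summary stands in for cross-modal attention here).
    joint = np.concatenate([audio, visual], axis=0)
    mixed = joint.mean(axis=0, keepdims=True)
    return audio + mixed, visual + mixed

def gated_modulation(audio, sync_feat, gate):
    # Frame-level sync features modulate audio tokens; the gate controls
    # how strongly the synchronization signal is injected.
    return audio + gate * sync_feat

def unimodal_block(audio):
    # Audio-only refinement block.
    return audio + 0.1 * np.tanh(audio)

# Toy token streams: 8 audio tokens, 4 visual tokens, hidden dim 16.
rng = np.random.default_rng(0)
audio = rng.normal(size=(8, 16))
visual = rng.normal(size=(4, 16))
sync = rng.normal(size=(8, 16))

for _ in range(2):                      # stacked multimodal blocks
    audio, visual = multimodal_block(audio, visual)
audio = gated_modulation(audio, sync, gate=0.5)
for _ in range(2):                      # stacked unimodal blocks
    audio = unimodal_block(audio)

print(audio.shape)  # audio latent keeps its shape: (8, 16)
```

The key structural point the sketch shows is the ordering: cross-modal fusion first, then sync injection, then audio-only refinement before decoding.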
---
## Performance Benchmarks

### MovieGen-Audio-Bench Results

> *Objective and subjective evaluation results on MovieGen-Audio-Bench*

| **Method** | **PQ** ↑ | **PC** ↑ | **CE** ↑ | **CU** ↑ | **IB** ↑ | **DeSync** ↓ | **CLAP** ↑ | **MOS-Q** ↑ | **MOS-S** ↑ | **MOS-T** ↑ |
|:-------------:|:--------:|:--------:|:--------:|:--------:|:--------:|:-------------:|:-----------:|:------------:|:------------:|:------------:|
| FoleyCrafter | 6.27 | 2.72 | 3.34 | 5.68 | 0.17 | 1.29 | 0.14 | 3.36±0.78 | 3.54±0.88 | 3.46±0.95 |
| V-AURA | 5.82 | 4.30 | 3.63 | 5.11 | 0.23 | 1.38 | 0.14 | 2.55±0.97 | 2.60±1.20 | 2.70±1.37 |
| Frieren | 5.71 | 2.81 | 3.47 | 5.31 | 0.18 | 1.39 | 0.16 | 2.92±0.95 | 2.76±1.20 | 2.94±1.26 |
| MMAudio | 6.17 | 2.84 | 3.59 | 5.62 | 0.27 | 0.80 | 0.35 | 3.58±0.84 | 3.63±1.00 | 3.47±1.03 |
| ThinkSound | 6.04 | 3.73 | 3.81 | 5.59 | 0.18 | 0.91 | 0.20 | 3.20±0.97 | 3.01±1.04 | 3.02±1.08 |
| **HiFi-Foley (ours)** | **6.59** | **2.74** | **3.88** | **6.13** | **0.35** | **0.74** | **0.33** | **4.14±0.68** | **4.12±0.77** | **4.15±0.75** |
### Kling-Audio-Eval Results

> *Comprehensive objective evaluation*

| **Method** | **FD_PANNs** ↓ | **FD_PASST** ↓ | **KL** ↓ | **IS** ↑ | **PQ** ↑ | **PC** ↑ | **CE** ↑ | **CU** ↑ | **IB** ↑ | **DeSync** ↓ | **CLAP** ↑ |
|:-------------:|:--------------:|:--------------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:-------------:|:-----------:|
| FoleyCrafter | 22.30 | 322.63 | 2.47 | 7.08 | 6.05 | 2.91 | 3.28 | 5.44 | 0.22 | 1.23 | 0.22 |
| V-AURA | 33.15 | 474.56 | 3.24 | 5.80 | 5.69 | 3.98 | 3.13 | 4.83 | 0.25 | 0.86 | 0.13 |
| Frieren | 16.86 | 293.57 | 2.95 | 7.32 | 5.72 | 2.55 | 2.88 | 5.10 | 0.21 | 0.86 | 0.16 |
| MMAudio | 9.01 | 205.85 | 2.17 | 9.59 | 5.94 | 2.91 | 3.30 | 5.39 | 0.30 | 0.56 | 0.27 |
| ThinkSound | 9.92 | 228.68 | 2.39 | 6.86 | 5.78 | 3.23 | 3.12 | 5.11 | 0.22 | 0.67 | 0.22 |
| **HiFi-Foley (ours)** | **6.07** | **202.12** | **1.89** | **8.30** | **6.12** | **2.76** | **3.22** | **5.53** | **0.38** | **0.54** | **0.24** |

HunyuanVideo-Foley achieves the best or highly competitive scores across the evaluation metrics, with significant improvements in audio quality, synchronization, and semantic alignment.
---
## Quick Start

### Installation

**System Requirements**
- **CUDA**: 12.4 or 11.8 recommended
- **Python**: 3.8+
- **OS**: Linux (primary support)
#### **Step 1: Clone Repository**
```bash
# Clone the repository
git clone https://github.com/Tencent-Hunyuan/HunyuanVideo-Foley
cd HunyuanVideo-Foley
```
#### **Step 2: Environment Setup**
**Tip**: We recommend using [Conda](https://docs.anaconda.com/free/miniconda/index.html) for Python environment management.
```bash
# Install dependencies
pip install -r requirements.txt
```
#### **Step 3: Download Pretrained Models**
Download the model weights from Hugging Face:
```bash
# Option 1: using git-lfs
git clone https://huggingface.co/tencent/HunyuanVideo-Foley

# Option 2: using huggingface-cli
huggingface-cli download tencent/HunyuanVideo-Foley
```
---
## Usage

### Single Video Generation

Generate Foley audio for a single video file with a text description:
```bash
python3 infer.py \
--model_path PRETRAINED_MODEL_PATH_DIR \
--config_path ./configs/hunyuanvideo-foley-xxl.yaml \
--single_video video_path \
--single_prompt "audio description" \
--output_dir OUTPUT_DIR
```
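For scripting, the command above can be assembled programmatically. This is a hypothetical convenience wrapper around the documented flags; the helper name and the placeholder paths are illustrative, not part of the repository.

```python
import shlex

def build_infer_cmd(model_path, video, prompt, output_dir,
                    config="./configs/hunyuanvideo-foley-xxl.yaml"):
    # Assemble the argv list for infer.py using the flags documented above.
    return [
        "python3", "infer.py",
        "--model_path", model_path,
        "--config_path", config,
        "--single_video", video,
        "--single_prompt", prompt,
        "--output_dir", output_dir,
    ]

cmd = build_infer_cmd("ckpts", "demo.mp4", "glass shattering on tile", "outputs")
print(shlex.join(cmd))  # shell-quoted command line, ready to copy-paste
```

Building the argv as a list (rather than string concatenation) avoids quoting bugs when prompts contain spaces.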
### Batch Processing

Process multiple videos using a CSV file listing video paths and descriptions:
```bash
python3 infer.py \
--model_path PRETRAINED_MODEL_PATH_DIR \
--config_path ./configs/hunyuanvideo-foley-xxl.yaml \
--csv_path assets/test.csv \
--output_dir OUTPUT_DIR
```
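Such a CSV can be generated with the standard library. The column names `video` and `prompt` are an assumption here; check `assets/test.csv` in the repository for the exact header the inference script expects.

```python
import csv

# Hypothetical video/description pairs for a batch run.
rows = [
    ("videos/clip_001.mp4", "footsteps on wet gravel"),
    ("videos/clip_002.mp4", "rain hitting a tin roof"),
]

with open("batch.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["video", "prompt"])  # header: assumed column names
    writer.writerows(rows)
```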
### Interactive Web Interface

Launch the Gradio web interface:
```bash
export HIFI_FOLEY_MODEL_PATH=PRETRAINED_MODEL_PATH_DIR
python3 gradio_app.py
```
*Then open the printed local URL in your browser to start generating Foley audio.*
---
## Citation

If you find **HunyuanVideo-Foley** useful for your research, please consider citing our paper:
```bibtex
@misc{shan2025hunyuanvideofoleymultimodaldiffusionrepresentation,
  title={HunyuanVideo-Foley: Multimodal Diffusion with Representation Alignment for High-Fidelity Foley Audio Generation},
  author={Sizhe Shan and Qiulin Li and Yutao Cui and Miles Yang and Yuehai Wang and Qun Yang and Jin Zhou and Zhao Zhong},
  year={2025},
  eprint={2508.16930},
  archivePrefix={arXiv},
  primaryClass={eess.AS},
  url={https://arxiv.org/abs/2508.16930},
}
```
---
## Acknowledgements

We extend our heartfelt gratitude to the open-source community:

- **[Stable Diffusion 3](https://huggingface.co/stabilityai/stable-diffusion-3-medium)**: foundation diffusion models
- **[FLUX](https://github.com/black-forest-labs/flux)**: advanced generation techniques
- **[MMAudio](https://github.com/hkchengrex/MMAudio)**: multimodal audio generation
- **[Hugging Face](https://huggingface.co)**: platform and the diffusers library
- **[DAC](https://github.com/descriptinc/descript-audio-codec)**: high-fidelity audio compression
- **[Synchformer](https://github.com/v-iashin/Synchformer)**: audio-visual synchronization

Special thanks to all researchers and developers who contribute to the advancement of AI-generated audio and multimodal learning!
---
### Connect with Us

[GitHub](https://github.com/Tencent-Hunyuan) • [Twitter](https://twitter.com/TencentHunyuan) • [Official Website](https://hunyuan.tencent.com/)

© 2025 Tencent Hunyuan. All rights reserved. | Made with ❤️ for the AI community