zzx0916 committed on
Commit be6cf35 · verified · 1 Parent(s): eed435f

Create README.md

![logo.png](https://cdn-uploads.huggingface.co/production/uploads/646d7592bb95b5d4001e5a04/HDZpvr8F-UaHAHlsF--fh.png)
![method.png]()
![demo.png](https://cdn-uploads.huggingface.co/production/uploads/646d7592bb95b5d4001e5a04/RVM42NLlvlwABiQNlTLdd.png)
![teaser.png](https://cdn-uploads.huggingface.co/production/uploads/646d7592bb95b5d4001e5a04/373d-IQMuy5yqOpFRe8cK.png)

Files changed (1): README.md ADDED (+155, −0)
<!-- ## **HunyuanVideo-Avatar** -->

<p align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/646d7592bb95b5d4001e5a04/HDZpvr8F-UaHAHlsF--fh.png" height=100>
</p>

<div align="center">
  <a href="https://github.com/Tencent/HunyuanVideo-Avatar"><img src="https://img.shields.io/static/v1?label=HunyuanVideo-Avatar%20Code&message=Github&color=blue"></a>
  <a href="https://HunyuanVideo-Avatar.github.io/"><img src="https://img.shields.io/static/v1?label=Project%20Page&message=Web&color=green"></a>
  <a href="https://hunyuan.tencent.com/modelSquare/home/play?modelId=126"><img src="https://img.shields.io/static/v1?label=Playground&message=Web&color=green"></a>
  <a href="https://arxiv.org/pdf/2505.20156"><img src="https://img.shields.io/static/v1?label=Tech%20Report&message=Arxiv&color=red"></a>
  <a href="https://huggingface.co/tencent/HunyuanVideo-Avatar"><img src="https://img.shields.io/static/v1?label=HunyuanVideo-Avatar&message=HuggingFace&color=yellow"></a>
</div>

![image](https://cdn-uploads.huggingface.co/production/uploads/646d7592bb95b5d4001e5a04/SAQAlLLsEzC1fURoL89_C.png)

> [**Tencent HunyuanVideo-Avatar: Dynamic and Consistent Audio-Driven Human Animation for Multiple Characters**](https://arxiv.org/pdf/2505.20156)

## **Abstract**

Recent years have witnessed significant progress in audio-driven human animation. However, critical challenges remain in (i) generating highly dynamic videos while preserving character consistency, (ii) achieving precise emotion alignment between characters and audio, and (iii) enabling multi-character audio-driven animation. To address these challenges, we propose HunyuanVideo-Avatar, a multimodal diffusion transformer (MM-DiT)-based model capable of simultaneously generating dynamic, emotion-controllable, and multi-character dialogue videos. Concretely, HunyuanVideo-Avatar introduces three key innovations: (i) a character image injection module replaces the conventional addition-based character conditioning scheme, eliminating the inherent condition mismatch between training and inference and ensuring dynamic motion together with strong character consistency; (ii) an Audio Emotion Module (AEM) extracts and transfers emotional cues from an emotion reference image to the target generated video, enabling fine-grained and accurate emotion style control; (iii) a Face-Aware Audio Adapter (FAA) isolates the audio-driven character with a latent-level face mask, enabling independent audio injection via cross-attention in multi-character scenarios. These innovations enable HunyuanVideo-Avatar to surpass state-of-the-art methods on benchmark datasets and a newly proposed in-the-wild dataset, generating realistic avatars in dynamic, immersive scenarios. The source code and model weights will be released publicly.

## **HunyuanVideo-Avatar Overall Architecture**

![image](https://cdn-uploads.huggingface.co/production/uploads/646d7592bb95b5d4001e5a04/SAQAlLLsEzC1fURoL89_C.png)

We propose **HunyuanVideo-Avatar**, a multimodal diffusion transformer (MM-DiT)-based model capable of generating **dynamic**, **emotion-controllable**, and **multi-character dialogue** videos.

## 🎉 **HunyuanVideo-Avatar Key Features**

![image](https://cdn-uploads.huggingface.co/production/uploads/646d7592bb95b5d4001e5a04/RVM42NLlvlwABiQNlTLdd.png)

### **High-Dynamic and Emotion-Controllable Video Generation**

HunyuanVideo-Avatar animates any input **avatar image** into a **high-dynamic**, **emotion-controllable** video driven by a simple **audio condition**. Specifically, it accepts **multi-style** avatar images at **arbitrary scales and resolutions**, encompassing photorealistic, cartoon, 3D-rendered, and anthropomorphic characters, and supports multi-scale generation spanning portrait, upper-body, and full-body shots. It produces videos with highly dynamic foregrounds and backgrounds, achieving superior realism and naturalness. In addition, the system can control the characters' facial emotions conditioned on the input audio.

### **Various Applications**

HunyuanVideo-Avatar supports various downstream tasks and applications. For instance, the system generates talking-avatar videos, which can be applied to e-commerce, online streaming, social media video production, and more. In addition, its multi-character animation capability broadens applications such as video content creation and editing.
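
The inference examples below read the model weights from `./weights`. A minimal download sketch, assuming the published `tencent/HunyuanVideo-Avatar` Hugging Face repository contains the `ckpts/` tree that the scripts reference:

```bash
# Fetch the released weights into ./weights (repo layout assumed, not verified here).
pip install "huggingface_hub[cli]"
huggingface-cli download tencent/HunyuanVideo-Avatar --local-dir ./weights
```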

## 🚀 Parallel Inference on Multiple GPUs

For example, to generate a video with 8 GPUs, you can use the following command:

```bash
cd HunyuanVideo-Avatar

JOBS_DIR=$(dirname $(dirname "$0"))
export PYTHONPATH=./
export MODEL_BASE="./weights"
OUTPUT_BASEPATH=./results  # output directory for generated videos
checkpoint_path=${MODEL_BASE}/ckpts/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states.pt

torchrun --nnodes=1 --nproc_per_node=8 --master_port 29605 hymm_sp/sample_batch.py \
    --input 'assets/test.csv' \
    --ckpt ${checkpoint_path} \
    --sample-n-frames 129 \
    --seed 128 \
    --image-size 704 \
    --cfg-scale 7.5 \
    --infer-steps 50 \
    --use-deepcache 1 \
    --flow-shift-eval-video 5.0 \
    --save-path ${OUTPUT_BASEPATH}
```
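
The `--nproc_per_node` value must match the number of visible GPUs. A hedged variant of the launch line, deriving the worker count from `nvidia-smi` instead of hard-coding 8 (all other flags and variables carry over from the script above):

```bash
# Launch one torchrun worker per detected GPU.
NUM_GPUS=$(nvidia-smi -L | wc -l)
torchrun --nnodes=1 --nproc_per_node=${NUM_GPUS} --master_port 29605 hymm_sp/sample_batch.py \
    --input 'assets/test.csv' \
    --ckpt ${checkpoint_path} \
    --sample-n-frames 129 \
    --seed 128 \
    --image-size 704 \
    --cfg-scale 7.5 \
    --infer-steps 50 \
    --use-deepcache 1 \
    --flow-shift-eval-video 5.0 \
    --save-path ${OUTPUT_BASEPATH}
```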

## 🔑 Single-GPU Inference

For example, to generate a video with a single GPU, you can use the following command:

```bash
cd HunyuanVideo-Avatar

JOBS_DIR=$(dirname $(dirname "$0"))
export PYTHONPATH=./

export MODEL_BASE=./weights
OUTPUT_BASEPATH=./results-single
checkpoint_path=${MODEL_BASE}/ckpts/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states_fp8.pt

export DISABLE_SP=1
CUDA_VISIBLE_DEVICES=0 python3 hymm_sp/sample_gpu_poor.py \
    --input 'assets/test.csv' \
    --ckpt ${checkpoint_path} \
    --sample-n-frames 129 \
    --seed 128 \
    --image-size 704 \
    --cfg-scale 7.5 \
    --infer-steps 50 \
    --use-deepcache 1 \
    --flow-shift-eval-video 5.0 \
    --save-path ${OUTPUT_BASEPATH} \
    --use-fp8 \
    --infer-min
```
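
The same entry point also lends itself to sequential batch runs. A small sketch, assuming the hypothetical `assets/jobs/*.csv` files follow the same format as `assets/test.csv` and reusing the variables defined above:

```bash
# Run one single-GPU job per input CSV, writing each batch to its own subfolder.
export DISABLE_SP=1
for csv in assets/jobs/*.csv; do   # hypothetical input lists
  CUDA_VISIBLE_DEVICES=0 python3 hymm_sp/sample_gpu_poor.py \
      --input "${csv}" \
      --ckpt ${checkpoint_path} \
      --sample-n-frames 129 \
      --seed 128 \
      --image-size 704 \
      --cfg-scale 7.5 \
      --infer-steps 50 \
      --use-deepcache 1 \
      --flow-shift-eval-video 5.0 \
      --save-path "${OUTPUT_BASEPATH}/$(basename "${csv}" .csv)" \
      --use-fp8 \
      --infer-min
done
```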

### Run with very low VRAM

```bash
cd HunyuanVideo-Avatar

JOBS_DIR=$(dirname $(dirname "$0"))
export PYTHONPATH=./

export MODEL_BASE=./weights
OUTPUT_BASEPATH=./results-poor

checkpoint_path=${MODEL_BASE}/ckpts/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states_fp8.pt

export CPU_OFFLOAD=1
CUDA_VISIBLE_DEVICES=0 python3 hymm_sp/sample_gpu_poor.py \
    --input 'assets/test.csv' \
    --ckpt ${checkpoint_path} \
    --sample-n-frames 129 \
    --seed 128 \
    --image-size 704 \
    --cfg-scale 7.5 \
    --infer-steps 50 \
    --use-deepcache 1 \
    --flow-shift-eval-video 5.0 \
    --save-path ${OUTPUT_BASEPATH} \
    --use-fp8 \
    --cpu-offload \
    --infer-min
```
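
Whether offloading is needed depends on the card. A hedged sketch that enables `--cpu-offload` only below a VRAM threshold (the 24 GB cut-off is an illustrative assumption, not a measured requirement):

```bash
# Read total VRAM of GPU 0 in MiB and toggle CPU offloading for smaller cards.
TOTAL_MIB=$(nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits -i 0)
EXTRA_FLAGS=""
if [ "${TOTAL_MIB}" -lt 24000 ]; then   # threshold is an assumption
  export CPU_OFFLOAD=1
  EXTRA_FLAGS="--cpu-offload"
fi
CUDA_VISIBLE_DEVICES=0 python3 hymm_sp/sample_gpu_poor.py \
    --input 'assets/test.csv' \
    --ckpt ${checkpoint_path} \
    --sample-n-frames 129 \
    --seed 128 \
    --image-size 704 \
    --cfg-scale 7.5 \
    --infer-steps 50 \
    --use-deepcache 1 \
    --flow-shift-eval-video 5.0 \
    --save-path ${OUTPUT_BASEPATH} \
    --use-fp8 \
    ${EXTRA_FLAGS} \
    --infer-min
```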

## Run a Gradio Server

```bash
cd HunyuanVideo-Avatar

bash ./scripts/run_gradio.sh
```
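
A Gradio app binds to 127.0.0.1:7860 by default. Assuming `run_gradio.sh` launches a standard Gradio interface, the listener can typically be redirected through Gradio's standard environment variables:

```bash
# Serve the demo on all interfaces and a custom port (values are examples).
export GRADIO_SERVER_NAME=0.0.0.0
export GRADIO_SERVER_PORT=8080
bash ./scripts/run_gradio.sh
```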

## 🔗 BibTeX

If you find [HunyuanVideo-Avatar](https://arxiv.org/pdf/2505.20156) useful for your research and applications, please cite using this BibTeX:

```BibTeX
@misc{hu2025HunyuanVideo-Avatar,
  title={HunyuanVideo-Avatar: High-Fidelity Audio-Driven Human Animation for Multiple Characters},
  author={Yi Chen and Sen Liang and Zixiang Zhou and Ziyao Huang and Yifeng Ma and Junshu Tang and Qin Lin and Yuan Zhou and Qinglin Lu},
  year={2025},
  eprint={2505.20156},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/pdf/2505.20156},
}
```

## Acknowledgements

We would like to thank the contributors to the [HunyuanVideo](https://github.com/Tencent/HunyuanVideo), [SD3](https://huggingface.co/stabilityai/stable-diffusion-3-medium), [FLUX](https://github.com/black-forest-labs/flux), [Llama](https://github.com/meta-llama/llama), [LLaVA](https://github.com/haotian-liu/LLaVA), [Xtuner](https://github.com/InternLM/xtuner), [diffusers](https://github.com/huggingface/diffusers), and [HuggingFace](https://huggingface.co) repositories for their open research and exploration.