shenyunhang committed
Commit 49b5b45 · verified · 1 Parent(s): 21c7606

Update README.md

Files changed (1):
1. README.md +6 -276

README.md CHANGED
@@ -1,278 +1,8 @@
# VITA-Audio: Fast Interleaved Audio-Text Token Generation for Efficient Large Speech-Language Model

<p align="center">
  <img src="asset/VITA_audio_logos.png" width="50%" height="50%">
</p>

<p align="center">
  <a href="https://arxiv.org/abs/2502.05177" target="_blank"><img src="https://img.shields.io/badge/VITA%20Audio-Report-b5212f.svg?logo=arxiv" /></a>
  <a href="https://huggingface.co/collections/VITA-MLLM/vita-audio-680f036c174441e7cdf02575" target="_blank"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Model-ffc107?color=ffc107&logoColor=white" /></a>
</p>
## :fire: News

* **`2025.05.06`** 🌟 We are proud to launch VITA-Audio, an end-to-end large speech model with fast audio-text token generation.
## 📄 Contents <!-- omit in toc -->

- [Highlights](#-highlights)
- [Exhibition](#-exhibition)
- [Models](#-models)
- [Experimental Results](#-experimental-results)
- [Requirements and Installation](#-requirements-and-installation)
- [Training](#-training)
- [Inference](#-inference)
- [Evaluation](#-evaluation)
## ✨ Highlights

- **Low Latency**. VITA-Audio is the first end-to-end speech model able to generate audio during the initial forward pass. Using a set of 32 prefill tokens, it cuts the time to generate the first audio token chunk from 217 ms to 47 ms.
- **Fast Inference**. VITA-Audio achieves a 3-5x inference speedup at the 7B-parameter scale.
- **Open Source**. VITA-Audio is trained on **open-source data** only, consisting of 200k hours of publicly available audio.
- **Strong Performance**. VITA-Audio achieves competitive results on ASR, TTS, and SQA benchmarks among cutting-edge models under 7B parameters.
## 📌 Exhibition

### Inference Acceleration
Model inference speed under different inference modes.

<p align="center">
  <img src="./asset/qa_speed.gif" alt="demogif" width="48%" style="display: inline-block; margin-right: 2%;">
  <img src="./asset/tts_speed.gif" alt="second_gif" width="48%" style="display: inline-block;">
</p>

### Time to Generate the First Audio Segment in Streaming Inference
<div align="center">
<img width="400" alt="first audio generate time" src="https://github.com/user-attachments/assets/165f943e-ac53-443f-abba-e5eb1e0c0f40" />
</div>
### Generated Audio Case

> 打南边来了个哑巴,腰里别了个喇叭;打北边来了个喇嘛,手里提了个獭犸。
> 提着獭犸的喇嘛要拿獭犸换别着喇叭的哑巴的喇叭;别着喇叭的哑巴不愿拿喇叭换提着獭犸的喇嘛的獭犸。
> 不知是别着喇叭的哑巴打了提着獭犸的喇嘛一喇叭;还是提着獭犸的喇嘛打了别着喇叭的哑巴一獭犸。
> 喇嘛回家炖獭犸;哑巴嘀嘀哒哒吹喇叭。

https://github.com/user-attachments/assets/38da791f-5d72-4d9c-a9b2-cec97c2f2b2b
  ---
> To be or not to be--to live intensely and richly,
> or merely to exist, that depends on ourselves. Let us widen and intensify our relations.
> While we live, let live!

https://github.com/user-attachments/assets/fd478065-4041-4eb8-b331-0c03b304d853
  ---
> The hair has been so little, don't think about it, go to bed early, for your hair. Good night!

https://github.com/user-attachments/assets/4cfe4742-e237-42bd-9f17-7935b2285799
---
> 两个黄鹂鸣翠柳,
> 一行白鹭上青天。
> 窗含西岭千秋雪,
> 门泊东吴万里船。

https://github.com/user-attachments/assets/382620ee-bb2a-488e-9e00-71afd2342b56

---
## 🔔 Models

| Model                   | LLM Size | Hugging Face Weights                                     |
|-------------------------|----------|----------------------------------------------------------|
| VITA-Audio-Boost        | 7B       | https://huggingface.co/VITA-MLLM/VITA-Audio-Boost        |
| VITA-Audio-Balance      | 7B       | https://huggingface.co/VITA-MLLM/VITA-Audio-Balance      |
| VITA-Audio-Plus-Vanilla | 7B       | https://huggingface.co/VITA-MLLM/VITA-Audio-Plus-Vanilla |
## 📈 Experimental Results

- **Comparison of Spoken Question Answering**.

![Clipboard_Screenshot_1746531780](https://github.com/user-attachments/assets/3adcad15-0333-4b92-bfdf-b753b330a3e2)

- **Comparison of Text to Speech**.

![image](https://github.com/user-attachments/assets/09cf8fd3-d7a5-4b77-be49-5a0ace308f3f)

- **Comparison of Automatic Speech Recognition**.

![Clipboard_Screenshot_1746532039](https://github.com/user-attachments/assets/d950cae0-c065-4da9-b37a-a471d28158a0)

![Clipboard_Screenshot_1746532022](https://github.com/user-attachments/assets/929f45cd-693a-4ff6-af73-ceec6e875706)

- **Effectiveness of Inference Acceleration**.

![Clipboard_Screenshot_1746532167](https://github.com/user-attachments/assets/ad8b9e90-cd3c-4968-8653-998811a50006)

![Image](https://github.com/user-attachments/assets/4aa5db8c-362d-4152-8090-92292b9a84c0)
## 📔 Requirements and Installation

### Prepare Environment
```sh
docker pull shenyunhang/pytorch:24.11-py3_2024-1224
```

### Get the Code
```sh
git clone https://github.com/VITA-MLLM/VITA-Audio.git
cd VITA-Audio
pip install -r requirements_ds_gpu.txt
pip install -e .
```
### Prepare Pre-trained Weights

#### LLM

- Download the LLM from https://huggingface.co/Qwen/Qwen2.5-7B-Instruct.
- Put it in `../models/Qwen/Qwen2.5-7B-Instruct/`.

#### Audio Encoder and Audio Decoder

- Download the audio encoder from https://huggingface.co/THUDM/glm-4-voice-tokenizer.
- Put it in `../models/THUDM/glm-4-voice-tokenizer`.

- Download the audio decoder from https://huggingface.co/THUDM/glm-4-voice-decoder.
- Put it in `../models/THUDM/glm-4-voice-decoder`.
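If you prefer to script these downloads, here is a minimal sketch using `huggingface_hub` (an assumption: any method that places the files in the directories above works equally well):

```python
# Minimal download sketch (assumes `pip install huggingface_hub`); the target
# directories mirror the layout expected by the scripts below.
from huggingface_hub import snapshot_download

for repo_id, local_dir in [
    ("Qwen/Qwen2.5-7B-Instruct", "../models/Qwen/Qwen2.5-7B-Instruct"),
    ("THUDM/glm-4-voice-tokenizer", "../models/THUDM/glm-4-voice-tokenizer"),
    ("THUDM/glm-4-voice-decoder", "../models/THUDM/glm-4-voice-decoder"),
]:
    snapshot_download(repo_id=repo_id, local_dir=local_dir)
```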
### Data Format
#### Speech QA Interleaved Data Format

> This format shows how text and audio sequences are interleaved in a structured JSON conversation between a user and an assistant.

```jsonc
{
  "messages": [
    {
      "role": "user",
      "content": "<|begin_of_audio|> audio_sequence <|end_of_audio|>"
    },
    {
      "role": "assistant",
      "content": "text_sequence_1 <|begin_of_audio|> audio_sequence_1 <|end_of_audio|> text_sequence_2 <|begin_of_audio|> audio_sequence_2 <|end_of_audio|>"
    }
  ]
}
```
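For reference, a short Python sketch that assembles one sample in this format (the `audio_sequence` placeholders stand in for the discrete audio tokens produced by the audio tokenizer; the exact token strings come from the tokenizer, not from this sketch):

```python
import json

# Sketch: build one Speech QA interleaved sample. The audio placeholders are
# illustrative only; real samples contain discrete audio tokens.
def wrap_audio(audio_sequence: str) -> str:
    return f"<|begin_of_audio|> {audio_sequence} <|end_of_audio|>"

sample = {
    "messages": [
        {"role": "user", "content": wrap_audio("audio_sequence")},
        {
            "role": "assistant",
            "content": f"text_sequence_1 {wrap_audio('audio_sequence_1')} "
                       f"text_sequence_2 {wrap_audio('audio_sequence_2')}",
        },
    ]
}

print(json.dumps(sample, ensure_ascii=False, indent=2))
```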
## 🎲 Training

The following tutorial takes `VITA-Audio-Boost` as an example.

- To train `VITA-Audio-Balance` and the other variants, modify the `text-audio-interval-ratio` (a sketch of how the ratio is read follows this list).

VITA-Audio-Boost:
```
--text-audio-interval-ratio 1 10 4 10 \
```

VITA-Audio-Balance:
```
--text-audio-interval-ratio 1 4 3 8 4 10 \
```

- To train `VITA-Audio-Plus-*`, use a script such as `scripts/deepspeed/sts_qwen25/finetune_sensevoice_glm4voice...`
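Our reading of the ratio list (an interpretation, not an official spec): it is consumed as alternating text/audio chunk sizes, so `1 10 4 10` yields 1 text token, then 10 audio tokens, then 4 text tokens, then 10 audio tokens, with the final text-audio pair repeating for the rest of the stream. A sketch under that assumption:

```python
# Hedged sketch of how a --text-audio-interval-ratio list could expand into an
# interleaving schedule. Assumption: the list is read as (text, audio) pairs
# and the last pair repeats indefinitely.
from itertools import islice

def interleave_schedule(ratio):
    pairs = [(ratio[i], ratio[i + 1]) for i in range(0, len(ratio), 2)]
    while True:
        for text_n, audio_n in pairs:
            yield ("text", text_n)
            yield ("audio", audio_n)
        pairs = pairs[-1:]  # keep only the last pair for subsequent passes

# First chunks for VITA-Audio-Boost (--text-audio-interval-ratio 1 10 4 10):
print(list(islice(interleave_schedule([1, 10, 4, 10]), 6)))
# [('text', 1), ('audio', 10), ('text', 4), ('audio', 10), ('text', 4), ('audio', 10)]
```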
### Stage-1 (Audio-Text Alignment)

```sh
bash scripts/deepspeed/sts_qwen25/finetune_glm4voice_stage1.sh 8192 `date +'%Y%m%d_%H%M%S'`
```

The above script may need some adjustments:

- Set `ROOT_PATH` to your code root folder.
- Set `LOCAL_ROOT_PATH` to a temporary code root folder.
- Modify other variables as needed for your environment.

### Stage-2 (Single MCTP Module Training)

```sh
bash scripts/deepspeed/sts_qwen25/finetune_glm4voice_mtp1_stage1.sh 8192 `date +'%Y%m%d_%H%M%S'`
```

The above script may need some adjustments:

- Set `ROOT_PATH` to your code root folder.
- Set `LOCAL_ROOT_PATH` to a temporary code root folder.
- Set `MODEL_NAME_OR_PATH` to the path of the model trained in Stage 1.
- Modify other variables as needed for your environment.

### Stage-3 (Multiple MCTP Modules Training)

```sh
bash scripts/deepspeed/sts_qwen25/finetune_glm4voice_mtp10_stage1.sh 8192 `date +'%Y%m%d_%H%M%S'`
```

The above script may need some adjustments:

- Set `ROOT_PATH` to your code root folder.
- Set `LOCAL_ROOT_PATH` to a temporary code root folder.
- Set `MODEL_NAME_OR_PATH` to the path of the model trained in Stage 2.
- Modify other variables as needed for your environment.

### Stage-4 (Supervised Fine-tuning)

```sh
bash scripts/deepspeed/sts_qwen25/finetune_glm4voice_mtp10_stage2.sh 2048 `date +'%Y%m%d_%H%M%S'`
```

The above script may need some adjustments:

- Set `ROOT_PATH` to your code root folder.
- Set `LOCAL_ROOT_PATH` to a temporary code root folder.
- Set `MODEL_NAME_OR_PATH` to the path of the model trained in Stage 3.
- Modify other variables as needed for your environment.
## 📐 Inference

We provide a simple inference script that covers speech-to-speech, ASR, and TTS tasks, as well as inference speed testing.

```sh
python tools/inference_sts.py
```

- Set `model_name_or_path` to the VITA-Audio weights.
- Set `audio_tokenizer_path` to the path of the audio encoder.
- Set `flow_path` to the path of the audio decoder.
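For example, reusing the directory layout from the installation section (a sketch; the VITA-Audio checkpoint path is hypothetical and depends on where you downloaded the weights):

```python
# Hedged example values for the three paths used by tools/inference_sts.py.
# The VITA-Audio checkpoint path below is hypothetical.
model_name_or_path = "../models/VITA-MLLM/VITA-Audio-Boost"     # VITA-Audio weights
audio_tokenizer_path = "../models/THUDM/glm-4-voice-tokenizer"  # audio encoder
flow_path = "../models/THUDM/glm-4-voice-decoder"               # audio decoder
```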
## 🔎 Evaluation

Evaluate the SQA, ASR, and TTS benchmarks:
```sh
bash scripts/deepspeed/evaluate_sts.sh
```
---
license: apache-2.0
title: VITA-Audio
emoji: 🚀
colorTo: red
pinned: true
app_file: web_demo.py
---