ginipick committed (verified) · commit cf14c5d · 1 parent: 81c0e1c

Update README.md

Files changed (1):
  1. README.md +2 -234
README.md CHANGED
@@ -1,245 +1,13 @@
  ---
- title: ACE Step
+ title: ACE Step PRO
  emoji: 😻
  colorFrom: blue
  colorTo: pink
  sdk: gradio
- sdk_version: 5.27.0
+ sdk_version: 5.31.0
  app_file: app.py
  pinned: false
  license: apache-2.0
  short_description: A Step Towards Music Generation Foundation Model
  ---
 
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
-
- <h1 align="center">✨ ACE-Step ✨</h1>
- <h1 align="center">🎵 A Step Towards Music Generation Foundation Model 🎵</h1>
- <p align="center">
- <a href="https://ace-step.github.io/">Project</a> |
- <a href="https://github.com/ace-step/ACE-Step">Code</a> |
- <a href="https://huggingface.co/ACE-Step/ACE-Step-v1-3.5B">Checkpoints</a> |
- <a href="https://huggingface.co/spaces/ACE-Step/ACE-Step">Space Demo</a>
- </p>
-
- ---
- <p align="center">
- <img src="./fig/orgnization_logos.png" width="100%" alt="Org Logo">
- </p>
-
- ## Table of Contents
-
- - [Features](#-features)
- - [Installation](#-installation)
- - [Usage](#-user-interface-guide)
-
- ## 📢 News and Updates
-
- - 🚀 2025.05.06: Open-sourced the demo code and model
-
- ## 📋 TODOs
- - [ ] 🔁 Release training code
- - [ ] 🔄 Release LoRA training code & 🎤 RapMachine LoRA
- - [ ] 🎮 Release ControlNet training code & 🎤 Singing2Accompaniment ControlNet
-
- ## 🏗️ Architecture
-
- <p align="center">
- <img src="./fig/ACE-Step_framework.png" width="100%" alt="ACE-Step Framework">
- </p>
-
- ## 📝 Abstract
-
- We introduce ACE-Step, a novel open-source foundation model for music generation that overcomes key limitations of existing approaches and achieves state-of-the-art performance through a holistic architectural design. Current methods face inherent trade-offs between generation speed, musical coherence, and controllability. For instance, LLM-based models (e.g., Yue, SongGen) excel at lyric alignment but suffer from slow inference and structural artifacts. Diffusion models (e.g., DiffRhythm), on the other hand, enable faster synthesis but often lack long-range structural coherence.
-
- ACE-Step bridges this gap by integrating diffusion-based generation with Sana's Deep Compression AutoEncoder (DCAE) and a lightweight linear transformer. It further leverages MERT and m-hubert to align semantic representations (REPA) during training, enabling rapid convergence. As a result, our model synthesizes up to 4 minutes of music in just 20 seconds on an A100 GPU (15× faster than LLM-based baselines), while achieving superior musical coherence and lyric alignment across melody, harmony, and rhythm metrics. Moreover, ACE-Step preserves fine-grained acoustic details, enabling advanced control mechanisms such as voice cloning, lyric editing, remixing, and track generation (e.g., lyric2vocal, singing2accompaniment).
-
- Rather than building yet another end-to-end text-to-music pipeline, our vision is to establish a foundation model for music AI: a fast, general-purpose, efficient yet flexible architecture that makes it easy to train sub-tasks on top of it. This paves the way for developing powerful tools that seamlessly integrate into the creative workflows of music artists, producers, and content creators. In short, we aim to build the Stable Diffusion moment for music.
-
- ## ✨ Features
-
- <p align="center">
- <img src="./fig/application_map.png" width="100%" alt="Application Map">
- </p>
-
- ### 🎯 Baseline Quality
-
- #### 🌈 Diverse Styles & Genres
- - 🎸 Supports all mainstream music styles, described via short tags, descriptive text, or use-case scenarios
- - 🎷 Capable of generating music across different genres with appropriate instrumentation and style
-
- #### 🌍 Multiple Languages
- - 🗣️ Supports 19 languages; the 10 best-performing are:
-   - 🇺🇸 English, 🇨🇳 Chinese, 🇷🇺 Russian, 🇪🇸 Spanish, 🇯🇵 Japanese, 🇩🇪 German, 🇫🇷 French, 🇵🇹 Portuguese, 🇮🇹 Italian, 🇰🇷 Korean
- - ⚠️ Due to data imbalance, less common languages may underperform
-
- #### 🎻 Instrumental Styles
- - 🎹 Supports instrumental music generation across different genres and styles
- - 🎺 Capable of producing realistic instrumental tracks with appropriate timbre and expression for each instrument
- - 🎼 Can generate complex arrangements with multiple instruments while maintaining musical coherence
-
- #### 🎤 Vocal Techniques
- - 🎙️ Capable of rendering various vocal styles and techniques with good quality
- - 🗣️ Supports different vocal expressions, including a range of singing techniques and styles
-
- ### 🎛️ Controllability
-
- #### 🔄 Variations Generation
- - ⚙️ Implemented with training-free, inference-time optimization techniques
- - 🌊 The flow-matching model produces the initial noise, then trigFlow's noise formula injects additional Gaussian noise
- - 🎚️ An adjustable mixing ratio between the original initial noise and the new Gaussian noise controls the degree of variation (see the sketch below)
-
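- A minimal sketch of that mixing step (illustrative only; the function name and the exact trigonometric blend are assumptions, not the repository's actual code):
-
- ```python
- import math
- import torch
-
- def mix_variation_noise(init_noise: torch.Tensor, variance: float) -> torch.Tensor:
-     # Blend the original initial noise with fresh Gaussian noise using a
-     # trigonometric mix so the result keeps unit variance, in the spirit of
-     # trigFlow's noise formula. variance in [0, 1]: 0 = original, 1 = fully new.
-     fresh = torch.randn_like(init_noise)
-     theta = variance * math.pi / 2
-     return math.cos(theta) * init_noise + math.sin(theta) * fresh
- ```
-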
- #### 🎨 Repainting
- - 🖌️ Implemented by adding noise to the target audio input and applying mask constraints during the ODE process
- - 🔍 When the input conditions change from the original generation, only the selected aspects are modified while the rest is preserved (a sketch follows this list)
- - 🔀 Can be combined with Variations Generation techniques to create localized variations in style, lyrics, or vocals
-
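- A rough sketch of the masked update (the interpolant and the `ode_step` callable are assumed for illustration; this is not the project's actual implementation):
-
- ```python
- import torch
-
- def repaint_step(x_t, ref_latent, mask, t, ode_step):
-     # One step of a repaint loop: inside the masked region we follow the
-     # model's ODE update; outside it we re-inject the reference latent,
-     # noised to the current time, so untouched parts stay intact.
-     x_next = ode_step(x_t, t)                   # model-driven update
-     noise = torch.randn_like(ref_latent)
-     ref_t = (1.0 - t) * ref_latent + t * noise  # assumed flow-matching noising
-     return mask * x_next + (1.0 - mask) * ref_t
- ```
-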
- #### ✏️ Lyric Editing
- - 💡 Applies flow-edit technology to enable localized lyric modifications while preserving melody, vocals, and accompaniment
- - 🔄 Works with both generated content and uploaded audio, greatly expanding creative possibilities
- - ℹ️ Current limitation: only small segments of lyrics can be modified at once to avoid distortion, but multiple edits can be applied sequentially
-
- ### 🚀 Applications
-
- #### 🎤 Lyric2Vocal (LoRA)
- - 🔊 Based on a LoRA fine-tuned on pure vocal data, allowing direct generation of vocal samples from lyrics
- - 🛠️ Offers numerous practical applications such as vocal demos, guide tracks, songwriting assistance, and vocal arrangement experimentation
- - ⏱️ Provides a quick way to test how lyrics might sound when sung, helping songwriters iterate faster
-
- #### 📝 Text2Samples (LoRA)
- - 🎛️ Similar to Lyric2Vocal, but fine-tuned on pure instrumental and sample data
- - 🎵 Capable of generating conceptual music production samples from text descriptions
- - 🧰 Useful for quickly creating instrument loops, sound effects, and musical elements for production
-
- ### 🔮 Coming Soon
-
- #### 🎤 RapMachine
- - 🔥 Fine-tuned on pure rap data to create an AI system specialized in rap generation
- - 🏆 Expected capabilities include AI rap battles and narrative expression through rap
- - 📚 Rap's exceptional storytelling and expressive capabilities offer broad application potential
-
- #### 🎛️ StemGen
- - 🎚️ A ControlNet-LoRA trained on multi-track data to generate individual instrument stems
- - 🎯 Takes a reference track and a specified instrument (or instrument reference audio) as input
- - 🎹 Outputs an instrument stem that complements the reference track, such as creating a piano accompaniment for a flute melody or adding jazz drums to a lead guitar
-
- #### 🎤 Singing2Accompaniment
- - 🔄 The reverse of StemGen: generates a mixed master track from a single vocal track
- - 🎵 Takes a vocal track and a specified style as input to produce a complete accompaniment
- - 🎸 Creates full instrumental backing that complements the input vocals, making it easy to add professional-sounding accompaniment to any vocal recording
-
- ## 💻 Installation
-
- ```bash
- # create and activate a dedicated environment
- conda create -n ace_step python=3.10
- conda activate ace_step
- # install Python dependencies and ffmpeg for audio I/O
- pip install -r requirements.txt
- conda install ffmpeg
- ```
-
- ## 🖥️ Hardware Performance
-
- We've tested ACE-Step on various hardware configurations with the following throughput results:
-
- | Device | 27 Steps | 60 Steps |
- |--------|----------|----------|
- | NVIDIA A100 | 0.036675 | 0.0815 |
- | MacBook M2 Max | 0.44 | 0.97 |
- | NVIDIA RTX 4090 | 0.029 | 0.064 |
-
- Values are the time cost per second of generated audio (seconds per audio-second). To estimate total generation time, multiply the song length in seconds by the value for your device and step count; for example, a 180-second song at 27 steps on an RTX 4090 takes roughly 180 × 0.029 ≈ 5.2 seconds. The arithmetic is sketched below.
-
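- A tiny helper making that estimate concrete (the table values are hard-coded here purely for illustration):
-
- ```python
- # seconds of compute per second of generated audio, from the table above
- RTF = {
-     ("a100", 27): 0.036675, ("a100", 60): 0.0815,
-     ("m2_max", 27): 0.44,   ("m2_max", 60): 0.97,
-     ("rtx4090", 27): 0.029, ("rtx4090", 60): 0.064,
- }
-
- def estimated_seconds(song_seconds: float, device: str, steps: int) -> float:
-     # total wall-clock time = song length × per-second cost
-     return song_seconds * RTF[(device, steps)]
-
- print(estimated_seconds(180, "rtx4090", 27))  # ≈ 5.2 s for a 3-minute song
- ```
-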
- ## 🚀 Usage
-
- ![Demo Interface](fig/demo_interface.png)
-
- ### 🔍 Basic Usage
-
- ```bash
- python app.py
- ```
-
- ### ⚙️ Advanced Usage
-
- ```bash
- python app.py --checkpoint_path /path/to/checkpoint --port 7865 --device_id 0 --share --bf16
- ```
-
- #### 🛠️ Command Line Arguments
-
- - `--checkpoint_path`: Path to the model checkpoint (default: downloads automatically)
- - `--port`: Port to run the Gradio server on (default: 7865)
- - `--device_id`: GPU device ID to use (default: 0)
- - `--share`: Enable Gradio sharing link (default: False)
- - `--bf16`: Use bfloat16 precision for faster inference (default: True)
-
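- The flags above map naturally onto a standard argparse setup; a plausible sketch (the repository's actual argument handling may differ):
-
- ```python
- import argparse
-
- parser = argparse.ArgumentParser(description="ACE-Step Gradio demo")
- parser.add_argument("--checkpoint_path", type=str, default=None,
-                     help="model checkpoint; downloaded automatically if omitted")
- parser.add_argument("--port", type=int, default=7865)
- parser.add_argument("--device_id", type=int, default=0)
- parser.add_argument("--share", action="store_true",
-                     help="enable a public Gradio sharing link")
- parser.add_argument("--bf16", action=argparse.BooleanOptionalAction, default=True,
-                     help="bfloat16 precision (default: True; pass --no-bf16 to disable)")
- args = parser.parse_args()
- ```
-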
- ## 📱 User Interface Guide
-
- The ACE-Step interface provides several tabs for different music generation and editing tasks:
-
- ### 📝 Text2Music Tab
-
- 1. **📋 Input Fields**:
-    - **🏷️ Tags**: Enter descriptive tags, genres, or scene descriptions separated by commas
-    - **📜 Lyrics**: Enter lyrics with structure tags like [verse], [chorus], and [bridge] (see the example after this list)
-    - **⏱️ Audio Duration**: Set the desired duration of the generated audio (-1 for a random duration)
-
- 2. **⚙️ Settings**:
-    - **🔧 Basic Settings**: Adjust inference steps, guidance scale, and seeds
-    - **🔬 Advanced Settings**: Fine-tune scheduler type, CFG type, ERG settings, and more
-
- 3. **🚀 Generation**: Click "Generate" to create music based on your inputs
-
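- A hypothetical example of the two text fields (invented content, purely for illustration):
-
- ```text
- Tags: pop, synth, upbeat, female vocals, summer road trip
-
- Lyrics:
- [verse]
- Rolling down the highway, windows low
- Golden light on everything we know
- [chorus]
- We keep driving till the stars come out
- ```
-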
- ### 🔄 Retake Tab
-
- - 🎲 Regenerate music with slight variations using different seeds
- - 🎚️ Adjust the variance to control how much the retake differs from the original
-
- ### 🎨 Repainting Tab
-
- - 🖌️ Selectively regenerate specific sections of the music
- - ⏱️ Specify start and end times for the section to repaint
- - 🔍 Choose the source audio (text2music output, last repaint, or upload)
-
- ### ✏️ Edit Tab
-
- - 🔄 Modify existing music by changing tags or lyrics
- - 🎛️ Choose between "only_lyrics" mode (preserves melody) and "remix" mode (changes melody)
- - 🎚️ Adjust edit parameters to control how much of the original is preserved
-
- ### 📏 Extend Tab
-
- - ➕ Add music to the beginning or end of an existing piece
- - 📐 Specify left and right extension lengths
- - 🔍 Choose the source audio to extend
-
- ## Examples
-
- The `examples/input_params` directory contains sample input parameters that can be used as references for generating music.
-
- ## 📜 License & Disclaimer
-
- This project is licensed under the [Apache License 2.0](./LICENSE).
-
- ACE-Step enables original music generation across diverse genres, with applications in creative production, education, and entertainment. While designed to support positive and artistic use cases, we acknowledge potential risks such as unintentional copyright infringement due to stylistic similarity, inappropriate blending of cultural elements, and misuse for generating harmful content. To ensure responsible use, we encourage users to verify the originality of generated works, clearly disclose AI involvement, and obtain appropriate permissions when adapting protected styles or materials. By using ACE-Step, you agree to uphold these principles and to respect artistic integrity, cultural diversity, and legal compliance. The authors are not responsible for any misuse of the model, including but not limited to copyright violations, cultural insensitivity, or the generation of harmful content.
-
- ## 🙏 Acknowledgements
-
- This project is co-led by ACE Studio and StepFun.
-
- ## 📖 Citation
-
- If you find this project useful for your research, please consider citing:
-
- ```BibTeX
- @misc{gong2025acestep,
-   title={ACE-Step: A Step Towards Music Generation Foundation Model},
-   author={Junmin Gong and Wenxiao Zhao and Sen Wang and Shengyuan Xu and Jing Guo},
-   howpublished={\url{https://github.com/ace-step/ACE-Step}},
-   year={2025},
-   note={GitHub repository}
- }
- ```
 