---
license: mit
pipeline_tag: text-to-speech
tags:
- jellybox
---
<div align="center">

<h1>GPT-SoVITS-WebUI</h1>
A Powerful Few-shot Voice Conversion and Text-to-Speech WebUI.<br><br>

[](https://github.com/RVC-Boss/GPT-SoVITS)

<a href="https://trendshift.io/repositories/7033" target="_blank"><img src="https://trendshift.io/api/badge/repositories/7033" alt="RVC-Boss%2FGPT-SoVITS | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>

<!-- img src="https://counter.seku.su/cmoe?name=gptsovits&theme=r34" /><br> -->

[](https://colab.research.google.com/github/RVC-Boss/GPT-SoVITS/blob/main/colab_webui.ipynb)
[](https://github.com/RVC-Boss/GPT-SoVITS/blob/main/LICENSE)
[](https://huggingface.co/spaces/lj1995/GPT-SoVITS-v2)
[](https://discord.gg/dnrgs5GHfG)

**English** | [**中文简体**](./docs/cn/README.md) | [**日本語**](./docs/ja/README.md) | [**한국어**](./docs/ko/README.md) | [**Türkçe**](./docs/tr/README.md)

</div>

---

## Features:

1. **Zero-shot TTS:** Input a 5-second vocal sample and experience instant text-to-speech conversion.

2. **Few-shot TTS:** Fine-tune the model with just 1 minute of training data for improved voice similarity and realism.

3. **Cross-lingual Support:** Inference in languages different from the training dataset, currently supporting English, Japanese, Korean, Cantonese, and Chinese.

4. **WebUI Tools:** Integrated tools include vocals/accompaniment separation, automatic training-set segmentation, Chinese ASR, and text labeling, helping beginners create training datasets and GPT/SoVITS models.

**Check out our [demo video](https://www.bilibili.com/video/BV12g4y1m7Uw) here!**

Unseen speakers few-shot fine-tuning demo:

https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-80c060ab47fb

**User guide: [简体中文](https://www.yuque.com/baicaigongchang1145haoyuangong/ib3g1e) | [English](https://rentry.co/GPT-SoVITS-guide#/)**

## Installation

For users in China, you can [click here](https://www.codewithgpu.com/i/RVC-Boss/GPT-SoVITS/GPT-SoVITS-Official) to use AutoDL Cloud Docker to experience the full functionality online.

### Tested Environments

- Python 3.9, PyTorch 2.0.1, CUDA 11
- Python 3.10.13, PyTorch 2.1.2, CUDA 12.3
- Python 3.9, PyTorch 2.2.2, macOS 14.4.1 (Apple silicon)
- Python 3.9, PyTorch 2.2.2, CPU devices

_Note: numba==0.56.4 requires py<3.11_

### Windows

If you are a Windows user (tested with win>=10), you can [download the integrated package](https://huggingface.co/lj1995/GPT-SoVITS-windows-package/resolve/main/GPT-SoVITS-v3lora-20250228.7z?download=true) and double-click _go-webui.bat_ to start GPT-SoVITS-WebUI.

**Users in China can [download the package here](https://www.yuque.com/baicaigongchang1145haoyuangong/ib3g1e/dkxgpiy9zb96hob4#KTvnO).**

### Linux

```bash
conda create -n GPTSoVits python=3.9
conda activate GPTSoVits
bash install.sh
```

### macOS

**Note: Models trained with GPUs on Macs result in significantly lower quality compared to those trained on other devices, so we are temporarily using CPUs instead.**

1. Install Xcode command-line tools by running `xcode-select --install`.
2. Install FFmpeg by running `brew install ffmpeg`.
3. Install the program by running the following commands:

```bash
conda create -n GPTSoVits python=3.9
conda activate GPTSoVits
pip install -r requirements.txt
```

### Install Manually

#### Install FFmpeg

##### Conda Users

```bash
conda install ffmpeg
```

##### Ubuntu/Debian Users

```bash
sudo apt install ffmpeg
sudo apt install libsox-dev
conda install -c conda-forge 'ffmpeg<7'
```

##### Windows Users

Download [ffmpeg.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffmpeg.exe) and [ffprobe.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffprobe.exe) and place them in the GPT-SoVITS root.
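If you prefer to script this, here is a minimal `curl` sketch; it assumes the standard Hugging Face `resolve/` download form of the links above:

```bash
# Assumption: the "resolve" download URLs corresponding to the blob links above.
curl -L -o ffmpeg.exe  "https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/ffmpeg.exe"
curl -L -o ffprobe.exe "https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/ffprobe.exe"
```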

Install [Visual Studio 2017](https://aka.ms/vs/17/release/vc_redist.x86.exe) (Korean TTS only).

##### MacOS Users

```bash
brew install ffmpeg
```

#### Install Dependencies

```bash
pip install -r requirements.txt
```

### Using Docker

#### docker-compose.yaml configuration

0. Regarding image tags: Because the codebase updates quickly while image packaging and testing is slow, check [Docker Hub](https://hub.docker.com/r/breakstring/gpt-sovits) for the latest packaged images and select one that fits your situation, or build locally from the Dockerfile according to your own needs.
1. Environment variables:
   - `is_half`: Controls half precision vs. full precision. If the content under the directories `4-cnhubert`/`5-wav32k` is not generated correctly during the "SSL extracting" step, this is usually the cause. Set it to `True` or `False` based on your actual situation.
2. Volumes configuration: The application's root directory inside the container is set to `/workspace`. The default `docker-compose.yaml` lists some practical examples for uploading/downloading content.
3. `shm_size`: The default available memory for Docker Desktop on Windows is too small, which can cause abnormal operation. Adjust it according to your own situation.
4. Under the `deploy` section, GPU-related settings should be adjusted cautiously according to your system and actual circumstances.

#### Running with docker compose

```bash
docker compose -f "docker-compose.yaml" up -d
```

#### Running with docker command

As above, modify the corresponding parameters based on your actual situation, then run the following command:

```bash
docker run --rm -it --gpus=all --env=is_half=False --volume=G:\GPT-SoVITS-DockerTest\output:/workspace/output --volume=G:\GPT-SoVITS-DockerTest\logs:/workspace/logs --volume=G:\GPT-SoVITS-DockerTest\SoVITS_weights:/workspace/SoVITS_weights --workdir=/workspace -p 9880:9880 -p 9871:9871 -p 9872:9872 -p 9873:9873 -p 9874:9874 --shm-size="16G" -d breakstring/gpt-sovits:xxxxx
```

## Pretrained Models

**Users in China can [download all these models here](https://www.yuque.com/baicaigongchang1145haoyuangong/ib3g1e/dkxgpiy9zb96hob4#nVNhX).**

1. Download pretrained models from [GPT-SoVITS Models](https://huggingface.co/lj1995/GPT-SoVITS) and place them in `GPT_SoVITS/pretrained_models` (see the sketch after this list).

2. Download G2PW models from [G2PWModel_1.1.zip](https://paddlespeech.cdn.bcebos.com/Parakeet/released_models/g2p/G2PWModel_1.1.zip), unzip and rename the folder to `G2PWModel`, and place it in `GPT_SoVITS/text`. (Chinese TTS only)

3. For UVR5 (vocals/accompaniment separation & reverberation removal, optional), download models from [UVR5 Weights](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/uvr5_weights) and place them in `tools/uvr5/uvr5_weights`.

   - If you want to use `bs_roformer` or `mel_band_roformer` models for UVR5, manually download the model and its corresponding configuration file and put them in `tools/uvr5/uvr5_weights`. **Rename the files so that the model and its configuration share the same name apart from the extension**; in addition, both names **must include `roformer`** to be recognized as roformer-class models.

   - It is suggested to **directly specify the model type** in the model and configuration file names, e.g. `mel_band_roformer`, `bs_roformer`. If not specified, features are compared from the configuration file to determine the model type. For example, the model `bs_roformer_ep_368_sdr_12.9628.ckpt` and its configuration file `bs_roformer_ep_368_sdr_12.9628.yaml` are a pair, as are `kim_mel_band_roformer.ckpt` and `kim_mel_band_roformer.yaml`.

4. For Chinese ASR (optional), download models from [Damo ASR Model](https://modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/files), [Damo VAD Model](https://modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/files), and [Damo Punc Model](https://modelscope.cn/models/damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch/files) and place them in `tools/asr/models`.

5. For English or Japanese ASR (optional), download models from [Faster Whisper Large V3](https://huggingface.co/Systran/faster-whisper-large-v3) and place them in `tools/asr/models`. [Other models](https://huggingface.co/Systran) may have a similar effect with a smaller disk footprint.
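Steps 1 and 2 can also be scripted. A minimal sketch, assuming `huggingface_hub` (which provides `huggingface-cli`), `wget`, and `unzip` are available; the extracted G2PW folder name is an assumption:

```bash
# Step 1: fetch the pretrained models into GPT_SoVITS/pretrained_models
huggingface-cli download lj1995/GPT-SoVITS --local-dir GPT_SoVITS/pretrained_models

# Step 2 (Chinese TTS only): fetch the G2PW models and rename the folder to G2PWModel
wget https://paddlespeech.cdn.bcebos.com/Parakeet/released_models/g2p/G2PWModel_1.1.zip
unzip G2PWModel_1.1.zip
mv G2PWModel_1.1 GPT_SoVITS/text/G2PWModel   # the extracted folder name is an assumption
```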

## Dataset Format

The TTS annotation `.list` file format:

```
vocal_path|speaker_name|language|text
```

Language dictionary:

- 'zh': Chinese
- 'ja': Japanese
- 'en': English
- 'ko': Korean
- 'yue': Cantonese

Example:

```
D:\GPT-SoVITS\xxx/xxx.wav|xxx|en|I like playing Genshin.
```
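A quick way to sanity-check an annotation file before training is to confirm that every line has exactly four `|`-separated fields. A small sketch (the file name `train.list` is illustrative):

```bash
# Print any line that does not split into exactly 4 fields on '|'.
awk -F'|' 'NF != 4 { printf "line %d has %d fields: %s\n", NR, NF, $0 }' train.list
```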

## Finetune and inference

### Open WebUI

#### Integrated Package Users

Double-click `go-webui.bat` or use `go-webui.ps1`.
If you want to switch to V1, double-click `go-webui-v1.bat` or use `go-webui-v1.ps1`.

#### Others

```bash
python webui.py <language(optional)>
```

If you want to switch to V1, then

```bash
python webui.py v1 <language(optional)>
```

or manually switch the version in the WebUI.

### Finetune

#### Path Auto-filling is now supported

1. Fill in the audio path
2. Slice the audio into small chunks
3. Denoise (optional)
4. ASR
5. Proofread the ASR transcriptions
6. Go to the next tab, then finetune the model

### Open Inference WebUI

#### Integrated Package Users

Double-click `go-webui-v2.bat` or use `go-webui-v2.ps1`, then open the inference WebUI at `1-GPT-SoVITS-TTS/1C-inference`.

#### Others

```bash
python GPT_SoVITS/inference_webui.py <language(optional)>
```

OR

```bash
python webui.py
```

then open the inference WebUI at `1-GPT-SoVITS-TTS/1C-inference`.

## V2 Release Notes

New Features:

1. Support Korean and Cantonese

2. An optimized text frontend

3. Pre-trained model extended from 2k hours to 5k hours

4. Improved synthesis quality for low-quality reference audio

[more details](https://github.com/RVC-Boss/GPT-SoVITS/wiki/GPT%E2%80%90SoVITS%E2%80%90v2%E2%80%90features-(%E6%96%B0%E7%89%B9%E6%80%A7))

Use v2 from the v1 environment (the steps are collected in a sketch after this list):

1. `pip install -r requirements.txt` to update some packages

2. Clone the latest code from github.

3. Download the v2 pretrained models from [huggingface](https://huggingface.co/lj1995/GPT-SoVITS/tree/main/gsv-v2final-pretrained) and put them into `GPT_SoVITS/pretrained_models/gsv-v2final-pretrained`.

Chinese v2 additional: download [G2PWModel_1.1.zip](https://paddlespeech.cdn.bcebos.com/Parakeet/released_models/g2p/G2PWModel_1.1.zip), unzip and rename it to `G2PWModel`, and place it in `GPT_SoVITS/text`.
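A minimal sketch of the upgrade, assuming you run it from the root of an existing v1 checkout (the pretrained models themselves are downloaded manually from the huggingface link in step 3):

```bash
pip install -r requirements.txt   # 1. update packages
git pull                          # 2. fetch the latest code
# 3. place the downloaded v2 pretrained models under:
mkdir -p GPT_SoVITS/pretrained_models/gsv-v2final-pretrained
```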

## V3 Release Notes

New Features:

1. Higher timbre similarity, requiring less training data to approximate the target speaker (timbre similarity is significantly improved when using the base model directly, without fine-tuning).

2. The GPT model is more stable, with fewer repetitions and omissions, and it is easier to generate speech with richer emotional expression.

[more details](https://github.com/RVC-Boss/GPT-SoVITS/wiki/GPT%E2%80%90SoVITS%E2%80%90v3%E2%80%90features-(%E6%96%B0%E7%89%B9%E6%80%A7))

Use v3 from the v2 environment:

1. `pip install -r requirements.txt` to update some packages

2. Clone the latest code from github.

3. Download the v3 pretrained models (s1v3.ckpt, s2Gv3.pth, and the models--nvidia--bigvgan_v2_24khz_100band_256x folder) from [huggingface](https://huggingface.co/lj1995/GPT-SoVITS/tree/main) and put them into `GPT_SoVITS/pretrained_models`.

Additional: for the Audio Super Resolution model, see [how to download](./tools/AP_BWE_main/24kto48k/readme.txt).

## Todo List

- [x] **High Priority:**

  - [x] Localization in Japanese and English.
  - [x] User guide.
  - [x] Japanese and English dataset fine-tune training.

- [ ] **Features:**
  - [x] Zero-shot voice conversion (5s) / few-shot voice conversion (1min).
  - [x] TTS speaking speed control.
  - [ ] ~~Enhanced TTS emotion control.~~ Maybe use pretrained, finetuned preset GPT models for better emotion.
  - [ ] Experiment with changing SoVITS token inputs to a probability distribution over GPT vocabs (transformer latent).
  - [x] Improve the English and Japanese text frontend.
  - [ ] Develop tiny and larger-sized TTS models.
  - [x] Colab scripts.
  - [x] Try expanding the training dataset (2k hours -> 10k hours).
  - [x] Better SoVITS base model (enhanced audio quality).
  - [ ] Model mix.

## (Additional) Method for running from the command line

Use the command line to open the WebUI for UVR5:

```bash
python tools/uvr5/webui.py "<infer_device>" <is_half> <webui_port_uvr5>
```
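A hypothetical invocation (the device string, half-precision flag, and port are illustrative; 9873 is one of the ports exposed in the Docker example above):

```bash
# illustrative values only
python tools/uvr5/webui.py "cuda" True 9873
```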

<!-- If you can't open a browser, follow the format below for UVR processing. This uses mdxnet for audio processing.
```
python mdxnet.py --model --input_root --output_vocal --output_ins --agg_level --format --device --is_half_precision
```
-->

This is how the audio segmentation of the dataset is done using the command line:

```bash
python audio_slicer.py \
    --input_path "<path_to_original_audio_file_or_directory>" \
    --output_root "<directory_where_subdivided_audio_clips_will_be_saved>" \
    --threshold <volume_threshold> \
    --min_length <minimum_duration_of_each_subclip> \
    --min_interval <shortest_time_gap_between_adjacent_subclips> \
    --hop_size <step_size_for_computing_volume_curve>
```
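A hypothetical invocation (the paths and parameter values are illustrative, not recommended defaults):

```bash
# illustrative paths and values only
python audio_slicer.py \
    --input_path "raw/speaker1.wav" \
    --output_root "output/slicer_opt" \
    --threshold -34 \
    --min_length 4000 \
    --min_interval 300 \
    --hop_size 10
```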

This is how dataset ASR processing is done using the command line (Chinese only):

```bash
python tools/asr/funasr_asr.py -i <input> -o <output>
```

ASR processing for languages other than Chinese is performed through Faster Whisper.

(There are no progress bars, and GPU performance may cause delays.)

```bash
python ./tools/asr/fasterwhisper_asr.py -i <input> -o <output> -l <language> -p <precision>
```
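A hypothetical invocation (the directory names are illustrative, and `en`/`float16` are example values for the language and precision flags):

```bash
# illustrative directories and flag values only
python ./tools/asr/fasterwhisper_asr.py -i output/slicer_opt -o output/asr_opt -l en -p float16
```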
A custom list save path is supported.

## Credits

Special thanks to the following projects and contributors:

### Theoretical Research

- [ar-vits](https://github.com/innnky/ar-vits)
- [SoundStorm](https://github.com/yangdongchao/SoundStorm/tree/master/soundstorm/s1/AR)
- [vits](https://github.com/jaywalnut310/vits)
- [TransferTTS](https://github.com/hcy71o/TransferTTS/blob/master/models.py#L556)
- [contentvec](https://github.com/auspicious3000/contentvec/)
- [hifi-gan](https://github.com/jik876/hifi-gan)
- [fish-speech](https://github.com/fishaudio/fish-speech/blob/main/tools/llama/generate.py#L41)
- [f5-TTS](https://github.com/SWivid/F5-TTS/blob/main/src/f5_tts/model/backbones/dit.py)
- [shortcut flow matching](https://github.com/kvfrans/shortcut-models/blob/main/targets_shortcut.py)

### Pretrained Models

- [Chinese Speech Pretrain](https://github.com/TencentGameMate/chinese_speech_pretrain)
- [Chinese-Roberta-WWM-Ext-Large](https://huggingface.co/hfl/chinese-roberta-wwm-ext-large)
- [BigVGAN](https://github.com/NVIDIA/BigVGAN)

### Text Frontend for Inference

- [paddlespeech zh_normalization](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/paddlespeech/t2s/frontend/zh_normalization)
- [split-lang](https://github.com/DoodleBears/split-lang)
- [g2pW](https://github.com/GitYCC/g2pW)
- [pypinyin-g2pW](https://github.com/mozillazg/pypinyin-g2pW)
- [paddlespeech g2pw](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/paddlespeech/t2s/frontend/g2pw)

### WebUI Tools

- [ultimatevocalremovergui](https://github.com/Anjok07/ultimatevocalremovergui)
- [audio-slicer](https://github.com/openvpi/audio-slicer)
- [SubFix](https://github.com/cronrpc/SubFix)
- [FFmpeg](https://github.com/FFmpeg/FFmpeg)
- [gradio](https://github.com/gradio-app/gradio)
- [faster-whisper](https://github.com/SYSTRAN/faster-whisper)
- [FunASR](https://github.com/alibaba-damo-academy/FunASR)
- [AP-BWE](https://github.com/yxlu-0102/AP-BWE)

Thanks to @Naozumi520 for providing the Cantonese training set and for guidance on Cantonese-related knowledge.

## Thanks to all contributors for their efforts

<a href="https://github.com/RVC-Boss/GPT-SoVITS/graphs/contributors" target="_blank">
  <img src="https://contrib.rocks/image?repo=RVC-Boss/GPT-SoVITS" />
</a>