SWivid committed on
Commit
ac672f3
·
1 Parent(s): 9d2b8cb

Update README.md

Files changed (1): README.md (+11, -11)
README.md CHANGED
@@ -77,22 +77,22 @@ Currently support 30s for a single generation, which is the **TOTAL** length of
 Either you can specify everything in `inference-cli.toml` or override with flags. Leave `--ref_text ""` will have ASR model transcribe the reference audio automatically (use extra GPU memory). If encounter network error, consider use local ckpt, just set `ckpt_path` in `inference-cli.py`
 
 ```bash
-python inference-cli.py --model "F5-TTS" --ref_audio "tests/ref_audio/test_en_1_ref_short.wav" --ref_text "Some call me nature, others call me mother nature." --gen_text "I don't really care what you call me. I've been a silent spectator, watching species evolve, empires rise and fall. But always remember, I am mighty and enduring. Respect me and I'll nurture you; ignore me and you shall face the consequences."
+python inference-cli.py \
+--model "F5-TTS" \
+--ref_audio "tests/ref_audio/test_en_1_ref_short.wav" \
+--ref_text "Some call me nature, others call me mother nature." \
+--gen_text "I don't really care what you call me. I've been a silent spectator, watching species evolve, empires rise and fall. But always remember, I am mighty and enduring. Respect me and I'll nurture you; ignore me and you shall face the consequences."
 
-python inference-cli.py --model "E2-TTS" --ref_audio "tests/ref_audio/test_zh_1_ref_short.wav" --ref_text "对,这就是我,万人敬仰的太乙真人。" --gen_text "突然,身边一阵笑声。我看着他们,意气风发地挺直了胸膛,甩了甩那稍显肉感的双臂,轻笑道:\"我身上的肉,是为了掩饰我爆棚的魅力,否则,岂不吓坏了你们呢?\""
+python inference-cli.py \
+--model "E2-TTS" \
+--ref_audio "tests/ref_audio/test_zh_1_ref_short.wav" \
+--ref_text "对,这就是我,万人敬仰的太乙真人。" \
+--gen_text "突然,身边一阵笑声。我看着他们,意气风发地挺直了胸膛,甩了甩那稍显肉感的双臂,轻笑道:\"我身上的肉,是为了掩饰我爆棚的魅力,否则,岂不吓坏了你们呢?\""
 ```
 
 ### Gradio App
 
-You can launch a Gradio app (web interface) to launch a GUI for inference.
-
-First, make sure you have the dependencies installed (`pip install -r requirements.txt`). Then, install the Gradio app dependencies:
-
-```bash
-pip install -r requirements_gradio.txt
-```
-
-After installing the dependencies, launch the app (will load ckpt from Huggingface, you may set `ckpt_path` to local file in `gradio_app.py`). Currently load ASR model, F5-TTS and E2 TTS all in once, thus use more GPU memory than `inference-cli`.
+You can launch a Gradio app (web interface) to launch a GUI for inference (will load ckpt from Huggingface, you may set `ckpt_path` to local file in `gradio_app.py`). Currently load ASR model, F5-TTS and E2 TTS all in once, thus use more GPU memory than `inference-cli`.
 
 ```bash
 python gradio_app.py
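
Since the README states that every flag can also be set in `inference-cli.toml`, a config carrying the F5-TTS example might look like the sketch below. This is an assumption, not the file's documented schema: the key names are guessed to mirror the CLI flags (`model`, `ref_audio`, `ref_text`, `gen_text`) plus the `ckpt_path` override the README mentions; check `inference-cli.py` for the names it actually reads.

```toml
# Hypothetical inference-cli.toml sketch; keys assumed to mirror the CLI flags.
model = "F5-TTS"
ref_audio = "tests/ref_audio/test_en_1_ref_short.wav"
ref_text = "Some call me nature, others call me mother nature."
gen_text = "I don't really care what you call me."

# Per the README, ref_text = "" makes the ASR model transcribe the
# reference audio automatically (uses extra GPU memory).
# On network errors, point ckpt_path at a local checkpoint file
# (illustrative path only):
# ckpt_path = "path/to/local/checkpoint.pt"
```

Flags passed on the command line would override these values, as the diff's context line describes.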