Datasets: spark-tts
Commit 2a9c56f · "update readme"
Parent(s): 07c4108
Files changed: .gitattributes (+1, -0) · README.md (+67, -1)
.gitattributes
CHANGED
@@ -35,6 +35,7 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+*.jsonl filter=lfs diff=lfs merge=lfs -text
 # Audio files - uncompressed
 *.pcm filter=lfs diff=lfs merge=lfs -text
 *.sam filter=lfs diff=lfs merge=lfs -text
README.md
CHANGED
pretty_name: voxbox
size_categories:
- 10M<n<100M
---

# VoxBox

This dataset is a curated collection of bilingual speech corpora annotated with clean transcriptions and rich metadata, including age, gender, and emotion.
## Dataset Structure

```bash
.
├── audios/
│   ├── aishell-3/            # Audio files (organised by sub-corpus)
│   └── ...
└── metadata/
    ├── aishell-3.jsonl
    ├── casia.jsonl
    ├── commonvoice_cn.jsonl
    ├── ...
    └── wenetspeech4tts.jsonl  # JSONL metadata files
```

Each JSONL file corresponds to a specific corpus and contains one metadata record per audio sample.
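Because every metadata file is line-delimited JSON, per-corpus statistics can be gathered with nothing beyond the standard library. The sketch below counts records per sub-corpus; the throwaway demo directory and its file names are illustrative, not part of the dataset.

```python
import json
import os
import tempfile

def records_per_corpus(metadata_dir):
    """Count metadata records (one JSON object per line) in each *.jsonl file."""
    counts = {}
    for name in sorted(os.listdir(metadata_dir)):
        if not name.endswith(".jsonl"):
            continue
        path = os.path.join(metadata_dir, name)
        with open(path, encoding="utf-8") as f:
            counts[name[: -len(".jsonl")]] = sum(1 for line in f if line.strip())
    return counts

# Demo on a throwaway directory with two tiny files (names are illustrative).
root = tempfile.mkdtemp()
for corpus, n in [("aishell-3", 2), ("casia", 1)]:
    with open(os.path.join(root, f"{corpus}.jsonl"), "w", encoding="utf-8") as f:
        for i in range(n):
            f.write(json.dumps({"index": f"{corpus}_{i}"}) + "\n")

print(records_per_corpus(root))  # {'aishell-3': 2, 'casia': 1}
```

On the real dataset you would point `records_per_corpus` at the `metadata/` directory instead of the demo directory.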
## Metadata Format

Each line in the `*.jsonl` files is a JSON object describing one audio sample. Below is a typical example:

```json
{
  "index": "VCTK_0000044280",
  "split": "train",
  "language": "en",
  "age": "Youth-Adult",
  "gender": "female",
  "emotion": "UNKNOWN",
  "pitch": 180.626,
  "pitch_std": 0.158,
  "speed": 4.2,
  "duration": 3.84,
  "speech_duration": 3.843,
  "syllable_num": 16,
  "text": "Clearly, the need for a personal loan is written in the stars.",
  "syllables": "K-L-IH1-R L-IY0 DH-AH0 N-IY1-D F-AO1 R-AH0 P-ER1 S-IH0 N-IH0-L L-OW1 N-IH1 Z-R-IH1 T-AH0 N-IH0-N DH-AH0-S T-AA1-R-Z",
  "wav_path": "vctk/VCTK_0000044280.flac"
}
```

The corresponding audio file is located inside the extracted `.tar.gz` archive.
59 |
+
|
60 |
+
**Key Fields:**
|
61 |
+
|
62 |
+
- `index`: Unique identifier for the audio sample.
|
63 |
+
- `split`: Train/test split.
|
64 |
+
- `language`: Language of the audio sample. Currently only English and Chinese are supported.
|
65 |
+
- `age`, `gender`, `emotion`: Speaker and utterance attributes
|
66 |
+
- `pitch`, `pitch_std`, `speed`: Acoustic features
|
67 |
+
- `duration`: Duration of the audio sample in seconds
|
68 |
+
- `speech_duration`: Duration of the speech in seconds by excluding silence in both ends.
|
69 |
+
- `syllable_num`: Number of syllables in the utterance
|
70 |
+
- `text`: Transcription of the utterance
|
71 |
+
- `syllables`: Syllable-level transcription
|
72 |
+
- `wav_path`: Path to the audio file
|
73 |
+
|
74 |
+
|
75 |
+
## 📌 Licence & Attribution
|
76 |
+
|
77 |
+
Please refer to the original licenses of each sub-corpus. This dataset merely aggregates and annotates the metadata in a unified structure for research purposes.
|