Audio is embedded as raw bytes (decodable with soundfile). We chunked long recordings into smaller audio files based on the start and stop supervision from the different manifests of the datasets (this is necessary for HuggingFace). Language ID with a [SpeechBrain language ID model](https://huggingface.co/speechbrain/lang-id-voxlingua107-ecapa) was performed on Yodas.
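As an illustration of the decoding step (not part of the dataset tooling): in practice one would call `soundfile.read(io.BytesIO(raw))` on the embedded bytes; the sketch below uses only the standard-library `wave` module and assumes WAV-encoded 16-bit PCM bytes.

```python
import io
import struct
import wave

def decode_wav_bytes(raw: bytes):
    """Return (samples, sample_rate) from WAV-encoded bytes.

    Stand-in for soundfile.read(io.BytesIO(raw)); assumes 16-bit PCM.
    """
    with wave.open(io.BytesIO(raw), "rb") as w:
        rate = w.getframerate()
        frames = w.readframes(w.getnframes())
        # Each 16-bit PCM sample is 2 bytes, little-endian signed.
        samples = list(struct.unpack("<%dh" % (len(frames) // 2), frames))
    return samples, rate
```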
### Count-based Language Models and Lexicon

The dataset includes three ARPA-format count-based language models trained on the text of the *train.large* subset of Loquacious:

| LM | Size | Binary Size | Dev. Perplexity |
| --- | --- | --- | --- |
| 3-gram pruned | 331MB | 721MB | 222 |
| 4-gram pruned | 538MB | 1.2GB | 202 |
| 4-gram unpruned | 2.4GB | 4.7GB | 193 |

Each language model is limited to a vocabulary containing 216k words.
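As a toy illustration of what "count-based" means here (not a reproduction of the released models, which are ARPA files typically queried with a toolkit such as KenLM), a bigram model with add-one smoothing and its perplexity can be sketched as:

```python
import math
from collections import Counter

def train_bigram(sentences):
    """Collect unigram and bigram counts with <s>/</s> sentence markers."""
    uni, bi = Counter(), Counter()
    vocab = set()
    for s in sentences:
        toks = ["<s>"] + s.split() + ["</s>"]
        vocab.update(toks)
        uni.update(toks)
        bi.update(zip(toks, toks[1:]))
    return uni, bi, vocab

def perplexity(sentence, uni, bi, vocab):
    """Add-one smoothed bigram perplexity of a single sentence."""
    toks = ["<s>"] + sentence.split() + ["</s>"]
    v = len(vocab)
    logp = 0.0
    for a, b in zip(toks, toks[1:]):
        # P(b | a) with add-one smoothing over the vocabulary.
        p = (bi[(a, b)] + 1) / (uni[a] + v)
        logp += math.log(p)
    # Perplexity = exp(-average log-probability per predicted token).
    return math.exp(-logp / (len(toks) - 1))
```

Lower perplexity on held-out text means the model predicts it better, which is what the Dev. Perplexity column above measures for the released models.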

We also provide a pronunciation lexicon using ARPA-style phonemes, containing one or more pronunciations for each word in the vocabulary. The original pronunciations are based on [CMUDict 0.7b](http://www.speech.cs.cmu.edu/cgi-bin/cmudict). Missing pronunciations were generated using [Sequitur](https://github.com/sequitur-g2p/sequitur-g2p), for which we also provide the trained G2P model.
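A minimal sketch of consuming such a lexicon, assuming the common CMUDict text layout (one `WORD PH1 PH2 ...` entry per line, alternatives marked `WORD(2)`, comments starting with `;;;`); the released file's exact format may differ:

```python
from collections import defaultdict

def parse_lexicon(lines):
    """Map each word to its list of pronunciations (phoneme sequences)."""
    lex = defaultdict(list)
    for line in lines:
        line = line.strip()
        if not line or line.startswith(";;;"):  # CMUDict comment lines
            continue
        word, *phones = line.split()
        # CMUDict marks alternative pronunciations as WORD(2), WORD(3), ...
        if word.endswith(")") and "(" in word:
            word = word[:word.rindex("(")]
        lex[word].append(phones)
    return dict(lex)
```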
#### Referencing the Loquacious Set and SpeechBrain
```
|