---
dataset_info:
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: text
    dtype: string
  - name: id
    dtype: string
  splits:
  - name: train
    num_bytes: 33151646487.314
    num_examples: 152842
  - name: dev
    num_bytes: 5337234537.496
    num_examples: 29459
  - name: test_clean
    num_bytes: 371033002.14
    num_examples: 2620
  - name: test_other
    num_bytes: 356294359.949
    num_examples: 2939
  - name: dev_clean
    num_bytes: 359418612.521
    num_examples: 2703
  - name: dev_other
    num_bytes: 335212385.28
    num_examples: 2864
  - name: test_1h
    num_bytes: 5187961956.584
    num_examples: 24256
  download_size: 46937023790
  dataset_size: 45098801341.28399
configs:
- config_name: default
  data_files:
  - split: test_clean
    path: data/test_clean-*
  - split: test_other
    path: data/test_other-*
  - split: dev_clean
    path: data/dev_clean-*
  - split: dev_other
    path: data/dev_other-*
  - split: test_1h
    path: data/test_1h-*
  - split: train
    path: data/train-*
  - split: dev
    path: data/dev-*
---

### The Interspeech 2024 Challenge on Speech Processing Using Discrete Units

- Paper: https://www.isca-archive.org/interspeech_2024/chang24b_interspeech.html
- Arxiv: https://arxiv.org/abs/2406.07725
- Challenge details: https://www.wavlab.org/activities/2024/Interspeech2024-Discrete-Speech-Unit-Challenge/

To cite:

```
@inproceedings{chang24b_interspeech,
  title     = {The Interspeech 2024 Challenge on Speech Processing Using Discrete Units},
  author    = {Xuankai Chang and Jiatong Shi and Jinchuan Tian and Yuning Wu and Yuxun Tang and Yihan Wu and Shinji Watanabe and Yossi Adi and Xie Chen and Qin Jin},
  year      = {2024},
  booktitle = {Interspeech 2024},
  pages     = {2559--2563},
  doi       = {10.21437/Interspeech.2024-1878},
  issn      = {2958-1796},
}
```