{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:11:47.775178Z"
},
"title": "Robustness of end-to-end Automatic Speech Recognition Models -A Case Study using Mozilla DeepSpeech",
"authors": [
{
"first": "Aashish",
"middle": [],
"last": "Agarwal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Lab University of Duisburg-Essen Duisburg",
"location": {
"country": "Germany"
}
},
"email": ""
},
{
"first": "Torsten",
"middle": [],
"last": "Zesch",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Lab University of Duisburg-Essen Duisburg",
"location": {
"country": "Germany"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "When evaluating the performance of automatic speech recognition models, usually word error rate within a certain dataset is used. Special care must be taken in understanding the dataset in order to report realistic performance numbers. We argue that many performance numbers reported probably underestimate the expected error rate. We conduct experiments controlling for selection bias, gender as well as overlap (between training and test data) in content, voices, and recording conditions. We find that content overlap has the biggest impact, but other factors like gender also play a role.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "When evaluating the performance of automatic speech recognition models, usually word error rate within a certain dataset is used. Special care must be taken in understanding the dataset in order to report realistic performance numbers. We argue that many performance numbers reported probably underestimate the expected error rate. We conduct experiments controlling for selection bias, gender as well as overlap (between training and test data) in content, voices, and recording conditions. We find that content overlap has the biggest impact, but other factors like gender also play a role.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Automatic Speech Recognition (ASR) has made striking progress in recent years with the deployment of increasingly large deep neural networks (Zhang et al., 2017; Sperber et al., 2018; Chang et al., 2019; Zhang et al., 2020) . Now when you see a shiny new model with an error rate reported to be below 10%, are you likely to get the same error rate on your data? Many reported results probably underestimate the word error rate (WER) to be expected when a model is applied outside of its exact training conditions (Likhomanenko et al., 2020) For example, in many datasets, there is a large imbalance between male and female voices (usually not enough female data). When evaluating only within such a dataset and not controlling for gender, the model can optimize overall WER by performing worse for females (Tatman, 2017 ). If the model is eventually applied in a setting where males and females are equally likely to use the system, WER will be much higher.",
"cite_spans": [
{
"start": 141,
"end": 161,
"text": "(Zhang et al., 2017;",
"ref_id": "BIBREF11"
},
{
"start": 162,
"end": 183,
"text": "Sperber et al., 2018;",
"ref_id": "BIBREF8"
},
{
"start": 184,
"end": 203,
"text": "Chang et al., 2019;",
"ref_id": "BIBREF2"
},
{
"start": 204,
"end": 223,
"text": "Zhang et al., 2020)",
"ref_id": "BIBREF10"
},
{
"start": 513,
"end": 540,
"text": "(Likhomanenko et al., 2020)",
"ref_id": "BIBREF4"
},
{
"start": 806,
"end": 819,
"text": "(Tatman, 2017",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Other issues that might lead to underestimating error rate are overlaps between the train and test sets regarding content, voices or recording conditions. Another issue to be considered is selection bias when the training process can select samples for training and testing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A really robust model should generalize beyond these factors, but we find that current models trained on the available datasets do not. We argue that this is partly due to the focus on reporting improvements in a within-dataset setting. It just sounds better to report a 4.3% WER on the standard dataset instead of a more realistic number (which we show can be several times higher). However, as most real-world applications are unlikely to directly reflect the properties of a specific dataset, most users would be better off with more robust models and a realistic estimate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most of the end-to-end speech recognition systems for English use the Librispeech (Panayotov et al., 2015) corpus, which has pre-defined data splits trying to avoid the issues discussed above. 1 For German data, standard splits are not fully established leading to large differences in WER between datasets, e.g. Agarwal and Zesch (2019) report WER in the range between 15 and 79.",
"cite_spans": [
{
"start": 82,
"end": 106,
"text": "(Panayotov et al., 2015)",
"ref_id": "BIBREF7"
},
{
"start": 313,
"end": 337,
"text": "Agarwal and Zesch (2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We argue that this is also a challenge for other languages, where standard data splits are not defined, including Arabic (Menacer et al., 2017), Kazak (Mamyrbayev et al., 2019) , Bengali (Islam et al., 2019) , and Russian (Adams et al., 2019) .",
"cite_spans": [
{
"start": 151,
"end": 176,
"text": "(Mamyrbayev et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 187,
"end": 207,
"text": "(Islam et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 222,
"end": 242,
"text": "(Adams et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We thus perform experiments investigating the relative impact of dataset properties in order to give practical advice on how to train the models. This might also have consequences for the way speech datasets are collected. For data-rich languages like English, these issues can somewhat be offset by using more training data, so that a model might still be able to generalize well across different conditions. We thus perform our experiments on German, which -at least when it comes to the amount of publicly available, transcribed speech data-has to be counted as an under-resourced language. We perform our experiments using the endto-end speech recognition toolkit Mozilla Deep-Speech. 2 Our results probably generalize to other neural architecture similar to DeepSpeech.",
"cite_spans": [
{
"start": 689,
"end": 690,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We make our experimental setup publicly available (URL removed for review).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As we argue that dataset properties play such a big role, we will first have a look at the available training data collections. While for English or Chinese quite large datasets are publicly available, all German datasets are of limited size (see Table 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 247,
"end": 254,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Dataset Properties",
"sec_num": "2"
},
{
"text": "However, only focusing on the overall size is misleading anyway as e.g. even one million hours of one person reading the same sentence over and over again would not result in a usable model. We thus also look at other properties. A dataset like M-AILABS with very few voices is unlikely to generalize well to new voices. On the other hand, a dataset like Mozilla Common Voice (MCV) with thousands of voices easily reaches the largest overall size in our set, but as most voices repeat the same sentences, the dataset does not capture the same breadth of lexical material. As a consequence, the size of unique content in the MCV dataset is rather small, but not as small as the TUDA-De dataset where each sample is recorded by 5 different microphones bringing the unique size down to 7 hours (from 184 hours in total).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Properties",
"sec_num": "2"
},
{
"text": "We thus argue that the question Can I train a robust model with [XYZ] hours of data? cannot be answered without estimating the relative influence that each of these factors is going to have on the training process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Properties",
"sec_num": "2"
},
{
"text": "As we are not aware that the gender balance of the available German datasets has been analyzed in detail before, we provide the statistics in Table 2. We found that across almost all the datasets, except M-Ailabs, the number of male voices is predominantly high. For example, in TUDA-De, male to female ratio is 3:1 and in MCV it is 9:1. This means that male voices form the majority of the corpora. Thus such corpora might not be able to generalise well in realistic settings. Projects 2 https://github.com/mozilla/DeepSpeech collecting speech samples from volunteers should try to recruit more women and in general a more diverse set of dialects etc. When designing a speech corpus, keeping diversity (not only regarding gender) in mind would be beneficial.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Voice Gender",
"sec_num": "2.1"
},
{
"text": "2 https://github.com/mozilla/DeepSpeech",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Having a dataset with multiple voices, varied recording conditions, and little content redundancy does not automatically guarantee a robust model. Care has to be taken to separate cases between train, validation and test. Figure 1 visualizes the issue in a general way. A fixed data split (left) should separate dimensions are as much as possible, e.g. not have the same voices or the same content in train and test (right).",
"cite_spans": [],
"ref_spans": [
{
"start": 222,
"end": 230,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Data Splits",
"sec_num": "2.2"
},
{
"text": "Of course, the severity of the issue depends on the usage scenario. If all one wants to do is recognizing spoken digits from 0 to 10, there is no harm with having samples of all digits in train and in the test, as in the application scenario those digits are all to care about. However, if the goal is a robust, domain-independent model, we need to control for overlap in sentences between train and test in order to obtain a realistic error rate estimate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Splits",
"sec_num": "2.2"
},
{
"text": "An issue indirectly related to dataset properties is that frameworks often perform some kind of preprocessing and might filter out some samples in the process. For example, in Figure 2 we show the length distribution of samples in each dataset. Without looking at other dataset properties it might look useful to get rid of very short or very long samples and to only train (and test!) a model using samples close to the peak of the distribution. However, this might introduce a selection bias, where we reduce WER by simply discarding all the hard cases. This leads to excellent withindataset results, but poor cross-dataset results.",
"cite_spans": [],
"ref_spans": [
{
"start": 176,
"end": 184,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Selection Bias",
"sec_num": "2.3"
},
{
"text": "For our experiments, we used the latest released version of Mozilla DeepSpeech (v0.6.0). 3 We choose the best hyperparameters 4 as described in (Agarwal and Zesch, 2019) . The models are trained and tested on a compute server having 56 Intel(R) Xeon(R) Gold 5120 CPUs @ 2.20GHz, 3 Nvidia Quadro RTX 6000 with 24GB of RAM each. The typical training time on a single dataset under this setup was in the range of 2 hours. We ran our experiments for approximately 200 hours, which is equivalent to about 50 kg of CO 2 . 5",
"cite_spans": [
{
"start": 89,
"end": 90,
"text": "3",
"ref_id": null
},
{
"start": 144,
"end": 169,
"text": "(Agarwal and Zesch, 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments & Results",
"sec_num": "3"
},
{
"text": "As a baseline, we simply take all data and randomly split the data into train/dev/test, i.e. we do not take any of the dataset properties discussed above into account. This is the setup that is most likely used whenever not discussed differently in Table 3 gives an overview of the WER obtained in that way (rows in italics). Given the limited amount of training data, the results are in the expected range and generally similar to previously reported results (Agarwal and Zesch, 2019) . However, as noted above, those numbers are probably underestimating the true error rate. We thus also conduct cross-domain experiments, as testing on a dataset different from training is a natural way of checking the model robustness without any overlap at all. If the WER reported on the dataset itself is a realistic measure of performance, we should see cross-domain results that are similar. However Table 3 shows that WER always dramatically rises -mostly to the point that the model is not being useful anymore. MCV seems to generalize somewhat better than TUDA-De or M-AILABS, which indicates that many voices are more important for model robustness than more unique training samples.",
"cite_spans": [
{
"start": 460,
"end": 485,
"text": "(Agarwal and Zesch, 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 249,
"end": 256,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 892,
"end": 899,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Baseline: All data, random split",
"sec_num": "3.1"
},
{
"text": "In the remainder of this section, we explore which other factors are influencing results the most. Table 4 compares the baseline results with the setup when there is no content overlap (i.e. exact same utterance) between the data splits. Note that we use the same amount of data in both conditions, only the splits are different. M-AILABS is not affected, as there is no content overlap to begin with. 6 This nicely shows that the results obtained for a specific dataset are replicable in general. The other datasets are heavily effected showing that content overlap is the main reason for underestimating the true error rate. As the MCV dataset has many voices and microphones, the 43.9 WER is probably already a robust estimate (cf. cross-domain results in Table 3 ). Table 5 first shows the results without content overlap (these are the same numbers as in Table 4) and then the results without voice overlap. The WER on M-AILABS, that only has very few voices, goes up to over 70% well into the unusable range. Results for TUDA-De go down, but only as we are not controlling for content overlap anymore. This is another piece of evidence that content is actually more important than voices, as it has a relatively larger impact. If we control for both (last column), all models perform approximately on the same abysmal level.",
"cite_spans": [
{
"start": 402,
"end": 403,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 99,
"end": 106,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 759,
"end": 766,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 770,
"end": 777,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Baseline: All data, random split",
"sec_num": "3.1"
},
{
"text": "TUDA-De is the only dataset where we can easily control recording conditions in the form of microphones used. 7 We can use 88h for this experiment and use 3 mics for training and 1 for dev and test each. Without content overlap, we obtain a WER of 73.8, while without mic overlap it is 53.1. Content overlap is thus the much more important factor. Consequently removing content and mic overlap only slightly increases WER to 77.4. 6 The small difference is due to the independent randomization when re-running an experiment.",
"cite_spans": [
{
"start": 110,
"end": 111,
"text": "7",
"ref_id": null
},
{
"start": 431,
"end": 432,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Recording conditions",
"sec_num": "3.4"
},
{
"text": "7 Actually 'recording conditions' is a much wider variable, but not present as meta-data in most datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recording conditions",
"sec_num": "3.4"
},
{
"text": "As we have shown, the influence of content overlap is rather strong and likely to overshadow any gender effect to be found in the data. We thus isolate the gender variable by creating a sub-corpus where there is not content overlap between train and test and where the test set for male and female voices contains the same sentences. We find that training on male yields 63.5 WER for males and 87.4 for females showing the expected gender gap. If we train only on female voices, we get 55.2 WER for females and 88.3 for males.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gender",
"sec_num": "3.5"
},
{
"text": "Our study shows that the robustness of end-toend speech recognition models heavily depends on dataset splits. Content overlap is the main reason for underestimating the true error rate. Especially in datasets that are collected in a crowd-sourced fashion, where many voices read the same sentences, or when multiple microphones are used, extra care has to be taken to avoid information leakage from train to test. However, other factors like gender balance or recording conditions are also contributing to the effect.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "4"
},
{
"text": "However, note that over time fixed data splits lead to overfitting the methods on the dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Massively multilingual adversarial speech recognition",
"authors": [
{
"first": "Oliver",
"middle": [],
"last": "Adams",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Wiesner",
"suffix": ""
},
{
"first": "Shinji",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oliver Adams, Matthew Wiesner, Shinji Watanabe, and David Yarowsky. 2019. Massively multilingual ad- versarial speech recognition.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "German end-to-end speech recognition based on deepspeech",
"authors": [
{
"first": "Aashish",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "Torsten",
"middle": [],
"last": "Zesch",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 15th Conference on Natural Language Processing (KONVENS 2019)",
"volume": "",
"issue": "",
"pages": "111--119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aashish Agarwal and Torsten Zesch. 2019. German end-to-end speech recognition based on deepspeech. In Proceedings of the 15th Conference on Natu- ral Language Processing (KONVENS 2019), pages 111-119, Erlangen, Germany. GSCL.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Mimo-speech: End-to-end multi-channel multispeaker speech recognition",
"authors": [
{
"first": "Xuankai",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Wangyou",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yanmin",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [
"Le"
],
"last": "Roux",
"suffix": ""
},
{
"first": "Shinji",
"middle": [],
"last": "Watanabe",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuankai Chang, Wangyou Zhang, Yanmin Qian, Jonathan Le Roux, and Shinji Watanabe. 2019. Mimo-speech: End-to-end multi-channel multi- speaker speech recognition.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A speech recognition system for bengali language using recurrent neural network",
"authors": [
{
"first": "J",
"middle": [],
"last": "Islam",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mubassira",
"suffix": ""
},
{
"first": "M",
"middle": [
"R"
],
"last": "Islam",
"suffix": ""
},
{
"first": "A",
"middle": [
"K"
],
"last": "Das",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 IEEE 4th International Conference on Computer and Communication Systems (ICCCS)",
"volume": "",
"issue": "",
"pages": "73--76",
"other_ids": {
"DOI": [
"10.1109/CCOMS.2019.8821629"
]
},
"num": null,
"urls": [],
"raw_text": "J. Islam, M. Mubassira, M. R. Islam, and A. K. Das. 2019. A speech recognition system for bengali language using recurrent neural network. In 2019 IEEE 4th International Conference on Computer and Communication Systems (ICCCS), pages 73-76.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Rethinking evaluation in ASR: are our models robust enough? CoRR",
"authors": [
{
"first": "Tatiana",
"middle": [],
"last": "Likhomanenko",
"suffix": ""
},
{
"first": "Qiantong",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Vineel",
"middle": [],
"last": "Pratap",
"suffix": ""
},
{
"first": "Paden",
"middle": [],
"last": "Tomasello",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Kahn",
"suffix": ""
},
{
"first": "Gilad",
"middle": [],
"last": "Avidov",
"suffix": ""
},
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Synnaeve",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tatiana Likhomanenko, Qiantong Xu, Vineel Pratap, Paden Tomasello, Jacob Kahn, Gilad Avidov, Ronan Collobert, and Gabriel Synnaeve. 2020. Rethinking evaluation in ASR: are our models robust enough? CoRR, abs/2010.11745.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Automatic recognition of kazakh speech using deep neural networks",
"authors": [
{
"first": "Orken",
"middle": [],
"last": "Mamyrbayev",
"suffix": ""
},
{
"first": "Mussa",
"middle": [],
"last": "Turdalyuly",
"suffix": ""
},
{
"first": "Nurbapa",
"middle": [],
"last": "Mekebayev",
"suffix": ""
},
{
"first": "Keylan",
"middle": [],
"last": "Alimhan",
"suffix": ""
},
{
"first": "Aizat",
"middle": [],
"last": "Kydyrbekova",
"suffix": ""
},
{
"first": "Tolganay",
"middle": [],
"last": "Turdalykyzy",
"suffix": ""
}
],
"year": 2019,
"venue": "Intelligent Information and Database Systems",
"volume": "",
"issue": "",
"pages": "465--474",
"other_ids": {
"DOI": [
"https://link.springer.com/chapter/10.1007/978-3-030-14802-7_40"
]
},
"num": null,
"urls": [],
"raw_text": "Orken Mamyrbayev, Mussa Turdalyuly, Nurbapa Mekebayev, Keylan Alimhan, Aizat Kydyrbekova, and Tolganay Turdalykyzy. 2019. Automatic recog- nition of kazakh speech using deep neural networks. In Intelligent Information and Database Systems, pages 465-474, Cham. Springer.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An enhanced automatic speech recognition system for Arabic",
"authors": [
{
"first": "Mohamed",
"middle": [
"Amine"
],
"last": "Menacer",
"suffix": ""
},
{
"first": "Odile",
"middle": [],
"last": "Mella",
"suffix": ""
},
{
"first": "Dominique",
"middle": [],
"last": "Fohr",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Jouvet",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Langlois",
"suffix": ""
},
{
"first": "Kamel",
"middle": [],
"last": "Smaili",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Third Arabic Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "157--165",
"other_ids": {
"DOI": [
"10.18653/v1/W17-1319"
]
},
"num": null,
"urls": [],
"raw_text": "Mohamed Amine Menacer, Odile Mella, Dominique Fohr, Denis Jouvet, David Langlois, and Kamel Smaili. 2017. An enhanced automatic speech recog- nition system for Arabic. In Proceedings of the Third Arabic Natural Language Processing Work- shop, pages 157-165, Valencia, Spain.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Librispeech: An ASR corpus based on public domain audio books",
"authors": [
{
"first": "Vassil",
"middle": [],
"last": "Panayotov",
"suffix": ""
},
{
"first": "Guoguo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2015,
"venue": "2015 IEEE International Conference on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "5206--5210",
"other_ids": {
"DOI": [
"10.1109/ICASSP.2015.7178964"
]
},
"num": null,
"urls": [],
"raw_text": "Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: An ASR corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2015, pages 5206-5210. IEEE.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Sebastian St\u00fcker, and Alex Waibel",
"authors": [
{
"first": "Matthias",
"middle": [],
"last": "Sperber",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Niehues",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "St\u00fcker",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthias Sperber, Jan Niehues, Graham Neubig, Se- bastian St\u00fcker, and Alex Waibel. 2018. Self- attentional acoustic models.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Gender and dialect bias in YouTube's automatic captions",
"authors": [
{
"first": "Rachael",
"middle": [],
"last": "Tatman",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First ACL Workshop on Ethics in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "53--59",
"other_ids": {
"DOI": [
"10.18653/v1/W17-1606"
]
},
"num": null,
"urls": [],
"raw_text": "Rachael Tatman. 2017. Gender and dialect bias in YouTube's automatic captions. In Proceedings of the First ACL Workshop on Ethics in Natural Lan- guage Processing, pages 53-59, Valencia, Spain.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Transformer transducer: A streamable speech recognition model with transformer encoders and rnn-t loss",
"authors": [
{
"first": "Qian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Han",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hasim",
"middle": [],
"last": "Sak",
"suffix": ""
},
{
"first": "Anshuman",
"middle": [],
"last": "Tripathi",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Mcdermott",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "Shankar",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qian Zhang, Han Lu, Hasim Sak, Anshuman Tripathi, Erik McDermott, Stephen Koo, and Shankar Ku- mar. 2020. Transformer transducer: A streamable speech recognition model with transformer encoders and rnn-t loss.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Towards end-to-end speech recognition with deep convolutional neural networks",
"authors": [
{
"first": "Ying",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Pezeshki",
"suffix": ""
},
{
"first": "Philemon",
"middle": [],
"last": "Brakel",
"suffix": ""
},
{
"first": "Saizheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Cesar",
"middle": [],
"last": "Laurent",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ying Zhang, Mohammad Pezeshki, Philemon Brakel, Saizheng Zhang, Cesar Laurent Yoshua Bengio, and Aaron Courville. 2017. Towards end-to-end speech recognition with deep convolutional neural networks.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Visualization of data split issue Figure 2: Distribution of sample length",
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"text": "German datasets used in this study",
"num": null,
"html": null,
"content": "<table><tr><td/><td colspan=\"2\">TUDA-De</td><td colspan=\"2\">MCV</td><td colspan=\"2\">M-AILABS</td></tr><tr><td>Gender</td><td>#</td><td>[h]</td><td>#</td><td>[h]</td><td>#</td><td>[h]</td></tr><tr><td>Male</td><td colspan=\"4\">129 123 1555 215</td><td>1</td><td>40</td></tr><tr><td>Female</td><td>50</td><td>61</td><td>173</td><td>33</td><td>4</td><td>147</td></tr><tr><td>Unknown</td><td>-</td><td colspan=\"2\">-3122</td><td>73</td><td>?</td><td>46</td></tr><tr><td>male:female</td><td>3:1</td><td>2:1</td><td>9:1</td><td colspan=\"2\">7:1 1:4</td><td>1:4</td></tr></table>",
"type_str": "table"
},
"TABREF2": {
"text": "",
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF4": {
"text": "Cross-domain results",
"num": null,
"html": null,
"content": "<table><tr><td>Dataset</td><td colspan=\"3\">[h] Baseline No content</td></tr><tr><td>TUDA-De</td><td>184</td><td>14.9</td><td>66.9</td></tr><tr><td>MCV</td><td>321</td><td>26.8</td><td>43.9</td></tr><tr><td colspan=\"2\">M-AILABS 233</td><td>17.5</td><td>17.1</td></tr></table>",
"type_str": "table"
},
"TABREF5": {
"text": "WER without content overlap",
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF7": {
"text": "Results with No Voice and No Sentence Overlap",
"num": null,
"html": null,
"content": "<table><tr><td>3.2 Content overlap</td></tr></table>",
"type_str": "table"
}
}
}
}