{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:33:45.194642Z"
},
"title": "Fully Convolutional ASR for Less-Resourced Endangered Languages",
"authors": [
{
"first": "Bao",
"middle": [],
"last": "Thai",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rochester Institute of Technology",
"location": {
"settlement": "Rochester",
"region": "NY",
"country": "USA"
}
},
"email": ""
},
{
"first": "Robbie",
"middle": [],
"last": "Jimerson",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rochester Institute of Technology",
"location": {
"settlement": "Rochester",
"region": "NY",
"country": "USA"
}
},
"email": ""
},
{
"first": "Ray",
"middle": [],
"last": "Ptucha",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rochester Institute of Technology",
"location": {
"settlement": "Rochester",
"region": "NY",
"country": "USA"
}
},
"email": ""
},
{
"first": "Emily",
"middle": [],
"last": "Prud'hommeaux",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rochester Institute of Technology",
"location": {
"settlement": "Rochester",
"region": "NY",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The application of deep learning to automatic speech recognition (ASR) has yielded dramatic accuracy increases for languages with abundant training data, but languages with limited training resources have yet to see accuracy improvements on this scale. In this paper, we compare a fully convolutional approach for acoustic modelling in ASR with a variety of established acoustic modeling approaches. We evaluate our method on Seneca, a low-resource endangered language spoken in North America. Our method yields word error rates up to 40% lower than those reported using both standard GMM-HMM approaches and established deep neural methods, with a substantial reduction in training time. These results show particular promise for languages like Seneca that are both endangered and lack extensive documentation.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "The application of deep learning to automatic speech recognition (ASR) has yielded dramatic accuracy increases for languages with abundant training data, but languages with limited training resources have yet to see accuracy improvements on this scale. In this paper, we compare a fully convolutional approach for acoustic modelling in ASR with a variety of established acoustic modeling approaches. We evaluate our method on Seneca, a low-resource endangered language spoken in North America. Our method yields word error rates up to 40% lower than those reported using both standard GMM-HMM approaches and established deep neural methods, with a substantial reduction in training time. These results show particular promise for languages like Seneca that are both endangered and lack extensive documentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Improvements and breakthroughs in deep learning for automatic speech recognition (ASR) have resulted in significant improvements in ASR performance in high-resource languages such as English and Mandarin (Hinton et al., 2012; Hannun et al., 2014; Chan et al., 2016; Audhkhasi et al., 2018; Chiu et al., 2018) . Such methods, however, require very large volumes of labelled training data to achieve these notable results. Most languages of the world, even those with tens of millions of speakers, do not have the quantities of data required to train such systems. The data sparsity problem is even more dire for the many indigenous languages that have historically been undocumented for political or cultural reasons. Deep learning ASR systems for languages with truly limited labelled training data typically incorporate additional training resources such as cross-lingual acoustic models or in-domain synthetic acoustic data to begin to approach the word error rates found using traditional hidden Markov model (HMM) and Gaussian mixture model (GMM) frameworks. While convolutional neural networks (CNNs) have demonstrated superior performance on vision tasks such as image classification, image segmentation, and object recognition, deep learning for ASR has relied heavily upon variants of recurrent neural networks (RNNs). In RNNs, information from timesteps before, and after in the case of bidirectional networks, is used in making the decision of the current timestep. CNNs are excellent at extracting regional patterns but typically require inputs to be of fixed size. However, as seen in object detection and image segmentation applications, fully convolutional variations can operate on multiple locations simultaneously and allow variable-size inputs. In this paper, we present a convolutional acoustic model for ASR in low-resource conditions. We demonstrate our approach using a corpus of 10 hours of recordings of the Seneca language, a critically endangered, morphologically complex language spoken in the northeastern part of North America. Our model reduces the computational cost in terms of number of parameters while still capturing enough temporal dependencies to make accurate predictions. We find that our fully convolutional acoustic model yields significant accuracy improvements over both deep recurrent and HMM/GMM models. To demonstrate the robustness of our approach, we additionally apply our framework to Iban, an unrelated low-resource language with a phonetic inventory roughly the size of Seneca's but with a less complex morphology. Our main contributions are as follows: 1) We introduce a deep convolutional architecture optimized for low-resource scenarios that captures feature-rich audio data over a broad temporal receptive field; 2) We utilize a fully convolutional framework for arbitrary length sequence processing; and 3) We show the effectiveness of utilizing transfer learning and data augmentation for further reducing word and character error rates.",
"cite_spans": [
{
"start": 204,
"end": 225,
"text": "(Hinton et al., 2012;",
"ref_id": "BIBREF13"
},
{
"start": 226,
"end": 246,
"text": "Hannun et al., 2014;",
"ref_id": "BIBREF11"
},
{
"start": 247,
"end": 265,
"text": "Chan et al., 2016;",
"ref_id": "BIBREF5"
},
{
"start": 266,
"end": 289,
"text": "Audhkhasi et al., 2018;",
"ref_id": "BIBREF2"
},
{
"start": 290,
"end": 308,
"text": "Chiu et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 1012,
"end": 1017,
"text": "(HMM)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "When given sufficient in-domain monolingual training data, deep neural network methods for ASR often perform significantly better than traditional methods based on HMMs and GMMs (Hinton et al., 2012; Graves et al., 2013; Hannun et al., 2014; Amodei et al., 2016; Chan et al., 2016; Zhang et al., 2017; Chiu et al., 2018; Agenbag and Niesler, 2019) . Common approaches for deep learning ASR rely on RNNs: sequence-to-sequence models like that in Chan et al. (2016) use RNNs to generate a latent representation of the utterance before decoding with RNNs, while DeepSpeech 1 and DeepSpeech 2 (Hannun et al., 2014; Amodei et al., 2016) use RNNs to capture temporal dependencies before making predictions for each timestep. Methods that produce characters, such as versions of Deep-Speech, currently use Connectionist Temporal Classification (CTC) (Graves et al., 2006) to reduce streams of characters to plausible words by combining consecutive similar characters and pauses during speech. Convolutional architectures have achieved remarkable results in computer vision tasks such as image classification (Szegedy et al., 2015; Xie et al., 2017) . Szegedy et al. (Szegedy et al., 2015) introduced the concept of an Inception block which consists of multiple filter sizes in a layer to capture different levels of regional dependencies. This concept can be applied to sequential data like speech by using filters with different widths to simultaneously capture different temporal dependencies. The Inception network introduces 1\u00d7 bottleneck filters to reduce the number of parameters in a model. Xie et al. (Xie et al., 2017) use Inception-like blocks but with similar filter sizes while adding skip connections similar to ResNet to allow for better gradient flow. Previous experiments have shown that transfer learning from a model trained on resource-rich languages can improve the performance of ASR for low-resource languages (Gales et al., 2014; Imseng et al., 2014) . Using synthetic data has also been found to yield improvements in true lowresource, artificially low-resource, and resource-rich conditions (T\u00fcske et al., 2014; Billa, 2018; Wiesner et al., 2018) . Carmantini et al. (Carmantini et al., 2019) introduced sample overgeneration during initialization for low-resource ASR for improved semi-supervised training on lattice-free maximum mutual information (LF-MMI) (Manohar et al., 2018) . Malhotra el at. (Malhotra et al., 2019) selected samples with lower confidence in an active learning scenario for low-resource ASR. Rosenberg et al. (Rosenberg et al., 2017) investigated the use of a CTC-based RNN and an RNN Encoder-Decoder network in character-based end-to-end ASR for low-resource languages. While recurrent-based models have demonstrated usefulness in ASR and other sequence modeling tasks, these models cannot easily take advantage of parallelization on modern hardware since the output of an RNN cell at each timestep depends on the results from the previous timestep. To mitigate this problem, Collobert et al. (Collobert et al., 2016) relies on convolution to capture temporal dependencies. The fully convolutional, character-based architecture proposed by Collobert et al. (Collobert et al., 2016 ) still requires training models with large numbers of parameters. Additionally, these models have a high number of layers causing the models to converge more slowly. Our proposed model aims to reduce the complexity of the model without reducing performance by using bottleneck filters and skip connections. 
Additionally, instead of relying on different layers to capture different levels of temporal dependencies, we combine filters with different widths into one layer to reduce the number of layers in the model while still maintaining a wide context window. While transfer learning and data augmentation separately have both shown improvements, we explore the effectiveness of combining both concepts on low resource ASR, as well as a final finetuning step using only unaugmented data to prevent digital artifacts in augmented data from degrading performance.",
"cite_spans": [
{
"start": 178,
"end": 199,
"text": "(Hinton et al., 2012;",
"ref_id": "BIBREF13"
},
{
"start": 200,
"end": 220,
"text": "Graves et al., 2013;",
"ref_id": "BIBREF10"
},
{
"start": 221,
"end": 241,
"text": "Hannun et al., 2014;",
"ref_id": "BIBREF11"
},
{
"start": 242,
"end": 262,
"text": "Amodei et al., 2016;",
"ref_id": "BIBREF1"
},
{
"start": 263,
"end": 281,
"text": "Chan et al., 2016;",
"ref_id": "BIBREF5"
},
{
"start": 282,
"end": 301,
"text": "Zhang et al., 2017;",
"ref_id": "BIBREF27"
},
{
"start": 302,
"end": 320,
"text": "Chiu et al., 2018;",
"ref_id": "BIBREF6"
},
{
"start": 321,
"end": 347,
"text": "Agenbag and Niesler, 2019)",
"ref_id": "BIBREF0"
},
{
"start": 445,
"end": 463,
"text": "Chan et al. (2016)",
"ref_id": "BIBREF5"
},
{
"start": 589,
"end": 610,
"text": "(Hannun et al., 2014;",
"ref_id": "BIBREF11"
},
{
"start": 611,
"end": 631,
"text": "Amodei et al., 2016)",
"ref_id": "BIBREF1"
},
{
"start": 843,
"end": 864,
"text": "(Graves et al., 2006)",
"ref_id": "BIBREF9"
},
{
"start": 1101,
"end": 1123,
"text": "(Szegedy et al., 2015;",
"ref_id": "BIBREF22"
},
{
"start": 1124,
"end": 1141,
"text": "Xie et al., 2017)",
"ref_id": null
},
{
"start": 1159,
"end": 1181,
"text": "(Szegedy et al., 2015)",
"ref_id": "BIBREF22"
},
{
"start": 1602,
"end": 1620,
"text": "(Xie et al., 2017)",
"ref_id": null
},
{
"start": 1925,
"end": 1945,
"text": "(Gales et al., 2014;",
"ref_id": "BIBREF8"
},
{
"start": 1946,
"end": 1966,
"text": "Imseng et al., 2014)",
"ref_id": "BIBREF14"
},
{
"start": 2109,
"end": 2129,
"text": "(T\u00fcske et al., 2014;",
"ref_id": "BIBREF23"
},
{
"start": 2130,
"end": 2142,
"text": "Billa, 2018;",
"ref_id": "BIBREF3"
},
{
"start": 2143,
"end": 2164,
"text": "Wiesner et al., 2018)",
"ref_id": "BIBREF24"
},
{
"start": 2167,
"end": 2210,
"text": "Carmantini et al. (Carmantini et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 2377,
"end": 2399,
"text": "(Manohar et al., 2018)",
"ref_id": "BIBREF18"
},
{
"start": 2418,
"end": 2441,
"text": "(Malhotra et al., 2019)",
"ref_id": "BIBREF17"
},
{
"start": 2534,
"end": 2575,
"text": "Rosenberg et al. (Rosenberg et al., 2017)",
"ref_id": "BIBREF21"
},
{
"start": 3036,
"end": 3060,
"text": "(Collobert et al., 2016)",
"ref_id": "BIBREF7"
},
{
"start": 3200,
"end": 3223,
"text": "(Collobert et al., 2016",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2."
},
{
"text": "We conduct our experiments on Seneca, a morphologically complex and critically endangered language spoken by indigenous people in what is now Western New York State and Ontario. Although the language was still widely spoken in the Seneca community as recently as 75 years ago, Seneca children in the mid-twentieth century were typically required to attend state-run residential schools where they were punished or beaten for using their native language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3."
},
{
"text": "Today, roughly 50 elderly individuals speak Seneca as their first language, and a few hundred others are second language speakers. There are several ongoing efforts to revitalize the Seneca language, including language immersion programs for adults and children, but there are very few available Seneca recordings or texts, as many members of the Seneca community are reluctant to allow their speech to be recorded or transcribed. One motivation for developing a robust ASR system for Seneca is to accelerate efforts to document the language while there are living native speakers and to produce educational materials for the immersion programs that will train the next generation of speakers. The available transcribed audio recordings consist of approximately 720 minutes of spontaneous, naturalistic speech produced by eleven adult speakers, eight male and three female. All speakers in the dataset are middle-aged or elderly first-language Seneca speakers whose second language is English. Recordings were made over many years primarily by Seneca language learners under a variety of conditions using various recording equipment, resulting in a diverse range of audio quality. The recordings were transcribed using Seneca's current orthography, which uses 30 characters, and segmented at the utterance level by second-language Seneca speakers. Since Seneca orthography is quite reliably phonemic, with few ambiguous character-to-phone and phone-to-character mappings, we choose to treat characters (excluding punctuation) as phones. Using utterance boundaries provided in the reference transcripts, we randomly selected individual utterances from the full corpus of 720 minutes until we had obtained 600 minutes of audio for training. The remaining 120 minutes made up the test set. We deliberately selected utterances at random to maximize diversity in terms of gender, age, dialect, voice quality, and content (e.g. narrative vs. conversation) of both the train and test sets in order to avoid overfitting to any particular speaker or speaker characteristics. While this selection procedure lead to certain speakers appearing in both the testing and training sets, we were obliged to make this compromise due to the limited number of speakers of the language. In addition to the transcriptions of the recorded audio (roughly 35,000 words), we have available text data consisting of 6000 words of previously transcribed texts for which no corresponding audio is available. To demonstrate the generalizability of our methods, we also conduct our experiments on Iban, a Malayic language spoken in Brunei and Malaysia. The publicly available dataset ((Juan et al., 2014)) consists of 479 minutes of professional recordings of broadcast news, partitioned into 408 minutes of training data and 71 minutes of testing data. There are 17 speakers (7 male, 10 female) in the training set and 6 speakers (2 male, 4 female) in the test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3."
},
{
"text": "We utilize a fully convolutional acoustic model constructed from a family of one dimensional convolution layers. The model takes either 13 MFCCs and their first and second derivatives, or 80 log mel-filterbanks as input features. Both are obtained using 25ms windows with 10ms stride. WideBlock: The main building block of our architecture is the WideBlock (Figure 1) , named for the high number of paths in each block. The architecture of the block, taking inspiration from ResNeXt blocks used in image classification (Xie et al., 2017) , consists of several parallel streams, each consisting of bottleneck 1 \u00d7 1 convolution layers before and after a normal convolution layer. The bottleneck layers reduce the complexity of the model by reducing the number of parameters required by the middle convolution operation. Instead of keeping the same filter size for all paths, we draw inspiration from Inception networks and employ filters with different sizes in each layer. The filter widths are odd numbers between 3 and 19. This choice is suitable for speech-related tasks since temporal dependencies in audio typically have more variance than spatial dependencies in visual tasks. The different filter sizes allow the model to pick up both short-term and long-term temporal dependencies. The output from each path is then summed before being added to the input of each block, forming a skip connection.",
"cite_spans": [
{
"start": 519,
"end": 537,
"text": "(Xie et al., 2017)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 357,
"end": 367,
"text": "(Figure 1)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Acoustic Modeling",
"sec_num": "4.1."
},
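To make the WideBlock structure above concrete, here is a minimal PyTorch sketch (PyTorch is an assumption; the paper does not name its framework). The channel count, bottleneck width, and the `WideBlock` class name are illustrative; only the nine odd filter widths from 3 to 19, the bottleneck-wide-bottleneck path layout, and the summed skip connection come from the text and Figure 1.

```python
# Hedged sketch of a WideBlock-style module; sizes are assumptions.
import torch.nn as nn


class WideBlock(nn.Module):
    """Parallel 1-D convolution paths with different filter widths and a skip connection."""

    def __init__(self, channels=256, bottleneck=64,
                 widths=(3, 5, 7, 9, 11, 13, 15, 17, 19)):
        super().__init__()
        self.paths = nn.ModuleList()
        for w in widths:
            self.paths.append(nn.Sequential(
                # 1x1 bottleneck reduces the channel count before the wide filter
                nn.Conv1d(channels, bottleneck, kernel_size=1),
                nn.BatchNorm1d(bottleneck), nn.ReLU(inplace=True),
                # wide temporal filter; "same" padding preserves the sequence length
                nn.Conv1d(bottleneck, bottleneck, kernel_size=w, padding=w // 2),
                nn.BatchNorm1d(bottleneck), nn.ReLU(inplace=True),
                # 1x1 bottleneck restores the channel count
                nn.Conv1d(bottleneck, channels, kernel_size=1),
                nn.BatchNorm1d(channels), nn.ReLU(inplace=True),
            ))

    def forward(self, x):
        # x: (batch, channels, time); sum the parallel paths, then add the input (skip)
        return x + sum(path(x) for path in self.paths)
```

Because every path keeps the time resolution, the block can be stacked on variable-length inputs without any recurrence.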
{
"text": "Acoustic Model: Our acoustic model consists of two convolutional layers between the input feature vector and the first WideBlock (Figure 1 ). These embedding layers convert input audio features into a vector of desired depth and temporal content. The acoustic architecture continues with five WideBlocks, then two 1 \u00d7 1 convolution layers which act as fully-connected layers. The final layer outputs a vector with size corresponding to the number of tokens to be predicted. Batch normalization and ReLU are used after each convolution operation. To prevent overfitting due to limited data, dropout layers of 0.25 are added after each WideBlock. To train the network, the CTC loss function is used.",
"cite_spans": [],
"ref_spans": [
{
"start": 129,
"end": 138,
"text": "(Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Acoustic Modeling",
"sec_num": "4.1."
},
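A companion sketch of the overall stack described above, reusing the hypothetical `WideBlock` from the previous sketch: two embedding convolutions, five WideBlocks each followed by dropout of 0.25, and two 1 x 1 convolutions projecting to the output tokens. The embedding kernel sizes and working channel depth are assumptions; the output size assumes the 30-character Seneca orthography plus a CTC blank.

```python
# Hedged sketch of the full acoustic model; layer sizes are assumptions.
import torch.nn as nn


def build_acoustic_model(n_features=80, channels=256, n_tokens=31, p_drop=0.25):
    """n_features: e.g. 80 log mel-filterbanks or 39 MFCCs+deltas per frame."""
    layers = [
        # two embedding convolutions project input features to the working depth
        nn.Conv1d(n_features, channels, kernel_size=3, padding=1),
        nn.BatchNorm1d(channels), nn.ReLU(inplace=True),
        nn.Conv1d(channels, channels, kernel_size=3, padding=1),
        nn.BatchNorm1d(channels), nn.ReLU(inplace=True),
    ]
    # five WideBlocks, each followed by dropout of 0.25 to limit overfitting
    for _ in range(5):
        layers += [WideBlock(channels), nn.Dropout(p_drop)]
    layers += [
        # two 1x1 convolutions acting as position-wise fully connected layers
        nn.Conv1d(channels, channels, kernel_size=1), nn.ReLU(inplace=True),
        nn.Conv1d(channels, n_tokens, kernel_size=1),
    ]
    return nn.Sequential(*layers)
```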
{
"text": "DeepSpeech: To compare the performance of our deep approach against recurrent-based ASR models, we also trained a DeepSpeech model. The DeepSpeech model consists of a five-layer recurrent neural network with Long-Short Term Memory cells. The first, second, third, and fifth layers of the neural network are fully connected, while the fourth layer is a bi-directional recurrent layer. All layers contain 2048 hidden units and are followed by a dropout layer of 0.2. The DeepSpeech model uses the same input features as our deep approach and also uses CTC loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acoustic Modeling",
"sec_num": "4.1."
},
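Both the convolutional model and the DeepSpeech baseline are trained with the CTC loss. The following is a hedged sketch of a single CTC training step in PyTorch; the optimizer, learning rate, blank index, and the `train_step` helper are assumptions, not details from the paper.

```python
# Hedged sketch of one CTC training step; hyperparameters are assumptions.
import torch
import torch.nn as nn

model = build_acoustic_model()                    # from the sketch above
ctc = nn.CTCLoss(blank=0, zero_infinity=True)     # index 0 reserved for the CTC blank
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)


def train_step(features, feat_lengths, targets, target_lengths):
    """features: (batch, n_features, time); targets: concatenated label indices."""
    logits = model(features)                      # (batch, n_tokens, time)
    # CTCLoss expects (time, batch, n_tokens) log-probabilities; the convolutions
    # above preserve the time resolution, so frame counts serve as input lengths.
    log_probs = logits.permute(2, 0, 1).log_softmax(dim=-1)
    loss = ctc(log_probs, targets, feat_lengths, target_lengths)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```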
{
"text": "Kaldi: We also compare the performance of our model against the traditional HMM/GMM framework provided by Kaldi (Povey et al., 2011) with a triphone acoustic model trained with the parameter settings described in the Kaldi tutorial and a word-level trigram language model. A second acoustic model was created using Kaldi's time-delay neural network (TDNN) architecture trained with the lattice-free maximum mutual information (LF-MMI) objective function (Peddinti et al., 2015) .",
"cite_spans": [
{
"start": 112,
"end": 132,
"text": "(Povey et al., 2011)",
"ref_id": "BIBREF20"
},
{
"start": 454,
"end": 477,
"text": "(Peddinti et al., 2015)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Acoustic Modeling",
"sec_num": "4.1."
},
{
"text": "Transfer learning has proven successful in deep learning tasks with limited domain data. We extend this concept with a multistage transfer learning strategy. In the first stage, we train an acoustic model on a 960-hour Lib-riSpeech English corpus for 100 epochs. In the second stage, weight initialization is from the model obtained in the first stage. The model was then trained on heavily augmented training data as per (Jimerson et al., 2018) for 100 epochs or until convergence. In the final stage, the weights of the model from the second stage were used to initialize a model which is trained only on unaugmented data. For this final stage, the learning rate is reduced by an order of magnitude. Table 1 shows the performance for Seneca across different acoustic models with different transfer learning and augmentation strategies. To evaluate the performance of each model, we use word error rate (WER) and character error rate (CER). WER is the minimum edit distance over a word alignment, aggregated across utterances and normalized by the total number of words in the reference transcript. CER is calculated by aggregating the character-level minimum edit distance over all utterances and normalizing by the total number of characters in the reference. We report results for decoding both with and without a trigram language model built on the transcripts of the 10 hours of acoustic training data using KenLM (Heafield, 2011) with modified Kneser-Ney smoothing and no pruning. Table 1 shows that DeepSpeech (DS) with no transfer learning, augmentation, or language model yields little or no correct output. With a language model, the WER and CER for this model are reduced, but results are still mostly incorrect. Our deep approach shows slightly better performance than DeepSpeech without a language model and significantly lower WER when decoding with a trigram language model.",
"cite_spans": [
{
"start": 422,
"end": 445,
"text": "(Jimerson et al., 2018)",
"ref_id": "BIBREF15"
},
{
"start": 1420,
"end": 1436,
"text": "(Heafield, 2011)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 702,
"end": 709,
"text": "Table 1",
"ref_id": null
},
{
"start": 1488,
"end": 1495,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multistage Learning",
"sec_num": "4.2."
},
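The WER and CER definitions above amount to aggregating Levenshtein distances over utterances and normalizing by the total reference length. A small sketch, with helper names of our own choosing:

```python
# Sketch of WER/CER as described above; function names are ours, not the paper's.

def edit_distance(ref, hyp):
    """Minimum insertions, deletions, and substitutions turning hyp into ref."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (r != h)))   # substitution (or match)
        prev = cur
    return prev[-1]


def error_rate(references, hypotheses, unit="word"):
    """WER when unit == 'word', CER when unit == 'char', over paired utterance lists."""
    tokenize = (lambda s: s.split()) if unit == "word" else list
    edits = sum(edit_distance(tokenize(r), tokenize(h))
                for r, h in zip(references, hypotheses))
    ref_len = sum(len(tokenize(r)) for r in references)
    return edits / ref_len
```

Passing unit="char" reuses the same routine for CER by treating each transcript as a character sequence.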
{
"text": "DS ( Fine-tuning of the augmented model using only nonaugmented data yields the best performance across all models, with a WER of 0.299 using our deep acoustic model. While fine-tuning after augmentation results in improvements, it yields much larger absolute and relative reductions in WER for the DeepSpeech model than for our deep architecture. Table 2 shows results of using log mel-filterbank features in place of MFCCs with modest improvement. Table 3 shows three Kaldi results on this same dataset: two standard HMM/GMM models (monophone and triphone) and one deep architecture, TDNN with LF-MMI. For Seneca, our deep architecture substantially outperforms all three of these models, including the TDNN. Demonstrating the efficacy and generalizability of our models on other low-resource datasets, Table 4 shows the performance of our deep method under different configurations for the Iban language. We see slightly higher but comparable error rates on this dataset, which had three fewer hours of acoustic training data. Table 5 shows previously reported results 1 for the three Kaldi models for Iban. These results are noticeably lower than those we report using the same acoustic model training configurations for Seneca. In addition, the TDNN LF-MMMI model yields a lower error rate than our best deep model. We note that the language model used to decode with these Kaldi models was built on a 2-million word text corpus, while the results presented above in Table 4 for our own deep methods used a language model built using only the transcripts from the 7 hours of available audio data. We suspect that this accounts for much of this discrepancy. It is also possible that our framework is better suited to the lower-quality recordings typical in the Seneca dataset and less appropriate for the clean, professionally recorded Iban data. We also note that our model yields comparable WER error rates in both languages, which points to its superior ability to generalize to new datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 348,
"end": 355,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 450,
"end": 457,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 805,
"end": 812,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 1030,
"end": 1037,
"text": "Table 5",
"ref_id": "TABREF4"
},
{
"start": 1472,
"end": 1479,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "DS (NO LM)",
"sec_num": null
},
{
"text": "In this paper, we introduced a residual network with a very wide filter selection in a fully convolutional architecture for low-resource ASR acoustic modeling. We show that our acoustic model outperforms a typical recurrent-based deep neural network in all experimental settings while also being more compute-efficient. Our deep acoustic model, when combined with a trigram language model, outperforms the traditional GMM/HMM model without the need for transfer learning or data augmentation. We also show that transfer learning from a high-resource language and data augmentation contribute to meaningful reductions in word error rate achieved by the model for two distinct low-resource languages. Our results point the way toward new, fasttraining deep learning ASR methods for languages with extremely limited audio and textual training resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6."
},
{
"text": "https://github.com/bagustris/id",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This material is based upon work supported by the National Science Foundation under Grant No. 1761562. We recognize the contributions of the team collecting and analyzing this dataset, including Morris Cooke, Richard Hatcher, Alex Jimerson, Mike Jones, Megan Kennedy, Whitney Nephew, Aryien Stevens, and Karin Michelson. We are grateful for the cooperation, support, generosity of the elders of the Seneca Nation of Indians.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "7."
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Automatic sub-word unit discovery and pronunciation lexicon induction for asr with application to under-resourced languages",
"authors": [
{
"first": "W",
"middle": [],
"last": "Agenbag",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Niesler",
"suffix": ""
}
],
"year": 2019,
"venue": "Computer Speech & Language",
"volume": "57",
"issue": "",
"pages": "20--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Agenbag, W. and Niesler, T. (2019). Automatic sub-word unit discovery and pronunciation lexicon induction for asr with application to under-resourced languages. Com- puter Speech & Language, 57:20-40.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Deep Speech 2: Endto-end speech recognition in English and Mandarin",
"authors": [
{
"first": "D",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ananthanarayanan",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Anubhai",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Battenberg",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Case",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Casper",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Catanzaro",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2016,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "173--182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amodei, D., Ananthanarayanan, S., Anubhai, R., Bai, J., Battenberg, E., Case, C., Casper, J., Catanzaro, B., Cheng, Q., and Chen, G. (2016). Deep Speech 2: End- to-end speech recognition in English and Mandarin. In ICML, pages 173-182.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Building competitive direct acoustics-to-word models for english conversational speech recognition",
"authors": [
{
"first": "K",
"middle": [],
"last": "Audhkhasi",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Kingsbury",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Ramabhadran",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Saon",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Picheny",
"suffix": ""
}
],
"year": 2018,
"venue": "ICASSP",
"volume": "",
"issue": "",
"pages": "4759--4763",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Audhkhasi, K., Kingsbury, B., Ramabhadran, B., Saon, G., and Picheny, M. (2018). Building competitive di- rect acoustics-to-word models for english conversational speech recognition. In ICASSP, pages 4759-4763.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Isi asr system for the low resource speech recognition challenge for indian languages",
"authors": [
{
"first": "J",
"middle": [],
"last": "Billa",
"suffix": ""
}
],
"year": 2018,
"venue": "Interspeech",
"volume": "",
"issue": "",
"pages": "3207--3211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Billa, J. (2018). Isi asr system for the low resource speech recognition challenge for indian languages. Interspeech, pages 3207-3211.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Untranscribed web audio for low resource speech recognition",
"authors": [
{
"first": "A",
"middle": [],
"last": "Carmantini",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Bell",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Renals",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "226--230",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carmantini, A., Bell, P., and Renals, S. (2019). Untran- scribed web audio for low resource speech recognition. Interspeech, pages 226-230.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Listen, attend and spell: A neural network for large vocabulary conversational speech recognition",
"authors": [
{
"first": "W",
"middle": [],
"last": "Chan",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Vinyals",
"suffix": ""
}
],
"year": 2016,
"venue": "ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chan, W., Jaitly, N., Le, Q., and Vinyals, O. (2016). Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In ICASSP.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "State-of-the-art speech recognition with sequence-to-sequence models",
"authors": [
{
"first": "C",
"middle": [],
"last": "Chiu",
"suffix": ""
},
{
"first": "T",
"middle": [
"N"
],
"last": "Sainath",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Prabhavalkar",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Kannan",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Gonina",
"suffix": ""
}
],
"year": 2018,
"venue": "ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chiu, C., Sainath, T. N., Wu, Y., Prabhavalkar, R., Nguyen, P., Chen, Z., Kannan, A., Weiss, R., Rao, K., Gonina, E., et al. (2018). State-of-the-art speech recognition with sequence-to-sequence models. In ICASSP.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Wav2letter: an end-to-end convnet-based speech recognition system",
"authors": [
{
"first": "R",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Puhrsch",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Synnaeve",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.03193"
]
},
"num": null,
"urls": [],
"raw_text": "Collobert, R., Puhrsch, C., and Synnaeve, G. (2016). Wav2letter: an end-to-end convnet-based speech recog- nition system. arXiv preprint arXiv:1609.03193.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Speech recognition and keyword spotting for lowresource languages: Babel project research at CUED",
"authors": [
{
"first": "M",
"middle": [],
"last": "Gales",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Knill",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ragni",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Rath",
"suffix": ""
}
],
"year": 2014,
"venue": "SLTU",
"volume": "",
"issue": "",
"pages": "16--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gales, M., Knill, K., Ragni, A., and Rath, S. (2014). Speech recognition and keyword spotting for low- resource languages: Babel project research at CUED. In SLTU, pages 16-23.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks",
"authors": [
{
"first": "A",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Fern\u00e1ndez",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Gomez",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 2006,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "369--376",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graves, A., Fern\u00e1ndez, S., Gomez, F., and Schmidhuber, J. (2006). Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural net- works. In ICML, pages 369-376.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Speech recognition with deep recurrent neural networks",
"authors": [
{
"first": "A",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2013,
"venue": "ICASSP",
"volume": "",
"issue": "",
"pages": "6645--6649",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graves, A., Mohamed, A., and Hinton, G. (2013). Speech recognition with deep recurrent neural networks. In ICASSP, pages 6645-6649.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Deep Speech: Scaling up end-to-end speech recognition",
"authors": [
{
"first": "A",
"middle": [],
"last": "Hannun",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Case",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Casper",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Catanzaro",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Diamos",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Elsen",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Prenger",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.5567"
]
},
"num": null,
"urls": [],
"raw_text": "Hannun, A., Case, C., Casper, J., Catanzaro, B., Di- amos, G., Elsen, E., Prenger, R., et al. (2014). Deep Speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "KenLM: Faster and smaller language model queries",
"authors": [
{
"first": "K",
"middle": [],
"last": "Heafield",
"suffix": ""
}
],
"year": 2011,
"venue": "SMT Workshop",
"volume": "",
"issue": "",
"pages": "187--197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heafield, K. (2011). KenLM: Faster and smaller language model queries. In SMT Workshop, pages 187-197.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Deep neural networks for acoustic modeling in speech recognition",
"authors": [
{
"first": "G",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Dahl",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Senior",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Vanhoucke",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Kingsbury",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Sainath",
"suffix": ""
}
],
"year": 2012,
"venue": "IEEE Signal Processing Magazine",
"volume": "29",
"issue": "",
"pages": "82--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hinton, G., Dahl, G., Mohamed, A., Jaitly, N., Senior, A., Vanhoucke, V., Kingsbury, B., and Sainath, T. (2012). Deep neural networks for acoustic modeling in speech recognition. IEEE Signal Processing Magazine, 29:82- 97, November.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Using out-of-language data to improve an underresourced speech recognizer",
"authors": [
{
"first": "D",
"middle": [],
"last": "Imseng",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Motlicek",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Bourlard",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Garner",
"suffix": ""
}
],
"year": 2014,
"venue": "Speech Communication",
"volume": "56",
"issue": "",
"pages": "142--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Imseng, D., Motlicek, P., Bourlard, H., and Garner, P. (2014). Using out-of-language data to improve an under- resourced speech recognizer. Speech Communication, 56:142-151.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Improving ASR output for endangered language documentation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Jimerson",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Simha",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Ptucha",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Prud'hommeaux",
"suffix": ""
}
],
"year": 2018,
"venue": "SLTU",
"volume": "",
"issue": "",
"pages": "182--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jimerson, R., Simha, K., Ptucha, R., and Prud'hommeaux, E. (2018). Improving ASR output for endangered lan- guage documentation. In SLTU, pages 182-186.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Semisupervised g2p bootstrapping and its application to asr for a very under-resourced language: Iban",
"authors": [
{
"first": "S",
"middle": [],
"last": "Juan",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Besacier",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Rossato",
"suffix": ""
}
],
"year": 2014,
"venue": "SLTU",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juan, S., Besacier, L., and Rossato, S. (2014). Semi- supervised g2p bootstrapping and its application to asr for a very under-resourced language: Iban. In SLTU, May.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Active learning methods for low resource end-to-end speech recognition",
"authors": [
{
"first": "K",
"middle": [],
"last": "Malhotra",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ganapathy",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "2215--2219",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Malhotra, K., Bansal, S., and Ganapathy, S. (2019). Ac- tive learning methods for low resource end-to-end speech recognition. Interspeech, pages 2215-2219.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Semi-supervised training of acoustic models using lattice-free mmi",
"authors": [
{
"first": "V",
"middle": [],
"last": "Manohar",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Hadian",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2018,
"venue": "ICASSP",
"volume": "",
"issue": "",
"pages": "4844--4848",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manohar, V., Hadian, H., Povey, D., and Khudanpur, S. (2018). Semi-supervised training of acoustic models us- ing lattice-free mmi. In ICASSP, pages 4844-4848.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A time delay neural network architecture for efficient modeling of long temporal contexts",
"authors": [
{
"first": "V",
"middle": [],
"last": "Peddinti",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peddinti, V., Povey, D., and Khudanpur, S. (2015). A time delay neural network architecture for efficient modeling of long temporal contexts. In Interspeech.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The Kaldi speech recognition toolkit",
"authors": [
{
"first": "D",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ghoshal",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Boulianne",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Glembek",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Goel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Hannemann",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Motlicek",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Schwarz",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Silovsky",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Stemmer",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Vesely",
"suffix": ""
}
],
"year": 2011,
"venue": "ASRU",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Povey, D., Ghoshal, A., Boulianne, G., Burget, L., Glem- bek, O., Goel, N., Hannemann, M., Motlicek, P., Qian, Y., Schwarz, P., Silovsky, J., Stemmer, G., and Vesely, K. (2011). The Kaldi speech recognition toolkit. In ASRU.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "End-to-end speech recognition and keyword search on low-resource languages",
"authors": [
{
"first": "A",
"middle": [],
"last": "Rosenberg",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Audhkhasi",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Sethy",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Ramabhadran",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Picheny",
"suffix": ""
}
],
"year": 2017,
"venue": "ICASSP",
"volume": "",
"issue": "",
"pages": "5280--5284",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rosenberg, A., Audhkhasi, K., Sethy, A., Ramabhadran, B., and Picheny, M. (2017). End-to-end speech recogni- tion and keyword search on low-resource languages. In ICASSP, pages 5280-5284.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Going deeper with convolutions",
"authors": [
{
"first": "C",
"middle": [],
"last": "Szegedy",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Sermanet",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Reed",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Anguelov",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Erhan",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Vanhoucke",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Rabinovich",
"suffix": ""
}
],
"year": 2015,
"venue": "CVPR",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabi- novich, A. (2015). Going deeper with convolutions. In CVPR, pages 1-9.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Data augmentation, feature combination, and multilingual neural networks to improve asr and kws performance for low-resource languages",
"authors": [
{
"first": "Z",
"middle": [],
"last": "T\u00fcske",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Golik",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Nolden",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Schl\u00fcter",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T\u00fcske, Z., Golik, P., Nolden, D., Schl\u00fcter, R., and Ney, H. (2014). Data augmentation, feature combination, and multilingual neural networks to improve asr and kws per- formance for low-resource languages. In Interspeech.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Low resource multimodal data augmentation for end-to-end ASR",
"authors": [
{
"first": "M",
"middle": [],
"last": "Wiesner",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Renduchintala",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Dehak",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wiesner, M., Renduchintala, A., Watanabe, S., Liu, C., De- hak, N., and Khudanpur, S. (2018). Low resource multi- modal data augmentation for end-to-end ASR. In Inter- speech.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Aggregated residual transformations for deep neural networks",
"authors": [],
"year": null,
"venue": "CVPR",
"volume": "",
"issue": "",
"pages": "1492--1500",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aggregated residual transformations for deep neural net- works. In CVPR, pages 1492-1500.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Very deep convolutional networks for end-to-end speech recognition",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Chan",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Jaitly",
"suffix": ""
}
],
"year": 2017,
"venue": "ICASSP",
"volume": "",
"issue": "",
"pages": "4845--4849",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhang, Y., Chan, W., and Jaitly, N. (2017). Very deep con- volutional networks for end-to-end speech recognition. In ICASSP, pages 4845-4849.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Left: The overall architecture of our convolutional approach. Right: A WideBlock consisting of 9 paths, each consisting of bottleneck filters centered by filters of different width to capture different levels of temporal dependencies. Each layer is shown as (# input channels, filter width, # output channels).",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "shows the overall network architecture and the architecture of a WideBlock, the main building block of our model. The details of each are described next.",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF1": {
"text": "Seneca WER and CER using our deep CNN approach with log mel-filterbank feature as input features with and without a trigram language model.",
"num": null,
"html": null,
"content": "<table><tr><td>Acoustic Model</td><td>WER</td></tr><tr><td colspan=\"2\">Monophone GMM/HMM 0.608</td></tr><tr><td>Triphone GMM/HMM</td><td>0.524</td></tr><tr><td>TDNN LF-MMI</td><td>0.421</td></tr></table>",
"type_str": "table"
},
"TABREF2": {
"text": "Seneca WER for Kaldi HMM-GMM models and TDNN with LF-MMI.",
"num": null,
"html": null,
"content": "<table><tr><td/><td>NO LM</td><td>W/LM</td></tr><tr><td/><td colspan=\"2\">WER CER WER CER</td></tr><tr><td>Baseline</td><td colspan=\"2\">0.856 0.463 0.487 0.286</td></tr><tr><td>+TL</td><td colspan=\"2\">0.668 0.287 0.413 0.257</td></tr><tr><td>+TL,Aug</td><td colspan=\"2\">0.665 0.226 0.420 0.286</td></tr><tr><td colspan=\"3\">+TL,Aug,FT 0.518 0.160 0.266 0.116</td></tr></table>",
"type_str": "table"
},
"TABREF3": {
"text": "Iban WER and CER for transfer learning and augmentation strategies within our architecture using with log mel-filterbanks as input features with and without trigram language model built using only the transcripts of the audio.",
"num": null,
"html": null,
"content": "<table><tr><td>Acoustic Model</td><td>WER</td></tr><tr><td colspan=\"2\">Monophone GMM/HMM 0.372</td></tr><tr><td>Triphone GMM/HMM</td><td>0.265</td></tr><tr><td>TDNN LF-MMI</td><td>0.175</td></tr></table>",
"type_str": "table"
},
"TABREF4": {
"text": "Previously reported WER for Iban 2 HMM-GMM models and TDNN with LF-MMI, all decoded with a language model built on the full 2-million word text corpus.",
"num": null,
"html": null,
"content": "<table><tr><td>Using transfer learning from a high resource language im-</td></tr><tr><td>proves performance across all models and all language</td></tr><tr><td>model settings. Training on augmented data after transfer</td></tr><tr><td>learning from a high resource language degrades the per-</td></tr><tr><td>formance of DeepSpeech models in terms of WER but im-</td></tr><tr><td>proves CER. For our deep architecture, this configuration</td></tr><tr><td>improves results across the board. In all configurations for</td></tr><tr><td>Seneca, our deep approach substantially outperforms the</td></tr><tr><td>corresponding DeepSpeech model.</td></tr></table>",
"type_str": "table"
}
}
}
}