{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:31:30.590167Z"
},
"title": "Avengers, Ensemble! Benefits of ensembling in grapheme-to-phoneme prediction",
"authors": [
{
"first": "Vagrant",
"middle": [],
"last": "Gautam",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Yau",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Zafarullah",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Fred",
"middle": [],
"last": "Mahmood",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "",
"middle": [],
"last": "Mailhot",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Shreekantha",
"middle": [],
"last": "Nadig",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Riqiang",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Nathan",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Dialpad",
"middle": [],
"last": "Canada",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Dialpad",
"middle": [],
"last": "India",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We describe three baseline-beating systems for the high-resource English-only sub-task of the SIGMORPHON 2021 Shared Task 1: a small ensemble that Dialpad's 1 speech recognition team uses internally, a well-known off-the-shelf model, and a larger ensemble model comprising these and others. We additionally discuss the challenges related to the provided data, along with the processing steps we took.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "We describe three baseline-beating systems for the high-resource English-only sub-task of the SIGMORPHON 2021 Shared Task 1: a small ensemble that Dialpad's 1 speech recognition team uses internally, a well-known off-the-shelf model, and a larger ensemble model comprising these and others. We additionally discuss the challenges related to the provided data, along with the processing steps we took.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The transduction of sequences of graphemes to phones or phonemes, 2 that is, from characters used in orthographic representations to characters used to represent minimal units of speech, is a core component of many tasks in speech science & technology. This grapheme-to-phoneme conversion (or g2p) may be used, e.g., to automate or scale the creation of digital lexicons or pronunciation dictionaries, which are crucial to FST-based approaches to automatic speech recognition (ASR) and synthesis (Mohri et al., 2002).",
"cite_spans": [
{
"start": 494,
"end": 514,
"text": "(Mohri et al., 2002)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The SIGMORPHON 2021 Workshop included a Shared Task on g2p conversion, comprising 3 sub-tasks. 3 The low- and medium-resource tasks were multilingual, while the high-resource task was English-only. This paper provides an overview of the three baseline-beating systems submitted by the Dialpad team for the high-resource sub-task (Sub-task 1: high-resource, English-only).",
"cite_spans": [
{
"start": 95,
"end": 96,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The organizers provided 41,680 lines of data in total; 33,344 for training, and 4,168 each for development and test. The data consists of word/pronunciation pairs (word-pron pairs, henceforth), where words are sequences of graphemes and pronunciations are sequences of characters from the International Phonetic Alphabet (International Phonetic Association, 1999). The data was derived from the English portion of the WikiPron database (Lee et al., 2020), a massively multilingual resource of word-pron pairs extracted from Wiktionary 4 and subject to some manual QA and postprocessing. 5 The baseline model provided was the 2nd-place finisher from the 2020 g2p shared task. It is an ensembled neural transition model that operates over edit actions and is trained via imitation learning (Makarov and Clematide, 2020).",
"cite_spans": [
{
"start": 321,
"end": 363,
"text": "(International Phonetic Association, 1999)",
"ref_id": null
},
{
"start": 437,
"end": 455,
"text": "(Lee et al., 2020)",
"ref_id": "BIBREF7"
},
{
"start": 589,
"end": 590,
"text": "5",
"ref_id": null
},
{
"start": 791,
"end": 820,
"text": "(Makarov and Clematide, 2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Evaluation scripts were provided to compute word error rate (WER), the percentage of words for which the output sequence does not match the gold label.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Notwithstanding the baseline's strong prior performance and the amount of data available, the task proved to be challenging; the baseline system achieved development and test set WERs of 45.13 and 41.94, respectively. We discuss possible reasons for this below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Wiktionary is an open, collaborative, public effort to create a free dictionary in multiple languages. Anyone can create an account and add or amend words, pronunciations, etymological information, etc. As with most user-generated content, this is a noisy method of data creation and annotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data-related challenges",
"sec_num": "2.1"
},
{
"text": "Even setting aside the theory-laden question of when or whether a given word should be counted as English, 6 the open nature of Wiktionary means that speakers of different variants or dialects of English may submit varying or conflicting pronunciations for sets of words. For example, some transcriptions indicate that the users who input them had the cot/caught merger while others do not; in the training data \"cot\" is transcribed /k \u0251 t/ and \"caught\" is transcribed /k \u0254 t/, indicating a split, but \"aughts\" is transcribed as /\u0251 t s/, indicating a merger. There is also variation in the narrowness of transcription. For example, some transcriptions include aspiration on stressed-syllable-initial stops while others do not, cf. \"kill\" /k\u02b0 \u026a l/ and \"killer\" /k \u026a l \u025a/. Typically the set of English phonemes is taken to be somewhere between 38 and 45, depending on variant/dialect (McMahon, 2002). In exploring the training data, we found a total of 124 symbols in the training set transcriptions, many of which only appeared in a small set (1-5) of transcriptions. To reduce the effect of this long tail of infrequent symbols, we normalized the training set.",
"cite_spans": [
{
"start": 875,
"end": 890,
"text": "(McMahon, 2002)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data-related challenges",
"sec_num": "2.1"
},
{
"text": "The main source of symbols in the long tail was the variation in the broadness of transcription: vowels were sometimes but not always transcribed with nasalization before a nasal consonant, aspiration on word-initial voiceless stops was inconsistently indicated, phonetic length was occasionally indicated, etc. There were also some cases of erroneous transcription that we uncovered by looking at the lowest frequency phones and the word-pronunciation pairs where they appeared. For instance, the IPA /j/ was transcribed as /y/ twice, the voiced alveolar approximant /\u0279/ was mistranscribed as the trill /r/ over 200 times, and we found a handful of issues where a phone was transcribed with a Unicode symbol not used in the IPA at all.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data-related challenges",
"sec_num": "2.1"
},
{
"text": "Most of these were cases where the rare variant was at least two orders of magnitude less frequent than the common variant of the symbol. There was, however, one class of sounds where the variation was less dramatically skewed; the consonants /m/, /n/, and /l/ appeared in unstressed syllables following schwa (/\u0259m/, /\u0259n/, /\u0259l/) roughly one order of magnitude more frequently than their syllabic counterparts (/m\u0329 /, /n\u0329 /, /l\u0329 /), and we opted not to normalize these. If we had normalized the syllabic variants, it would have resulted in more consistent g2p output, but it would likely also have penalized our performance on the uncleaned test set. 7 In the end, our training data contained 47 phones (plus end-of-sequence and UNK symbols for some models).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data-related challenges",
"sec_num": "2.1"
},
{
"text": "We trained and evaluated several models for this task, including publicly available, in-house, and custom-developed models, along with various ensembling permutations. In the end, we submitted three sets of baseline-beating results. The organizers assigned sequential identifiers to multiple submissions (e.g. Dialpad-N); we include these in the discussion of our entries below, for ease of subsequent reference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3"
},
{
"text": "Dialpad uses a g2p system internally for scalable generation of novel lexicon additions. We were motivated to enter this shared task as a means of assessing potential areas of improvement for our system; in order to do so we needed to assess our own performance as a baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Dialpad model (Dialpad-2)",
"sec_num": "3.1"
},
{
"text": "This model is a simple majority-vote ensemble of 3 existing publicly available g2p systems: Phonetisaurus (Novak et al., 2012), a WFST-based model; Sequitur (Bisani and Ney, 2008), a joint-sequence model trained via EM; and a neural sequence-to-sequence model developed at CMU as part of the CMUSphinx 8 toolkit (see subsection 3.2). As Dialpad uses a proprietary lexicon and phoneset internally, we retrained all three models on the cleaned version of the shared task training data, retaining default hyperparameters and architectures.",
"cite_spans": [
{
"start": 106,
"end": 126,
"text": "(Novak et al., 2012)",
"ref_id": "BIBREF13"
},
{
"start": 158,
"end": 180,
"text": "(Bisani and Ney, 2008)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Dialpad model (Dialpad-2)",
"sec_num": "3.1"
},
{
"text": "In the end, this ensemble achieved a test set WER of 41.72, narrowly beating the baseline (results are discussed in more depth in Section 4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Dialpad model (Dialpad-2)",
"sec_num": "3.1"
},
{
"text": "CMUSphinx g2p-seq2seq (Dialpad-3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A strong standalone model:",
"sec_num": "3.2"
},
{
"text": "CMUSphinx is a set of open systems and tools for speech science developed at Carnegie Mellon University, including a g2p system. 9",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A strong standalone model:",
"sec_num": "3.2"
},
{
"text": "It is a neural sequence-to-sequence model (Sutskever et al., 2014) that is Transformer-based (Vaswani et al., 2017), written in TensorFlow (Abadi et al., 2015). A pre-trained 3-layer model is available for download, but it is trained on a dictionary that uses ARPABET, a substantially different phoneset from the IPA used in this challenge. For this reason we retrained a model from scratch on the cleaned version of the training data. This model achieved a test set WER of 41.58, again narrowly beating the baseline. Interestingly, this outperformed the Dialpad model which incorporates it, suggesting that Phonetisaurus and Sequitur add more noise than signal to predicted outputs, to say nothing of the increased computational resources and training time they require. More generally, this points to the CMUSphinx seq2seq model as a simple and strong baseline against which future g2p research should be assessed.",
"cite_spans": [
{
"start": 42,
"end": 66,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF15"
},
{
"start": 92,
"end": 114,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF16"
},
{
"start": 139,
"end": 159,
"text": "(Abadi et al., 2015)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A strong standalone model:",
"sec_num": "3.2"
},
{
"text": "In the interest of seeing what results could be achieved via further naive ensembling, our final submission was a large ensemble, comprising two variations on the baseline model, the Dialpad-2 ensemble discussed above, and two additional seq2seq models, one using LSTMs and the other Transformer-based. The latter additionally incorporated a sub-word extraction method designed to bias a model's input-output mapping toward \"good\" grapheme-phoneme correspondences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A large ensemble (Dialpad-1)",
"sec_num": "3.3"
},
{
"text": "The method of ensembling for this model is word level majority-vote ensembling. We select the most common prediction when there is a majority prediction (i.e. one prediction has more votes than all of the others). If there is a tie, we pick the prediction that was generated by the best standalone model with respect to each model's performance on the development set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A large ensemble (Dialpad-1)",
"sec_num": "3.3"
},
{
"text": "This collection of models achieved a test set WER of 37.43, a 10.75% relative reduction in WER over the baseline model. As shown in Table 1, although a majority of the component models did not outperform the baseline, there was sufficient agreement across different examples that a simple majority voting scheme was able to leverage the models' varying strengths effectively. We discuss the components and their individual performance below and in Section 4.",
"cite_spans": [],
"ref_spans": [
{
"start": 132,
"end": 139,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "A large ensemble (Dialpad-1)",
"sec_num": "3.3"
},
{
"text": "The \"foundation\" of our ensemble was the default baseline model (Makarov and Clematide, 2018), which we trained using the raw data and default settings in order to reflect the baseline performance published by the organizers. We included this in order to individually assess the effect of additional models on overall performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline variations",
"sec_num": "3.3.1"
},
{
"text": "In addition to this default base, we added a larger version of the same model, for which we increased the number of encoder and decoder layers from 1 to 3, and increased the hidden dimension from 200 to 400.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline variations",
"sec_num": "3.3.1"
},
{
"text": "We conducted experiments with an RNN seq2seq model, comprising a biLSTM encoder, LSTM decoder, and dot-product attention. 10 We conducted several rounds of hyperparameter optimization over layer sizes, optimizer, and learning rate. Although none of these models outperformed the baseline, a small network proved to be efficiently trainable (2 CPU-hours) and improved the ensemble results, so it was included.",
"cite_spans": [
{
"start": 121,
"end": 123,
"text": "10",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "biLSTM+attention seq2seq",
"sec_num": "3.3.2"
},
{
"text": "Sub-word segmentation is widely used in ASR and neural machine translation tasks, as it reduces the cardinality of the search space over word-based models, and mitigates the issue of OOVs. The use of sub-words for g2p tasks has been explored; e.g., Reddy and Goldsmith (2010) develop an MDL-based approach to extracting sub-word units for the task of g2p. Recently, a pronunciation-assisted sub-word model (PASM) (Xu et al., 2019) was shown to improve the performance of ASR models. We experimented with pronunciation-assisted sub-words to phonemes (PAS2P), leveraging the training data and a reparameterization of the IBM Model 2 aligner (Brown et al., 1993) dubbed fast_align (Dyer et al., 2013). 11 The alignment model is used to find an alignment of sequences of graphemes to their corresponding phonemes. We follow a process similar to that of Xu et al. (2019) to find consistent grapheme-phoneme pairs and to refine them for the PASM model. We also collect grapheme sequence statistics and marginalize them by summing the counts of each type of grapheme sequence over all possible types of phoneme sequences. These counts are the weights of each sub-word sequence.",
"cite_spans": [
{
"start": 243,
"end": 269,
"text": "Reddy and Goldsmith (2010)",
"ref_id": "BIBREF14"
},
{
"start": 407,
"end": 424,
"text": "(Xu et al., 2019)",
"ref_id": "BIBREF18"
},
{
"start": 633,
"end": 653,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF2"
},
{
"start": 672,
"end": 691,
"text": "(Dyer et al., 2013)",
"ref_id": "BIBREF3"
},
{
"start": 694,
"end": 696,
"text": "11",
"ref_id": null
},
{
"start": 838,
"end": 854,
"text": "Xu et al. (2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PAS2P: Pronunciation-assisted sub-words to phonemes",
"sec_num": "3.3.3"
},
{
"text": "Given a word and the weights for each sub-word, the segmentation process is a search problem over all possible sub-word segmentations of that word. We solve this search problem by building weighted FSTs 12 of a given word and the sub-word vocabulary, and finding the best path through this lattice. For example, the word \"thoughtfulness\" would be segmented by PASM as \"th_ough_t_f_u_l_n_e_ss\", and this would be used as the input to the PAS2P model rather than the full sequence of individual graphemes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PAS2P: Pronunciation-assisted sub-words to phonemes",
"sec_num": "3.3.3"
},
{
"text": "Finally, the PAS2P transducer is a Transformer-based sequence-to-sequence model trained using the ESPnet end-to-end speech processing toolkit (Watanabe et al., 2018), with pronunciation-assisted sub-words as inputs and phones as outputs. The model has 6 encoder and 6 decoder layers with 2048 units, and 4 attention heads with 256 units. We use dropout with a probability of 0.1 and label smoothing with a weight of 0.1 to regularize the model. This model achieved WERs of 44.84 and 43.40 on the development and test sets, respectively.",
"cite_spans": [
{
"start": 142,
"end": 165,
"text": "(Watanabe et al., 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PAS2P: Pronunciation-assisted sub-words to phonemes",
"sec_num": "3.3.3"
},
{
"text": "Our main results are shown in Table 1, where we show both dev and test set WER for each individual model in addition to the submitted ensembles. In particular, we can see that many of the ensemble components do not beat the baseline WER, but nonetheless serve to improve the ensembled models. Table 1: Results for components of ensembles, and submitted models/ensembles (bolded).",
"cite_spans": [],
"ref_spans": [
{
"start": 30,
"end": 37,
"text": "Table 1",
"ref_id": null
},
{
"start": 294,
"end": 301,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "We experimented with different ensembles and found that incorporating models with different architectures generally improves overall performance. In the standalone results, only the top three models beat the baseline WER, but adding additional models with higher WER than the baseline continues to reduce overall WER. Table 2 shows the effect of this progressive ensembling, from our top-3 models to our top-7 (i.e. the ensemble for the Dialpad-1 model).",
"cite_spans": [],
"ref_spans": [
{
"start": 318,
"end": 325,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Additional experiments",
"sec_num": "5"
},
{
"text": "In addition to varying our ensemble sizes and components, we investigated a different ensemble voting scheme, in which ties are broken using edit distance when there is no 1-best majority option. That is, in the event of a tie, instead of selecting the prediction made by the best standalone model (our usual tiebreaking method), we select the prediction that minimizes edit distance to all other predictions that have the same number of votes. The idea of this method is to maximize sub-word level agreement. Although this method did not show clear improvements on the development set, we found after submission that it narrowly but consistently outperformed the top-N ensembles on the test set (see Table 3).",
"cite_spans": [],
"ref_spans": [
{
"start": 699,
"end": 706,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Edit distance-based voting",
"sec_num": "5.1"
},
{
"text": "We conducted some basic analyses of the Dialpad-1 submission's patterns of errors, to better understand its performance and identify potential areas of improvement. 13",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "6"
},
{
"text": "We began by calculating the oracle WER, i.e. the theoretical best WER that the ensemble could have achieved if it had selected the correct/gold prediction every time it was present in the pool of component model predictions for a given input. The Dialpad-1 system's oracle WERs on the dev and test sets were 25.12 and 23.27, respectively (cf. 40.12 and 37.43 actual).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oracle WER",
"sec_num": "6.1"
},
{
"text": "\"acres\" (/e \u026a k \u025a z/) rhymes with \"degrees\", and that \"beret\" has a /t/ sound in it. In each of these cases, there were either not enough samples in the training set to reliably learn the relevant grapheme-phoneme correspondence, or else a conflicting (but correct) correspondence was over-represented in the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oracle WER",
"sec_num": "6.1"
},
{
"text": "We presented and discussed three g2p systems submitted for the SIGMORPHON 2021 English-only shared sub-task. In addition to finding a strong off-the-shelf contender, we show that naive ensembling remains a strong strategy in supervised learning, of which g2p is a sub-domain, and that simple majority-voting schemes in classification can often leverage the respective strengths of sub-optimal component models, especially when diverse architectures are combined. Additionally, we provided more evidence for the usefulness of linguistically-informed sub-word modeling as an input transformation on speech-related tasks. We also discussed additional experiments whose results were not submitted, indicating the benefit of exploring top-N model vs. ensemble trade-offs, and demonstrating the potential benefit of an edit distance-based tiebreaking method for ensemble voting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Future work includes further search for the optimal trade-off between ensemble size and performance, as well as additional exploration of the edit-distance voting scheme, and more sophisticated ensembling/voting methods, e.g. majority voting at the phone level on aligned outputs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "https://en.wiktionary.org/ 5 See https://github.com/sigmorphon/2021-task1 for fuller details on data formatting and processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "E.g., the training data included the arguably French word-pronunciation pair: embonpoint /\u0251\u0303b \u0254\u0303p w \u025b/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Although the possibility also exists that one or more of our models would have found and exploited contextual cues that weren't obvious to us by inspection. 8 https://cmusphinx.github.io",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/cmusphinx/g2p-seq2seq",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We used the DyNet toolkit (Neubig et al., 2017) for these experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/clab/fast_align 12 We use Pynini (Gorman, 2016) for this.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We are grateful to an anonymous reviewer for suggesting that this would strengthen the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We nonetheless acknowledge the magnitude and challenge of the task of cleaning/normalizing a large quantity of user-generated data, and thank the organizers for the work that they did in this area.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We are grateful to Dialpad Inc. for providing the resources, both temporal and computational, to work on this project.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "These represent massive performance improvements (approx. 15% absolute, or 37% relative, WER reduction), and suggest that refinement of our output selection/voting method (perhaps via some kind of confidence weighting) could lead to much-improved results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
},
{
"text": "We also investigated outputs for which none of our component models predicted the correct pronunciation, in hopes of finding some patterns of interest. Many of the training data-related issues raised in section 2.1 appeared in the dev and test labels as well. In some cases this led to high cross-component agreement, even on incorrect predictions. Our hope that subtle contextual cues might reveal patterns in the distribution of syllabic versus schwa-following liquids and nasals was not borne out, e.g. our ensemble was led astray on words like \"warble\", which had a labelled pronunciation of /w \u0254 \u0279 b l\u0329 /, while all 7 of our models predicted /w \u0254 \u0279 b \u0259 l/, a functionally non-distinct pronunciation. In addition, the previously mentioned issue of /\u0279/ being mistranscribed as /r/ affected our performance, e.g. with the word \"unilateral\", whose labelled pronunciation was /j u n \u026a l ae t \u0259 r \u0259 l/, instead of /j u n \u026a l ae t \u0259 \u0279 \u0259 l/, which was again the pronunciation predicted by all 7 models. Finally, narrowness of transcription was also an issue that affected our performance on the dev and test sets, e.g., for words like \"cloudy\" /k \u026b a \u028a d i/ and \"cry\" /k \u0279 a \u026a \u032f/, for which we predicted /k l a \u028a d i/ and /k \u0279 a \u026a/, respectively. In the end, it seems that noisiness in the data was a major source of errors for our submissions. 14 Aside from issues arising due to label noise, our systems also made some genuine errors that are typical of g2p models, mostly related to data distribution or sparsity. For example, our component models overwhelmingly predicted that \"irreparate\" (/\u026a \u0279 \u025b p \u0259 \u0279 \u0259 t/) should rhyme instead with \"rate\" (this \"-ate-\" /e \u026a t/ correspondence was overwhelmingly present in the training data), that \"backache\" (/b ae k e \u026a k/) must contain the affricate /t\u0361 \u0283/, that",
"cite_spans": [
{
"start": 1341,
"end": 1343,
"text": "14",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data-related errors",
"sec_num": "6.2"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "TensorFlow: Large-scale machine learning on heterogeneous systems",
"authors": [
{
"first": "Mart\u00edn",
"middle": [],
"last": "Abadi",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Barham",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Brevdo",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Craig",
"middle": [],
"last": "Citro",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
},
{
"first": "Matthieu",
"middle": [],
"last": "Devin",
"suffix": ""
},
{
"first": "Sanjay",
"middle": [],
"last": "Ghemawat",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Harp",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Irving",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Isard",
"suffix": ""
},
{
"first": "Yangqing",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Rafal",
"middle": [],
"last": "Jozefowicz",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Manjunath",
"middle": [],
"last": "Kudlur",
"suffix": ""
},
{
"first": "Josh",
"middle": [],
"last": "Levenberg",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mart\u00edn Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Man\u00e9, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Vi\u00e9gas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2015. TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from tensorflow.org.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Joint-sequence models for grapheme-to-phoneme conversion",
"authors": [
{
"first": "Maximilian",
"middle": [],
"last": "Bisani",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2008,
"venue": "Speech Communication",
"volume": "50",
"issue": "5",
"pages": "434--451",
"other_ids": {
"DOI": [
"10.1016/j.specom.2008.01.002"
]
},
"num": null,
"urls": [],
"raw_text": "Maximilian Bisani and Hermann Ney. 2008. Joint-sequence models for grapheme-to-phoneme conversion. Speech Communication, 50(5):434-451.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The mathematics of statistical machine translation: Parameter estimation",
"authors": [
{
"first": "Peter",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263-311.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A simple, fast, and effective reparameterization of IBM model 2",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Chahuneau",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "644--648",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of IBM model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644-648, Atlanta, Georgia. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Pynini: A Python library for weighted finite-state grammar compilation",
"authors": [
{
"first": "Kyle",
"middle": [],
"last": "Gorman",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the SIGFSM Workshop on Statistical NLP and Weighted Automata",
"volume": "",
"issue": "",
"pages": "75--80",
"other_ids": {
"DOI": [
"10.18653/v1/W16-2409"
]
},
"num": null,
"urls": [],
"raw_text": "Kyle Gorman. 2016. Pynini: A Python library for weighted finite-state grammar compilation. In Proceedings of the SIGFSM Workshop on Statistical NLP and Weighted Automata, pages 75-80, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The SIGMORPHON 2020 shared task on multilingual grapheme-to-phoneme conversion",
"authors": [
{
"first": "Kyle",
"middle": [],
"last": "Gorman",
"suffix": ""
},
{
"first": "Lucas",
"middle": [
"F.E."
],
"last": "Ashby",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Goyzueta",
"suffix": ""
},
{
"first": "Arya",
"middle": [],
"last": "McCarthy",
"suffix": ""
},
{
"first": "Shijie",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "You",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "40--50",
"other_ids": {
"DOI": [
"10.18653/v1/2020.sigmorphon-1.2"
]
},
"num": null,
"urls": [],
"raw_text": "Kyle Gorman, Lucas F.E. Ashby, Aaron Goyzueta, Arya McCarthy, Shijie Wu, and Daniel You. 2020. The SIGMORPHON 2020 shared task on multilingual grapheme-to-phoneme conversion. In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 40-50, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Handbook of the International Phonetic Association: A guide to the use of the International Phonetic Alphabet",
"authors": [],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "International Phonetic Association. 1999. Handbook of the International Phonetic Association: A guide to the use of the International Phonetic Alphabet. Cambridge University Press, Cambridge, U.K.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Massively multilingual pronunciation mining with WikiPron",
"authors": [
{
"first": "Jackson",
"middle": [
"L"
],
"last": "Lee",
"suffix": ""
},
{
"first": "Lucas",
"middle": [
"F.E."
],
"last": "Ashby",
"suffix": ""
},
{
"first": "M.",
"middle": [
"Elizabeth"
],
"last": "Garza",
"suffix": ""
},
{
"first": "Yeonju",
"middle": [],
"last": "Lee-Sikka",
"suffix": ""
},
{
"first": "Sean",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Arya",
"middle": [
"D."
],
"last": "McCarthy",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Wong",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Gorman",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "4216--4221",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jackson L. Lee, Lucas F.E. Ashby, M. Elizabeth Garza, Yeonju Lee-Sikka, Sean Miller, Arya D. McCarthy, Alan Wong, and Kyle Gorman. 2020. Massively multilingual pronunciation mining with WikiPron. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4216-4221, Marseille.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Imitation learning for neural morphological string transduction",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Makarov",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Clematide",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2877--2882",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1314"
]
},
"num": null,
"urls": [],
"raw_text": "Peter Makarov and Simon Clematide. 2018. Imitation learning for neural morphological string transduction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2877-2882, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "CLUZH at SIGMORPHON 2020 shared task on multilingual grapheme-to-phoneme conversion",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Makarov",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Clematide",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "171--176",
"other_ids": {
"DOI": [
"10.18653/v1/2020.sigmorphon-1.19"
]
},
"num": null,
"urls": [],
"raw_text": "Peter Makarov and Simon Clematide. 2020. CLUZH at SIGMORPHON 2020 shared task on multilingual grapheme-to-phoneme conversion. In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 171-176, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "An Introduction to English Phonology",
"authors": [
{
"first": "April",
"middle": [],
"last": "McMahon",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "April McMahon. 2002. An Introduction to English Phonology. Edinburgh University Press, Edinburgh, U.K.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Weighted finite-state transducers in speech recognition",
"authors": [
{
"first": "Mehryar",
"middle": [],
"last": "Mohri",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Riley",
"suffix": ""
}
],
"year": 2002,
"venue": "Computer Speech & Language",
"volume": "16",
"issue": "1",
"pages": "69--88",
"other_ids": {
"DOI": [
"10.1006/csla.2001.0184"
]
},
"num": null,
"urls": [],
"raw_text": "Mehryar Mohri, Fernando Pereira, and Michael Riley. 2002. Weighted finite-state transducers in speech recognition. Computer Speech & Language, 16(1):69-88.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "DyNet: The dynamic neural network toolkit",
"authors": [
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Austin",
"middle": [],
"last": "Matthews",
"suffix": ""
},
{
"first": "Waleed",
"middle": [],
"last": "Ammar",
"suffix": ""
},
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Clothiaux",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Cynthia",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Garrette",
"suffix": ""
},
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Lingpeng",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Adhiguna",
"middle": [],
"last": "Kuncoro",
"suffix": ""
},
{
"first": "Gaurav",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Chaitanya",
"middle": [],
"last": "Malaviya",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Oda",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Richardson",
"suffix": ""
},
{
"first": "Naomi",
"middle": [],
"last": "Saphra",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Pengcheng",
"middle": [],
"last": "Yin",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, and Pengcheng Yin. 2017. DyNet: The dynamic neural network toolkit.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "WFST-based grapheme-tophoneme conversion: Open source tools for alignment, model-building and decoding",
"authors": [
{
"first": "Josef",
"middle": [
"R"
],
"last": "Novak",
"suffix": ""
},
{
"first": "Nobuaki",
"middle": [],
"last": "Minematsu",
"suffix": ""
},
{
"first": "Keikichi",
"middle": [],
"last": "Hirose",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 10th International Workshop on Finite State Methods and Natural Language Processing",
"volume": "",
"issue": "",
"pages": "45--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Josef R. Novak, Nobuaki Minematsu, and Keikichi Hirose. 2012. WFST-based grapheme-to-phoneme conversion: Open source tools for alignment, model-building and decoding. In Proceedings of the 10th International Workshop on Finite State Methods and Natural Language Processing, pages 45-49, Donostia-San Sebasti\u00e1n. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "An MDL-based approach to extracting subword units for grapheme-to-phoneme conversion",
"authors": [
{
"first": "Sravana",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Goldsmith",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "713--716",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sravana Reddy and John Goldsmith. 2010. An MDL-based approach to extracting subword units for grapheme-to-phoneme conversion. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 713-716, Los Angeles, California. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems",
"volume": "27",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, volume 27. Curran Associates, Inc.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "ESPnet: End-to-end speech processing toolkit",
"authors": [
{
"first": "Shinji",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "Takaaki",
"middle": [],
"last": "Hori",
"suffix": ""
},
{
"first": "Shigeki",
"middle": [],
"last": "Karita",
"suffix": ""
},
{
"first": "Tomoki",
"middle": [],
"last": "Hayashi",
"suffix": ""
},
{
"first": "Jiro",
"middle": [],
"last": "Nishitoba",
"suffix": ""
},
{
"first": "Yuya",
"middle": [],
"last": "Unno",
"suffix": ""
},
{
"first": "Nelson",
"middle": [
"Enrique"
],
"last": "Yalta Soplin",
"suffix": ""
},
{
"first": "Jahn",
"middle": [],
"last": "Heymann",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Wiesner",
"suffix": ""
},
{
"first": "Nanxin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Adithya",
"middle": [],
"last": "Renduchintala",
"suffix": ""
},
{
"first": "Tsubasa",
"middle": [],
"last": "Ochiai",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. Interspeech 2018",
"volume": "",
"issue": "",
"pages": "2207--2211",
"other_ids": {
"DOI": [
"10.21437/Interspeech.2018-1456"
]
},
"num": null,
"urls": [],
"raw_text": "Shinji Watanabe, Takaaki Hori, Shigeki Karita, Tomoki Hayashi, Jiro Nishitoba, Yuya Unno, Nelson Enrique Yalta Soplin, Jahn Heymann, Matthew Wiesner, Nanxin Chen, Adithya Renduchintala, and Tsubasa Ochiai. 2018. ESPnet: End-to-end speech processing toolkit. In Proc. Interspeech 2018, pages 2207-2211.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Improving end-to-end speech recognition with pronunciation-assisted sub-word modeling",
"authors": [
{
"first": "Hainan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Shuoyang",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Shinji",
"middle": [],
"last": "Watanabe",
"suffix": ""
}
],
"year": 2019,
"venue": "ICASSP 2019 -2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "7110--7114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hainan Xu, Shuoyang Ding, and Shinji Watanabe. 2019. Improving end-to-end speech recognition with pronunciation-assisted sub-word modeling. ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7110-7114.",
"links": null
}
},
"ref_entries": {
"TABREF2": {
"num": null,
"html": null,
"text": "Progressive ensembling results, with top-performing components",
"type_str": "table",
"content": "<table/>"
},
"TABREF4": {
"num": null,
"html": null,
"text": "Results for ensembling with edit-distance tie-breaking",
"type_str": "table",
"content": "<table/>"
}
}
}
}