{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:31:31.240517Z"
},
"title": "CLUZH at SIGMORPHON 2020 Shared Task on Multilingual Grapheme-to-Phoneme Conversion",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Makarov",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Zurich",
"location": {
"country": "Switzerland"
}
},
"email": "[email protected]"
},
{
"first": "Simon",
"middle": [],
"last": "Clematide",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Zurich",
"location": {
"country": "Switzerland"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes the submission by the team from the Institute of Computational Linguistics, Zurich University, to the Multilingual Grapheme-to-Phoneme Conversion (G2P) Task of the SIGMORPHON 2020 challenge. The submission adapts our system from the 2018 edition of the SIGMORPHON shared task. Our system is a neural transducer that operates over explicit edit actions and is trained with imitation learning. It is well-suited for morphological string transduction partly because it exploits the fact that the input and output character alphabets overlap. The challenge posed by G2P has been to adapt the model and the training procedure to work with disjoint alphabets. We adapt the model to use substitution edits and train it with a weighted finitestate transducer acting as the expert policy. An ensemble of such models produces competitive results on G2P. Our submission ranks second out of 23 submissions by a total of nine teams.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes the submission by the team from the Institute of Computational Linguistics, Zurich University, to the Multilingual Grapheme-to-Phoneme Conversion (G2P) Task of the SIGMORPHON 2020 challenge. The submission adapts our system from the 2018 edition of the SIGMORPHON shared task. Our system is a neural transducer that operates over explicit edit actions and is trained with imitation learning. It is well-suited for morphological string transduction partly because it exploits the fact that the input and output character alphabets overlap. The challenge posed by G2P has been to adapt the model and the training procedure to work with disjoint alphabets. We adapt the model to use substitution edits and train it with a weighted finitestate transducer acting as the expert policy. An ensemble of such models produces competitive results on G2P. Our submission ranks second out of 23 submissions by a total of nine teams.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "G2P requires mapping a sequence of characters in some language into a sequence of International Phonetic Alphabet (IPA) symbols, which represent the pronunciation of this input character sequence in some abstract way (not necessarily phonemic, despite the name of the task) ( Figure 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 276,
"end": 284,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Multilingual G2P is Task I of this year's SIG-MORPHON challenge. It features fifteen languages from various phylogenetic families and written in different scripts. We refer the reader to Gorman et al. (2020) for an overview of the language data. Each language comes with 3,600 training and 450 development set examples. It is permitted to use external resources as well as to build a single multilingual model.",
"cite_spans": [
{
"start": 187,
"end": 207,
"text": "Gorman et al. (2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We participate in this shared task with an adaptation of our SIGMORPHON 2018 system (Makarov fathaigh \u2192 /fa:/ (\"giants\") Irish of Cois Fhairrge (de Bhaldraithe, 1953) and Clematide, 2018b), which was particularly successful in type-level morphological inflection generation. Our system is a neural transducer that operates over explicit edit actions and is trained with imitation learning (Daum\u00e9 III et al., 2009; Ross et al., 2011; Chang et al., 2015, IL) . It has a number of useful inductive biases, one of which is the familiar bias towards copying the input (implemented as the traditional copy edit). This is particularly useful for morphological string transduction problems, which typically involve small and local edits and where most of the input is preserved in the output. This contrasts with models that rely purely on generating characters such as generic encoder-decoder models, which as a result suffer, particularly on smaller-sized datasets.",
"cite_spans": [
{
"start": 389,
"end": 413,
"text": "(Daum\u00e9 III et al., 2009;",
"ref_id": "BIBREF3"
},
{
"start": 414,
"end": 432,
"text": "Ross et al., 2011;",
"ref_id": "BIBREF15"
},
{
"start": 433,
"end": 456,
"text": "Chang et al., 2015, IL)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Copying requires that the input and output character alphabets overlap, preferably substantially. This also allows our IL training to leverage a simple-to-implement expert policy (which during training provides demonstrations to the learner of how to optimally solve the task). The optimal completion of the target given the prediction generated so far during training requires finding edits that would extend the prediction so that the Levenshtein distance (Levenshtein, 1966) between the target and the partial prediction + the future suffix is minimized. Unfortunately, this objective alone would not discriminate between multiple edit action sequences that relate the input and the partial prediction + the future suffix. To address this spurious ambiguity, our IL training adds edit sequence scores, computed using traditional costs, 1 into the objective. This naturally encourages the system to copy, however this would fail on any editing problem with disjoint alphabets.",
"cite_spans": [
{
"start": 458,
"end": 477,
"text": "(Levenshtein, 1966)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "G2P poses an interesting challenge for a system like ours. On the one hand, G2P shares many similarities with morphological string transduction: The changes are mostly local, it would suffice to perform traditional left-to-right transduction, and a substantial part of the work is arguably applying equivalence rules (e.g. the German letter \"g\" most often converts to /g/, \"a\" to /a/ or /a:/), which is similar to copying. Yet, a general solution to G2P cannot rely on overlapping alphabets since many scripts do not share many symbols, if any at all, with IPA (e.g. Korean or Georgian).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our solution adapts the model to use substitution edits and trains it with a weighted finite-state transducer acting as the expert policy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The underlying model is a neural transducer introduced in Aharoni and Goldberg (2017) . It defines a conditional distribution over traditional edits",
"cite_spans": [
{
"start": 58,
"end": 85,
"text": "Aharoni and Goldberg (2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model description",
"sec_num": "2"
},
{
"text": "p \u03b8 (y, a | x) = |a| j=1 p \u03b8 (a j | a <j , x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model description",
"sec_num": "2"
},
{
"text": ", where x is an input sequence of graphemes and a = a 1 . . . a |a| is an edit action sequence. (The output sequence of IPA symbols y is deterministically computed from x and a.) The model is equipped with a long short-term memory (LSTM) decoder and a bidirectional LSTM encoder (Graves and Schmidhuber, 2005) . The challenge is training this model: Due to the recurrent decoder, it cannot be trained with exact marginal likelihood unlike the more familiar weighted finite-state transducer (Mohri, 2004; Eisner, 2002, WFST) or its neuralizations (Yu et al., 2016) . For a more detailed description of the model, we refer the reader to Makarov and Clematide (2018a). 2 IL training Makarov and Clematide (2018a) propose training the model using IL, a general model fitting framework for sequential problems over exponentially sized output spaces. IL has been applied successfully to natural language processing (NLP) problems, e.g. transition-based parsing (Goldberg and Nivre, 2012) and language generation (Welleck et al., 2019) . IL relies on the availability of demonstrations of how the task can optimally",
"cite_spans": [
{
"start": 279,
"end": 309,
"text": "(Graves and Schmidhuber, 2005)",
"ref_id": "BIBREF8"
},
{
"start": 490,
"end": 503,
"text": "(Mohri, 2004;",
"ref_id": "BIBREF13"
},
{
"start": 504,
"end": 523,
"text": "Eisner, 2002, WFST)",
"ref_id": null
},
{
"start": 546,
"end": 563,
"text": "(Yu et al., 2016)",
"ref_id": "BIBREF18"
},
{
"start": 955,
"end": 981,
"text": "(Goldberg and Nivre, 2012)",
"ref_id": "BIBREF6"
},
{
"start": 1006,
"end": 1028,
"text": "(Welleck et al., 2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model description",
"sec_num": "2"
},
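{
"text": "To make the action-level factorization concrete, the following sketch (our illustration, not the authors' code) shows how an edit action sequence deterministically rewrites the input grapheme sequence x into the output symbol sequence y; the action inventory (COPY, DEL, INS, SUB) is a simplification of the one the transducer is actually defined over, and the IPA symbols are our rendering of the example from Section 2.1.\n\ndef apply_actions(x, actions):\n    # Deterministically compute y from x and an edit action sequence a.\n    y, i = [], 0  # i indexes the input character currently attended to\n    for a in actions:\n        op = a[0]\n        if op == 'COPY':    # write the attended input character and advance\n            y.append(x[i]); i += 1\n        elif op == 'DEL':   # consume the input character, write nothing\n            i += 1\n        elif op == 'INS':   # write a character without advancing over the input\n            y.append(a[1])\n        elif op == 'SUB':   # write a character and advance over the input\n            y.append(a[1]); i += 1\n    return ' '.join(y)\n\n# Russian \u043a\u0438\u0442 -> 'k \u02b2 i t' with the edit sequence discussed in Section 2.1\nprint(apply_actions('\u043a\u0438\u0442', [('SUB', 'k'), ('INS', '\u02b2'), ('SUB', 'i'), ('SUB', 't')]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model description",
"sec_num": "2"
},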
{
"text": "Figure 2: Stochastic edit distance (Ristad and Yianilos, 1998): a memoryless probabilistic FST with halting probability p(#), deletion arcs \u03a3 : \u03f5 / p(DEL(\u03a3)), insertion arcs \u03f5 : \u2126 / p(INS(\u2126)), and substitution arcs \u03a3 : \u2126 / p(SUB(\u03a3, \u2126)). \u03a3 and \u2126 stand for any input and output symbol, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model description",
"sec_num": "2"
},
{
"text": "be solved given any configuration. Due to the nature of many NLP problems, such demonstrations can often be provided by a rule-based program (known as expert policy). Makarov and Clematide (2018a) use a combination of Levenshtein distance and edit sequence cost as the task objective (\u03b2 ED(\u0177, y) + ED(x,\u0177), \u03b2 \u2265 1) and devise an expert policy for it. Given a target sequence y, a partially completed prediction y 1:n , and the remaining input sequence x k:l , the expert needs to (1) identify the set of target suffixes y j:m that when appended to\u0177 1:n , lead to a prediction with minimum Levenshtein distance from the target, and (2) check which of the edit sequences producing those suffixes have the lowest cost, i.e. minimum Levenshtein distance from the remaining input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model description",
"sec_num": "2"
},
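{
"text": "To illustrate step (1), the sketch below (ours, with unit costs and with the space-separated IPA symbols collapsed into plain characters for brevity) uses the Levenshtein prefix matrix to find the target suffixes that can still complete the current prediction with minimal distance from the target.\n\ndef levenshtein_prefixes(pred, target):\n    # D[i][j] = edit distance between pred[:i] and target[:j] under unit costs.\n    n, m = len(pred), len(target)\n    D = [[0] * (m + 1) for _ in range(n + 1)]\n    for i in range(n + 1):\n        for j in range(m + 1):\n            if i == 0:\n                D[i][j] = j\n            elif j == 0:\n                D[i][j] = i\n            else:\n                D[i][j] = min(D[i-1][j] + 1, D[i][j-1] + 1,\n                              D[i-1][j-1] + (pred[i-1] != target[j-1]))\n    return D\n\ndef optimal_suffixes(pred, target):\n    # Target suffixes that, appended to pred, keep the final distance minimal.\n    last = levenshtein_prefixes(pred, target)[len(pred)]\n    best = min(last)\n    return [target[j:] for j, d in enumerate(last) if d == best]\n\n# French example from Section 2.1: prediction 'abZe' vs target 'abZEkt'\nprint(optimal_suffixes('abZe', 'abZEkt'))  # ['Ekt', 'kt']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model description",
"sec_num": "2"
},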
{
"text": "The second part is crucial for training accurate models especially in the limited resource setting, as it reduces spurious ambiguity arising under the first part of the objective alone. It is also the second part of the training objective that hinges on the overlap of the input and output alphabets, as this permits minimization using the edit distance dynamic program with traditional costs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model description",
"sec_num": "2"
},
{
"text": "The adaptation is two-fold: First, we introduce substitution edits, which have previously not been employed to keep the total number of edit actions to a minimum. For each output character c, there is now a substitution action SUBS[c] which substitutes c for any input character x.",
"cite_spans": [
{
"start": 227,
"end": 234,
"text": "SUBS[c]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation to G2P",
"sec_num": "2.1"
},
{
"text": "When the alphabets are disjoint, the completing edit sequences cannot be very informatively scored using traditional edit costs. For example, for the data sample \u043a\u0438\u0442 \u2192 /k j it/ (Russian: \"whale\"), we would like the following most natural edit sequence to attain the lowest cost:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation to G2P",
"sec_num": "2.1"
},
{
"text": "SUBS[k], INS[ j ], SUBS[i], SUBS[t].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation to G2P",
"sec_num": "2.1"
},
{
"text": "Yet, it is clear that under traditional costs, this sequence attains the same cost as any other that consists of three substitutions and one insertion. Our solution to this is to learn costs from the training data to ensure an intuitive ranking of edit sequences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation to G2P",
"sec_num": "2.1"
},
{
"text": "SED policy Learning costs as well as computing string distance can be achieved with a very simple WFST: Stochastic Edit Distance (Ristad and Yianilos, 1998, SED), which is a probabilistic version of Levenshtein distance (Fig. 2) . We use traditional multinomial parameterization.",
"cite_spans": [],
"ref_spans": [
{
"start": 220,
"end": 228,
"text": "(Fig. 2)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Adaptation to G2P",
"sec_num": "2.1"
},
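{
"text": "As a minimal illustration of the multinomial parameterization (the parameter values and dictionary layout below are hypothetical, not the learned ones), the probability of one SED alignment is simply the product of its edit probabilities times the halting probability p(#):\n\nimport math\n\n# Hypothetical SED parameters: one probability per edit, plus the halting probability '#'.\nsed = {\n    ('SUB', '\u043a', 'k'): 0.05, ('SUB', '\u0438', 'i'): 0.04, ('SUB', '\u0442', 't'): 0.05,\n    ('INS', '\u02b2'): 0.01, ('DEL', '\u0438'): 0.02, '#': 0.10,\n}\n\ndef neg_log_prob(alignment, params):\n    # Negative log joint probability of one SED alignment (a sequence of edits).\n    score = -math.log(params['#'])  # the machine halts after the last edit\n    for edit in alignment:\n        score += -math.log(params[edit])\n    return score\n\nprint(neg_log_prob([('SUB', '\u043a', 'k'), ('INS', '\u02b2'), ('SUB', '\u0438', 'i'), ('SUB', '\u0442', 't')], sed))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation to G2P",
"sec_num": "2.1"
},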
{
"text": "Before starting training the neural transducer, we train a SED model using the Expectation-Maximization algorithm (Dempster et al., 1977) . We use the following update in the M-step:",
"cite_spans": [
{
"start": 114,
"end": 137,
"text": "(Dempster et al., 1977)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation to G2P",
"sec_num": "2.1"
},
{
"text": "\u03b8 (t+1) \u221d max(0, \u03b8 + \u03b1),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation to G2P",
"sec_num": "2.1"
},
{
"text": "where \u03b8 is the unnormalized weight computed in the E-step and 0 < \u03b1 < 1 is a sparse Dirichlet prior parameter associated with this edit. This corresponds to sparse regularization via Dirichlet prior (Johnson et al., 2007) , which results in many edits having zero probability. We found this training to lead to more accurate SED models. Furthermore, it dramatically reduces the size of the edit action set that the neural transducer is defined over.",
"cite_spans": [
{
"start": 199,
"end": 221,
"text": "(Johnson et al., 2007)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation to G2P",
"sec_num": "2.1"
},
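{
"text": "A sketch of the sparsifying M-step (our reading; function and variable names are ours). The paper states the update as proportional to max(0, \u03b8 + \u03b1); the standard MAP form of sparse Dirichlet regularization (Johnson et al., 2007) shifts the expected count by \u03b1 - 1, which is what the sketch implements, since it is that shift that drives rarely used edits to exactly zero probability.\n\ndef sparse_m_step(expected_counts, alpha):\n    # Shift expected edit counts by (alpha - 1), clip at zero, renormalize;\n    # edits whose clipped weight is zero are dropped from the action set.\n    clipped = {e: max(0.0, c + alpha - 1.0) for e, c in expected_counts.items()}\n    total = sum(clipped.values())\n    return {e: w / total for e, w in clipped.items() if w > 0.0}\n\n# Toy expected counts collected in the E-step\ncounts = {('SUB', '\u043a', 'k'): 4.2, ('INS', '\u02b2'): 0.3, ('DEL', '\u0438'): 0.1}\nprint(sparse_m_step(counts, alpha=0.5))  # only the frequent substitution survives",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation to G2P",
"sec_num": "2.1"
},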
{
"text": "SED is integrated into the expert policy. During training, given a configuration consisting of a partial prediction, a remainder of the input, and the target, we query the expert policy for next optimal edits. We minimize the first part of the objective much like before, and we minimize the second part by decoding SED with the Viterbi algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation to G2P",
"sec_num": "2.1"
},
{
"text": "Suppose we transduce the French word x = abject (\"vile\") into the target y = a b Z E k t. Suppose also that the neural transducer currently attends to character x 4 = e and the prediction built so far during training is\u0177 1:7 = a b Z e (note the error). We query the SED policy to get the optimal edit action whose likelihood we will maximize. First, much like before, we find that the following edits are optimal with respect to the first term of the training objective (call them permissible) as they do not increase the Levenshtein distance of the prediction from the target (assuming all subsequent edits are permissible too):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation to G2P",
"sec_num": "2.1"
},
{
"text": "SUBS[E], INS[E], DEL, SUBS[ ], INS[ ].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation to G2P",
"sec_num": "2.1"
},
{
"text": "(This can be verified by looking at the Levenshtein distance prefix matrix for strings\u0177 1:7 and y.) Each such edit starts a suffix that completes the target, e.g. it is \"E k t\" for SUBS [E] and \" k t\" for SUBS[ ]. Next, we use SED to rank the permissible edits by cost-to-go. For each of the edits and their corresponding suffixes, the expert needs to execute the edit (e.g. SUBS[E] writes E and moves the attention to x 5 = c) and then decode SED with Viterbi on the the remaining input and the suffix (both possibly modified by the edit). In this way, we obtain that SUBS[ ] is the optimal action with the lowest cost-to-go (=negative sum of the log probabilities of the edit and of the Exploration This time, we also train the transducer with an aggressive exploration schedule:",
"cite_spans": [
{
"start": 186,
"end": 189,
"text": "[E]",
"ref_id": null
},
{
"start": 375,
"end": 382,
"text": "SUBS[E]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation to G2P",
"sec_num": "2.1"
},
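{
"text": "Since SED is memoryless, the Viterbi decoding used for the cost-to-go reduces to a weighted edit-distance dynamic program. The sketch below is our reconstruction, not the authors' implementation; the parameters are assumed to be a dictionary as in the earlier sketch, and unseen edits get a small floor probability.\n\nimport math\n\ndef sed_cost(edit, params, floor=1e-6):\n    # Negative log probability of a single edit; unseen edits get a small floor.\n    return -math.log(params.get(edit, floor))\n\ndef viterbi_cost_to_go(rest_input, suffix, params):\n    # Minimum negative-log-probability SED alignment of the remaining input to\n    # the remaining target suffix: a standard weighted edit-distance DP.\n    n, m = len(rest_input), len(suffix)\n    D = [[0.0] * (m + 1) for _ in range(n + 1)]\n    for i in range(1, n + 1):\n        D[i][0] = D[i-1][0] + sed_cost(('DEL', rest_input[i-1]), params)\n    for j in range(1, m + 1):\n        D[0][j] = D[0][j-1] + sed_cost(('INS', suffix[j-1]), params)\n    for i in range(1, n + 1):\n        for j in range(1, m + 1):\n            D[i][j] = min(\n                D[i-1][j] + sed_cost(('DEL', rest_input[i-1]), params),\n                D[i][j-1] + sed_cost(('INS', suffix[j-1]), params),\n                D[i-1][j-1] + sed_cost(('SUB', rest_input[i-1], suffix[j-1]), params))\n    return D[n][m] + sed_cost('#', params)  # include the halting term\n\n# To rank permissible edits: simulate each edit, then call viterbi_cost_to_go\n# on the remaining input and the corresponding target suffix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation to G2P",
"sec_num": "2.1"
},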
{
"text": "p sampling (i) = 1 1+exp(i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation to G2P",
"sec_num": "2.1"
},
{
"text": ", where i is the training epoch number. After a couple of training epochs, training configurations are generated entirely by executing edit actions sampled from the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation to G2P",
"sec_num": "2.1"
},
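{
"text": "Our reading of the schedule, sketched below: at epoch i the expert's action is followed with probability 1 / (1 + exp(i)) during roll-in, otherwise an action sampled from the model is executed, which matches the statement that configurations soon come almost entirely from the model (a hypothetical helper, not the actual training loop).\n\nimport math, random\n\ndef use_expert_action(epoch):\n    # Follow the expert with probability 1 / (1 + exp(epoch)); otherwise execute\n    # an action sampled from the current model during roll-in.\n    return random.random() < 1.0 / (1.0 + math.exp(epoch))\n\n# Probability of following the expert over the first epochs: 0.5, 0.27, 0.12, 0.05\nprint([round(1.0 / (1.0 + math.exp(i)), 2) for i in range(4)])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation to G2P",
"sec_num": "2.1"
},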
{
"text": "We train separate models for each language on the official training data and use the development set for model selection. 4 Our submission does not use any additional lexical resources.",
"cite_spans": [
{
"start": 122,
"end": 123,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Submission details",
"sec_num": "3"
},
{
"text": "For most of the models, we employ Unicode decomposition normalization (NFKD) 5 as a data preprocessing step. Importantly, this helps decomposing Unicode syllable blocks used e.g. in Hangul.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Submission details",
"sec_num": "3"
},
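{
"text": "For illustration, NFKD decomposition with Python's standard unicodedata module splits a precomposed Hangul syllable block into its jamo, which is what makes character-level transduction of Korean feasible (the example word is ours; the actual preprocessing pipeline may differ in details such as the NFKD-to-NFC post-processing mentioned in footnote 5).\n\nimport unicodedata\n\nword = '\ud55c'                                # one precomposed Hangul syllable block\ndecomposed = unicodedata.normalize('NFKD', word)\nprint(len(word), len(decomposed))          # 1 3\nprint([hex(ord(c)) for c in decomposed])   # ['0x1112', '0x1161', '0x11ab']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Submission details",
"sec_num": "3"
},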
{
"text": "The size of the development set is rather small (450 examples), and having examined the data, we suspect that overly relying on the development set for model selection might hurt generalization. For example, the French development set contains three exceptions to the \"ill\"-/j/ equivalence; thus, a single model that achieves a high score on the development set might, in fact, be overfitting. To counter this, we build an eleven-model-strong majorityvote ensemble. Fortunately, training a neural transducer is fast as one epoch takes just about four minutes on average on a single CPU, due to the relatively small number of model parameters. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Submission details",
"sec_num": "3"
},
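{
"text": "Ensembling here is plain majority voting over whole predicted transcriptions; a minimal sketch (ours; ties are broken arbitrarily by Counter, and the paper does not specify tie-breaking):\n\nfrom collections import Counter\n\ndef majority_vote(predictions):\n    # Pick the transcription produced by the largest number of ensemble members.\n    return Counter(predictions).most_common(1)[0][0]\n\n# Eleven hypothetical per-model outputs for one word\noutputs = ['a b Z E k t'] * 7 + ['a b Z e k t'] * 3 + ['a b Z E t']\nprint(majority_vote(outputs))  # 'a b Z E k t'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Submission details",
"sec_num": "3"
},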
{
"text": "Our system ranks second among 23 submissions by a total of nine teams (Table 1) . It ties for first place on four languages (Hindi, Hungarian, Icelandic, Lithuanian) and outperforms every other submission for Armenian. It achieves strong gains over the neural baselines. Ensembling gains us 16% in error reduction compared to test set averages-a substantial improvement. We leave it for future work to see whether dropout and a larger model size could be used instead as effectively as ensembling. Unicode decomposition normalization boosts the performance of our Korean models. 6 On average, at least one model predicts the output correctly for all but 7.93% of all the words (\u22a5)-Adyghe, Lithuanian, and Bulgarian being the most difficult languages. For some languages, WER standard deviation is high, likely confirming our hypothesis that model selection on the small-sized development set would lead to poor generalization. Table 2 shows the most frequent errors of our system for each language and helps to qualitatively assess their strongly varying error profiles. We take a closer look at the errors in French and Korean. Additional lexical information could improve our French models. E.g. the word's lexical category feature and/or morphological segmentation would probably help correctly transduce the word-final \"-ent\" (adverb \"vraiement\" (truly) /...\u00c3/ vs verb \"viennent\" (they come), where the ending is silent). Many errors in French are in English borrowings.",
"cite_spans": [],
"ref_spans": [
{
"start": 70,
"end": 79,
"text": "(Table 1)",
"ref_id": "TABREF1"
},
{
"start": 927,
"end": 934,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
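{
"text": "For reference, the WER reported above is simply the fraction of test words whose predicted transcription differs from the gold one; a toy sketch (ours, not the official evaluation script, which also reports a symbol-level PER):\n\ndef wer(predictions, golds):\n    # Word error rate: share of words whose transcription is not exactly the gold one.\n    wrong = sum(p != g for p, g in zip(predictions, golds))\n    return 100.0 * wrong / len(golds)\n\npreds = ['a b Z E k t', 'k \u02b2 i t', 'o']\ngolds = ['a b Z E k t', 'k \u02b2 i t', '\u0254']\nprint(round(wer(preds, golds), 2))  # 33.33: one of three toy words is wrong",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},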
{
"text": "We look in some detail at the errors on the Korean test data that all or almost all of the individual models of the ensemble make. As expected, lexicalized phenomena contribute most of the errors: vowel length (which is neither phonemic nor phonetically realized in the speech of all except elderly speakers (Sohn, 2001)) and tensification. Vowel length is not indicated in Korean orthography, and neither is tensification (with some exceptions). Knowing whether a word is an English borrowing (e.g. \uc139\uc2a4 seks\u0217 7 (sex)) or whether a word is a compound and where the morpheme boundary lies (\ucd08\uc2b9\ub2ec ch'os\u0217ng-tal (new moon)) could help predict non-automatic tensification correctly in a small number of cases ( ady \u02bc/\u03f5/17 \u0259\u2022/\u03f5/9 \u0283/\u0282/8 \u03f5/\u2022\u0259/7 j\u2022/\u03f5/6 \u03f5/\u02bc/6 \u03f5/\u0259\u2022/5 \u026e/l/5 \u02d0/\u03f5/5 a/\u0259/5 arm \u0254/o/17 \u03f5/\u0259\u2022/12 \u25cc\u0361 /\u2022/12 \u0259\u2022/\u03f5/3 t/d/3 \u0261/k\u02b0/2 \u0283\u02b0/\u0292/2 \u025b/j/2 \u03c7/\u0281/2 t\u0361 \u0283/d\u2022\u0292/1 bul r/\u027e/26 o/\u0254/22 \u0259/a/14 a/\u0259/12 \u25cc\u032a /\u03f5/9 \u03f5/\u25cc\u032a /9 a/\u0250/7 \u026b/l/5 \u0250/\u0259/5 \u03f5/\u02b2/5 dut \u0259/\u025b/9 \u03f5/j\u2022/4 a\u02d0/\u0251/4 e\u02d0/\u0259/4 \u0259/e\u02d0/3 How good is SED policy? Somewhat surprisingly, using SED as part of the expert policy results in competitive performance. Yet, SED is a very crude model (e.g. because of the lack of context, when used as a conditional model, SED assigns less probability to any edit sequence containing insertions than the same sequence but with all the insertions removed; this e.g. makes it unusable as a standalone model for G2P). On top of this, we also do not use learned roll-out, which would be recommended when training with a sub-optimal expert (Chang et al., 2015) . We leave it for future work to examine whether the neural transducer's performance on G2P would improve from replacing SED with a more powerful model.",
"cite_spans": [
{
"start": 1497,
"end": 1517,
"text": "(Chang et al., 2015)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": null
},
{
"text": "This presents the approach taken by the CLUZH team to solving the SIGMORPHON 2020 Multilingual Grapheme-to-Morpheme Conversion challenge. Our submission is based on our successful SIGMORPHON 2018 system, which is a majorityvote ensemble of neural transducers trained with imitation learning. We adapt the 2018 system to work on transduction problems with disjoint input and output alphabets. We add substitution actions (not available in previous versions of the system) and employ a memoryless probabilistic finite-state transducer to define the expert policy for the imitation learning. We use majority-vote ensembling to counter the overfitting to the small development sets. These simple modifications result in a highly 8 https://github.com/eddieantonio/ocreval competitive performance even without the use of any exernal resources or learning a single multilingual model. Our ensemble ranks second out of 23 submissions by a total of nine teams. Our error analysis indicates that addressing many of the errors requires additional information such as knowing the word's lexical category, morphological segmentation, or etymology. We will make our code publicly available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Copy costs zero, all other edits cost one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The model uses shared input character / action embeddings of size 100 and one-layer LSTMs with hidden-state size 200.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This particular SED is trained on the French training data for 3 EM epochs with Dirichlet prior \u03b1 = 1e-05 for all edits.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We train the SED model for 20 epochs of EM with \u03b1 = 0.25 for insertions and 0.5 for all other edits. We train the neural transducer for a maximum of 60 epochs with a patience of 12 epochs. We use mini-batches of size 5. We decode using beam search with beam width 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Using NFKD instead of NFD was a bit unfortunate because some superscript diacritics get normalized to their regular size. Luckily, as pointed out to us by Kyle Gorman, there is a unique mapping from NFKD to NFC for the spaced output format of this task. See http://www.unicode.org/reports/tr15/ for Unicode normalization forms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In fact, in a post-submission analysis, we see a strong gain from decomposition only for Korean (17 percentage points on average). For the other languages, it has no impact on performance on average.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This uses McCune-Reischauer transliteration of Korean.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the organizers for their great effort in these turbulent times. We thank Kyle Gorman for taking the time to help us with our Unicode normalization problem. This work has been supported by the Swiss National Science Foundation under grant CR-SII5 173719.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgment",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Morphological inflection generation with hard monotonic attention",
"authors": [
{
"first": "Roee",
"middle": [],
"last": "Aharoni",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2017,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roee Aharoni and Yoav Goldberg. 2017. Morphologi- cal inflection generation with hard monotonic atten- tion. In ACL.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Gaeilge Chois Fhairrge: An Deilbh\u00edocht. Institi\u00faid Ard-L\u00e9inn Bhaile\u00c1tha Cliath",
"authors": [
{
"first": "Bhaldraithe",
"middle": [],
"last": "Tom\u00e1s De",
"suffix": ""
}
],
"year": 1953,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom\u00e1s de Bhaldraithe. 1953. Gaeilge Chois Fhairrge: An Deilbh\u00edocht. Institi\u00faid Ard-L\u00e9inn Bhaile\u00c1tha Cliath.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning to search better than your teacher",
"authors": [
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Akshay",
"middle": [],
"last": "Krishnamurthy",
"suffix": ""
},
{
"first": "Alekh",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daume",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Langford",
"suffix": ""
}
],
"year": 2015,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai-Wei Chang, Akshay Krishnamurthy, Alekh Agar- wal, Hal Daume III, and John Langford. 2015. Learning to search better than your teacher. In ICML.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Search-based structured prediction",
"authors": [
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Langford",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2009,
"venue": "Machine learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hal Daum\u00e9 III, John Langford, and Daniel Marcu. 2009. Search-based structured prediction. Machine learning.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Maximum likelihood from incomplete data via the EM algorithm",
"authors": [
{
"first": "P",
"middle": [],
"last": "Arthur",
"suffix": ""
},
{
"first": "Nan",
"middle": [
"M"
],
"last": "Dempster",
"suffix": ""
},
{
"first": "Donald B",
"middle": [],
"last": "Laird",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rubin",
"suffix": ""
}
],
"year": 1977,
"venue": "Journal of the Royal Statistical Society: Series B (Methodological)",
"volume": "",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arthur P Dempster, Nan M Laird, and Donald B Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statisti- cal Society: Series B (Methodological), 39(1).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Parameter estimation for probabilistic finite-state transducers",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2002,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Eisner. 2002. Parameter estimation for proba- bilistic finite-state transducers. In ACL.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A dynamic oracle for arc-eager dependency parsing",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2012,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg and Joakim Nivre. 2012. A dynamic or- acle for arc-eager dependency parsing. In COLING.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The SIGMORPHON 2020 shared task on multilingual grapheme-to-phoneme conversion",
"authors": [
{
"first": "Kyle",
"middle": [],
"last": "Gorman",
"suffix": ""
},
{
"first": "F",
"middle": [
"E"
],
"last": "Lucas",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Ashby",
"suffix": ""
},
{
"first": "Arya",
"middle": [
"D"
],
"last": "Goyzueta",
"suffix": ""
},
{
"first": "Shijie",
"middle": [],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "You",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyle Gorman, Lucas F.E. Ashby, Aaron Goyzueta, Arya D. McCarthy, Shijie Wu, and Daniel You. 2020. The SIGMORPHON 2020 shared task on multilin- gual grapheme-to-phoneme conversion. In Proceed- ings of the 17th SIGMORPHON Workshop on Com- putational Research in Phonetics, Phonology, and Morphology.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Framewise phoneme classification with bidirectional LSTM and other neural network architectures",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 2005,
"venue": "Neural Networks",
"volume": "18",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves and J\u00fcrgen Schmidhuber. 2005. Frame- wise phoneme classification with bidirectional LSTM and other neural network architectures. Neu- ral Networks, 18(5).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Bayesian inference for PCFGs via Markov Chain Monte Carlo",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Griffiths",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2007,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Johnson, Thomas L Griffiths, and Sharon Gold- water. 2007. Bayesian inference for PCFGs via Markov Chain Monte Carlo. In NAACL-HLT.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Binary codes capable of correcting deletions, insertions, and reversals. Soviet physics doklady",
"authors": [
{
"first": "",
"middle": [],
"last": "Vladimir I Levenshtein",
"suffix": ""
}
],
"year": 1966,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vladimir I Levenshtein. 1966. Binary codes capable of correcting deletions, insertions, and reversals. So- viet physics doklady, 10(8).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Imitation learning for neural morphological string transduction",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Makarov",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Clematide",
"suffix": ""
}
],
"year": 2018,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Makarov and Simon Clematide. 2018a. Imita- tion learning for neural morphological string trans- duction. In EMNLP.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "UZH at CoNLL-SIGMORPHON 2018 shared task on universal morphological reinflection",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Makarov",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Clematide",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the CoNLL SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Makarov and Simon Clematide. 2018b. UZH at CoNLL-SIGMORPHON 2018 shared task on uni- versal morphological reinflection. Proceedings of the CoNLL SIGMORPHON 2018 Shared Task: Uni- versal Morphological Reinflection.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Weighted finite-state transducer algorithms. An overview. In Formal Languages and Applications, volume 148 of Studies in Fuzziness and Soft Computing",
"authors": [
{
"first": "Mehryar",
"middle": [],
"last": "Mohri",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mehryar Mohri. 2004. Weighted finite-state transducer algorithms. An overview. In Formal Languages and Applications, volume 148 of Studies in Fuzziness and Soft Computing. Springer Berlin Heidelberg.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Learning string-edit distance",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Sven Ristad",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Peter N Yianilos",
"suffix": ""
}
],
"year": 1998,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Sven Ristad and Peter N Yianilos. 1998. Learning string-edit distance. IEEE Transactions on Pattern Analysis and Machine Intelligence.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A reduction of imitation learning and structured prediction to no-regret online learning",
"authors": [
{
"first": "St\u00e9phane",
"middle": [],
"last": "Ross",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Gordon",
"suffix": ""
},
{
"first": "Drew",
"middle": [],
"last": "Bagnell",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the fourteenth international conference on artificial intelligence and statistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "St\u00e9phane Ross, Geoffrey Gordon, and Drew Bagnell. 2011. A reduction of imitation learning and struc- tured prediction to no-regret online learning. In Pro- ceedings of the fourteenth international conference on artificial intelligence and statistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The Korean language",
"authors": [
{
"first": "",
"middle": [],
"last": "Ho-Min Sohn",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ho-Min Sohn. 2001. The Korean language. Cam- bridge University Press.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Non-monotonic sequential text generation",
"authors": [
{
"first": "Sean",
"middle": [],
"last": "Welleck",
"suffix": ""
},
{
"first": "Kiant\u00e9",
"middle": [],
"last": "Brantley",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2019,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sean Welleck, Kiant\u00e9 Brantley, Hal Daum\u00e9 III, and Kyunghyun Cho. 2019. Non-monotonic sequential text generation. In ICML.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Online segment to segment neural transduction",
"authors": [
{
"first": "Lei",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Buys",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lei Yu, Jan Buys, and Phil Blunsom. 2016. Online seg- ment to segment neural transduction. In EMNLP.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "Example of G2P.",
"uris": null
},
"TABREF0": {
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table",
"text": "Viterbi path) of 15.28 (vs 17.65 for SUBS[E], 21.09 for INS[E], 17.31 for DEL, and 17.31 for INS[ ]). 3"
},
"TABREF1": {
"content": "<table><tr><td/><td>CLUZH</td><td>ENS.</td><td/><td colspan=\"2\">CLUZH WER AVG</td><td/><td>LSTM</td><td>TF</td><td colspan=\"2\">BEST BY OTHERS</td><td/></tr><tr><td>LNG ady</td><td>WER 27.11 6.27 PER</td><td colspan=\"10\">#C #D WER 0 11 30.32 1.97 \u00b1 \u2206, % -12 16.89 28.00 28.44 24.67 WER WER \u2206, % \u22a5 WER 9 5.76 PER \u2206, % 8</td></tr><tr><td>arm</td><td>12.22 2.82</td><td colspan=\"3\">0 11 14.73 0.76</td><td>-21</td><td colspan=\"4\">8.89 14.67 14.22 12.67</td><td>-4 2.91</td><td>-3</td></tr><tr><td>bul</td><td>23.33 4.70</td><td colspan=\"3\">0 11 30.81 2.78</td><td colspan=\"5\">-32 13.78 31.11 34.00 22.22</td><td>5 4.70</td><td>0</td></tr><tr><td>dut</td><td>14.44 2.51</td><td>9</td><td colspan=\"2\">2 18.30 1.44</td><td>-27</td><td colspan=\"4\">9.33 16.44 15.78 13.56</td><td>6 2.36</td><td>6</td></tr><tr><td>fre</td><td>6.89 1.56</td><td>2</td><td>9</td><td>8.12 0.54</td><td>-18</td><td>3.56</td><td>6.22</td><td>6.89</td><td>5.11</td><td>26 1.16</td><td>26</td></tr><tr><td>geo</td><td>27.33 4.83</td><td colspan=\"3\">0 11 29.11 0.86</td><td>-7</td><td colspan=\"4\">8.89 26.44 28.00 24.89</td><td>9 4.57</td><td>5</td></tr><tr><td>gre</td><td colspan=\"2\">16.44 2.68 11</td><td colspan=\"2\">0 19.60 1.80</td><td>-19</td><td colspan=\"4\">7.33 18.89 18.89 14.44</td><td>12 2.42</td><td>10</td></tr><tr><td>hin</td><td>5.11 1.20</td><td colspan=\"2\">0 11</td><td>7.13 0.55</td><td>-40</td><td>2.67</td><td>6.67</td><td>9.56</td><td>5.11</td><td>0 1.20</td><td>0</td></tr><tr><td>hun</td><td>4.00 1.02</td><td colspan=\"2\">0 11</td><td>4.77 0.60</td><td>-19</td><td>2.89</td><td>5.33</td><td>5.33</td><td>4.00</td><td>0 0.92</td><td>10</td></tr><tr><td>ice</td><td>9.11 1.90</td><td colspan=\"3\">0 11 10.00 0.53</td><td>-10</td><td colspan=\"3\">5.78 10.00 10.22</td><td>9.11</td><td>0 1.83</td><td>4</td></tr><tr><td>jpn</td><td>6.00 1.58</td><td colspan=\"2\">0 11</td><td>7.19 0.30</td><td>-20</td><td>4.89</td><td>7.56</td><td>7.33</td><td>4.89</td><td>19 1.16</td><td>27</td></tr><tr><td>kor</td><td>28.44 4.88</td><td colspan=\"3\">0 11 28.26 1.39</td><td colspan=\"5\">1 11.78 46.89 43.78 24.00</td><td>16 4.05</td><td>17</td></tr><tr><td>lit</td><td>18.67 3.27</td><td colspan=\"3\">0 11 21.54 0.82</td><td colspan=\"5\">-15 14.22 19.11 20.67 18.67</td><td>0 3.38</td><td>-3</td></tr><tr><td>rum</td><td>11.33 2.68</td><td colspan=\"3\">0 11 13.66 1.11</td><td>-21</td><td colspan=\"3\">7.11 10.67 12.00</td><td>9.78</td><td>14 2.23</td><td>17</td></tr><tr><td>vie</td><td>1.56 0.35</td><td colspan=\"2\">0 11</td><td>1.60 0.21</td><td>-2</td><td>0.89</td><td>4.67</td><td>7.56</td><td>0.89</td><td>43 0.27</td><td>23</td></tr><tr><td colspan=\"5\">AVG 14.13 2.82 1.5 9.5 16.34 1.05</td><td>-16</td><td colspan=\"4\">7.93 16.84 17.51 12.93</td><td>8 2.59</td><td>8</td></tr></table>",
"num": null,
"html": null,
"type_str": "table",
"text": "Overview of the test results. \u2206 gives relative error difference compared to our submission CLUZH. #C=number of NFC models in the ensemble. #D=number of NFKD models in the ensemble. CLUZH WER AVG=average WER, standard deviation, and relative error difference of the average computed over individual models. \u22a5=lower-bound on WER: correct if predicted by any individual model. LSTM=official seq2seq LSTM baseline."
},
"TABREF3": {
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table",
"text": "Ten most frequent errors per language. Notation: prediction / gold / error frequency. \u2022 denotes whitespace.Computed using the UTF-8 aware version of the ISRI Analytic Tools for OCR Evaluation. 8"
}
}
}
}