|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:30:52.464107Z" |
|
}, |
|
"title": "Comparative Error Analysis in Neural and Finite-state Models for Unsupervised Character-level Transduction", |
|
"authors": [ |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Ryskina", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Carnegie Mellon University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Carnegie Mellon University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Taylor", |
|
"middle": [], |
|
"last": "Berg-Kirkpatrick", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of California", |
|
"location": { |
|
"addrLine": "San Diego" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Gormley", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Carnegie Mellon University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Traditionally, character-level transduction problems have been solved with finite-state models designed to encode structural and linguistic knowledge of the underlying process, whereas recent approaches rely on the power and flexibility of sequence-to-sequence models with attention. Focusing on the less explored unsupervised learning scenario, we compare the two model classes side by side and find that they tend to make different types of errors even when achieving comparable performance. We analyze the distributions of different error classes using two unsupervised tasks as testbeds: converting informally romanized text into the native script of its language (for Russian, Arabic, and Kannada) and translating between a pair of closely related languages (Serbian and Bosnian). Finally, we investigate how combining finite-state and sequence-to-sequence models at decoding time affects the output quantitatively and qualitatively. 1", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Traditionally, character-level transduction problems have been solved with finite-state models designed to encode structural and linguistic knowledge of the underlying process, whereas recent approaches rely on the power and flexibility of sequence-to-sequence models with attention. Focusing on the less explored unsupervised learning scenario, we compare the two model classes side by side and find that they tend to make different types of errors even when achieving comparable performance. We analyze the distributions of different error classes using two unsupervised tasks as testbeds: converting informally romanized text into the native script of its language (for Russian, Arabic, and Kannada) and translating between a pair of closely related languages (Serbian and Bosnian). Finally, we investigate how combining finite-state and sequence-to-sequence models at decoding time affects the output quantitatively and qualitatively. 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Many natural language sequence transduction tasks, such as transliteration or grapheme-to-phoneme conversion, call for a character-level parameterization that reflects the linguistic knowledge of the underlying generative process. Character-level transduction approaches have even been shown to perform well for tasks that are not entirely characterlevel in nature, such as translating between related languages (Pourdamghani and Knight, 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 412, |
|
"end": 443, |
|
"text": "(Pourdamghani and Knight, 2017)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and prior work", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Weighted finite-state transducers (WFSTs) have traditionally been used for such character-level tasks (Knight and Graehl, 1998; Knight et al., 2006) . Their structured formalization makes it easier to encode additional constraints, imposed either 1 Code will be published at https://github.com/ ryskina/error-analysis-sigmorphon2021 \u044d\u0442\u043e \u0442\u043e\u0447\u043d\u043e \u0cae\u0ca8 #$ \u0cb3&$ \u0ca4\u0cc1 3to to4no mana belagitu \u0442\u0435\u0445\u043d\u0438\u0447\u043a\u0430 \u0438 \u0441\u0442\u0440\u0443\u0447\u043d\u0430 \u043d\u0430\u0441\u0442\u0430\u0432\u0430 tehni\u010dko i stru\u010dno obrazovanje Figure 1 : Parallel examples from our test sets for two character-level transduction tasks: converting informally romanized text to its original script (top; examples in Russian and Kannada) and translating between closely related languages (bottom; Bosnian-Serbian). Informal romanization is idiosyncratic and relies on both visual (q \u2192 4) and phonetic (t \u2192 t) character similarity, while translation is more standardized but not fully character-level due to grammatical and lexical differences ('nastava' \u2192 'obrazovanje') between the languages. The lines show character alignment between the source and target side where possible.", |
|
"cite_spans": [ |
|
{ |
|
"start": 102, |
|
"end": 127, |
|
"text": "(Knight and Graehl, 1998;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 128, |
|
"end": 148, |
|
"text": "Knight et al., 2006)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 438, |
|
"end": 446, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction and prior work", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "by the underlying linguistic process (e.g. monotonic character alignment) or by the probabilistic generative model (Markov assumption; Eisner, 2002) . Their interpretability also facilitates the introduction of useful inductive bias, which is crucial for unsupervised training (Ravi and Knight, 2009; Ryskina et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 135, |
|
"end": 148, |
|
"text": "Eisner, 2002)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 277, |
|
"end": 300, |
|
"text": "(Ravi and Knight, 2009;", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 301, |
|
"end": 322, |
|
"text": "Ryskina et al., 2020)", |
|
"ref_id": "BIBREF38" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and prior work", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Unsupervised neural sequence-to-sequence (seq2seq) architectures have also shown impressive performance on tasks like machine translation (Lample et al., 2018) and style transfer (Yang et al., 2018; He et al., 2020) . These models are substantially more powerful than WFSTs, and they successfully learn the underlying patterns from monolingual data without any explicit information about the underlying generative process.", |
|
"cite_spans": [ |
|
{ |
|
"start": 138, |
|
"end": 159, |
|
"text": "(Lample et al., 2018)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 179, |
|
"end": 198, |
|
"text": "(Yang et al., 2018;", |
|
"ref_id": "BIBREF46" |
|
}, |
|
{ |
|
"start": 199, |
|
"end": 215, |
|
"text": "He et al., 2020)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and prior work", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "As the strengths of the two model classes differ, so do their weaknesses: the WFSTs and the seq2seq models are prone to different kinds of errors. On a higher level, it is explained by the structure-power trade-off: while the seq2seq models are better at recovering long-range dependencies and their outputs look less noisy, they also tend to insert and delete words arbitrarily because their alignments are unconstrained. We attribute the errors to the following aspects of the trade-off:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and prior work", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Language modeling capacity: the statistical character-level n-gram language models (LMs) utilized by finite-state approaches are much weaker than the RNN language models with unlimited left context. While a word-level LM can improve the performance of a WFST, it would also restrict the model's ability to handle out-of-vocabulary words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and prior work", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Controllability of learning: more structured models allow us to ensure that the model does not attempt to learn patterns orthogonal to the underlying process. For example, domain imbalance between the monolingual corpora can cause the seq2seq models to exhibit unwanted style transfer effects like inserting frequent target side words arbitrarily.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and prior work", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Search procedure: WFSTs make it easy to perform exact maximum likelihood decoding via shortest-distance algorithm (Mohri, 2009) . For the neural models trained using conventional methods, decoding strategies that optimize for the output likelihood (e.g. beam search with a large beam size) have been shown to be susceptible to favoring empty outputs (Stahlberg and Byrne, 2019) and generating repetitions (Holtzman et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 114, |
|
"end": 127, |
|
"text": "(Mohri, 2009)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 350, |
|
"end": 377, |
|
"text": "(Stahlberg and Byrne, 2019)", |
|
"ref_id": "BIBREF42" |
|
}, |
|
{ |
|
"start": 405, |
|
"end": 428, |
|
"text": "(Holtzman et al., 2020)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and prior work", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Prior work on leveraging the strength of the two approaches proposes complex joint parameterizations, such as neural weighting of WFST arcs or paths (Rastogi et al., 2016; Lin et al., 2019) or encoding alignment constraints into the attention layer of seq2seq models (Aharoni and Goldberg, 2017; Wu et al., 2018; Wu and Cotterell, 2019; Makarov et al., 2017) . We study whether performance can be improved with simpler decodingtime model combinations, reranking and product of experts, which have been used effectively for other model classes (Charniak and Johnson, 2005; Hieber and Riezler, 2015) , evaluating on two unsupervised tasks: decipherment of informal roman-ization (Ryskina et al., 2020) and related language translation (Pourdamghani and Knight, 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 149, |
|
"end": 171, |
|
"text": "(Rastogi et al., 2016;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 172, |
|
"end": 189, |
|
"text": "Lin et al., 2019)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 267, |
|
"end": 295, |
|
"text": "(Aharoni and Goldberg, 2017;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 296, |
|
"end": 312, |
|
"text": "Wu et al., 2018;", |
|
"ref_id": "BIBREF45" |
|
}, |
|
{ |
|
"start": 313, |
|
"end": 336, |
|
"text": "Wu and Cotterell, 2019;", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 337, |
|
"end": 358, |
|
"text": "Makarov et al., 2017)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 543, |
|
"end": 571, |
|
"text": "(Charniak and Johnson, 2005;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 572, |
|
"end": 597, |
|
"text": "Hieber and Riezler, 2015)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 677, |
|
"end": 699, |
|
"text": "(Ryskina et al., 2020)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 733, |
|
"end": 764, |
|
"text": "(Pourdamghani and Knight, 2017)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and prior work", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "While there has been much error analysis for the WFST and seq2seq approaches separately, it largely focuses on the more common supervised case. We perform detailed side-by-side error analysis to draw high-level comparisons between finitestate and seq2seq models and investigate if the intuitions from prior work would transfer to the unsupervised transduction scenario.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and prior work", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We compare the errors made by the finite-state and the seq2seq approaches by analyzing their performance on two unsupervised character-level transduction tasks: translating between closely related languages written in different alphabets and converting informally romanized text into its native script. Both tasks are illustrated in Figure 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 333, |
|
"end": 341, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tasks", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Informal romanization is an idiosyncratic transformation that renders a non-Latin-script language in Latin alphabet, extensively used online by speakers of Arabic (Darwish, 2014) , Russian (Paulsen, 2014) , and many Indic languages (Sowmya et al., 2010) . Figure 1 shows examples of romanized Russian (top left) and Kannada (top right) sentences along with their \"canonicalized\" representations in Cyrillic and Kannada scripts respectively. Unlike official romanization systems such as pinyin, this type of transliteration is not standardized: character substitution choices vary between users and are based on the specific user's perception of how similar characters in different scripts are. Although the substitutions are primarily phonetic (e.g. Russian n /n/ \u2192 n), i.e. based on the pronunciation of a specific character in or out of context, users might also rely on visual similarity between glyphs (e.g. Russian q / > tS j / \u2192 4), especially when the associated phoneme cannot be easily mapped to a Latin-script grapheme (e.g. Arabic /Q/ \u2192 3). To capture this variation, we view the task of decoding informal romanization as a many-to-many character-level decipherment problem.", |
|
"cite_spans": [ |
|
{ |
|
"start": 163, |
|
"end": 178, |
|
"text": "(Darwish, 2014)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 189, |
|
"end": 204, |
|
"text": "(Paulsen, 2014)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 232, |
|
"end": 253, |
|
"text": "(Sowmya et al., 2010)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 256, |
|
"end": 264, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Informal romanization", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The difficulty of deciphering romanization also depends on the type of the writing system the language traditionally uses. In alphabetic scripts, where grapheme-to-phoneme correspondence is mostly one-to-one, there tends to be a one-to-one monotonic alignment between characters in the ro-manized and native script sequences (Figure 1 , top left). Abjads and abugidas, where graphemes correspond to consonants or consonant-vowel syllables, increasingly use many-to-one alignment in their romanization (Figure 1 , top right), which makes learning the latent alignments, and therefore decoding, more challenging. In this work, we experiment with three languages spanning over three major types of writing systems-Russian (alphabetic), Arabic (abjad), and Kannada (abugida)-and compare how well-suited character-level models are for learning these varying alignment patterns.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 325, |
|
"end": 334, |
|
"text": "(Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 501, |
|
"end": 510, |
|
"text": "(Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Informal romanization", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "As shown by Pourdamghani and Knight (2017) and Hauer et al. (2014) , character-level models can be used effectively to translate between languages that are closely enough related to have only small lexical and grammatical differences, such as Serbian and Bosnian (Ljube\u0161i\u0107 and Klubi\u010dka, 2014) . We focus on this specific language pair and tie the languages to specific orthographies (Cyrillic for Serbian and Latin for Bosnian), approaching the task as an unsupervised orthography conversion problem. However, the transliteration framing of the translation problem is inherently limited since the task is not truly character-level in nature, as shown by the alignment lines in Figure 1 (bottom) . Even the most accurate transliteration model will not be able to capture non-cognate word translations (Serbian 'nastava' [nastava, 'education, teaching'] \u2192 Bosnian 'obrazovanje' ['education'] ) and the resulting discrepancies in morphological inflection (Serbian -a endings in adjectives agreeing with feminine 'nastava' map to Bosnian -o representing agreement with neuter 'obrazovanje').", |
|
"cite_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 42, |
|
"text": "Pourdamghani and Knight (2017)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 47, |
|
"end": 66, |
|
"text": "Hauer et al. (2014)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 263, |
|
"end": 292, |
|
"text": "(Ljube\u0161i\u0107 and Klubi\u010dka, 2014)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 677, |
|
"end": 694, |
|
"text": "Figure 1 (bottom)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Related language translation", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "One major difference with the informal romanization task is the lack of the idiosyncratic orthography: the word spellings are now consistent across the data. However, since the character-level approach does not fully reflect the nature of the transformation, the model will still have to learn a manyto-many cipher with highly context-dependent character substitutions. Table 1 details the statistics of the splits used for all languages and tasks. Below we describe each dataset in detail, explaining the differences in data split sizes between languages. Additional preprocessing steps applied to all datasets are described in \u00a73.4. 2", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 370, |
|
"end": 377, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Related language translation", |
|
"sec_num": "2.2" |
|
}, |
|
|
{ |
|
"text": "'This is the menu' Figure 2 : A parallel example from the LDC BOLT Arabizi dataset, written in Latin script (source) and converted to Arabic (target) semi-manually. Some source-side segments (in red) are removed by annotators; we use the version without such segments (filtered) for our task. The annotators also standardize spacing on the target side, which results in difference with the source (in blue).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 19, |
|
"end": 27, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Gloss:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Arabic We use the LDC BOLT Phase 2 corpus (Bies et al., 2014; for training and testing the Arabic transliteration models (Figure 2) . The corpus consists of short SMS and chat in Egyptian Arabic represented using Latin script (Arabizi). The corpus is fully parallel: each message is automatically converted into the standardized dialectal Arabic orthography (CODA; Habash et al., 2012) and then manually corrected by human annotators. We split and preprocess the data according to Ryskina et al. (2020) , discarding the target (native script) and source (romanized) parallel sentences to create the source and target monolingual training splits respectively.", |
|
"cite_spans": [ |
|
{ |
|
"start": 42, |
|
"end": 61, |
|
"text": "(Bies et al., 2014;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 365, |
|
"end": 385, |
|
"text": "Habash et al., 2012)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 481, |
|
"end": 502, |
|
"text": "Ryskina et al. (2020)", |
|
"ref_id": "BIBREF38" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 121, |
|
"end": 131, |
|
"text": "(Figure 2)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Gloss:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Russian We use the romanized Russian dataset collected by Ryskina et al. (2020) , augmented with the monolingual Cyrillic data from the Taiga corpus of Shavrina and Shapovalova (2017) (Figure 3 ). The romanized data is split into training, validation, and test portions, and all validation and test sentences are converted to Cyrillic by native speaker annotators. Both the romanized and the nativescript sequences are collected from public posts and comments on a Russian social network vk.com, and they are on average 3 times longer than the messages in the Arabic dataset (Table 1) . However, although both sides were scraped from the same online platform, the relevant Taiga data is collected primarily from political discussion groups, so there is still a substantial domain mismatch between the source and target sides of the data. Table 1 : Dataset splits for each task and language. The source and target train data are monolingual, and the validation and test sentences are parallel. For the informal romanization task, the source and target sides correspond to the Latin and the original script respectively. For the translation task, the source and target sides correspond to source and target languages. The validation and test character statistics are reported for the source side.", |
|
"cite_spans": [ |
|
{ |
|
"start": 58, |
|
"end": 79, |
|
"text": "Ryskina et al. (2020)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 152, |
|
"end": 183, |
|
"text": "Shavrina and Shapovalova (2017)", |
|
"ref_id": "BIBREF40" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 184, |
|
"end": 193, |
|
"text": "(Figure 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 575, |
|
"end": 584, |
|
"text": "(Table 1)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 838, |
|
"end": 845, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Gloss:", |
|
"sec_num": null |
|
}, |
|
|
{ |
|
"text": "'These are the videos from the \"United Russia\" party congress' Figure 3 : Top: A parallel example from the romanized Russian dataset. We use the filtered version of the romanized (source) sequences, removing the segments the annotators were unable to convert to Cyrillic, e.g. code-switched phrases (in red). The annotators also standardize minor spelling variation such as hyphenation (in blue). Bottom: a monolingual Cyrillic example from the vk.com portion of the Taiga corpus, which mostly consists of comments in political discussion groups.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 63, |
|
"end": 71, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Gloss:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Kannada Our Kannada data ( Figure 4 ) is taken from the Dakshina dataset (Roark et al., 2020) , a large collection of native-script text from Wikipedia for 12 South Asian languages. Unlike the Russian and Arabic data, the romanized portion of Dakshina is not scraped directly from the users' online communication, but instead elicited from native speakers given the native-script sequences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 73, |
|
"end": 93, |
|
"text": "(Roark et al., 2020)", |
|
"ref_id": "BIBREF37" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 27, |
|
"end": 35, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Gloss:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Because of this, all romanized sentences in the data are parallel: we allocate most of them to the source side training data, discarding their original script counterparts, and split the remaining annotated ones between validation and test. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Gloss:", |
|
"sec_num": null |
|
}, |
|
|
{ |
|
"text": "'to use DDR3 in the source circuit' Figure 4 : A parallel example from the Kannada portion of the Dakshina dataset. The Kannada script data (target) is scraped from Wikipedia and manually converted to Latin (source) by human annotators. Foreign target-side characters (in red) get preserved in the annotation but our preprocessing replaces them with UNK on the target side.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 36, |
|
"end": 44, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Kagbi mozno ri", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Serbian: svako ima pravo na ivot, slobodu i bezbednost liqnosti. Bosnian:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Kagbi mozno ri", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "svako ima pravo na \u017eivot, slobodu i osobnu sigurnost.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Kagbi mozno ri", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "'Everyone has the right to life, liberty and security of person.' Figure 5 : A parallel example from the Serbian-Cyrillic and Bosnian-Latin UDHR. The sequences are not entirely parallel on character level due to paraphrases and non-cognate translations (in blue).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 66, |
|
"end": 74, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Gloss:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Following prior work (Pourdamghani and Knight, 2017; Yang et al., 2018; He et al., 2020) , we train our unsupervised models on the monolingual data from the Leipzig corpora (Goldhahn et al., 2012) . We reuse the non-parallel training and synthetic parallel validation splits of Yang et al. (2018) , who generated their parallel data using the Google Translation API. Rather than using their synthetic test set, we opt to test on natural parallel data from the Universal Declaration of Human Rights (UDHR), following Pourdamghani and Knight (2017) . We manually sentence-align the Serbian-Cyrillic and Bosnian-Latin declaration texts and follow the preprocessing guidelines of Pourdamghani and Knight (2017). Although we strive to approximate the training and evaluation setup of their work for fair comparison, there are some discrepancies: for example, our manual alignment of UDHR yields 100 sentence pairs compared to 104 of Pourdamghani and Knight (2017) . We use the data to train the translation models in both directions, simply switching the source and target sides from Serbian to Bosnian and vice versa.", |
|
"cite_spans": [ |
|
{ |
|
"start": 21, |
|
"end": 52, |
|
"text": "(Pourdamghani and Knight, 2017;", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 53, |
|
"end": 71, |
|
"text": "Yang et al., 2018;", |
|
"ref_id": "BIBREF46" |
|
}, |
|
{ |
|
"start": 72, |
|
"end": 88, |
|
"text": "He et al., 2020)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 173, |
|
"end": 196, |
|
"text": "(Goldhahn et al., 2012)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 278, |
|
"end": 296, |
|
"text": "Yang et al. (2018)", |
|
"ref_id": "BIBREF46" |
|
}, |
|
{ |
|
"start": 516, |
|
"end": 546, |
|
"text": "Pourdamghani and Knight (2017)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 928, |
|
"end": 958, |
|
"text": "Pourdamghani and Knight (2017)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related language translation", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "As discussed in \u00a71, the WFST models are less powerful than the seq2seq models; however, they are also more structured, which we can use to introduce inductive bias to aid unsupervised training. Following Ryskina et al. 2020, we introduce informative priors on character substitution operations (for a description of the WFST parameterization, see \u00a74.1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inductive bias", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The priors reflect the visual and phonetic similarity between characters in different alphabets and are sourced from human-curated resources built with the same concepts of similarity in mind. For all tasks and languages, we collect phonetically similar character pairs from the phonetic keyboard layouts (or, in case of the translation task, from the default Serbian keyboard layout, which is phonetic in nature due to the dual orthography standard of the language). We also add some visually similar character pairs by automatically pairing all symbols that occur in both source and target alphabets (same Unicode codepoints). For Russian, which exhibits a greater degree of visual similarity than Arabic or Kannada, we also make use of the Unicode confusables list (different Unicode codepoints but same or similar glyphs). 3 It should be noted that these automatically generated informative priors also contain noise: keyboard layouts have spurious mappings because each symbol must be assigned to exactly one key in the QWERTY layout, and Unicode-constrained visual mappings might prevent the model from learning correspondences between punctuation symbols (e.g. Arabic question mark \u2192 ?).", |
|
"cite_spans": [ |
|
{ |
|
"start": 827, |
|
"end": 828, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inductive bias", |
|
"sec_num": "3.3" |
|
}, |
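The prior construction described above can be sketched as Dirichlet pseudo-counts over substitution operations. This is a minimal illustration under stated assumptions, not the authors' implementation; the function name `similarity_prior` and the `base`/`boost` values are hypothetical.

```python
def similarity_prior(src_alphabet, tgt_alphabet, phonetic_pairs, base=0.1, boost=5.0):
    """Dirichlet pseudo-counts for character substitutions: a small base count
    everywhere, boosted for phonetically similar pairs (e.g. collected from a
    phonetic keyboard layout) and for symbols shared by both alphabets
    (same Unicode codepoint, i.e. visually identical)."""
    prior = {(s, t): base for s in src_alphabet for t in tgt_alphabet}
    for s, t in phonetic_pairs:
        if (s, t) in prior:
            prior[(s, t)] += boost
    for c in set(src_alphabet) & set(tgt_alphabet):
        prior[(c, c)] += boost  # shared codepoints: visually similar
    return prior
```

As the text notes, such priors are noisy: a spurious keyboard mapping would receive the same boost as a genuine one.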
|
{ |
|
"text": "We lowercase and segment all sequences into characters as defined by Unicode codepoints, so diacritics and non-printing characters like ZWJ are also treated as separate vocabulary items. To filter out foreign or archaic characters and rare diacritics, we restrict the alphabets to characters that cover 99% of the monolingual training data. After that, we add any standard alphabetical characters and numerals that have been filtered out back into the source and target alphabets. All remaining filtered characters are replaced with a special UNK symbol in all splits except for the target-side test set.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "We perform our analysis using the finite-state and seq2seq models from prior work and experiment with two joint decoding strategies, reranking and product of experts. Implementation details and hyperparameters are described in Appendix B.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Our finite-state model is the WFST cascade introduced by Ryskina et al. (2020). The model is composed of a character-level n-gram language model and a script conversion transducer (emission model), which supports one-to-one character substitutions, insertions, and deletions. Character operation weights in the emission model are parameterized with multinomial distributions, and similar character mappings (\u00a73.3) are used to create Dirichlet priors on the emission parameters. To avoid marginalizing over sequences of infinite length, a fixed limit is set on the delay of any path (the difference between the cumulative number of insertions and deletions at any timestep). Ryskina et al. (2020) train the WFST using stochastic stepwise EM (Liang and Klein, 2009), marginalizing over all possible target sequences and their alignments with the given source sequence. To speed up training, we modify their training procedure towards 'hard EM': given a source sequence, we predict the most probable target sequence under the model, marginalize over alignments, and then update the parameters. Although the unsupervised WFST training is still slow, the stepwise training procedure is designed to converge using fewer data points, so we choose to train the WFST model only on the 1,000 shortest source-side training sequences (500 for Kannada).",
|
"cite_spans": [ |
|
{ |
|
"start": 57, |
|
"end": 78, |
|
"text": "Ryskina et al. (2020)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 676, |
|
"end": 697, |
|
"text": "Ryskina et al. (2020)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 742, |
|
"end": 765, |
|
"text": "(Liang and Klein, 2009)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Base models", |
|
"sec_num": "4.1" |
|
}, |
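The 'hard EM' modification can be sketched as a generic loop. This is an illustrative skeleton, not the authors' WFST code; `decode_best` and `update` are hypothetical placeholders for the model-specific decoding and parameter-update steps.

```python
def hard_em(sources, decode_best, update, n_iters=5):
    """Hard-EM training loop: instead of marginalizing over all target
    sequences, decode the single most probable target for each source
    (hard E-step) and re-estimate parameters on those pairs (M-step)."""
    params = None
    for _ in range(n_iters):
        pseudo_pairs = [(x, decode_best(x, params)) for x in sources]  # hard E-step
        params = update(pseudo_pairs)                                  # M-step
    return params
```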
|
{ |
|
"text": "Our default seq2seq model is the unsupervised neural machine translation (UNMT) model of Lample et al. (2018, 2019). Table 3: Character and word error rates (lower is better) and BLEU scores (higher is better) for the related language translation task. Bold indicates best per column. The WFST and the seq2seq have comparable CER and WER despite the WFST being trained on up to 160x less source-side data (\u00a74.1). While none of our models achieve the scores reported by Pourdamghani and Knight (2017), they all substantially outperform the subword-level model of He et al. (2020). Note: base model results are not intended as a direct comparison between the WFST and seq2seq, since they are trained on different amounts of data.",
|
"cite_spans": [ |
|
{ |
|
"start": 89, |
|
"end": 108, |
|
"text": "Lample et al. (2018", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 109, |
|
"end": 131, |
|
"text": "Lample et al. ( , 2019", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 486, |
|
"end": 516, |
|
"text": "Pourdamghani and Knight (2017)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 580, |
|
"end": 596, |
|
"text": "He et al. (2020)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 132, |
|
"end": 139, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Base models", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The UNMT model uses an LSTM (Hochreiter and Schmidhuber, 1997) encoder and decoder with attention, trained to map sentences from each domain into a shared latent space. Using a combined objective, the model is trained to denoise, translate in both directions, and discriminate between the latent representations of sequences from different domains. Since a sufficient amount of balanced data is crucial for UNMT performance, we train the seq2seq model on all available data on both the source and target sides. Additionally, the seq2seq model decides on early stopping by evaluating on a small parallel validation set, which our WFST model does not have access to.",
|
"cite_spans": [ |
|
{ |
|
"start": 5, |
|
"end": 39, |
|
"text": "(Hochreiter and Schmidhuber, 1997)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Base models", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The WFST model treats the target and source training data differently, using the former to train the language model and the latter for learning the emission parameters, while the UNMT model is trained to translate in both directions simultaneously. Therefore, we reuse the same seq2seq model for both directions of the translation task, but train a separate finite-state model for each direction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Base models", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The simplest way to combine two independently trained models is reranking: using one model to produce a list of candidates and rescoring them according to another. To generate candidates with a WFST, we apply the n-shortest paths algorithm (Mohri and Riley, 2002). It should be noted that the n-best list might contain duplicates, since each path represents a specific source-target character alignment. [Table 4 example, srp\u2192bos. Input (Serbian Cyrillic, rendered imperfectly in extraction): svako ima pravo da slobodno uqestvuje u kulturnom ivotu zajednice, da u iva u umetnosti i da uqestvuje u nauqnom napretku i u dobrobiti koja otuda proistiqe. Ground truth: svako ima pravo da slobodno sudjeluje u kulturnom \u017eivotu zajednice, da u\u017eiva u umjetnosti i da u\u010destvuje u znanstvenom napretku i u njegovim koristima.] The length constraints encoded in the WFST also restrict its capacity as a reranker: beam search in the UNMT model may produce hypotheses too short or too long to have a non-zero",
|
"cite_spans": [ |
|
{ |
|
"start": 246, |
|
"end": 269, |
|
"text": "(Mohri and Riley, 2002)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model combinations", |
|
"sec_num": "4.2" |
|
}, |
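Reranking itself is model-agnostic; the following is a minimal sketch (the `rerank` helper is hypothetical, not the paper's code) that also collapses the duplicate hypotheses mentioned above.

```python
def rerank(candidates, score_fn):
    """Rerank n-best candidates from one model by another model's score.

    candidates: (hypothesis, base_log_prob) pairs, e.g. the n shortest WFST
    paths; duplicates can occur because each path is a distinct alignment.
    score_fn: log-probability of a hypothesis under the rescoring model.
    """
    best = {}
    for hyp, lp in candidates:  # collapse duplicates, keep best base score
        if hyp not in best or lp > best[hyp]:
            best[hyp] = lp
    # Sort by the rescoring model's score, breaking ties with the base score.
    return sorted(best, key=lambda h: (score_fn(h), best[h]), reverse=True)
```

With a toy rescorer that prefers shorter strings, `rerank([("abc", -1.0), ("abcd", -0.5), ("abc", -2.0)], lambda h: -len(h))` returns `["abc", "abcd"]`.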
|
{ |
|
"text": "[Table 4 example outputs, srp\u2192bos; characters highlighted in the paper may be missing from the extraction.] WFST: svako ima pravo da slobodno u\u010destvuje u kulturnom \u017eivotu s jednice, da u\u017eiva u m etnosti i da u\u010destvuje u nau\u010dnom napretku i u dobrobiti koja otuda pr isti\u010de. Reranked WFST: svako ima pravo da slobodno u\u010destvuje u kulturnom \u017eivotu s jednice, da u\u017eiva u m etnosti i da u\u010destvuje u nau\u010dnom napretku i u dobrobiti koja otuda pr isti\u010de. Seq2Seq: svako ima pravo da slobodno u\u010destvuje u kulturnom \u017eivotu zajednice, da u\u010destvuje u nau\u010dnom napretku i u dobrobiti koja otuda proisti\u010de. Reranked Seq2Seq: svako ima pravo da slobodno u\u010destvuje u kulturnom \u017eivotu zajednice, da u\u017eiva u umjetnosti i da u\u010destvuje u nau\u010dnom napretku i u dobrobiti koja otuda proisti\u010de. Product of experts: svako ima pravo da slobodno u\u010destvuje u kulturnom za u s ajednice, da \u017eiva u umjetnosti i da u\u010destvuje u nau\u010dnom napretku i u dobro j i koja otuda proisti. Subword Seq2Seq: s ami ima pravo da slobodno u ti\u010de na srpskom nivou vlasti da razgovaraju u bosne i da djeluje u medunarodnom turizmu i na buducnosti koja mu\u017ea decisno. [Caption, truncated:] ... are shown in yellow. Here the WFST errors are substitutions or deletions of individual characters, while the seq2seq drops entire words from the input (\u00a75 #4). The latter problem is solved by reranking with a WFST for this example. The seq2seq model with subword tokenization (He et al., 2020) produces mostly hallucinated output (\u00a75 #2). Example outputs for all other datasets can be found in the Appendix.",
|
"cite_spans": [ |
|
{ |
|
"start": 1286, |
|
"end": 1303, |
|
"text": "(He et al., 2020)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model combinations", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "probability under the WFST.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model combinations", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Our second approach is a product-of-experts-style joint decoding strategy (Hinton, 2002): we perform beam search on the WFST lattice, reweighting the arcs with the output distribution of the seq2seq decoder at the corresponding timestep. For each partial hypothesis, we keep track of the WFST state s and the partial input and output sequences x 1:k and y 1:t . 4 When traversing an arc with input label i \u2208 {x k+1 , \u03b5} and output label o, we multiply the arc weight by the probability of the neural model outputting o as the next character: p seq2seq (y t+1 = o|x, y 1:t ). Transitions with o = \u03b5 (i.e. deletions) are not rescored by the seq2seq. We group hypotheses by their consumed input length k and select the n best extensions at each timestep.",
|
"cite_spans": [ |
|
{ |
|
"start": 73, |
|
"end": 87, |
|
"text": "(Hinton, 2002)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 362, |
|
"end": 363, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model combinations", |
|
"sec_num": "4.2" |
|
}, |
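The per-timestep rescoring can be sketched as follows. This is a simplified illustration (the function name and toy numbers are hypothetical), covering a single decoding step rather than the full lattice beam search.

```python
import math

def poe_arc_scores(arcs, seq2seq_probs, eps=""):
    """Product-of-experts rescoring of WFST arcs at one decoding step.

    arcs: (output_label, wfst_log_weight) pairs; `eps` marks a deletion
    (epsilon output), which the seq2seq does not rescore.
    seq2seq_probs: next-character distribution p_seq2seq(y_{t+1} = o | x, y_{1:t}).
    Adding log weights multiplies the two experts' probabilities.
    """
    scored = []
    for o, w in arcs:
        if o == eps:
            scored.append((o, w))  # deletion: WFST score only
        else:
            scored.append((o, w + math.log(seq2seq_probs.get(o, 1e-9))))
    return sorted(scored, key=lambda t: t[1], reverse=True)
```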
|
{ |
|
"text": "For the translation task, we also compare to prior unsupervised approaches of different granularity: the deep generative style transfer model of He et al. (2020) and the character- and word-level WFST decipherment model of Pourdamghani and Knight (2017). The former is trained on the same training set tokenized into subword units (Sennrich et al., 2016), and we evaluate it on our UDHR test set for fair comparison. While the train and test data of Pourdamghani and Knight (2017) also use the same respective sources, we cannot account for tokenization differences that could affect the scores reported by the authors.",
|
"cite_spans": [ |
|
{ |
|
"start": 145, |
|
"end": 161, |
|
"text": "He et al. (2020)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 222, |
|
"end": 252, |
|
"text": "Pourdamghani and Knight (2017)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 331, |
|
"end": 354, |
|
"text": "(Sennrich et al., 2016)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 451, |
|
"end": 481, |
|
"text": "Pourdamghani and Knight (2017)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Additional baselines", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Tables 2 and 3 present our evaluation of the two base models and three decoding-time model combinations on the romanization decipherment and related language translation tasks, respectively. For each experiment, we report character error rate, word error rate, and BLEU (see Appendix C). The results for the base models support what we show later in this section: the seq2seq model is more likely to recover words correctly (higher BLEU, lower WER), while the WFST is more faithful at the character level and avoids word-level substitution errors (lower CER). Example predictions can be found in Table 4 and in the Appendix.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 591, |
|
"end": 598, |
|
"text": "Table 4", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and analysis", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Our further qualitative and quantitative findings are summarized in the following high-level takeaways:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and analysis", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "#1: Model combinations still suffer from search issues. We would expect the combined decoding to discourage all errors common under one model but not the other, improving performance by leveraging the strengths of both model classes. [Figure 6 (panels: WFST, Seq2Seq): Highest-density submatrices of the two base models' character confusion matrices, computed in the Russian romanization task. White cells represent zero elements. The WFST confusion matrix (left) is noticeably sparser than the seq2seq one (right), indicating more repetitive errors. The # symbol stands for UNK.] However, as Tables 2 and 3 show, they instead",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 250, |
|
"end": 264, |
|
"text": "Tables 2 and 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 297, |
|
"end": 305, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and analysis", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "mostly interpolate between the scores of the two base models. In the reranking experiments, we find that this is often due to the same base model error (e.g. the seq2seq model hallucinating a word mid-sentence) repeating across all the hypotheses in the final beam. This suggests that successful reranking would require a much larger beam size or a diversity-promoting search mechanism. Interestingly, we observe that although adding a reranker on top of a decoder does improve performance slightly, the gain is only in terms of the metrics that the base decoder is already strong at (character-level for the reranked WFST and word-level for the reranked seq2seq), at the expense of the other scores. Overall, none of our decoding strategies achieves the best results across the board, and no model combination substantially outperforms both base models in any metric. #2: Character tokenization boosts performance of the neural model. In the past, UNMT-style models have been applied to various unsupervised sequence transduction problems. However, since these models were designed to operate on the word or subword level, prior work assumes the same tokenization is necessary. We show that for tasks that allow a character-level framing, such models in fact respond extremely well to character input. Table 3 compares the UNMT model trained on characters with the seq2seq style transfer model of He et al. (2020) trained on subword units. The original paper shows improvement over the UNMT baseline in the same setting, but simply switching to character-level tokenization without any other changes results in a gain of 30 BLEU points for either direction. This suggests that the tokenization choice could act as an inductive bias for seq2seq models, and character-level framing could be useful even for tasks that are not truly character-level.",
|
"cite_spans": [ |
|
{ |
|
"start": 1378, |
|
"end": 1394, |
|
"text": "He et al. (2020)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1283, |
|
"end": 1290, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and analysis", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "This observation also aligns with the findings of recent work on language modeling complexity (Park et al., 2021; Mielke et al., 2019). For many languages, including several Slavic ones related to the Serbian-Bosnian pair, a character-level language model yields lower surprisal than one trained on BPE units, suggesting that the effect might also be explained by character tokenization making the language easier to language-model. #3: WFST model makes more repetitive errors. Although two of our evaluation metrics, CER and WER, are based on edit distance, they do not distinguish between the different types of edits (substitutions, insertions, and deletions). Breaking them down by edit operation, we find that while both models favor substitutions on both the word and character levels, insertions and deletions are more frequent under the neural model (43% vs. 30% of all edits on the Russian romanization task). We also find that the character substitution choices of the neural model are more context-dependent: while the total counts of substitution errors for the two models are comparable, the WFST is more likely to repeat the same few substitutions per character type. This is illustrated by Figure 6, which visualizes the most populated submatrices of the confusion matrices for the same task as heatmaps. The WFST confusion matrix is noticeably more sparse, with the same few substitutions occurring much more frequently than others: for example, the WFST tends to mistake a given character for the same single substitute and rarely for other characters, while the neural model's substitutions are distributed closer to uniform. This suggests that the WFST errors might be easier to correct with rule-based postprocessing. Interestingly, we did not observe the same effect for the translation task, likely due to the more constrained nature of the orthography conversion.",
|
"cite_spans": [ |
|
{ |
|
"start": 98, |
|
"end": 117, |
|
"text": "(Park et al., 2021;", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 118, |
|
"end": 138, |
|
"text": "Mielke et al., 2019)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1217, |
|
"end": 1225, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and analysis", |
|
"sec_num": "5" |
|
}, |
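The confusion-matrix analysis can be approximated with a simple counter. This is a sketch only: the paper aligns hypotheses to references via edit distance, whereas this simplification assumes position-wise alignment of equal-length strings.

```python
from collections import Counter

def char_confusions(pairs):
    """Count (reference_char -> hypothesis_char) substitution errors.

    pairs: (hypothesis, reference) string pairs of equal length.
    A sparse counter (few distinct keys with large counts) corresponds to the
    repetitive, rule-like errors observed for the WFST.
    """
    conf = Counter()
    for hyp, ref in pairs:
        for h, r in zip(hyp, ref):
            if h != r:
                conf[(r, h)] += 1
    return conf
```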
|
{ |
|
"text": "Figure 7: Character error rate per word for the WFST (left) and seq2seq (right) bos\u2192srp translation outputs. The predictions are segmented using Moses tokenizer (Koehn et al., 2007) and aligned to ground truth with word-level edit distance. The increased frequency of CER=1 for the seq2seq model as compared to the WFST indicates that it replaces entire words more often.", |
|
"cite_spans": [ |
|
{ |
|
"start": 161, |
|
"end": 181, |
|
"text": "(Koehn et al., 2007)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Seq2Seq", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "#4: Neural model is more sensitive to data distribution shifts. Because the language model aims to replicate its training data distribution, it can cause the output to deviate significantly from the input. This can be an artifact of domain shift, as in Russian, where the LM training data came from a political discussion forum: the seq2seq model frequently predicts unrelated domain-specific proper names in place of very common Russian words, e.g.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Seq2Seq", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "izn [\u017eizn, 'life'] \u2192 Z ganov [Zjuganov, 'Zyuganov (politician's last name)'] or to [\u00e8to, 'this'] \u2192 Edina Rossi [Edinaja Rossija, 'United Russia (political party)'],", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Seq2Seq", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "presumably distracted by the shared first character in the romanized version. To quantify the effect of a mismatch between the train and test data distributions in this case, we inspect the most common word-level substitutions under each decoding strategy, looking at all substitution errors covered by the 1,000 most frequent substitution 'types' (ground truth-prediction word pairs) under the respective decoder. We find that 25% of the seq2seq substitution errors fall into this category, as compared to merely 3% for the WFST, which is notable given the relative proportion of in-vocabulary words in the models' outputs (89% for UNMT vs. 65% for WFST).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Seq2Seq", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Comparing the error rate distribution across output words for the translation task also supports this observation. As can be seen from Figure 7, the seq2seq model is likely to either predict the word correctly (CER of 0) or entirely wrong (CER of 1), while the WFST more often predicts the word partially correctly; the examples in Table 4 illustrate this as well. We also see this in the Kannada outputs: the WFST typically gets all the consonants right but makes mistakes in the vowels, while the seq2seq tends to replace the entire word.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 135, |
|
"end": 143, |
|
"text": "Figure 7", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 332, |
|
"end": 339, |
|
"text": "Table 4", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Seq2Seq", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We perform comparative error analysis in finite-state and seq2seq models and their combinations for two unsupervised character-level tasks: informal romanization decipherment and related language translation. We find that the two model types tend towards different errors: seq2seq models are more prone to word-level errors caused by distributional shifts, while WFSTs produce more character-level noise despite the hard alignment constraints.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Despite none of our simple decoding-time combinations substantially outperforming the base models, we believe that combining neural and finite-state models to harness their complementary advantages is a promising research direction. Such combinations might involve biasing seq2seq models towards WFST-like behavior via pretraining or directly encoding constraints such as hard alignment or monotonicity into their parameterization (Wu et al., 2018; Wu and Cotterell, 2019). Although recent work has shown that the Transformer can learn to perform character-level transduction without such biases in a supervised setting (Wu et al., 2021), exploiting the structured nature of the task could be crucial for making up for the lack of large parallel corpora in low-data and/or unsupervised scenarios. We hope that our analysis provides insight into leveraging the strengths of the two approaches for modeling character-level phenomena in the absence of parallel data.",
|
"cite_spans": [ |
|
{ |
|
"start": 430, |
|
"end": 447, |
|
"text": "(Wu et al., 2018;", |
|
"ref_id": "BIBREF45" |
|
}, |
|
{ |
|
"start": 448, |
|
"end": 471, |
|
"text": "Wu and Cotterell, 2019)", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 620, |
|
"end": 637, |
|
"text": "(Wu et al., 2021)", |
|
"ref_id": "BIBREF44" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The romanized Russian and Arabic data and preprocessing scripts can be downloaded here. This repository also contains the relevant portion of the Taiga dataset, which can be downloaded in full at this link. The romanized Kannada data was downloaded from the Dakshina dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Data download links", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The scripts to download the Serbian and Bosnian Leipzig corpora data can be found here. The UDHR texts were collected from the corresponding pages: Serbian, Bosnian.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Data download links", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The keyboard layouts used to construct the phonetic priors are collected from the following sources: Arabic 1, Arabic 2, Russian, Kannada, Serbian. The Unicode confusables list used for the Russian visual prior can be found here.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Data download links", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "WFST We reuse the unsupervised WFST implementation of Ryskina et al. (2020), 5 which utilizes the OpenFst (Allauzen et al., 2007) and OpenGrm (Roark et al., 2012) libraries. We use the default hyperparameter settings described by the authors (see Appendix B in the original paper). We keep the hyperparameters unchanged for the translation experiment and set the maximum delay value to 2 for both translation directions.",
|
"cite_spans": [ |
|
{ |
|
"start": 54, |
|
"end": 75, |
|
"text": "Ryskina et al. (2020)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 107, |
|
"end": 130, |
|
"text": "(Allauzen et al., 2007)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 144, |
|
"end": 164, |
|
"text": "(Roark et al., 2012)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Implementation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "UNMT We use the PyTorch UNMT implementation of He et al. (2020), 6 which incorporates improvements introduced by Lample et al. (2019), such as the addition of a max-pooling layer. We use a single-layer LSTM (Hochreiter and Schmidhuber, 1997) with hidden state size 512 for both the encoder and the decoder, and embedding dimension 128. For the denoising autoencoding loss, we adopt the default noise model and hyperparameters as described by Lample et al. (2018). The autoencoding loss is annealed over the first 3 epochs. We predict the output using greedy decoding and set the maximum output length equal to the length of the input sequence. Patience for early stopping is set to 10.",
|
"cite_spans": [ |
|
{ |
|
"start": 112, |
|
"end": 132, |
|
"text": "Lample et al. (2019)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 205, |
|
"end": 239, |
|
"text": "(Hochreiter and Schmidhuber, 1997)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 439, |
|
"end": 459, |
|
"text": "Lample et al. (2018)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Implementation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Model combinations Our joint decoding implementations rely on PyTorch and the Pynini finite-state library (Gorman, 2016). In reranking, we rescore the n = 5 best hypotheses produced using beam search and the n-shortest paths algorithm for the UNMT and WFST respectively. [Footnotes: 5 https://github.com/ryskina/romanization-decipherment 6 https://github.com/cindyxinyiwang/deep-latent-sequence-model] Product of experts decoding is also performed with beam size 5.",
|
"cite_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 119, |
|
"text": "(Gorman, 2016)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Implementation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The character error rate (CER) and word error rate (WER) are measured as the Levenshtein distance between the hypothesis and the reference, divided by the reference length:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C Metrics", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "ER(h, r) = dist(h, r) / len(r)",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C Metrics", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "with the numerator and the denominator measured in characters for CER and in words for WER. We report the BLEU-4 score (Papineni et al., 2002), measured using the Moses toolkit script. 7 For both BLEU and WER, we split sentences into words using the Moses tokenizer (Koehn et al., 2007). Table 6: Different model outputs for an Arabizi transliteration example (left column: Arabic; right: Buckwalter transliteration). Prediction errors are highlighted in red in the romanized versions. Correctly transliterated segments that do not match the ground truth because of spelling standardization during annotation are highlighted in yellow.",
|
"cite_spans": [ |
|
{ |
|
"start": 114, |
|
"end": 137, |
|
"text": "(Papineni et al., 2002)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 262, |
|
"end": 282, |
|
"text": "(Koehn et al., 2007)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 285, |
|
"end": 292, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "C Metrics", |
|
"sec_num": null |
|
}, |
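The metric above can be computed directly. This is a minimal sketch with hypothetical helper names, using a standard dynamic-programming edit distance rather than the Moses tooling mentioned in the text.

```python
def levenshtein(h, r):
    """Edit distance between sequences (insertions, deletions, substitutions cost 1)."""
    prev = list(range(len(r) + 1))
    for i, hc in enumerate(h, 1):
        cur = [i]
        for j, rc in enumerate(r, 1):
            cur.append(min(prev[j] + 1,                # delete a symbol of h
                           cur[j - 1] + 1,             # insert a symbol of r
                           prev[j - 1] + (hc != rc)))  # match / substitute
        prev = cur
    return prev[-1]

def error_rate(hyp, ref):
    """ER(h, r) = dist(h, r) / len(r); pass strings for CER, token lists for WER."""
    return levenshtein(hyp, ref) / len(ref)
```

For instance, `error_rate("kitten", "sitting")` gives 3/7 (CER), and `error_rate("a b d".split(), "a b c".split())` gives 1/3 (WER).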
|
{ |
|
"text": "Input: kshullaka baalina avala horaatavannu adu vivarisuttade. Ground truth: [Kannada text garbled in extraction; its romanization appears below.]",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C Metrics", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "k\u1e63ullaka b\u0101\u1e37ina ava\u1e37a h\u014dr\u0101\u1e6davannu adu vivarisuttade.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C Metrics", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "WFST: [Kannada output garbled in extraction; its romanization appears below.]",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C Metrics", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "[Kannada example outputs garbled in extraction; recoverable romanizations:] WFST: k u h\u016b ll\u0101 k he b\u0101 l in u v\u0101 l . a h o r at\u0101 vannu\u0101 du vivarisuttade. Reranked WFST: k u h\u016b ll\u0101 k he b\u0101 l ina v\u0101 l . u h o r at\u0101 vannu\u0101 du vivarisuttade. Seq2Seq: k al . u hul . l . a b\u0101 v i\u1e41g ill av\u0113 h\u014dr\u0101t . avannu i du vivarisuttade.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C Metrics", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "[Kannada rows garbled in extraction.] k al . u hul . l . a b\u0101 v i\u1e41t a ill av\u0113 h\u014dr\u0101t . avannu i du vivarisuttade. Product of experts: k a l . l . a b\u0101 kal in n a v\u0101l\u0101 h\u014dr\u0101t . a t v\u0101 nnu du viv\u0101 risuttad a Table 7: Different model outputs for a Kannada transliteration example (left column: Kannada, right: ISO 15919 transliterations). The ISO romanization is generated using the Nisaba library (Johny et al., 2021). Prediction errors are highlighted in red in the romanized versions.", |

"cite_spans": [ |

{ |

"start": 393, |

"end": 413, |

"text": "(Johny et al., 2021)", |

"ref_id": "BIBREF16" |

} |

], |

"ref_spans": [ |

{ |

"start": 204, |

"end": 211, |

"text": "Table 7", |

"ref_id": null |

} |
|
], |
|
"eq_spans": [], |
|
"section": "Reranked Seq2Seq", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Links to download the corpora and other data sources discussed in this section can be found in Appendix A.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Links to the keyboard layouts and the confusables list can be found in Appendix A.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Due to insertions and deletions in the emission model, k and t might differ; epsilon symbols are not counted.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The authors thank Badr Abdullah, Deepak Gopinath, Junxian He, Shruti Rijhwani, and Stas Kashepava for helpful discussion, and the anonymous reviewers for their valuable feedback.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Morphological inflection generation with hard monotonic attention", |
|
"authors": [ |
|
{ |
|
"first": "Roee", |
|
"middle": [], |
|
"last": "Aharoni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2004--2015", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P17-1183" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roee Aharoni and Yoav Goldberg. 2017. Morphologi- cal inflection generation with hard monotonic atten- tion. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 2004-2015, Vancouver, Canada. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "OpenFst: A general and efficient weighted finite-state transducer library", |
|
"authors": [ |
|
{ |
|
"first": "Cyril", |
|
"middle": [], |
|
"last": "Allauzen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Riley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johan", |
|
"middle": [], |
|
"last": "Schalkwyk", |
|
"suffix": "" |
|
}, |

{ |

"first": "Wojciech", |

"middle": [], |

"last": "Skut", |

"suffix": "" |

}, |

{ |

"first": "Mehryar", |

"middle": [], |

"last": "Mohri", |

"suffix": "" |

} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the Ninth International Conference on Implementation and Application of Automata", |
|
"volume": "4783", |
|
"issue": "", |
|
"pages": "11--23", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cyril Allauzen, Michael Riley, Johan Schalkwyk, Woj- ciech Skut, and Mehryar Mohri. 2007. OpenFst: A general and efficient weighted finite-state transducer library. In Proceedings of the Ninth International Conference on Implementation and Application of Automata, (CIAA 2007), volume 4783 of Lecture Notes in Computer Science, pages 11-23. Springer. http://www.openfst.org.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Resource creation for training and testing of transliteration systems for Indian languages", |
|
"authors": [ |
|
{ |
|
"first": "V", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Sowmya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Monojit", |
|
"middle": [], |
|
"last": "Choudhury", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kalika", |
|
"middle": [], |
|
"last": "Bali", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tirthankar", |
|
"middle": [], |
|
"last": "Dasgupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anupam", |
|
"middle": [], |
|
"last": "Basu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sowmya V. B., Monojit Choudhury, Kalika Bali, Tirthankar Dasgupta, and Anupam Basu. 2010. Re- source creation for training and testing of translit- eration systems for Indian languages. In Proceed- ings of the Seventh International Conference on Lan- guage Resources and Evaluation (LREC'10), Val- letta, Malta. European Language Resources Associ- ation (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Transliteration of Arabizi into Arabic orthography: Developing a parallel annotated Arabizi-Arabic script SMS/chat corpus", |
|
"authors": [ |
|
{ |
|
"first": "Ann", |
|
"middle": [], |
|
"last": "Bies", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiyi", |
|
"middle": [], |
|
"last": "Song", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohamed", |
|
"middle": [], |
|
"last": "Maamouri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Grimes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haejoong", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Wright", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephanie", |
|
"middle": [], |
|
"last": "Strassel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nizar", |
|
"middle": [], |
|
"last": "Habash", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ramy", |
|
"middle": [], |
|
"last": "Eskander", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Owen", |
|
"middle": [], |
|
"last": "Rambow", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the EMNLP 2014 Workshop on Arabic Natural Language Processing (ANLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "93--103", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/W14-3612" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ann Bies, Zhiyi Song, Mohamed Maamouri, Stephen Grimes, Haejoong Lee, Jonathan Wright, Stephanie Strassel, Nizar Habash, Ramy Eskander, and Owen Rambow. 2014. Transliteration of Arabizi into Ara- bic orthography: Developing a parallel annotated Arabizi-Arabic script SMS/chat corpus. In Proceed- ings of the EMNLP 2014 Workshop on Arabic Nat- ural Language Processing (ANLP), pages 93-103, Doha, Qatar. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Coarseto-fine n-best parsing and MaxEnt discriminative reranking", |
|
"authors": [ |
|
{ |
|
"first": "Eugene", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "173--180", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/1219840.1219862" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eugene Charniak and Mark Johnson. 2005. Coarse- to-fine n-best parsing and MaxEnt discriminative reranking. In Proceedings of the 43rd Annual Meet- ing of the Association for Computational Linguis- tics (ACL'05), pages 173-180, Ann Arbor, Michi- gan. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Arabizi detection and conversion to Arabic", |
|
"authors": [ |
|
{ |
|
"first": "Kareem", |
|
"middle": [], |
|
"last": "Darwish", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the EMNLP 2014 Workshop on Arabic Natural Language Processing (ANLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "217--224", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/W14-3629" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kareem Darwish. 2014. Arabizi detection and conver- sion to Arabic. In Proceedings of the EMNLP 2014 Workshop on Arabic Natural Language Processing (ANLP), pages 217-224, Doha, Qatar. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Parameter estimation for probabilistic finite-state transducers", |
|
"authors": [ |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/1073083.1073085" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jason Eisner. 2002. Parameter estimation for prob- abilistic finite-state transducers. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 1-8, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Building large monolingual dictionaries at the Leipzig corpora collection: From 100 to 200 languages", |
|
"authors": [ |
|
{ |
|
"first": "Dirk", |
|
"middle": [], |
|
"last": "Goldhahn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Eckart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Uwe", |
|
"middle": [], |
|
"last": "Quasthoff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "759--765", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dirk Goldhahn, Thomas Eckart, and Uwe Quasthoff. 2012. Building large monolingual dictionaries at the Leipzig corpora collection: From 100 to 200 lan- guages. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 759-765, Istanbul, Turkey. Euro- pean Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Pynini: A Python library for weighted finite-state grammar compilation", |
|
"authors": [ |
|
{ |
|
"first": "Kyle", |
|
"middle": [], |
|
"last": "Gorman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the SIGFSM Workshop on Statistical NLP and Weighted Automata", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "75--80", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W16-2409" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kyle Gorman. 2016. Pynini: A Python library for weighted finite-state grammar compilation. In Pro- ceedings of the SIGFSM Workshop on Statistical NLP and Weighted Automata, pages 75-80, Berlin, Germany. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Conventional orthography for dialectal Arabic", |
|
"authors": [ |
|
{ |
|
"first": "Nizar", |
|
"middle": [], |
|
"last": "Habash", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mona", |
|
"middle": [], |
|
"last": "Diab", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Owen", |
|
"middle": [], |
|
"last": "Rambow", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "711--718", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nizar Habash, Mona Diab, and Owen Rambow. 2012. Conventional orthography for dialectal Arabic. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 711-718, Istanbul, Turkey. European Lan- guage Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Solving substitution ciphers with combined language models", |
|
"authors": [ |
|
{ |
|
"first": "Bradley", |
|
"middle": [], |
|
"last": "Hauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Hayward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Grzegorz", |
|
"middle": [], |
|
"last": "Kondrak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2314--2325", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bradley Hauer, Ryan Hayward, and Grzegorz Kon- drak. 2014. Solving substitution ciphers with com- bined language models. In Proceedings of COLING 2014, the 25th International Conference on Compu- tational Linguistics: Technical Papers, pages 2314- 2325, Dublin, Ireland. Dublin City University and Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "A probabilistic formulation of unsupervised text style transfer", |
|
"authors": [ |
|
{ |
|
"first": "Junxian", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xinyi", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graham", |
|
"middle": [], |
|
"last": "Neubig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Taylor", |
|
"middle": [], |
|
"last": "Berg-Kirkpatrick", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Junxian He, Xinyi Wang, Graham Neubig, and Taylor Berg-Kirkpatrick. 2020. A probabilistic formulation of unsupervised text style transfer. In International Conference on Learning Representations.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Bag-of-words forced decoding for cross-lingual information retrieval", |
|
"authors": [ |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Hieber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Riezler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1172--1182", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/N15-1123" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Felix Hieber and Stefan Riezler. 2015. Bag-of-words forced decoding for cross-lingual information re- trieval. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 1172-1182, Denver, Colorado. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Training products of experts by minimizing contrastive divergence", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Hinton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Neural Computation", |
|
"volume": "14", |
|
"issue": "8", |
|
"pages": "1771--1800", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "G. E. Hinton. 2002. Training products of experts by minimizing contrastive divergence. Neural Compu- tation, 14(8):1771-1800.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Long short-term memory", |
|
"authors": [ |
|
{ |
|
"first": "Sepp", |
|
"middle": [], |
|
"last": "Hochreiter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00fcrgen", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Neural computation", |
|
"volume": "9", |
|
"issue": "8", |
|
"pages": "1735--1780", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "The curious case of neural text degeneration", |
|
"authors": [ |
|
{ |
|
"first": "Ari", |
|
"middle": [], |
|
"last": "Holtzman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Buys", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Du", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maxwell", |
|
"middle": [], |
|
"last": "Forbes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yejin", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text de- generation. In International Conference on Learn- ing Representations.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Finite-state script normalization and processing utilities: The Nisaba Brahmic library", |
|
"authors": [ |
|
{ |
|
"first": "Cibu", |
|
"middle": [], |
|
"last": "Johny", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lawrence", |
|
"middle": [], |
|
"last": "Wolf-Sonkin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Gutkin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Roark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "14--23", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cibu Johny, Lawrence Wolf-Sonkin, Alexander Gutkin, and Brian Roark. 2021. Finite-state script normal- ization and processing utilities: The Nisaba Brahmic library. In Proceedings of the 16th Conference of the European Chapter of the Association for Compu- tational Linguistics: System Demonstrations, pages 14-23, Online. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Machine transliteration", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Graehl", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Computational Linguistics", |
|
"volume": "24", |
|
"issue": "4", |
|
"pages": "599--612", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kevin Knight and Jonathan Graehl. 1998. Ma- chine transliteration. Computational Linguistics, 24(4):599-612.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Unsupervised analysis for decipherment problems", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anish", |
|
"middle": [], |
|
"last": "Nair", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nishit", |
|
"middle": [], |
|
"last": "Rathod", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenji", |
|
"middle": [], |
|
"last": "Yamada", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the COL-ING/ACL 2006 Main Conference Poster Sessions", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "499--506", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kevin Knight, Anish Nair, Nishit Rathod, and Kenji Yamada. 2006. Unsupervised analysis for deci- pherment problems. In Proceedings of the COL- ING/ACL 2006 Main Conference Poster Sessions, pages 499-506, Sydney, Australia. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Moses: Open source toolkit for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hieu", |
|
"middle": [], |
|
"last": "Hoang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcello", |
|
"middle": [], |
|
"last": "Federico", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicola", |
|
"middle": [], |
|
"last": "Bertoldi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brooke", |
|
"middle": [], |
|
"last": "Cowan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wade", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christine", |
|
"middle": [], |
|
"last": "Moran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Zens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ond\u0159ej", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Constantin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Evan", |
|
"middle": [], |
|
"last": "Herbst", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "177--180", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond\u0159ej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the As- sociation for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Ses- sions, pages 177-180, Prague, Czech Republic. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Unsupervised machine translation using monolingual corpora only", |
|
"authors": [ |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ludovic", |
|
"middle": [], |
|
"last": "Denoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc'aurelio", |
|
"middle": [], |
|
"last": "Ranzato", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018. Unsupervised ma- chine translation using monolingual corpora only. In International Conference on Learning Represen- tations.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Multiple-attribute text rewriting", |
|
"authors": [ |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sandeep", |
|
"middle": [], |
|
"last": "Subramanian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ludovic", |
|
"middle": [], |
|
"last": "Denoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc'aurelio", |
|
"middle": [], |
|
"last": "Ranzato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y-Lan", |
|
"middle": [], |
|
"last": "Boureau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guillaume Lample, Sandeep Subramanian, Eric Smith, Ludovic Denoyer, Marc'Aurelio Ranzato, and Y- Lan Boureau. 2019. Multiple-attribute text rewrit- ing. In International Conference on Learning Rep- resentations.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Online EM for unsupervised models", |
|
"authors": [ |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "611--619", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Percy Liang and Dan Klein. 2009. Online EM for unsupervised models. In Proceedings of Human Language Technologies: The 2009 Annual Confer- ence of the North American Chapter of the Associa- tion for Computational Linguistics, pages 611-619, Boulder, Colorado. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Neural finite-state transducers: Beyond rational relations", |
|
"authors": [ |
|
{ |
|
"first": "Chu-Cheng", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Gormley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "272--283", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1024" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chu-Cheng Lin, Hao Zhu, Matthew R. Gormley, and Jason Eisner. 2019. Neural finite-state transducers: Beyond rational relations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long and Short Papers), pages 272-283, Minneapolis, Min- nesota. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "{bs,hr,sr}WaC -web corpora of Bosnian, Croatian and Serbian", |
|
"authors": [ |
|
{ |
|
"first": "Nikola", |
|
"middle": [], |
|
"last": "Ljube\u0161i\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Filip", |
|
"middle": [], |
|
"last": "Klubi\u010dka", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 9th Web as Corpus Workshop (WaC-9", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "29--35", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/W14-0405" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nikola Ljube\u0161i\u0107 and Filip Klubi\u010dka. 2014. {bs,hr,sr}WaC -web corpora of Bosnian, Croa- tian and Serbian. In Proceedings of the 9th Web as Corpus Workshop (WaC-9), pages 29-35, Gothen- burg, Sweden. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Align and copy: UZH at SIGMORPHON 2017 shared task for morphological reinflection", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Makarov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tatiana", |
|
"middle": [], |
|
"last": "Ruzsics", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simon", |
|
"middle": [], |
|
"last": "Clematide", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the CoNLL SIGMORPHON 2017", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/K17-2004" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter Makarov, Tatiana Ruzsics, and Simon Clematide. 2017. Align and copy: UZH at SIGMORPHON 2017 shared task for morphological reinflection. In Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection, pages 49-57, Vancouver. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Shared Task: Universal Morphological Reinflection", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "49--57", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shared Task: Universal Morphological Reinflection, pages 49-57, Vancouver. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "What kind of language is hard to language-model?", |
|
"authors": [ |
|
{ |
|
"first": "Sabrina", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Mielke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyle", |
|
"middle": [], |
|
"last": "Gorman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Roark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4975--4989", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1491" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sabrina J. Mielke, Ryan Cotterell, Kyle Gorman, Brian Roark, and Jason Eisner. 2019. What kind of language is hard to language-model? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4975-4989, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Weighted automata algorithms", |
|
"authors": [ |
|
{ |
|
"first": "Mehryar", |
|
"middle": [], |
|
"last": "Mohri", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Handbook of weighted automata", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "213--254", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mehryar Mohri. 2009. Weighted automata algorithms. In Handbook of weighted automata, pages 213-254. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "An efficient algorithm for the n-best-strings problem", |
|
"authors": [ |
|
{ |
|
"first": "Mehryar", |
|
"middle": [], |
|
"last": "Mohri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Riley", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Seventh International Conference on Spoken Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mehryar Mohri and Michael Riley. 2002. An efficient algorithm for the n-best-strings problem. In Seventh International Conference on Spoken Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "BLEU: A method for automatic evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kishore", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Todd", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Jing", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "311--318", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/1073083.1073135" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Morphology matters: A multilingual language modeling analysis", |
|
"authors": [ |
|
{ |
|
"first": "Hyunji", |
|
"middle": [ |
|
"Hayley" |
|
], |
|
"last": "Park", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katherine", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Coleman", |
|
"middle": [], |
|
"last": "Haley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenneth", |
|
"middle": [], |
|
"last": "Steimel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Han", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lane", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "9", |
|
"issue": "", |
|
"pages": "261--276", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00365" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hyunji Hayley Park, Katherine J. Zhang, Coleman Haley, Kenneth Steimel, Han Liu, and Lane Schwartz. 2021. Morphology matters: A multilingual language modeling analysis. Transactions of the Association for Computational Linguistics, 9:261-276.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Translit: Computer-mediated digraphia on the Runet", |
|
"authors": [ |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Paulsen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Digital Russia: The Language, Culture and Politics of New Media Communication", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Martin Paulsen. 2014. Translit: Computer-mediated digraphia on the Runet. Digital Russia: The Language, Culture and Politics of New Media Communication.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Deciphering related languages", |
|
"authors": [ |
|
{ |
|
"first": "Nima", |
|
"middle": [], |
|
"last": "Pourdamghani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2513--2518", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D17-1266" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nima Pourdamghani and Kevin Knight. 2017. Deciphering related languages. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2513-2518, Copenhagen, Denmark. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Weighting finite-state transductions with neural context", |
|
"authors": [ |
|
{ |
|
"first": "Pushpendre", |
|
"middle": [], |
|
"last": "Rastogi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "623--633", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N16-1076" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pushpendre Rastogi, Ryan Cotterell, and Jason Eisner. 2016. Weighting finite-state transductions with neural context. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 623-633, San Diego, California. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Learning phoneme mappings for transliteration without parallel data", |
|
"authors": [ |
|
{ |
|
"first": "Sujith", |
|
"middle": [], |
|
"last": "Ravi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "37--45", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sujith Ravi and Kevin Knight. 2009. Learning phoneme mappings for transliteration without parallel data. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 37-45, Boulder, Colorado. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "The OpenGrm open-source finite-state grammar software libraries", |
|
"authors": [ |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Roark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Sproat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cyril", |
|
"middle": [], |
|
"last": "Allauzen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Riley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Sorensen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Terry", |
|
"middle": [], |
|
"last": "Tai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the ACL 2012 System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "61--66", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brian Roark, Richard Sproat, Cyril Allauzen, Michael Riley, Jeffrey Sorensen, and Terry Tai. 2012. The OpenGrm open-source finite-state grammar software libraries. In Proceedings of the ACL 2012 System Demonstrations, pages 61-66, Jeju Island, Korea. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Processing South Asian languages written in the Latin script: The Dakshina dataset", |
|
"authors": [ |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Roark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lawrence", |
|
"middle": [], |
|
"last": "Wolf-Sonkin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christo", |
|
"middle": [], |
|
"last": "Kirov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sabrina", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Mielke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cibu", |
|
"middle": [], |
|
"last": "Johny", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Isin", |
|
"middle": [], |
|
"last": "Demirsahin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Keith", |
|
"middle": [], |
|
"last": "Hall", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2413--2423", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brian Roark, Lawrence Wolf-Sonkin, Christo Kirov, Sabrina J. Mielke, Cibu Johny, Isin Demirsahin, and Keith Hall. 2020. Processing South Asian languages written in the Latin script: The Dakshina dataset. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 2413-2423, Marseille, France. European Language Resources Association.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Phonetic and visual priors for decipherment of informal Romanization", |
|
"authors": [ |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Ryskina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Gormley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Taylor", |
|
"middle": [], |
|
"last": "Berg-Kirkpatrick", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "8308--8319", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.737" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maria Ryskina, Matthew R. Gormley, and Taylor Berg-Kirkpatrick. 2020. Phonetic and visual priors for decipherment of informal Romanization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8308-8319, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Neural machine translation of rare words with subword units", |
|
"authors": [ |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1715--1725", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P16-1162" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "To the methodology of corpus construction for machine learning: Taiga syntax tree corpus and parser", |
|
"authors": [ |
|
{ |
|
"first": "Tatiana", |
|
"middle": [], |
|
"last": "Shavrina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Olga", |
|
"middle": [], |
|
"last": "Shapovalova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proc. CORPORA 2017 International Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "78--84", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tatiana Shavrina and Olga Shapovalova. 2017. To the methodology of corpus construction for machine learning: Taiga syntax tree corpus and parser. In Proc. CORPORA 2017 International Conference, pages 78-84, St. Petersburg.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Collecting natural SMS and chat conversations in multiple languages: The BOLT phase 2 corpus", |
|
"authors": [ |
|
{ |
|
"first": "Zhiyi", |
|
"middle": [], |
|
"last": "Song", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephanie", |
|
"middle": [], |
|
"last": "Strassel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haejoong", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Walker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Wright", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jennifer", |
|
"middle": [], |
|
"last": "Garland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dana", |
|
"middle": [], |
|
"last": "Fore", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Gainor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Preston", |
|
"middle": [], |
|
"last": "Cabe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Thomas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brendan", |
|
"middle": [], |
|
"last": "Callahan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ann", |
|
"middle": [], |
|
"last": "Sawyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1699--1704", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhiyi Song, Stephanie Strassel, Haejoong Lee, Kevin Walker, Jonathan Wright, Jennifer Garland, Dana Fore, Brian Gainor, Preston Cabe, Thomas Thomas, Brendan Callahan, and Ann Sawyer. 2014. Collecting natural SMS and chat conversations in multiple languages: The BOLT phase 2 corpus. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 1699-1704, Reykjavik, Iceland. European Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "On NMT search errors and model errors: Cat got your tongue?", |
|
"authors": [ |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Stahlberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bill", |
|
"middle": [], |
|
"last": "Byrne", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3356--3362", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1331" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Felix Stahlberg and Bill Byrne. 2019. On NMT search errors and model errors: Cat got your tongue? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3356-3362, Hong Kong, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Exact hard monotonic attention for character-level transduction", |
|
"authors": [ |
|
{ |
|
"first": "Shijie", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1530--1537", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1148" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shijie Wu and Ryan Cotterell. 2019. Exact hard monotonic attention for character-level transduction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1530-1537, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "Applying the transformer to character-level transduction", |
|
"authors": [ |
|
{ |
|
"first": "Shijie", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mans", |
|
"middle": [], |
|
"last": "Hulden", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1901--1907", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shijie Wu, Ryan Cotterell, and Mans Hulden. 2021. Applying the transformer to character-level transduction. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1901-1907, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Hard non-monotonic attention for character-level transduction", |
|
"authors": [ |
|
{ |
|
"first": "Shijie", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pamela", |
|
"middle": [], |
|
"last": "Shapiro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4425--4438", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1473" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shijie Wu, Pamela Shapiro, and Ryan Cotterell. 2018. Hard non-monotonic attention for character-level transduction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4425-4438, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "Unsupervised text style transfer using language models as discriminators", |
|
"authors": [ |
|
{ |
|
"first": "Zichao", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiting", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Xing", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Taylor", |
|
"middle": [], |
|
"last": "Berg-Kirkpatrick", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "NeurIPS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7298--7309", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zichao Yang, Zhiting Hu, Chris Dyer, Eric P. Xing, and Taylor Berg-Kirkpatrick. 2018. Unsupervised text style transfer using language models as discriminators. In NeurIPS, pages 7298-7309.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF1": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Inputkongress ne odobril biudjet dlya osuchestvleniye \"bor'bi s kommunizmom\" v yuzhniy amerike. Ground truth kongress ne odobril b d et dl osuwestvleni \"bor by s kommunizmom\" v no\u020b amerike.kongress ne odobril bjud\u017eet dlja osu\u0161\u010destvlenija \"bor'by s kommunizmom\" v ju\u017enoj amerike.WFSTkongress ne odobril viu d et dl a osu sq estvleni y e \"bor # b i s kommunizmom\" v uuz n ani amerike. kongress ne odobril viu d et dl a osu s\u010d estvleni y e \"bor # b i s kommunizmom\" v uuz n ani amerike. Reranked WFST kongress ne odobril vi d et d e l a osu sq estvleni y e \"bor # b i s kommunizmom\" v uuz n ani amerike. kongress ne odobril vi d et d e l a osu s\u010d estvleni y e \"bor # b i s kommunizmom\" v uuz n ani amerike. b y udivitel'no s kommunizmom\" v ju\u017en y j amerike. Reranked Seq2Seq kongress ne odobril b d et dl osuwestvleni e \"bor by s kommunizmom\" v n y\u020b amerike. kongress ne odobril bjud\u017eet dlja osu\u0161\u010destvleni e \"bor'by s kommunizmom\" v ju\u017en y j amerike. Product of experts kongress ne odobril b i d et dl a osuwestvleni y e \"bor by s kommunizmom\" v uuz n nik ameri kongress ne odobril b i d et dlja a osu\u0161\u010destvleni y e \"bor'by s kommunizmom\" v uuz n nik ameri", |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"text": "Char. Sent. Char. Sent. Char. Sent. Char.", |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td colspan=\"4\">Train (source) Train (target)</td><td colspan=\"2\">Validation</td><td>Test</td></tr><tr><td colspan=\"3\">Sent. Romanized Arabic 5K 104K</td><td colspan=\"2\">49K 935K</td><td>301</td><td>8K</td><td>1K</td><td>20K</td></tr><tr><td>Romanized Russian</td><td colspan=\"4\">5K 319K 307K 111M</td><td>227</td><td>15K</td><td>1K</td><td>72K</td></tr><tr><td>Romanized Kannada</td><td>10K</td><td colspan=\"2\">1M 679K</td><td>64M</td><td>100</td><td>11K</td><td>100</td><td>10K</td></tr><tr><td>Serbian\u2192Bosnian</td><td>160K</td><td colspan=\"2\">9M 136K</td><td colspan=\"3\">9M 16K 923K</td><td>100</td><td>9K</td></tr><tr><td>Bosnian\u2192Serbian</td><td>136K</td><td colspan=\"2\">9M 160K</td><td colspan=\"3\">9M 16K 908K</td><td>100</td><td>10K</td></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"text": "in the parameterization ofHe et al. (2020). The model consists of an Character and word error rates (lower is better) and BLEU scores (higher is better) for the romanization decipherment task. Bold indicates best per column. Model combinations mostly interpolate between the base models' scores, although reranking yields minor improvements in character-level and word-level metrics for the WFST and seq2seq respectively. Note: base model results are not intended as a direct comparison between the WFST and seq2seq, since they are trained on different amounts of data.", |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td/><td>Arabic</td><td/><td/><td/><td>Russian</td><td/><td colspan=\"2\">Kannada</td></tr><tr><td/><td colspan=\"10\">CER WER BLEU CER WER BLEU CER WER BLEU</td></tr><tr><td>WFST</td><td>.405</td><td>.86</td><td>2.3</td><td/><td>.202</td><td>.58</td><td>14.8</td><td>.359</td><td>.71</td><td>12.5</td></tr><tr><td>Seq2Seq</td><td>.571</td><td>.85</td><td>4.0</td><td/><td>.229</td><td>.38</td><td>48.3</td><td>.559</td><td>.79</td><td>11.3</td></tr><tr><td>Reranked WFST</td><td>.398</td><td>.85</td><td>2.8</td><td/><td>.195</td><td>.57</td><td>16.1</td><td>.358</td><td>.71</td><td>12.5</td></tr><tr><td colspan=\"2\">Reranked Seq2Seq .538</td><td>.82</td><td>4.6</td><td/><td>.216</td><td>.39</td><td>45.6</td><td>.545</td><td>.78</td><td>12.6</td></tr><tr><td colspan=\"2\">Product of experts .470</td><td>.88</td><td>2.5</td><td/><td>.178</td><td>.50</td><td>22.9</td><td>.543</td><td>.93</td><td>7.0</td></tr><tr><td colspan=\"7\">Table 2: srp\u2192bos</td><td/><td colspan=\"2\">bos\u2192srp</td></tr><tr><td/><td/><td colspan=\"8\">CER WER BLEU CER WER BLEU</td></tr><tr><td>WFST</td><td/><td/><td colspan=\"2\">.314</td><td>.50</td><td>25.3</td><td>.319</td><td>.52</td><td>25.5</td></tr><tr><td>Seq2Seq</td><td/><td/><td colspan=\"2\">.375</td><td>.49</td><td>34.5</td><td>.395</td><td>.49</td><td>36.3</td></tr><tr><td colspan=\"2\">Reranked WFST</td><td/><td colspan=\"2\">.314</td><td>.49</td><td>26.3</td><td>.317</td><td>.50</td><td>28.1</td></tr><tr><td colspan=\"2\">Reranked Seq2Seq</td><td/><td colspan=\"2\">.376</td><td>.48</td><td>35.1</td><td>.401</td><td>.47</td><td>37.0</td></tr><tr><td colspan=\"2\">Product of experts</td><td/><td colspan=\"2\">.329</td><td>.54</td><td>24.4</td><td>.352</td><td>.66</td><td>20.6</td></tr><tr><td colspan=\"4\">(Pourdamghani and Knight, 2017)</td><td>-</td><td>-</td><td>42.3</td><td>-</td><td>-</td><td>39.2</td></tr><tr><td>(He et al., 2020)</td><td/><td colspan=\"2\">.657</td><td>.81</td><td>5.6</td><td>.693</td><td>.83</td><td>4.7</td></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"text": "Different model outputs for a srp\u2192bos translation example. Prediction errors are highlighted in red. Correctly transliterated segments that do not match the ground truth (e.g. due to paraphrasing)", |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"text": "Different model outputs for a Russian transliteration example (left column-Cyrillic, rightscientific transliteration). Prediction errors are shown in red. Correctly transliterated segments that do not match the ground truth because of spelling standardization in annotation are in yellow. # stands for UNK.", |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>Input</td><td>ana h3dyy 3lek bokra 3la 8 kda</td><td/></tr><tr><td>Ground truth</td><td>8</td><td>AnA H>Edy Elyk bkrp ElY 8 kdh</td></tr></table>", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |