|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:31:41.550355Z" |
|
}, |
|
"title": "The CMU-LTI submission to the SIGMORPHON 2020 Shared Task 0: Language-Specific Cross-Lingual Transfer", |
|
"authors": [ |
|
{ |
|
"first": "Nikitha", |
|
"middle": [], |
|
"last": "Murikinati", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Language Technologies Institute Carnegie Mellon University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Antonios", |
|
"middle": [], |
|
"last": "Anastasopoulos", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Language Technologies Institute Carnegie Mellon University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper describes the CMU-LTI submission to the SIGMORPHON 2020 Shared Task 0 on typologically diverse morphological inflection. The (unrestricted) submission uses the cross-lingual approach of our last year's winning submission (Anastasopoulos and Neubig, 2019), but adapted to use specific transfer languages for each test language. Our system, with fixed non-tuned hyperparameters, achieved a macro-averaged accuracy of 80.65 ranking 20 th among 31 systems, but it was still tied for best system in 25 of the 90 total languages.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper describes the CMU-LTI submission to the SIGMORPHON 2020 Shared Task 0 on typologically diverse morphological inflection. The (unrestricted) submission uses the cross-lingual approach of our last year's winning submission (Anastasopoulos and Neubig, 2019), but adapted to use specific transfer languages for each test language. Our system, with fixed non-tuned hyperparameters, achieved a macro-averaged accuracy of 80.65 ranking 20 th among 31 systems, but it was still tied for best system in 25 of the 90 total languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Morphological inflection is the process that creates grammatical forms (typically guided by sentence structure) of a lexeme/lemma. As a computational task it is framed as mapping from the lemma and a set of morphological tags to the desired form, which simplifies the task by removing the necessity to infer the form from context. For an example from Asturian, given the lemma aguar and tags V;PRS;2;PL;IND, the task is to create the indicative voice, present tense, 2 nd person plural form agu\u00e0.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Let X = x 1 . . . x N be a character sequence of the lemma, T = t 1 . . . t M a set of morphological tags, and Y = y 1 . . . y K be an inflection target character sequence. The goal is to model P (Y | X, T). The problem has been studied in various settings through the SIGMORPHON shared tasks (Cotterell et al., 2016 (Cotterell et al., , 2017 (Cotterell et al., , 2018 Mc-Carthy et al., 2019) , with the 2019 edition focusing in particularly challenging low-resource scenarios. The 2020 edition (Vylomova et al., 2020) focused on generalization of systems across typologically diverse languages, regardless of data size.", |
|
"cite_spans": [ |
|
{ |
|
"start": 293, |
|
"end": 316, |
|
"text": "(Cotterell et al., 2016", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 317, |
|
"end": 342, |
|
"text": "(Cotterell et al., , 2017", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 343, |
|
"end": 368, |
|
"text": "(Cotterell et al., , 2018", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 369, |
|
"end": 392, |
|
"text": "Mc-Carthy et al., 2019)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 495, |
|
"end": 518, |
|
"text": "(Vylomova et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In our submission we built upon our previous work , utilizing cross-lingual transfer from related languages, data hallucination, and a series of training techniques and regularizers. The defining change was that we attempted to create language-specific regimes for each test language, depending on the particular characteristics of the language, on the data availability for the particular test language and the availability of other related language data. As a result, for some high-resource languages we submitted systems without cross-lingual transfer, for some we used a single related high resource language, and for some we used multiple related languages. Last, for a few test languages we augmented our datasets with romanized versions of the training data, an approach that has shown promising results in concurrent work (Murikinati et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 830, |
|
"end": 855, |
|
"text": "(Murikinati et al., 2020)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our submissions are very competitive in 25 of the 90 test languages, with performance statistically significant similar to the best performing system, but fall behind in many other languages. We suspect that this is due to our not tuning of the system's hyperparameters towards higher-resource settings. Table 1 : Accuracy of our system on every language. We highlight the languages where our system was statistically equal to the best system (with p < 0.005).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 304, |
|
"end": 311, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our system is the same as the one of Anastasopoulos and Neubig (2019): a neural multi-source encoder-decoder (which reads in the lemma and the tag sequences in a disentangled manner using two separate encoders) with a task-specific attention mechanism. We skip providing further redundant information and we direct the interested reader to for all details. It is important to note, however, that we did not tune any model hyperparameters for our submissions (which we suspect contributed to the poor performance of our system in some languages); we used the default parameters from the system's distribution 1 which are tuned towards extremely low-resource settings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Description", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Here, we provide an exhaustive list of modifications to the general pipeline that we devised for specific languages and language families. Tonal languages like Eastern Highland Chatino (cly), importantly, often denote the syllable's tone through superscript diacritics: take the Eastern Highland Chatino lemma sqwe 14 and its second person singular number habitual mood inflected form nsqwe 20 . The data hallucination technique would identify the substring sqwe as a stem-like region, and replace its characters with random ones. A completely random substitution, however, could lead to the creation of nonsensical syllables, if tone diacritics are inserted instead of letter characters e.g. if we hallucinated a s 3 ae 14 lemma for the above example. Similarly, if a stem-like region includes a tone diacritic, we would not want to randomly replace it with non-diacritic characters, lest we end up with badly formed syllables without tone information.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Description", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "To avoid these issues, we restrict the random substitutions for Oto-Manguean languages with tone diacritics, so that we only sample tone diacritics if we are substituting a tone diacritic (and similarly for letter characters). We have found this approach to significantly improve results in previous work on morphological inflection for Eastern Highland Chatino (Cruz et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 362, |
|
"end": 381, |
|
"text": "(Cruz et al., 2020)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Description", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Languages For languages with more than 20,000 training examples, we decided to not use crosslingual transfer nor data hallucination, as systems in previous SIGMORPHON shared tasks achieved very competitive performance on such high-resource settings without these additions. For languages with less than 20,000 but more than 10,000 training examples, we used our data hallucination process to create 10,000 additional training examples to be used for training.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Single-Language Systems for High Resource", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Cross-Lingual Transfer from a Single Language For some languages we decided to use a single, high-resource related language to combine into our training to perform cross-lingual transfer, along with data hallucination. We based most these decisions in previous results (mainly from (Anastasopoulos and Neubig, 2019)), but some where our semi-arbitrary experimenter's intuitions. We provide a complete list of these settings: ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Single-Language Systems for High Resource", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 for", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Single-Language Systems for High Resource", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We submitted systems with unique transfer language combinations for extremely low-resource languages for which several very related languages were available (all systems also included hallucinated data in the test language). Specifically:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multiple-Language Cross-Lingual Transfer", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 for Ingrian (izh) we used Estonian (est), Votic (vot), and a random sample (20,000 instances) from Finnish (fin) data, \u2022 for Votic (vot) we used Estonian (est), Ingrian (izh), and a random sample (20,000 instances) from Finnish (fin) data, \u2022 for Urdu (urd) we used Hindi (hin) and Bengali (ben),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multiple-Language Cross-Lingual Transfer", |
|
"sec_num": null |
|
}, |
|
{

"text": "\u2022 for Bashkir (bak) we used Turkish (tur), Kazakh (kaz), and Kyrgyz (kir), \u2022 for Crimean Tatar (crh) we used Turkish (tur), Kazakh (kaz), and Kyrgyz (kir), \u2022 for Kazakh (kaz) we used Turkish (tur), Bashkir (bak), and Kyrgyz (kir), \u2022 for Kyrgyz (kir) we used Turkish (tur), Bashkir (bak), and Kazakh (kaz), \u2022 for Uighur (uig) we used Turkish (tur) and Uzbek (uzb), and \u2022 for Ludian (lud) we used 20,000 random samples from Karelian (krl) and Veps (vep).",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Multiple-Language Cross-Lingual Transfer",

"sec_num": null

},
|
{ |
|
"text": "Romanization for Different Scripts Last, we experimented with cross-lingual transfer and transliteration of related languages written in different script. The motivation lies in the observation made by that often cross-lingual transfer results in smaller improvements if the transfer and the test language do not share the same script, even if the languages are related. They bring Arabic-Maltese and Kurmanji-Sorani as possible examples. In concurrent work (Murikinati et al., 2020) we experimented with transliterating the transfer language into the test language's script, with encouraging results in low-resource settings. Alternatively, if the training languages use the latin script but the test language does not, we found that that by romanizing the test language training data and concatenating them as another language (along with the data in the original script) also helped. We applied these strategies on the following language pairs. Transliterating a transfer language into the test language's script:", |
|
"cite_spans": [ |
|
{ |
|
"start": 458, |
|
"end": 483, |
|
"text": "(Murikinati et al., 2020)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multiple-Language Cross-Lingual Transfer", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "1. for Maltese (mlt) we used Italian (ita) and romanized Hebrew (heb), 2. for Oromo (orm) we used romanized Arabic (ara) and romanized Hebrew (heb), and 3. for Bengali (ben) we used Sanskrit (san), Hindi (hin), and Sanskrit transliterated into the Bengali script using the Indic NLP library 2 (Kunchukuttan, 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 293, |
|
"end": 313, |
|
"text": "(Kunchukuttan, 2020)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multiple-Language Cross-Lingual Transfer", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Romanizing the test language training data and training with both romanized and original, along with more romanized, related languages:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multiple-Language Cross-Lingual Transfer", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "1. for Classical Syriac (syc) we used romanized Arabic (ara) and romanized Hebrew (heb), as well as romanized Classical Syriac (Classical Syriac originally uses a distinct script), 2. for Pashto (pus) we used romanized Farsi (fas) and romanized Pashto, while 3. for Tajik (tgk) we used romanized Farsi (fas) and romanized Tajik.", |
|
"cite_spans": [ |
|
{ |
|
"start": 219, |
|
"end": 230, |
|
"text": "Farsi (fas)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 296, |
|
"end": 307, |
|
"text": "Farsi (fas)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multiple-Language Cross-Lingual Transfer", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "3 Results Table 1 lists the accuracy of our submitted system in every language. We also report results per language family and genus in Table 2 , to further facilitate an equitable evaluation across language families. Our system achieves a macro-averaged accuracy of 86.6% with a standard deviation of 14.3. Even though it does not use self-attention and we did not tune any hyper-parameters, our system still achieved competitive performance, tying for first in 25 of the 90 total languages (it still however does not outperform the best baseline system (Wu et al., 2020) ). These include languages that were generally easy for all systems, such as the Austronesian and the Niger-Congo ones. However, they also include the extremely low-resource languages like Ludian (lud), V\u00f5ro (vro), and Middle Low German (gml), where we suspect that our system performed en par with the more sophisticated (and we suspect, tuned) systems due to our informed selection of languages for cross-lingual transfer.", |
|
"cite_spans": [ |
|
{ |
|
"start": 555, |
|
"end": 572, |
|
"text": "(Wu et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 17, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 136, |
|
"end": 143, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Multiple-Language Cross-Lingual Transfer", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The two languages where our system performs the worst are Algic (Cree) and Tungusic (Evenki). We suspect this is due to the fact that the data hallucination technique, which is crucial for such low resource settings, is not appropriate for capturing the vowel harmony of Evenki along with its agglutinating morphological patterns -the hallucinated data do not follow these patterns and hence do not guide the model towards learning them. As for Cree, we suspect that the problem lies again in the data hallucination process: the polysynthetic and fusional nature of Cree verb inflected forms is too complicated to be modeled by the simple characterlevel alignment model which is the first step for hallucination.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multiple-Language Cross-Lingual Transfer", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The performance of our system in the 2020 SIG-MORPHON Shared Task leaves many questions unanswered and several avenues to explore in future work. Regarding the choice of languages to use for cross-lingual transfer, we will further in-vestigate the use of automatic suggestion systems such as the one of Lin et al. (2019) . With regards to modeling, we will update our model to use sparsemax (Martins and Astudillo, 2016) , which can facilitate exact search and hopefully lead to better results (Peters and Martins, 2019 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 303, |
|
"end": 320, |
|
"text": "Lin et al. (2019)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 391, |
|
"end": 420, |
|
"text": "(Martins and Astudillo, 2016)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 494, |
|
"end": 519, |
|
"text": "(Peters and Martins, 2019", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "As we anticipate and hope the shared task and the whole community will become more multilingual in the future, in the future we will employ the language/task selection method of Xia et al. (2020) , which will allow us to tune the systems in a small subset of languages that will generalize well in all others. Similarly, we will employ more sophisticated techniques for learning in multilingual settings, such as differential data selection (Wang et al., 2019 (Wang et al., , 2020 which will allow us to optimize a single model to multiple model objectives (namely, each target language).", |
|
"cite_spans": [ |
|
{ |
|
"start": 178, |
|
"end": 195, |
|
"text": "Xia et al. (2020)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 441, |
|
"end": 459, |
|
"text": "(Wang et al., 2019", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 460, |
|
"end": 480, |
|
"text": "(Wang et al., , 2020", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "https://github.com/anoopkunchukuttan/ indic_nlp_library", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This material is based upon work generously supported by the National Science Foundation under grant 1761548.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Pushing the limits of low-resource morphological inflection", |
|
"authors": [ |
|
{ |
|
"first": "Antonios", |
|
"middle": [], |
|
"last": "Anastasopoulos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graham", |
|
"middle": [], |
|
"last": "Neubig", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proc. EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Antonios Anastasopoulos and Graham Neubig. 2019. Pushing the limits of low-resource morphological in- flection. In Proc. EMNLP, Hong Kong.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "The CoNLL-SIGMORPHON 2018 shared task: Universal morphological reinflection", |
|
"authors": [ |
|
{ |
|
"first": "Katharina", |
|
"middle": [], |
|
"last": "Mccarthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Kann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Garrett", |
|
"middle": [], |
|
"last": "Mielke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miikka", |
|
"middle": [], |
|
"last": "Nicolai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Silfverberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proc. CoNLL-SIGMORPHON", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "McCarthy, Katharina Kann, Sebastian Mielke, Gar- rett Nicolai, Miikka Silfverberg, David Yarowsky, Jason Eisner, and Mans Hulden. 2018. The CoNLL-SIGMORPHON 2018 shared task: Univer- sal morphological reinflection. In Proc. CoNLL- SIGMORPHON.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "CoNLL-SIGMORPHON 2017 shared task: Universal morphological reinflection in 52 languages", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christo", |
|
"middle": [], |
|
"last": "Kirov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Sylak-Glassman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G\u00e9raldine", |
|
"middle": [], |
|
"last": "Walther", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ekaterina", |
|
"middle": [], |
|
"last": "Vylomova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manaal", |
|
"middle": [], |
|
"last": "Faruqui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sandra", |
|
"middle": [], |
|
"last": "K\u00fcbler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proc. CoNLL SIGMORPHON", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan Cotterell, Christo Kirov, John Sylak-Glassman, G\u00e9raldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sandra K\u00fcbler, David Yarowsky, Jason Eisner, and Mans Hulden. 2017. CoNLL-SIGMORPHON 2017 shared task: Univer- sal morphological reinflection in 52 languages. In Proc. CoNLL SIGMORPHON.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Jason Eisner, and Mans Hulden. 2016. The SIGMORPHON 2016 shared taskmorphological reinflection", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christo", |
|
"middle": [], |
|
"last": "Kirov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Sylak-Glassman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Proc. SIGMOR-PHON", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016. The SIGMORPHON 2016 shared task- morphological reinflection. In Proc. SIGMOR- PHON.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "A resource for studying chatino verbal morphology", |
|
"authors": [ |
|
{ |
|
"first": "Hilaria", |
|
"middle": [], |
|
"last": "Cruz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antonios", |
|
"middle": [], |
|
"last": "Anastasopoulos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gregory", |
|
"middle": [], |
|
"last": "Stump", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proc. LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hilaria Cruz, Antonios Anastasopoulos, and Gregory Stump. 2020. A resource for studying chatino verbal morphology. In Proc. LREC. To appear.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "The indicnlp library", |
|
"authors": [ |
|
{ |
|
"first": "Anoop", |
|
"middle": [], |
|
"last": "Kunchukuttan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anoop Kunchukuttan. 2020. The indicnlp library.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Choosing transfer languages for cross-lingual learning", |
|
"authors": [ |
|
{ |
|
"first": "Yu-Hsiang", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chian-Yu", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zirui", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuyan", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mengzhou", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shruti", |
|
"middle": [], |
|
"last": "Rijhwani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junxian", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhisong", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xuezhe", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antonios", |
|
"middle": [], |
|
"last": "Anastasopoulos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Littell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graham", |
|
"middle": [], |
|
"last": "Neubig", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3125--3135", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1301" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yu-Hsiang Lin, Chian-Yu Chen, Jean Lee, Zirui Li, Yuyan Zhang, Mengzhou Xia, Shruti Rijhwani, Junxian He, Zhisong Zhang, Xuezhe Ma, Antonios Anastasopoulos, Patrick Littell, and Graham Neubig. 2019. Choosing transfer languages for cross-lingual learning. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 3125-3135, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "From softmax to sparsemax: A sparse model of attention and multi-label classification", |
|
"authors": [ |
|
{ |
|
"first": "Andre", |
|
"middle": [], |
|
"last": "Martins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ramon", |
|
"middle": [], |
|
"last": "Astudillo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proc. ICML", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1614--1623", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andre Martins and Ramon Astudillo. 2016. From soft- max to sparsemax: A sparse model of attention and multi-label classification. In Proc. ICML, pages 1614-1623.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "The SIGMORPHON 2019 shared task: Morphological analysis in context and cross-lingual transfer for inflection", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Arya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ekaterina", |
|
"middle": [], |
|
"last": "Mccarthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shijie", |
|
"middle": [], |
|
"last": "Vylomova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chaitanya", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lawrence", |
|
"middle": [], |
|
"last": "Malaviya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Garrett", |
|
"middle": [], |
|
"last": "Wolf-Sonkin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miikka", |
|
"middle": [], |
|
"last": "Nicolai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Silfverberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Mielke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Heinz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mans", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Hulden", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "229--244", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Arya D. McCarthy, Ekaterina Vylomova, Shijie Wu, Chaitanya Malaviya, Lawrence Wolf-Sonkin, Gar- rett Nicolai, Miikka Silfverberg, Sebastian J. Mielke, Jeffrey Heinz, Ryan Cotterell, and Mans Hulden. 2019. The SIGMORPHON 2019 shared task: Mor- phological analysis in context and cross-lingual transfer for inflection. In Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 229-244, Flo- rence, Italy.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Transliteration for cross-lingual morphological inflection", |
|
"authors": [ |
|
{ |
|
"first": "Nikitha", |
|
"middle": [], |
|
"last": "Murikinati", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antonios", |
|
"middle": [], |
|
"last": "Anastasopoulos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graham", |
|
"middle": [], |
|
"last": "Neubig", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proc. SIGMORPHON", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nikitha Murikinati, Antonios Anastasopoulos, and Gra- ham Neubig. 2020. Transliteration for cross-lingual morphological inflection. In Proc. SIGMORPHON. To appear.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "It-ist at the sigmorphon 2019 shared task: Sparse two-headed models for inflection", |
|
"authors": [ |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Andr\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Martins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "50--56", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ben Peters and Andr\u00e9 FT Martins. 2019. It-ist at the sigmorphon 2019 shared task: Sparse two-headed models for inflection. In Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 50-56.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Data augmentation for morphological reinflection", |
|
"authors": [ |
|
{ |
|
"first": "Miikka", |
|
"middle": [], |
|
"last": "Silfverberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Wiemerslage", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ling", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lingshuang Jack", |
|
"middle": [], |
|
"last": "Mao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proc. SIGMORPHON", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Miikka Silfverberg, Adam Wiemerslage, Ling Liu, and Lingshuang Jack Mao. 2017. Data augmentation for morphological reinflection. Proc. SIGMORPHON.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Miikka Silfverberg, and Mans Hulden. 2020. The SIG-MORPHON 2020 Shared Task 0: Typologically diverse morphological inflection", |
|
"authors": [ |
|
{ |
|
"first": "Ekaterina", |
|
"middle": [], |
|
"last": "Vylomova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jennifer", |
|
"middle": [], |
|
"last": "White", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elizabeth", |
|
"middle": [], |
|
"last": "Salesky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sabrina", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Mielke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shijie", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edoardo", |
|
"middle": [], |
|
"last": "Ponti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rowan", |
|
"middle": [], |
|
"last": "Hall Maudslay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ran", |
|
"middle": [], |
|
"last": "Zmigrod", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joseph", |
|
"middle": [], |
|
"last": "Valvoda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Svetlana", |
|
"middle": [], |
|
"last": "Toldova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francis", |
|
"middle": [], |
|
"last": "Tyers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elena", |
|
"middle": [], |
|
"last": "Klyachko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Yegorov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Natalia", |
|
"middle": [], |
|
"last": "Krizhanovsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paula", |
|
"middle": [], |
|
"last": "Czarnowska", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Irene", |
|
"middle": [], |
|
"last": "Nikkarinen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrej", |
|
"middle": [], |
|
"last": "Krizhanovsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tiago", |
|
"middle": [], |
|
"last": "Pimentel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucas", |
|
"middle": [], |
|
"last": "Torroba Hennigen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christo", |
|
"middle": [], |
|
"last": "Kirov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Garrett", |
|
"middle": [], |
|
"last": "Nicolai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adina", |
|
"middle": [], |
|
"last": "Williams", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antonios", |
|
"middle": [], |
|
"last": "Anastasopoulos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hilaria", |
|
"middle": [], |
|
"last": "Cruz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eleanor", |
|
"middle": [], |
|
"last": "Chodroff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miikka", |
|
"middle": [], |
|
"last": "Silfverberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mans", |
|
"middle": [], |
|
"last": "Hulden", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ekaterina Vylomova, Jennifer White, Elizabeth Salesky, Sabrina J. Mielke, Shijie Wu, Edoardo Ponti, Rowan Hall Maudslay, Ran Zmigrod, Joseph Valvoda, Svetlana Toldova, Francis Tyers, Elena Klyachko, Ilya Yegorov, Natalia Krizhanovsky, Paula Czarnowska, Irene Nikkarinen, Andrej Krizhanovsky, Tiago Pimentel, Lucas Torroba Hennigen, Christo Kirov, Garrett Nicolai, Ad- ina Williams, Antonios Anastasopoulos, Hilaria Cruz, Eleanor Chodroff, Ryan Cotterell, Miikka Silfverberg, and Mans Hulden. 2020. The SIG- MORPHON 2020 Shared Task 0: Typologically diverse morphological inflection. In Proceedings of the 17th SIGMORPHON Workshop on Compu- tational Research in Phonetics, Phonology, and Morphology.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Antonios Anastasopoulos, Graham Neubig, and Jaime Carbonell", |
|
"authors": [ |
|
{ |
|
"first": "Xinyi", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hieu", |
|
"middle": [], |
|
"last": "Pham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Michel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antonios", |
|
"middle": [], |
|
"last": "Anastasopoulos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graham", |
|
"middle": [], |
|
"last": "Neubig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaime", |
|
"middle": [], |
|
"last": "Carbonell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Optimizing data usage via differentiable rewards", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1911.10088" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xinyi Wang, Hieu Pham, Paul Michel, Antonios Anas- tasopoulos, Graham Neubig, and Jaime Carbonell. 2019. Optimizing data usage via differentiable re- wards. arXiv:1911.10088.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Balancing training for multilingual neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Xinyi", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yulia", |
|
"middle": [], |
|
"last": "Tsvetkov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graham", |
|
"middle": [], |
|
"last": "Neubig", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Annual Conference of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xinyi Wang, Yulia Tsvetkov, and Graham Neubig. 2020. Balancing training for multilingual neural ma- chine translation. In Annual Conference of the Asso- ciation for Computational Linguistics (ACL).", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Predicting performance for natural language processing tasks", |
|
"authors": [ |
|
{ |
|
"first": "Mengzhou", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antonios", |
|
"middle": [], |
|
"last": "Anastasopoulos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruochen", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yiming", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graham", |
|
"middle": [], |
|
"last": "Neubig", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proc. ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mengzhou Xia, Antonios Anastasopoulos, Ruochen Xu, Yiming Yang, and Graham Neubig. 2020. Pre- dicting performance for natural language processing tasks. In Proc. ACL. To appear.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"text": "https://github.com/antonisa/inflection Data Hallucination for tonal languages The data hallucination process of Anastasopoulos and Neubig (2019), inspired by Silfverberg et al. (2017), samples random characters from the language's alphabet to replace characters in stem-like regions discovered from the training examples through a simple alignment-based heuristic.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"text": "", |
|
"html": null, |
|
"content": "<table/>", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |