|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:30:59.382977Z" |
|
}, |
|
"title": "The SIGMORPHON 2020 Shared Task on Unsupervised Morphological Paradigm Completion", |
|
"authors": [ |
|
{ |
|
"first": "Katharina", |
|
"middle": [], |
|
"last": "Kann", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Arya", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Mccarthy", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Garrett", |
|
"middle": [], |
|
"last": "Nicolai", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Mans", |
|
"middle": [], |
|
"last": "Hulden", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper, we describe the findings of the SIGMORPHON 2020 shared task on unsupervised morphological paradigm completion (SIGMORPHON 2020 Task 2), a novel task in the field of inflectional morphology. Participants were asked to submit systems which take raw text and a list of lemmas as input, and output all inflected forms, i.e., the entire morphological paradigm, of each lemma. In order to simulate a realistic use case, we first released data for 5 development languages. However, systems were officially evaluated on 9 surprise languages, which were only revealed a few days before the submission deadline. We provided a modular baseline system, which is a pipeline of 4 components. 3 teams submitted a total of 7 systems, but, surprisingly, none of the submitted systems was able to improve over the baseline on average over all 9 test languages. Only on 3 languages did a submitted system obtain the best results. This shows that unsupervised morphological paradigm completion is still largely unsolved. We present an analysis here, so that this shared task will ground further research on the topic.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper, we describe the findings of the SIGMORPHON 2020 shared task on unsupervised morphological paradigm completion (SIGMORPHON 2020 Task 2), a novel task in the field of inflectional morphology. Participants were asked to submit systems which take raw text and a list of lemmas as input, and output all inflected forms, i.e., the entire morphological paradigm, of each lemma. In order to simulate a realistic use case, we first released data for 5 development languages. However, systems were officially evaluated on 9 surprise languages, which were only revealed a few days before the submission deadline. We provided a modular baseline system, which is a pipeline of 4 components. 3 teams submitted a total of 7 systems, but, surprisingly, none of the submitted systems was able to improve over the baseline on average over all 9 test languages. Only on 3 languages did a submitted system obtain the best results. This shows that unsupervised morphological paradigm completion is still largely unsolved. We present an analysis here, so that this shared task will ground further research on the topic.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "In morphologically rich languages, words inflect: grammatical information like person, number, tense, and case are incorporated into the word itself, rather than expressed via function words. Not all languages mark the same properties: German nouns, for instance, have more inflected forms than their English counterparts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "When acquiring a language, humans usually learn to inflect words without explicit instruction. Thus, most native speakers are capable of generating inflected forms even of artificial lemmas (Berko, 1958) . However, models that can generate paradigms without explicit morphological train- * Equal contribution. Figure 1 : The task of unsupervised morphological paradigm completion (Jin et al., 2020) consists of generating complete inflectional paradigms for given lemmas, with the only additional available information being a corpus without annotations. ing have not yet been developed. We anticipate that such systems will be extremely useful, as they will open the possibility of rapid development of first-pass inflectional paradigms in a large set of languages. These can be utilized both in se for generation and as a starting point for elicitation , thus aiding the development of low-resource human language technologies (Christianson et al., 2018) . In this paper, we present the SIGMORPHON 2020 shared task on unsupervised morphological paradigm completion (SIGMORPHON 2020 Task 2). We asked participants to produce systems that can learn to inflect in an unsupervised fashion: given a small corpus (the Bible) together with a list of lemmas for each language, systems for the shared task should output all corresponding inflected forms. In their output, systems had to mark which forms expressed the same morphosyntactic features, e.g., demonstrate knowledge of the fact that walks is to walk as listens is to listen, despite not recognizing the morphological features explic-itly. We show a visualization of our shared task setup in Figure 1 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 190, |
|
"end": 203, |
|
"text": "(Berko, 1958)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 380, |
|
"end": 398, |
|
"text": "(Jin et al., 2020)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
|
{ |
|
"start": 929, |
|
"end": 956, |
|
"text": "(Christianson et al., 2018)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 310, |
|
"end": 318, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1645, |
|
"end": 1653, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Unsupervised morphological paradigm completion requires solving multiple subproblems either explicitly or implicitly. First, a system needs to figure out which words in the corpus belong to the same paradigm. This can, for instance, be done via string similarity: walks is similar to walk, but less so to listen. Second, it needs to figure out the shape of the paradigm. This requires detecting which forms of different lemmas express the same morphosyntactic features, even if they are not constructed from their respective lemmas in the exact same way. Third, a system needs to generate all forms not attested in the provided corpus. Using the collected inflected forms as training data, this can be reduced to the supervised morphological inflection task (Cotterell et al., 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 758, |
|
"end": 782, |
|
"text": "(Cotterell et al., 2016)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This year's submitted systems can be split into two categories: those that built on the baseline (Retrieval+X) and those that did not (Segment+Conquer). The baseline system is set up as a pipeline which performs the following steps: edit tree retrieval, additional lemma retrieval, paradigm size discovery, and inflection generation (Jin et al., 2020) . As it is highly modular, we provided two versions that employ different inflection models. 1 All systems built on the baseline substituted the morphological inflection component.", |
|
"cite_spans": [ |
|
{ |
|
"start": 333, |
|
"end": 351, |
|
"text": "(Jin et al., 2020)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "No system outperformed the baseline overall. However, two Retrieval+X models slightly improved over the baseline on three individual languages. We conclude that the task of unsupervised morphological paradigm completion is still an open challenge, and we hope that this shared task will inspire future research in this area.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Informal description. The task of unsupervised morphological paradigm completion mimics a setting where the only resources available in a language are a corpus and a short list of dictionary forms, i.e., lemmas. The latter could, for instance, be obtained via basic word-to-word translation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unsupervised Morphological Paradigm Completion", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The goal is to generate all inflected forms of the given lemmas.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unsupervised Morphological Paradigm Completion", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "For an English example, assume the following lemma list to be given: The numbers serve as unique identifiers for paradigm slots: in above example, \"4\" corresponds to the present participle. The inflections walking and talking therefore belong to the same paradigm slot. For the task, participants are not provided any knowledge of the grammatical content of the slots.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unsupervised Morphological Paradigm Completion", |
|
"sec_num": "2.1" |
|
}, |
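As an illustration of the required output format (a minimal hypothetical sketch: the lemmas walk and talk and the concrete slot numbering are assumptions chosen to match the surrounding discussion, not released shared task data), each prediction pairs a lemma with one inflected form and a bare slot number:

```python
# Hypothetical system output for the lemma list {walk, talk}: each entry is
# (lemma, inflected form, paradigm slot number). Slot "4" stands for the
# present participle here, so walking and talking share the same identifier,
# even though the system never sees the name of that feature.
predicted_paradigms = [
    ("walk", "walked",  3),
    ("walk", "walking", 4),
    ("talk", "talked",  3),
    ("talk", "talking", 4),
]
```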
|
{ |
|
"text": "Formal definition. We denote the paradigm \u03c0( ) of a lemma as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unsupervised Morphological Paradigm Completion", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03c0( ) = f ( , t \u03b3 ) \u03b3\u2208\u0393( ) ,", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Unsupervised Morphological Paradigm Completion", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "with f : \u03a3 * \u00d7 T \u2192 \u03a3 * being a function that maps a lemma and a vector of morphological features t \u03b3 \u2208 T expressed by paradigm slot \u03b3 to the corresponding inflected form. \u0393( ) is the set of slots in lemma 's paradigm.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unsupervised Morphological Paradigm Completion", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "We then formally describe the task of unsupervised morphological paradigm completion as follows. Given a corpus D = w 1 , . . . , w |D| together with a list L = { j } of |L| lemmas belonging to the same part of speech, 2 unsupervised morphological paradigm completion consists of generating the paradigms {\u03c0( )} of all lemmas \u2208 L.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unsupervised Morphological Paradigm Completion", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Remarks. It is impossible for unsupervised systems to predict the names of the features expressed by paradigm slots, an arbitrary decision made by human annotators. This is why, for the shared task, we asked systems to mark which forms belong to the same slot by numbering them, e.g., to predict that walked is the form for slot 3, while listens corresponds to slot 2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unsupervised Morphological Paradigm Completion", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The official evaluation metric was macro-averaged best-match accuracy (BMAcc; Jin et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 78, |
|
"end": 95, |
|
"text": "Jin et al., 2020)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Macro-averaged Best-Match Accuracy", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "In contrast to supervised morphological inflection (Cotterell et al., 2016) , our task cannot be evaluated with word-level accuracy. For the former, one can compare the prediction for each lemma and morphological feature vector to the ground truth. However, for unsupervised paradigm completion, this requires a mapping from predicted slots to the gold standard's paradigm slots.", |
|
"cite_spans": [ |
|
{ |
|
"start": 51, |
|
"end": 75, |
|
"text": "(Cotterell et al., 2016)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Macro-averaged Best-Match Accuracy", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "BMAcc, thus, first computes the word-level accuracy each predicted slot would obtain against each true slot. It then constructs a complete bipartite graph, with those accuracies as edge weights. This enables computing of the maximum-weight full matching with the algorithm of Karp (1980) . BMAcc then corresponds to the sum of all accuracies for the best matching, divided by the maximum of the number of gold and predicted slots.", |
|
"cite_spans": [ |
|
{ |
|
"start": 276, |
|
"end": 287, |
|
"text": "Karp (1980)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Macro-averaged Best-Match Accuracy", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "BMAcc penalizes systems for predicting a wrong number of paradigm slots. However, detecting the correct number of identical slots -something we encounter in some languages due to syncretism -is extremely challenging. Thus, we merge slots with identical forms for all lemmas in both the predictions and the ground truth before evaluating.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Macro-averaged Best-Match Accuracy", |
|
"sec_num": "2.2" |
|
}, |
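The following is a minimal sketch of this metric for a single language, under stated assumptions; it is not the official evaluation script. It merges fully syncretic slots, computes word-level accuracy for every (gold slot, predicted slot) pair, and solves the resulting assignment problem. scipy's linear_sum_assignment is used here as a stand-in for the maximum-weight full matching of Karp (1980), and paradigms are assumed to be given as dictionaries from lemma to {slot: form}. Macro-averaging over languages is then simply the mean of the per-language values.

```python
from scipy.optimize import linear_sum_assignment  # rectangular assignment solver

def merge_identical_slots(paradigms):
    """Collapse slots whose forms are identical for every lemma (syncretism).
    paradigms: dict lemma -> dict slot -> form. Returns one slot per group."""
    slots = sorted({s for forms in paradigms.values() for s in forms})
    seen, kept = set(), []
    for s in slots:
        signature = tuple(paradigms[l].get(s) for l in sorted(paradigms))
        if signature not in seen:
            seen.add(signature)
            kept.append(s)
    return kept

def bmacc(gold, pred):
    """Best-match accuracy for one language (sketch; assumes complete gold paradigms)."""
    gold_slots = merge_identical_slots(gold)
    pred_slots = merge_identical_slots(pred)
    lemmas = sorted(gold)
    # Word-level accuracy of each gold slot against each predicted slot.
    acc = [[sum(pred.get(l, {}).get(p) == gold[l][g] for l in lemmas) / len(lemmas)
            for p in pred_slots]
           for g in gold_slots]
    rows, cols = linear_sum_assignment(acc, maximize=True)
    best = sum(acc[r][c] for r, c in zip(rows, cols))
    # Dividing by the larger slot count penalizes predicting too few or too many slots.
    return best / max(len(gold_slots), len(pred_slots))
```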
|
{ |
|
"text": "Example. Assume our gold standard is (1) (the complete, 5-slot English paradigms for the verbs walk and listen) and a system outputs the following, including an error in the fourth row:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Macro-averaged Best-Match Accuracy", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "walk walks 1 walk walking 2 listen listens 1 listen listenen 2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Macro-averaged Best-Match Accuracy", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "First, we merge slots 3 and 5 in the gold standard, since they are identical for both lemmas. Ignoring slot 5, we then compute the BMAcc as follows. Slot 1 yields an accuracy of 100% as compared to gold slot 2, and 0% otherwise. Similarly, slot 2 reaches an accuracy of 50% for gold slot 4, and 0% otherwise. Additionally, given the best mapping of those two slots, we obtain 0% accuracy for gold slots 1 and 3. Thus, the BMAcc is BMAcc = 1 + 0.5 + 0 + 0 4 = 0.375 33 Shared Task Data", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Macro-averaged Best-Match Accuracy", |
|
"sec_num": "2.2" |
|
}, |
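Applied to this worked example, the bmacc sketch above reproduces the reported value; the gold slot numbering and the full five-slot paradigms of walk and listen below are assumptions filled in from the surrounding description.

```python
# Gold: 5-slot English verb paradigms; slots 3 and 5 are syncretic and get merged.
gold = {
    "walk":   {1: "walk",   2: "walks",   3: "walked",   4: "walking",   5: "walked"},
    "listen": {1: "listen", 2: "listens", 3: "listened", 4: "listening", 5: "listened"},
}
# System output from the example: predicted slot 1 matches gold slot 2 perfectly,
# predicted slot 2 matches gold slot 4 for walk only (listenen is an error).
pred = {
    "walk":   {1: "walks",   2: "walking"},
    "listen": {1: "listens", 2: "listenen"},
}
print(bmacc(gold, pred))  # (1.0 + 0.5) / 4 = 0.375
```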
|
{ |
|
"text": "We provided data for 5 development and 9 test languages. The development languages were available for system development and hyperparameter tuning, while the test languages were released shortly before the shared task deadline. For the test languages, no ground truth data was available before system submission. This setup emulated a realworld scenario with the goal to create a system for languages about which we have no information. For the raw text corpora, we leveraged the JHU Bible Corpus . This resource covers 1600 languages, which will enable future work to quickly produce systems for a large set of languages. Additionally, using the Bible allowed for a fair comparison of models across languages without potential confounds such as domain mismatch. 7 of the languages have only the New Testament available (approximately 8k sentences), and 7 have both the New and Old Testaments (approximately 31k sentences).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Provided Resources", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "All morphological information was taken from UniMorph (Sylak-Glassman et al., 2015; Kirov et al., 2018) , a resource which contains paradigms for more than 100 languages. However, this information was only accessible to the participants for the development languages. UniMorph paradigms were further used internally for evaluation on the test languages-this data was then released after the conclusion of the shared task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 54, |
|
"end": 83, |
|
"text": "(Sylak-Glassman et al., 2015;", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 84, |
|
"end": 103, |
|
"text": "Kirov et al., 2018)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Provided Resources", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "During the development phase of the shared task, we released 5 languages to allow participants to investigate various design decisions: Maltese (MLT), Persian (FAS), Portuguese (POR), Russian (RUS), and Swedish (SWE). These languages are typologically and genetically varied, representing a number of verbal inflectional phenomena. Swedish and Portuguese are typical of Western European languages, and mostly exhibit fusional, suffixing verbal inflection. Russian, as an exemplar of Slavic languages, is still mostly suffixing, but does observe regular ablaut, and has considerable phonologicallyconditioned allomorphy. Maltese is a Semitic language with a heavy Romance influence, and verbs # Inflections in corpus=number of inflections from the gold file which can be found in the corpus, token-based; Paradigm size=number of different morphological feature vectors in the dataset for the language; Paradigm size (merged)=paradigm size, but counting slots with all forms being identical only once.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Languages", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "combine templatic and suffixing inflection. Persian is mostly suffixing, but does allow for verbal inflectional prefixation, such as negation and marking subjunctive mood. Since the development languages were used for system tuning, their scores did not count towards the final ranking.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Languages", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "After a suitable period for system development and tuning, we released nine test languages: Basque (EUS), Bulgarian (BUL), English (ENG), Finnish (FIN), German (DEU), Kannada (KAN), Navajo (NAV), Spanish (SPA), and Turkish (TUR). Although these languages observe many features common to the development languages, such as fusional inflection, suffixation, and ablaut, they also cover inflectional categories absent in the development languages. Navajo, unlike any of the development languages, is strongly prefixing. Basque, Finnish, and Turkish are largely agglutinative, with long, complex affix chains that are difficult to identify through longest suffix matching. Furthermore, Finnish and Turkish feature vowel harmony and consonant gradation, which both require a method to identify allomorphs correctly to be able to merge different variants of the same paradigm slot.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Languages", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Statistics of the resources provided for all languages are shown in Table 1 for the development languages and in Table 2 for the test languages.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 68, |
|
"end": 75, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 113, |
|
"end": 120, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Statistics", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The token count (line 1) and, thus, the size of the provided Bible corpora, differs between 104,631 (Kannada) and 871,707 (Swedish). This number depends both on the typology of a language and on the completeness of the provided Bible translation. The number of types (line 2) is between 7,144 (English) and 59,458 (Turkish). It is strongly influenced by how morphologically rich a language is, i.e., how large the paradigms are, which is often approximated with the type-token ratio. The verbal paradigm size is listed in line 7: English has with a size of 5 the smallest paradigms, and, correspondingly, the lowest type count. Turkish, which has the highest number of types, in contrast, has large paradigms (120). The last line serves as an indicator of syncretism: subtracting line 8 from line 7 results in the number of paradigm slots that have been merged as a language evolved to use identical forms for different inflectional categories.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Statistics", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Lines 3 and 4 show the number of lemmas in the lemma lists for all languages, as well as the number of lemmas which can be found in the corpus. For the majority of languages, 100 lemmas are provided, out of which 50 appear in the Bible. Exceptions are Maltese (20, 10), Persian (100, 22), Basque (20, 4), Kannada (20, 10), and Navajo (100, 9). These are due to limited UniMorph coverage.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Statistics", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "In line 5, we list the number of total inflections, counting each one in the case of identical forms, i.e., this corresponds to the number of lines in our gold inflection file. English, due to its small verbal paradigm size, has only 500 inflections in our data. Conversely, Finnish has with 14,100 the largest number of inflections. Line 6 describes how many of the forms from line 5 appear in the corpus. As before, all forms are counted, even if they are identical. For all languages, a large majority of forms cannot be found in the corpus. This makes the task of unsupervised morphological paradigm completion with our provided data a challenging one.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Statistics", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "In this section, we first review the baseline before describing the submitted systems. An additional overview of the submissions is shown in Table 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 141, |
|
"end": 148, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Systems", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We compared all submissions to the baseline system of Jin et al. (2020) , graphically summarized in Figure 2 . It is a pipeline system, which consists of 4 separate modules, which, in turn, can be grouped into two major components: retrieval and generation. The retrieval component discovers and returns inflected forms -and, less importantly, additional lemmas -from the provided Bible corpus. The generation component produces new inflected forms which cannot be found in the raw text.", |
|
"cite_spans": [ |
|
{ |
|
"start": 54, |
|
"end": 71, |
|
"text": "Jin et al. (2020)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 100, |
|
"end": 108, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Baseline", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The retrieval component performs three steps: First, it extracts the most common edit trees (Chrupa\u0142a, 2008) , i.e., it detects regularities with regards to word formation, based on the lemma list. If, for instance, both walk and listen are the lemmas provided and both walked and listened are encountered in the corpus, the system notes that appending -ed is a common transformation, which might correspond to an inflectional strategy.", |
|
"cite_spans": [ |
|
{ |
|
"start": 92, |
|
"end": 108, |
|
"text": "(Chrupa\u0142a, 2008)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline", |
|
"sec_num": "4.1" |
|
}, |
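As a rough sketch of this first step: the actual baseline extracts edit trees, which also cover prefixes and stem-internal changes; the simplified version below only counts suffix-replacement rules that map a provided lemma onto a word attested in the corpus, and all names are illustrative.

```python
from collections import Counter

def suffix_rule(lemma, form, max_stem_drop=3):
    """Return a (strip, add) suffix-replacement rule turning lemma into form,
    or None if the two words share no common prefix (or the change cuts too deep)."""
    i = 0
    while i < min(len(lemma), len(form)) and lemma[i] == form[i]:
        i += 1
    if i == 0 or len(lemma) - i > max_stem_drop:
        return None
    return (lemma[i:], form[i:])  # e.g., walk -> walked yields ("", "ed")

def common_rules(lemmas, corpus_types, min_count=2):
    """Count how often each candidate rule maps a provided lemma onto a corpus word."""
    counts = Counter()
    for lemma in lemmas:
        for word in corpus_types:
            rule = suffix_rule(lemma, word)
            if rule is not None and word != lemma:
                counts[rule] += 1
    return {rule: c for rule, c in counts.items() if c >= min_count}

# Toy example mirroring the text: walked and listened both occur in the corpus,
# so appending -ed is extracted as a frequent transformation.
print(common_rules(["walk", "listen"], {"walked", "listened", "and", "the"}))
# {('', 'ed'): 2}
```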
|
{ |
|
"text": "Second, it retrieves new lemmas, with the goal to gather additional evidence for our collected edit trees. If, for instance, it has already identified the suffix -ed as an inflectional marker, finding both pray and prayed in the Bible is an indication that pray might be a lemma. New lemmas can then, in turn, be used to detect new regularities, e.g., in the case that listen and listens as well as pray and prays are attested in the corpus, but walks is not. Due to their complementary nature, components one and two can, as a unit, be applied iteratively to bootstrap a larger list of lemmas and transformations. For the baseline, we apply each of them only once.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline", |
|
"sec_num": "4.1" |
|
}, |
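A correspondingly simplified sketch of this second step, under the same assumptions as above (suffix-replacement rules rather than edit trees, illustrative names): a corpus word is proposed as an additional lemma if applying an already-extracted rule to it yields another word attested in the corpus.

```python
def propose_lemmas(corpus_types, rules, known_lemmas):
    """Propose corpus words as new lemmas: keep a candidate if some known
    suffix-replacement rule maps it onto another attested corpus word."""
    candidates = set()
    for word in corpus_types:
        if word in known_lemmas:
            continue
        for strip, add in rules:
            if not word.endswith(strip):
                continue
            inflected = word[:len(word) - len(strip)] + add
            if inflected in corpus_types and inflected != word:
                candidates.add(word)
    return candidates

# Toy example mirroring the text: pray and prayed both occur, and the -ed rule
# is already known, so pray is proposed as a new lemma.
corpus = {"pray", "prayed", "and", "the"}
print(propose_lemmas(corpus, rules={("", "ed")}, known_lemmas={"walk", "listen"}))
# {'pray'}
```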
|
{ |
|
"text": "Finally, the baseline's retrieval component predicts the paradigm size by analyzing which edit trees might be representing the same inflection. For instance, the suffixes -d and -ed both represent the past tense in English. The output of the retrieval component is a list of inflected forms with their lemmas, annotated with a paradigm slot number.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The generation component receives this output and prepares the data to train an inflectional generator. First, identified inflections are divided into a training and development split, and missing paradigm slots are identified. The generator is trained on the discovered inflections, and new forms are predicted for each missing slot.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We used two morphological inflection systems for the two variants of our baseline: the non-neural baseline from Cotterell et al. (2017) and the model proposed by Makarov and Clematide (2018) . Both are highly suitable for the low-resource setting.", |
|
"cite_spans": [ |
|
{ |
|
"start": 112, |
|
"end": 135, |
|
"text": "Cotterell et al. (2017)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 162, |
|
"end": 190, |
|
"text": "Makarov and Clematide (2018)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We now describe the first category of shared task submissions: Retrieval+X. Systems in this category leverage the retrieval component of the baseline, while substituting the morphological inflection component with a custom inflection system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Submitted Systems: Retrieval+X", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The IMS-CUBoulder team relied on LSTM (Hochreiter and Schmidhuber, 1997) sequence-tosequence models for inflection. In IMS-CUB-1, the generation component is based on the architecture by Bahdanau et al. (2015) , but with fewer parameters, as suggested by Kann and Sch\u00fctze (2016) . This model -as well as all other inflection components used for systems in this category -receives the sequence of the lemma's characters and the paradigm slot number as input and produces a sequence of output characters.", |
|
"cite_spans": [ |
|
{ |
|
"start": 187, |
|
"end": 209, |
|
"text": "Bahdanau et al. (2015)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 255, |
|
"end": 278, |
|
"text": "Kann and Sch\u00fctze (2016)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Submitted Systems: Retrieval+X", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Their second system, IMS-CUB-2, uses an LSTM pointer-generator network (See et al., 2017) instead. This architecture has originally been proposed for low-resource morphological inflection by Sharma et al. (2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 71, |
|
"end": 89, |
|
"text": "(See et al., 2017)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 191, |
|
"end": 211, |
|
"text": "Sharma et al. (2018)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Submitted Systems: Retrieval+X", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The NYU-CUBoulder team also substituted the baseline's generation component. Their morphological inflection models are ensembles of dif-ferent combinations of transformer sequence-tosequence models (Vaswani et al., 2017) and pointergenerator transformers, a model they introduced for the task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 198, |
|
"end": 220, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF39" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Submitted Systems: Retrieval+X", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "NYU-CUB-1 is an ensemble of 6 pointergenerator transformers, while NYU-CUB-2 is an ensemble of 6 vanilla transformers. Their last system, NYU-CUB-3, is an ensemble of all 12 models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Submitted Systems: Retrieval+X", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The KU-CST team did not modify the baseline directly, but, nevertheless, was heavily inspired by it. Their system first employs a charactersegmentation algorithm to identify stem-suffix splits in both the provided lemma list and the corpus, thus identifying potential suffix-replacement rules. Next, k-means is used to cluster the extracted suffixes into allomorphic groups. These suffixes are then concatenated with the most frequent stems obtained from the lemma list, and scored by a language model, in order to arrive at plausible inflectional candidates. This approach is KU-CST-2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Submitted Systems: Segment+Conquer", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "However, KU-CST-2 often produces very small inflectional paradigms; unsurprisingly, given that the provided corpora are small as well, and, thus, any particular lemma is only inflected in limited ways -if at all. Therefore, KU-CST-1 expands the lemma list with a logistic-regression classifier that identifies novel verbs to be added.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Submitted Systems: Segment+Conquer", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "To encourage reproducibility, we first report the performance of all systems on the development languages in the upper part of Table 4 . Although participants were not evaluated on these languages, the results provide insight and enable future researchers to benchmark their progress, while maintaining the held-out status of the test languages.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 127, |
|
"end": 134, |
|
"text": "Table 4", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results on Development Languages", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We show the official test results in the lower part of Table 4 . Baseline-2 obtained the highest BMAcc on average, followed in order by Baseline-1, IMS-CUB-2, and NU-CUB-2. Overall, systems built on top of the baseline, i.e., systems from Re-trieval+X, performed better than systems from Seg-ment+Conquer: the best Segment+Conquer system only reached 4.66% BMAcc on average. This shows the effectiveness of the baseline. However, it also shows that we still have substantial room for improvement on unsupervised morphological paradigm completion. Looking at individual languages, Baseline-2 performed best for all languages except for EUS, where NYU-CUB-3 obtained the highest BMAcc, and BUL and KAN, where IMS-CUB-2 was best.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 55, |
|
"end": 62, |
|
"text": "Table 4", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Official Shared Task Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We further look separately at the results for lemmas which appear in the corpus and those that do not. While seeing a lemma in context might help some systems, we additionally assume that inflections of attested lemmas are also more likely to appear in the corpus. Thus, we expect the performance for seen lemmas to be higher on average.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis: Seen and Unseen Lemmas", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Examining the performance with respect to observed inflected forms might give cleaner results. However, we instead perform this analysis on a per-lemma basis, since the lemmas are part of a system's input, while the inflected forms are not. Table 5 shows the performance of all systems for seen and unseen lemmas. Surprisingly, both versions of the baseline show similar BMAcc for both settings with a maximum difference of 0.12% on average. However, the baseline is the only system that performs equally well for unseen lemmas; IMS-CUB-1 observes the largest difference, with an absolute drop of 7.85% BMAcc when generating the paradigms of unseen lemmas. Investigating the cause for IMS-CUB-1's low BMAcc, we manually inspected the English output files, and found that, for unseen lemmas, many generations are nonsensi-cal (e.g., demoates as an inflected form of demodulate). This does not happen in the case of seen lemmas. A similar effect has been found by Kann and Sch\u00fctze (2018) , who concluded that this might be caused by the LSTM sequence-to-sequence model not having seen similar character sequences during training. The fact that IMS-CUB-2, which uses another inflection model, performs better for unseen lemmas confirms this suspicion. Thus, additional training of the inflection component of IMS-CUB-1 on words from the corpus might improve generation. Conversely, the baseline -which benefits from inflection models specifically catered to low-resource settings -is better suited to inflecting unseen lemmas. Overall, we conclude that there is little evidence that the difficulty of the task increases for unseen lemmas. Rather, inflection systems need to compensate for the low contextual variety in their training data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 962, |
|
"end": 985, |
|
"text": "Kann and Sch\u00fctze (2018)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 241, |
|
"end": 248, |
|
"text": "Table 5", |
|
"ref_id": "TABREF11" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis: Seen and Unseen Lemmas", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "6 Where from and Where to?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis: Seen and Unseen Lemmas", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Prior to this shared task, most research on unsupervised systems for morphology was concerned with developing approaches to segment words into morphemes, i.e., their smallest meaning-bearing units (Goldsmith, 2001; Creutz, 2003; Creutz and Lagus, 2007; Snyder and Barzilay, 2008; Goldwater et al., 2009; Kurimo et al., 2010; Kudo and Richardson, 2018) . These methods were built around the observation that inflectional morphemes are very common across word types, and leveraged probabil- 927.00 (9) 1.14 (425) 8.75 (40) 27.40 (9) 27.30 (9) 27.50 (9) 27.60 (9) 27.40 (9) KAN 16.35 171) 15.61 (172) 6.61 (44) 1.69 (1) 13.99 (172) 16.49 (172) 14.63 (172) 14.68 (172) 14.63 (172 29) 4.43 (225) 16.37 (40) 20.40 (29) 21.14 (29) 21.17 (29) 21.09 (29) 21.14 (29TUR 14.68 104) 16.38 (104) 0.23(1772) 1.42 (502) 16.98 (104) 18.02 (104) 18.30 (104 ity estimates such as maximum likelihood (MLE) or maximum a posteriori (MAP) estimations to determine segmentation points, or minimum description length (MDL)-based approaches. However, they tended to make assumptions regarding how morphemes are combined, and worked best for purely concatenative morphology. Furthermore, these methods had no productive method of handling allomorphy-morphemic variance was simply treated as separate morphemes.", |
|
"cite_spans": [ |
|
{ |
|
"start": 197, |
|
"end": 214, |
|
"text": "(Goldsmith, 2001;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 215, |
|
"end": 228, |
|
"text": "Creutz, 2003;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 229, |
|
"end": 252, |
|
"text": "Creutz and Lagus, 2007;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 253, |
|
"end": 279, |
|
"text": "Snyder and Barzilay, 2008;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 280, |
|
"end": 303, |
|
"text": "Goldwater et al., 2009;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 304, |
|
"end": 324, |
|
"text": "Kurimo et al., 2010;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 325, |
|
"end": 351, |
|
"text": "Kudo and Richardson, 2018)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Work", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "The task of unsupervised morphological paradigm completion concerns more than just segmentation: besides capturing how morphology is reflected in the word form, it also requires correctly clustering transformations into paradigm slots and, finally, generation of unobserved forms.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Work", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "While Xu et al. (2018) did discover something similar to paradigms, those paradigms were a means to a segmentation end and the shape or size of the paradigms was not a subject of their research. Moon et al. (2009) similarly uses segmentation and clustering of affixes to group words into conflation sets, groups of morphologically related words, in an unsupervised way. Their work assumes prefixing and suffixing morphology. In a more task-driven line of research, Soricut and Och (2015) develop an approach to learn morphological transformation rules from observing how consis-tently word embeddings change between related word forms, with the goal of providing useful word embeddings for unseen words.", |
|
"cite_spans": [ |
|
{ |
|
"start": 6, |
|
"end": 22, |
|
"text": "Xu et al. (2018)", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 195, |
|
"end": 213, |
|
"text": "Moon et al. (2009)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 465, |
|
"end": 487, |
|
"text": "Soricut and Och (2015)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Work", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Our task further differs from traditional paradigm completion (e.g., Dreyer and Eisner, 2011; Ahlberg et al., 2015) in that no seed paradigms are observed. Thus, no information is being provided regarding the paradigm size, inflectional features, or relationships between lemmas and inflected forms. Other recent work (Nicolai and Yarowsky, 2019; learned fine-grained morphosyntactic tools from the Bible, though they leveraged supervision projected from higher-resource languages (Yarowsky et al., 2001; T\u00e4ckstr\u00f6m et al., 2013) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 69, |
|
"end": 93, |
|
"text": "Dreyer and Eisner, 2011;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 94, |
|
"end": 115, |
|
"text": "Ahlberg et al., 2015)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 318, |
|
"end": 346, |
|
"text": "(Nicolai and Yarowsky, 2019;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 481, |
|
"end": 504, |
|
"text": "(Yarowsky et al., 2001;", |
|
"ref_id": "BIBREF41" |
|
}, |
|
{ |
|
"start": 505, |
|
"end": 528, |
|
"text": "T\u00e4ckstr\u00f6m et al., 2013)", |
|
"ref_id": "BIBREF38" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Work", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Past shared tasks. This task extends a tradition of SIGMORPHON shared tasks concentrating on inflectional morphology.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Work", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "The first such task (Cotterell et al., 2016) encouraged participants to create inflectional tools in a typologically diverse group of 10 languages. The task was fully-supervised, requiring systems to learn inflectional morphology from a large annotated database. This task is similar to human learners needing to generate inflections of previously unencountered word forms, after having studied thousands of other types.", |
|
"cite_spans": [ |
|
{ |
|
"start": 20, |
|
"end": 44, |
|
"text": "(Cotterell et al., 2016)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Work", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "The second task (Cotterell et al., 2017) extended the first task from 10 to 52 languages and started to encourage the development of tools for the lowresource setting. While the first shared task approximated an adult learner with experience with thousands of word forms, low-resource inflection was closer to the language learner that has only studied a small number of inflections-however, it was closer to L2 learning than L1, as it still required training sets with lemma-inflection-slot triplets. The 2017 edition of the shared task also introduced a paradigm-completion subtask: participants were given partially observed paradigms and asked to generate missing forms, based on complete paradigms observed during training. This could be described as the supervised version of our unsupervised task, and notably did not require participants to identify inflected forms from raw text-a crucial step in L1 learning.", |
|
"cite_spans": [ |
|
{ |
|
"start": 16, |
|
"end": 40, |
|
"text": "(Cotterell et al., 2017)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Work", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "The third year of the shared task (Cotterell et al., 2018 ) saw a further extension to more than 100 languages and another step away from supervised learning, in the form of a contextual prediction task. This task stripped away inflectional annotations, requiring participants to generate an inflection solely utilizing a provided lemma and sentential cues. This task further imitated language learners, but extended beyond morphological learning to morphosyntactic incorporation. Furthermore, removing the requirement of an inflectional feature vector more closely approximated the generation step in our task. However, it was still supervised in that participants were provided with lemma-inflection pairs in context during training. We, in contrast, made no assumption of the existence of such pairs. Finally, the fourth iteration of the task (Mc-Carthy et al., 2019) again concentrated on lesssupervised inflection. Cross-lingual training allowed low-resource inflectors to leverage information from high-resource languages, while a contextual analysis task flipped the previous year's contextual task on its head-tagging a sentence with inflectional information. This process is very similar to the retrieval portion of our task. We extended this effort to not only identify the paradigm slot of particular word, but to combine learned information from each class to extend and complete existing paradigms. Furthermore, we lifted the requirement of named inflectional features, more closely approximating the problem as approached by L1 language learners.", |
|
"cite_spans": [ |
|
{ |
|
"start": 34, |
|
"end": 57, |
|
"text": "(Cotterell et al., 2018", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 846, |
|
"end": 870, |
|
"text": "(Mc-Carthy et al., 2019)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Work", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Future editions of the shared task could extend this year's Task 2 to a larger variety of languages or parts of speech. Another possible direction is to focus on derivational morphology instead of or in addition to inflectional morphology. We are also considering merging Task 2 with the traditional morphological inflection task: participants could then choose to work on the overall task or on either of the retrieval or generation subproblem.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Future Shared Tasks", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Finally, we are looking into extending the shared task to use speech data as input. This is closer to how L1 learners acquire morphological knowledge, and, while this could make the task harder in some aspects, it could make it easier in others.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Future Shared Tasks", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "We presented the findings of the SIGMORPHON 2020 shared task on unsupervised morphological paradigm completion (SIGMORPHON 2020 Task 2), in which participants were asked to generate paradigms without explicit supervision. Surprisingly, no team was able to outperform the provided baseline, a pipeline system, on average over all test languages. Even though 2 submitted systems were better on 3 individual languages, this highlights that the task is still an open challenge for the NLP community. We argue that it is an important one: systems obtaining high performance will be able to aid the development of human language technologies for low-resource languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "All teams that participated in the shared task devised modular approaches. Thus, it will be easy to include improved components in the future as, for instance, systems for morphological inflection improve. We released all data, the baseline, the evaluation script, and the system outputs in the official repository, 3 in the hope that this shared task will lay the foundation for future research on unsupervised morphological paradigm completion.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "In this report, we use the words baseline and baselines interchangeably.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This edition of the shared task was only concerned with verbs, though we are considering extending the task to other parts of speech in the future.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/sigmorphon/2020/tree/ master/task2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "First and foremost, we would like to thank all of our shared task participants. We further thank the passionate morphologists who joined for lunch in Florence's mercato centrale on the last day of ACL 2019 to plan the 2020 shared task, as well as the SIGMORPHON Exec, who made this shared task possible.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "KU-CST at the SIGMORPHON 2020 task 2 on unsupervised morphological paradigm completion", |
|
"authors": [ |
|
{ |
|
"first": "Manex", |
|
"middle": [], |
|
"last": "Agirrezabal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00fcrgen", |
|
"middle": [], |
|
"last": "Wedekind", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 17th Workshop on Computational Research in Phonetics, Phonology, and Morphology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Manex Agirrezabal and J\u00fcrgen Wedekind. 2020. KU- CST at the SIGMORPHON 2020 task 2 on unsuper- vised morphological paradigm completion. In Pro- ceedings of the 17th Workshop on Computational Research in Phonetics, Phonology, and Morphology. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Paradigm classification in supervised learning of morphology", |
|
"authors": [ |
|
{ |
|
"first": "Malin", |
|
"middle": [], |
|
"last": "Ahlberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Markus", |
|
"middle": [], |
|
"last": "Forsberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mans", |
|
"middle": [], |
|
"last": "Hulden", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1024--1029", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/N15-1107" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Malin Ahlberg, Markus Forsberg, and Mans Hulden. 2015. Paradigm classification in supervised learn- ing of morphology. In Proceedings of the 2015 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 1024-1029, Denver, Col- orado. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Neural machine translation by jointly learning to align and translate", |
|
"authors": [ |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "3rd International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "The child's learning of english morphology", |
|
"authors": [ |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "Berko", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1958, |
|
"venue": "Word", |
|
"volume": "14", |
|
"issue": "2-3", |
|
"pages": "150--177", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jean Berko. 1958. The child's learning of english mor- phology. Word, 14(2-3):150-177.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Overview of the DARPA LORELEI program. Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Caitlin", |
|
"middle": [], |
|
"last": "Christianson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Duncan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Boyan", |
|
"middle": [], |
|
"last": "Onyshkevych", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "32", |
|
"issue": "", |
|
"pages": "3--9", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/s10590-017-9212-4" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Caitlin Christianson, Jason Duncan, and Boyan Onyshkevych. 2018. Overview of the DARPA LORELEI program. Machine Translation, 32(1):3- 9.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Towards a machinelearning architecture for lexical functional grammar parsing", |
|
"authors": [ |
|
{ |
|
"first": "Grzegorz", |
|
"middle": [], |
|
"last": "Chrupa\u0142a", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Grzegorz Chrupa\u0142a. 2008. Towards a machine- learning architecture for lexical functional grammar parsing. Ph.D. thesis, Dublin City University.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "The CoNLL-SIGMORPHON 2018 shared task: Universal morphological reinflection", |
|
"authors": [ |
|
{ |
|
"first": "Katharina", |
|
"middle": [], |
|
"last": "Mccarthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Kann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Garrett", |
|
"middle": [], |
|
"last": "Mielke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miikka", |
|
"middle": [], |
|
"last": "Nicolai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Silfverberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the CoNLL-SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--27", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/K18-3001" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "McCarthy, Katharina Kann, Sebastian Mielke, Gar- rett Nicolai, Miikka Silfverberg, David Yarowsky, Jason Eisner, and Mans Hulden. 2018. The CoNLL- SIGMORPHON 2018 shared task: Universal mor- phological reinflection. In Proceedings of the CoNLL-SIGMORPHON 2018 Shared Task: Univer- sal Morphological Reinflection, pages 1-27, Brus- sels. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "CoNLL-SIGMORPHON 2017 shared task: Universal morphological reinflection in 52 languages", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christo", |
|
"middle": [], |
|
"last": "Kirov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Sylak-Glassman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G\u00e9raldine", |
|
"middle": [], |
|
"last": "Walther", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ekaterina", |
|
"middle": [], |
|
"last": "Vylomova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manaal", |
|
"middle": [], |
|
"last": "Faruqui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sandra", |
|
"middle": [], |
|
"last": "K\u00fcbler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the CoNLL SIGMORPHON 2017", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/K17-2001" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan Cotterell, Christo Kirov, John Sylak-Glassman, G\u00e9raldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sandra K\u00fcbler, David Yarowsky, Jason Eisner, and Mans Hulden. 2017. CoNLL-SIGMORPHON 2017 shared task: Univer- sal morphological reinflection in 52 languages. In Proceedings of the CoNLL SIGMORPHON 2017", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Shared Task: Universal Morphological Reinflection, pages 1-30, Vancouver. Association for Computational Linguistics", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shared Task: Universal Morphological Reinflection, pages 1-30, Vancouver. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "The SIGMORPHON 2016 shared Task-Morphological reinflection", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christo", |
|
"middle": [], |
|
"last": "Kirov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Sylak-Glassman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "10--22", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W16-2002" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016. The SIGMORPHON 2016 shared Task- Morphological reinflection. In Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphol- ogy, pages 10-22, Berlin, Germany. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Unsupervised segmentation of words using prior distributions of morph length and frequency", |
|
"authors": [ |
|
{ |
|
"first": "Mathias", |
|
"middle": [], |
|
"last": "Creutz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "280--287", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/1075096.1075132" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mathias Creutz. 2003. Unsupervised segmentation of words using prior distributions of morph length and frequency. In Proceedings of the 41st Annual Meet- ing of the Association for Computational Linguis- tics, pages 280-287, Sapporo, Japan. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Unsupervised models for morpheme segmentation and morphology learning", |
|
"authors": [ |
|
{ |
|
"first": "Mathias", |
|
"middle": [], |
|
"last": "Creutz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Krista", |
|
"middle": [], |
|
"last": "Lagus", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "ACM Trans. Speech Lang. Process", |
|
"volume": "4", |
|
"issue": "1", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/1187415.1187418" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mathias Creutz and Krista Lagus. 2007. Unsupervised models for morpheme segmentation and morphol- ogy learning. ACM Trans. Speech Lang. Process., 4(1).", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Discovering morphological paradigms from plain text using a Dirichlet process mixture model", |
|
"authors": [ |
|
{ |
|
"first": "Markus", |
|
"middle": [], |
|
"last": "Dreyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "616--627", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Markus Dreyer and Jason Eisner. 2011. Discovering morphological paradigms from plain text using a Dirichlet process mixture model. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 616-627, Edin- burgh, Scotland, UK. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Unsupervised learning of the morphology of a natural language", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Goldsmith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Computational Linguistics", |
|
"volume": "27", |
|
"issue": "2", |
|
"pages": "153--198", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/089120101750300490" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Goldsmith. 2001. Unsupervised learning of the morphology of a natural language. Computational Linguistics, 27(2):153-198.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "A Bayesian framework for word segmentation: Exploring the effects of context. Cognition", |
|
"authors": [ |
|
{ |
|
"first": "Sharon", |
|
"middle": [], |
|
"last": "Goldwater", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Griffiths", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "112", |
|
"issue": "", |
|
"pages": "21--54", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1016/j.cognition.2009.03.008" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sharon Goldwater, Thomas L. Griffiths, and Mark Johnson. 2009. A Bayesian framework for word seg- mentation: Exploring the effects of context. Cogni- tion, 112(1):21 -54.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Long short-term memory", |
|
"authors": [ |
|
{ |
|
"first": "Sepp", |
|
"middle": [], |
|
"last": "Hochreiter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00fcrgen", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Neural Computation", |
|
"volume": "9", |
|
"issue": "8", |
|
"pages": "1735--1780", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/neco.1997.9.8.1735" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Unsupervised morphological paradigm completion", |
|
"authors": [ |
|
{ |
|
"first": "Huiming", |
|
"middle": [], |
|
"last": "Jin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liwei", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yihui", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chen", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arya", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Mccarthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katharina", |
|
"middle": [], |
|
"last": "Kann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Huiming Jin, Liwei Cai, Yihui Peng, Chen Xia, Arya D. McCarthy, and Katharina Kann. 2020. Unsuper- vised morphological paradigm completion. In Pro- ceedings of the 58th Annual Meeting of the Associa- tion for Computational Linguistics. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Singlemodel encoder-decoder with explicit morphological representation for reinflection", |
|
"authors": [ |
|
{ |
|
"first": "Katharina", |
|
"middle": [], |
|
"last": "Kann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hinrich", |
|
"middle": [], |
|
"last": "Sch\u00fctze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "555--560", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P16-2090" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Katharina Kann and Hinrich Sch\u00fctze. 2016. Single- model encoder-decoder with explicit morphological representation for reinflection. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Papers), pages 555-560, Berlin, Germany. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Neural transductive learning and beyond: Morphological generation in the minimal-resource setting", |
|
"authors": [ |
|
{ |
|
"first": "Katharina", |
|
"middle": [], |
|
"last": "Kann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hinrich", |
|
"middle": [], |
|
"last": "Sch\u00fctze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3254--3264", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1363" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Katharina Kann and Hinrich Sch\u00fctze. 2018. Neural transductive learning and beyond: Morphological generation in the minimal-resource setting. In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, pages 3254- 3264, Brussels, Belgium. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "An algorithm to solve the m \u00d7 n assignment problem in expected time O(mn log n)", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Karp", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1980, |
|
"venue": "Networks", |
|
"volume": "10", |
|
"issue": "2", |
|
"pages": "143--152", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1002/net.3230100205" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard M. Karp. 1980. An algorithm to solve the m \u00d7 n assignment problem in expected time O(mn log n). Networks, 10(2):143-152.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "UniMorph 2.0: Universal morphology", |
|
"authors": [ |
|
{ |
|
"first": "Christo", |
|
"middle": [], |
|
"last": "Kirov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Sylak-Glassman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G\u00e9raldine", |
|
"middle": [], |
|
"last": "Walther", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ekaterina", |
|
"middle": [], |
|
"last": "Vylomova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manaal", |
|
"middle": [], |
|
"last": "Faruqui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Mielke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arya", |
|
"middle": [], |
|
"last": "Mc-Carthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sandra", |
|
"middle": [], |
|
"last": "K\u00fcbler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christo Kirov, Ryan Cotterell, John Sylak-Glassman, G\u00e9raldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sebastian Mielke, Arya Mc- Carthy, Sandra K\u00fcbler, David Yarowsky, Jason Eis- ner, and Mans Hulden. 2018. UniMorph 2.0: Uni- versal morphology. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018), Miyazaki, Japan. Eu- ropean Languages Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", |
|
"authors": [ |
|
{ |
|
"first": "Taku", |
|
"middle": [], |
|
"last": "Kudo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Richardson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "66--71", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-2012" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Morpho challenge 2005-2010: Evaluations and results", |
|
"authors": [ |
|
{ |
|
"first": "Mikko", |
|
"middle": [], |
|
"last": "Kurimo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sami", |
|
"middle": [], |
|
"last": "Virpioja", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ville", |
|
"middle": [], |
|
"last": "Turunen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Krista", |
|
"middle": [], |
|
"last": "Lagus", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 11th Meeting of the ACL Special Interest Group on Computational Morphology and Phonology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "87--95", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikko Kurimo, Sami Virpioja, Ville Turunen, and Krista Lagus. 2010. Morpho challenge 2005-2010: Evaluations and results. In Proceedings of the 11th Meeting of the ACL Special Interest Group on Computational Morphology and Phonology, pages 87-95, Uppsala, Sweden. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "The IMS-CUBoulder system for the SIGMORPHON 2020 shared task on unsupervised morphological paradigm completion", |
|
"authors": [ |
|
{ |
|
"first": "Manuel", |
|
"middle": [], |
|
"last": "Mager", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katharina", |
|
"middle": [], |
|
"last": "Kann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 17th Workshop on Computational Research in Phonetics, Phonology, and Morphology. Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Manuel Mager and Katharina Kann. 2020. The IMS-CUBoulder system for the SIGMORPHON 2020 shared task on unsupervised morphological paradigm completion. In Proceedings of the 17th Workshop on Computational Research in Phonetics, Phonology, and Morphology. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Imitation learning for neural morphological string transduction", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Makarov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simon", |
|
"middle": [], |
|
"last": "Clematide", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2877--2882", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1314" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter Makarov and Simon Clematide. 2018. Imita- tion learning for neural morphological string trans- duction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2877-2882, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "The SIGMORPHON 2019 shared task: Morphological analysis in context and cross-lingual transfer for inflection", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Arya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ekaterina", |
|
"middle": [], |
|
"last": "Mccarthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shijie", |
|
"middle": [], |
|
"last": "Vylomova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chaitanya", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lawrence", |
|
"middle": [], |
|
"last": "Malaviya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Garrett", |
|
"middle": [], |
|
"last": "Wolf-Sonkin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christo", |
|
"middle": [], |
|
"last": "Nicolai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miikka", |
|
"middle": [], |
|
"last": "Kirov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Silfverberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Mielke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Heinz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mans", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "229--244", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-4226" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Arya D. McCarthy, Ekaterina Vylomova, Shijie Wu, Chaitanya Malaviya, Lawrence Wolf-Sonkin, Gar- rett Nicolai, Christo Kirov, Miikka Silfverberg, Se- bastian J. Mielke, Jeffrey Heinz, Ryan Cotterell, and Mans Hulden. 2019. The SIGMORPHON 2019 shared task: Morphological analysis in context and cross-lingual transfer for inflection. In Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 229- 244, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "The Johns Hopkins University Bible Corpus: 1600+ tongues for typological exploration", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Arya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rachel", |
|
"middle": [], |
|
"last": "Mccarthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dylan", |
|
"middle": [], |
|
"last": "Wicks", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aaron", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Winston", |
|
"middle": [], |
|
"last": "Mueller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oliver", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Garrett", |
|
"middle": [], |
|
"last": "Adams", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Nicolai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Post", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the Twelfth International Conference on Language Resources and Evaluation (LREC 2020). European Language Resources Association (ELRA)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Arya D. McCarthy, Rachel Wicks, Dylan Lewis, Aaron Mueller, Winston Wu, Oliver Adams, Garrett Nico- lai, Matt Post, and David Yarowsky. 2020. The Johns Hopkins University Bible Corpus: 1600+ tongues for typological exploration. In Proceed- ings of the Twelfth International Conference on Lan- guage Resources and Evaluation (LREC 2020). Eu- ropean Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Unsupervised morphological segmentation and clustering with document boundaries", |
|
"authors": [ |
|
{ |
|
"first": "Taesun", |
|
"middle": [], |
|
"last": "Moon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katrin", |
|
"middle": [], |
|
"last": "Erk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Baldridge", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "668--677", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Taesun Moon, Katrin Erk, and Jason Baldridge. 2009. Unsupervised morphological segmentation and clus- tering with document boundaries. In Proceedings of the 2009 Conference on Empirical Methods in Nat- ural Language Processing, pages 668-677, Singa- pore. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Fine-grained morphosyntactic analysis and generation tools for more than one thousand languages", |
|
"authors": [ |
|
{ |
|
"first": "Garrett", |
|
"middle": [], |
|
"last": "Nicolai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dylan", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arya", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Mccarthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aaron", |
|
"middle": [], |
|
"last": "Mueller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Winston", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3963--3972", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Garrett Nicolai, Dylan Lewis, Arya D. McCarthy, Aaron Mueller, Winston Wu, and David Yarowsky. 2020. Fine-grained morphosyntactic analysis and generation tools for more than one thousand lan- guages. In Proceedings of The 12th Language Re- sources and Evaluation Conference, pages 3963- 3972, Marseille, France. European Language Re- sources Association.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Learning morphosyntactic analyzers from the Bible via iterative annotation projection across 26 languages", |
|
"authors": [ |
|
{ |
|
"first": "Garrett", |
|
"middle": [], |
|
"last": "Nicolai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1765--1774", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1172" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Garrett Nicolai and David Yarowsky. 2019. Learning morphosyntactic analyzers from the Bible via itera- tive annotation projection across 26 languages. In Proceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 1765- 1774, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Get to the point: Summarization with pointergenerator networks", |
|
"authors": [ |
|
{ |
|
"first": "Abigail", |
|
"middle": [], |
|
"last": "See", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Peter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1073--1083", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P17-1099" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer- generator networks. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073- 1083, Vancouver, Canada. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "IIT(BHU)-IIITH at CoNLL-SIGMORPHON 2018 shared task on universal morphological reinflection", |
|
"authors": [ |
|
{ |
|
"first": "Abhishek", |
|
"middle": [], |
|
"last": "Sharma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ganesh", |
|
"middle": [], |
|
"last": "Katrapati", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dipti Misra", |
|
"middle": [], |
|
"last": "Sharma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the CoNLL-SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "105--111", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/K18-3013" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abhishek Sharma, Ganesh Katrapati, and Dipti Misra Sharma. 2018. IIT(BHU)-IIITH at CoNLL- SIGMORPHON 2018 shared task on universal mor- phological reinflection. In Proceedings of the CoNLL-SIGMORPHON 2018 Shared Task: Uni- versal Morphological Reinflection, pages 105-111, Brussels. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "The NYU-CUBoulder systems for SIGMORPHON 2020 Task 0 and Task 2", |
|
"authors": [ |
|
{ |
|
"first": "Assaf", |
|
"middle": [], |
|
"last": "Singer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katharina", |
|
"middle": [], |
|
"last": "Kann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 17th Workshop on Computational Research in Phonetics, Phonology, and Morphology. Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Assaf Singer and Katharina Kann. 2020. The NYU- CUBoulder systems for SIGMORPHON 2020 Task 0 and Task 2. In Proceedings of the 17th Workshop on Computational Research in Phonetics, Phonol- ogy, and Morphology. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Unsupervised multilingual learning for morphological segmentation", |
|
"authors": [ |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Snyder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Regina", |
|
"middle": [], |
|
"last": "Barzilay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of ACL-08: HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "737--745", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Benjamin Snyder and Regina Barzilay. 2008. Unsuper- vised multilingual learning for morphological seg- mentation. In Proceedings of ACL-08: HLT, pages 737-745, Columbus, Ohio. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Unsupervised morphology induction using word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Radu", |
|
"middle": [], |
|
"last": "Soricut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Franz", |
|
"middle": [], |
|
"last": "Och", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1627--1637", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/N15-1186" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Radu Soricut and Franz Och. 2015. Unsupervised mor- phology induction using word embeddings. In Pro- ceedings of the 2015 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1627-1637, Denver, Colorado. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Remote elicitation of inflectional paradigms to seed morphological analysis in lowresource languages", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Sylak-Glassman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christo", |
|
"middle": [], |
|
"last": "Kirov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3116--3120", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Sylak-Glassman, Christo Kirov, and David Yarowsky. 2016. Remote elicitation of inflectional paradigms to seed morphological analysis in low- resource languages. In Proceedings of the Tenth In- ternational Conference on Language Resources and Evaluation (LREC 2016), pages 3116-3120, Por- toro\u017e, Slovenia. European Language Resources As- sociation (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "A language-independent feature schema for inflectional morphology", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Sylak-Glassman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christo", |
|
"middle": [], |
|
"last": "Kirov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roger", |
|
"middle": [], |
|
"last": "Que", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "674--680", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/P15-2111" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Sylak-Glassman, Christo Kirov, David Yarowsky, and Roger Que. 2015. A language-independent fea- ture schema for inflectional morphology. In Pro- ceedings of the 53rd Annual Meeting of the Associ- ation for Computational Linguistics and the 7th In- ternational Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 674- 680, Beijing, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Token and type constraints for cross-lingual part-of-speech tagging", |
|
"authors": [ |
|
{ |
|
"first": "Oscar", |
|
"middle": [], |
|
"last": "T\u00e4ckstr\u00f6m", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dipanjan", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Slav", |
|
"middle": [], |
|
"last": "Petrov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Mc-Donald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1--12", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00205" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Oscar T\u00e4ckstr\u00f6m, Dipanjan Das, Slav Petrov, Ryan Mc- Donald, and Joakim Nivre. 2013. Token and type constraints for cross-lingual part-of-speech tagging. Transactions of the Association for Computational Linguistics, 1:1-12.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5998--6008", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 Decem- ber 2017, Long Beach, CA, USA, pages 5998-6008.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Unsupervised morphology learning with statistical paradigms", |
|
"authors": [ |
|
{ |
|
"first": "Hongzhi", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mitchell", |
|
"middle": [], |
|
"last": "Marcus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Charles", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lyle", |
|
"middle": [], |
|
"last": "Ungar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 27th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "44--54", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hongzhi Xu, Mitchell Marcus, Charles Yang, and Lyle Ungar. 2018. Unsupervised morphology learning with statistical paradigms. In Proceedings of the 27th International Conference on Computational Linguistics, pages 44-54, Santa Fe, New Mexico, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Inducing multilingual text analysis tools via robust projection across aligned corpora", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Grace", |
|
"middle": [], |
|
"last": "Ngai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Wicentowski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the First International Conference on Human Language Technology Research", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Yarowsky, Grace Ngai, and Richard Wicen- towski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proceedings of the First International Conference on Human Language Technology Research.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"text": "Our baseline system: the retrieval component bootstraps lemma-form-slot triplets, which are then used by the generation component to generate unobserved inflections in the paradigm of each input lemma.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td/><td>EUS</td><td>BUL</td><td>ENG</td><td>FIN</td><td>DEU</td><td>KAN</td><td>NAV</td><td>SPA</td><td>TUR</td></tr><tr><td>1 # Tokens in corpus</td><td colspan=\"9\">195459 801657 236465 685699 826119 193213 104631 251581 616418</td></tr><tr><td>2 # Types in corpus</td><td>18367</td><td>37048</td><td>7144</td><td>54635</td><td>22584</td><td>28561</td><td>18799</td><td>9755</td><td>59458</td></tr><tr><td>3 # Lemmas</td><td>20</td><td>100</td><td>100</td><td>100</td><td>100</td><td>20</td><td>100</td><td>100</td><td>100</td></tr><tr><td>4 # Lemmas in corpus</td><td>4</td><td>50</td><td>50</td><td>50</td><td>50</td><td>10</td><td>9</td><td>50</td><td>50</td></tr><tr><td>5 # Inflections</td><td>10446</td><td>5600</td><td>500</td><td>14100</td><td>2900</td><td>2612</td><td>3000</td><td>7000</td><td>12000</td></tr><tr><td>6 # Inflections in corpus</td><td>97</td><td>915</td><td>127</td><td>497</td><td>631</td><td>1040</td><td>54</td><td>630</td><td>986</td></tr><tr><td>7 Paradigm size</td><td>1659</td><td>56</td><td>5</td><td>141</td><td>29</td><td>85</td><td>30</td><td>70</td><td>120</td></tr><tr><td>8 Paradigm size (merged)</td><td>1658</td><td>54</td><td>5</td><td>141</td><td>20</td><td>59</td><td>30</td><td>70</td><td>120</td></tr></table>", |
|
"type_str": "table", |
|
"text": "Dataset statistics: development languages. # Inflections=number of inflected forms in the gold file, token-based; # Inflections in corpus=number of inflections from the gold file which can be found in the corpus, token-based; Paradigm size=number of different morphological feature vectors in the dataset for the language; Paradigm size (merged)=paradigm size, but counting slots with all forms being identical only once." |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "" |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td>Train,</td><td>lemma 1</td></tr><tr><td>dev,</td><td>guess 1 guess 2</td></tr><tr><td>test</td><td>guess 3 guess 4</td></tr><tr><td/><td>guess 5 guess 6</td></tr></table>", |
|
"type_str": "table", |
|
"text": "All submitted systems by institution, together with a reference to their description paper. The rank is relative to all other submitted systems and does not take the baselines into account." |
|
}, |
|
"TABREF6": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td/><td/><td>Baseline</td><td/><td>KU-CST</td><td/><td>IMS-CUB</td><td/><td>NYU-CUB</td></tr><tr><td/><td>1</td><td>2</td><td>1</td><td>2</td><td>1</td><td>2</td><td>1</td><td>2</td><td>3</td></tr><tr><td colspan=\"2\">avg. 28.39</td><td>29.74</td><td>0.83</td><td>8.93</td><td>26.37</td><td>28.05</td><td>27.41</td><td>27.64</td><td>27.71</td></tr><tr><td colspan=\"10\">EUS 0.06 (30) 0.06 (27) 0.02 (30) 0.01 (2) 0.04 (30) 0.06 (30) 0.05 (30) 0.05 (30) 0.07 (30)</td></tr><tr><td colspan=\"10\">BUL 28.30 (35) 31.69 (34) 2.99 (138) 4.15 (13) 27.22 (35) 32.11 (35) 27.69 (35) 28.94 (35) 27.89 (35)</td></tr><tr><td colspan=\"10\">ENG 65.60 (4) 66.20 (4) 3.53 (51) 17.29 (7) 47.80 (4) 61.00 (4) 50.20 (4) 52.80 (4) 51.20 (4)</td></tr><tr><td>FIN</td><td colspan=\"9\">5.33 (21) 5.50 (21) 0.39(1169) 2.08 (108) 4.90 (21) 5.38 (21) 5.36 (21) 5.47 (21) 5.35 (21)</td></tr><tr><td colspan=\"10\">DEU 28.35 (9) 29.00 (9) 0.70 (425) 4.98 (40) 24.60 (9) 28.35 (9) 27.30 (9) 27.35 (9) 27.35 (9)</td></tr><tr><td colspan=\"10\">KAN 15.49 (172) 15.12 (172) 4.27 (44) 1.69 (1) 10.50 (172) 15.65 (172) 11.10 (172) 11.16 (172) 11.10 (172)</td></tr><tr><td colspan=\"10\">NAV 3.23 (3) 3.27 (3) 0.13 (38) 0.20 (2) 0.33 (3) 1.17 (3) 0.40 (3) 0.43 (3) 0.43 (3)</td></tr><tr><td colspan=\"10\">SPA 22.96 (29) 23.67 (29) 3.52 (225) 10.84 (40) 19.50 (29) 22.34 (29) 20.39 (29) 20.56 (29) 20.30 (29)</td></tr><tr><td colspan=\"10\">TUR 14.21 (104) 15.53 (104) 0.11(1772) 0.71 (502) 13.54 (104) 14.73 (104) 14.88 (104) 15.39 (104) 15.13 (104)</td></tr><tr><td colspan=\"2\">avg. 20.39</td><td>21.12</td><td>1.74</td><td>4.66</td><td>16.49</td><td>20.09</td><td>17.49</td><td>18.02</td><td>17.65</td></tr></table>", |
|
"type_str": "table", |
|
"text": "MLT 9.12 (17) 20.00 (17) 0.22 (254) 1.30 (2) 14.41(17)17.35 (17) 15.29 (17) 15.59 (17) 15.88 (17) FAS 6.67 (31) 6.54 (31) 1.55 (11) 0.74 (2) 2.52 (31) 2.70 (31) 2.76 (31) 2.73 (31) 2.74 (31) POR 40.39 (34) 39.56 (34) 1.09(1104) 12.75 (70) 38.69 (34) 39.17 (34) 39.93 (34) 39.95 (34) 40.07 (34) RUS 40.68 (19) 41.68 (19) 0.35 (387) 7.06 (10) 38.63 (19) 41.11 (19) 39.26 (19) 40.00 (19) 39.74 (19) SWE 45.07 (15) 40.93 (15) 0.93 (588) 22.82 (17) 37.60 (15) 39.93 (15) 39.80 (15) 39.93 (15) 40.13 (15)" |
|
}, |
|
"TABREF7": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "BMAcc in percentages and the number of predicted paradigm slots after merging for all submitted systems and the baselines on all development (top) and test languages (bottom). Best scores are in bold." |
|
}, |
|
"TABREF11": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "BMAcc in percentages and the number of predicted paradigm slots after merging for all submitted systems and the baselines on all test languages; listed separately for lemmas which appear in the corpus (top) and lemmas which do not (bottom). Best scores are in bold." |
|
} |
|
} |
|
} |
|
} |