|
{ |
|
"paper_id": "K17-2003", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:07:42.875898Z" |
|
}, |
|
"title": "The LMU System for the CoNLL-SIGMORPHON 2017 Shared Task on Universal Morphological Reinflection", |
|
"authors": [ |
|
{ |
|
"first": "Katharina", |
|
"middle": [], |
|
"last": "Kann", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "CIS LMU Munich", |
|
"location": { |
|
"country": "Germany" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Hinrich", |
|
"middle": [], |
|
"last": "Sch\u00fctze", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "CIS LMU Munich", |
|
"location": { |
|
"country": "Germany" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We present the LMU system for the CoNLL-SIGMORPHON 2017 shared task on universal morphological reinflection, which consists of several subtasks, all concerned with producing an inflected form of a paradigm in different settings. Our solution is based on a neural sequenceto-sequence model, extended by preprocessing and data augmentation methods. Additionally, we develop a new algorithm for selecting the most suitable source form in the case of multi-source input, outperforming the baseline by 5.7% on average over all languages and settings. Finally, we propose a fine-tuning approach for the multi-source setting, and combine this with the source form detection, increasing accuracy by a further 4.6% on average.", |
|
"pdf_parse": { |
|
"paper_id": "K17-2003", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We present the LMU system for the CoNLL-SIGMORPHON 2017 shared task on universal morphological reinflection, which consists of several subtasks, all concerned with producing an inflected form of a paradigm in different settings. Our solution is based on a neural sequenceto-sequence model, extended by preprocessing and data augmentation methods. Additionally, we develop a new algorithm for selecting the most suitable source form in the case of multi-source input, outperforming the baseline by 5.7% on average over all languages and settings. Finally, we propose a fine-tuning approach for the multi-source setting, and combine this with the source form detection, increasing accuracy by a further 4.6% on average.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Many of the world's languages have a rich morphology, i.e., make use of surface variations of lemmata in order to express certain properties, like the tense or mood of a verb. This makes a variety of natural language processing tasks more challenging, as it increases the number of words in a language drastically; a problem morphological analysis and generation help to mitigate. However, a big issue when developing methods for morphological processing is that for many morphologically rich languages, there are only few or no relevant training data available, making it impossible to train state-of-the-art machine learning models (e.g., (Faruqui et al., 2016; Kann and Sch\u00fctze, 2016b; Aharoni et al., 2016; Zhou and Neubig, 2017) ). This is the motivation for the CoNLL-SIGMORPHON-2017 shared task on uni-versal morphological reinflection (Cotterell et al., 2017a) , which animates the development of systems for as many as 52 different languages in 6 different low-resource settings for morphological reinflection: to generate an inflected form, given a target morphological tag and either the lemma (task 1) or a partial paradigm (task 2). An example is (use, V;3;SG;PRS) \u2192 uses In this paper, we describe the LMU system for the shared task. Since it depends on the language and the amount of resources available for training which method performs best, our approach consists of a modular system. For most medium-and high-resource, as well as some low-resource settings, we make use of the state-of-the-art encoderdecoder (Cho et al., 2014a; Sutskever et al., 2014; Bahdanau et al., 2015) network MED (Kann and Sch\u00fctze, 2016b) , while extending the training data in several ways. Whenever the given data are not sufficient, we make use of the baseline system, which can be trained on fewer instances.", |
|
"cite_spans": [ |
|
{ |
|
"start": 641, |
|
"end": 663, |
|
"text": "(Faruqui et al., 2016;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 664, |
|
"end": 688, |
|
"text": "Kann and Sch\u00fctze, 2016b;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 689, |
|
"end": 710, |
|
"text": "Aharoni et al., 2016;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 711, |
|
"end": 733, |
|
"text": "Zhou and Neubig, 2017)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 843, |
|
"end": 868, |
|
"text": "(Cotterell et al., 2017a)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1528, |
|
"end": 1547, |
|
"text": "(Cho et al., 2014a;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1548, |
|
"end": 1571, |
|
"text": "Sutskever et al., 2014;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 1572, |
|
"end": 1594, |
|
"text": "Bahdanau et al., 2015)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 1607, |
|
"end": 1632, |
|
"text": "(Kann and Sch\u00fctze, 2016b)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "While we submit solutions for every language and setting, our main focus is on task 2 of the shared task and the main contributions of this paper correspondingly address a multi-source input setting: (i) We develop CIS (\"choice of important sources\"), a novel algorithm for selecting the most appropriate source form for a target tag from a partially given paradigm, which is based on edit trees (Chrupa\u0142a, 2008) . (ii) We propose to cast the task of multi-source morphological reinflection as a domain adaptation problem. By finetuning on forms from a partial paradigm, we improve the performance of a neural sequence-tosequence model for most shared task languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 396, |
|
"end": 412, |
|
"text": "(Chrupa\u0142a, 2008)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our final methods, averaged over languages, outperform the official baseline by 7.0%, 18.5%, and 16.5% for task 1 and 8.7%, 10.1%, and 10.3% for task 2 for the low-, medium-, and highresource settings, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Furthermore, our submitted sytem-a combination of our methods with the baseline systemsurpasses the baseline's accuracy on test for both tasks as well as all languages and settings. Differences in performance are between 8.69% (task 1 low) and 17.94% (task 1 medium).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The paradigm of a lemma w l is a set of tuples of inflected forms f k and tags t k describing the properties of the inflected word, which we formally denote as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morphological Reinflection", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03c0(w l ) = f k [w l ], t k t k \u2208T (w l )", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Morphological Reinflection", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "with T (w l ) being the set of possible tags for w l . An example is the following paradigm of the Spanish lemma so\u00f1ar:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morphological Reinflection", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "\u03c0(so\u00f1ar) = sue\u00f1o, 1SgPresInd , . . . , so\u00f1aran, 3PlPastSbj", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morphological Reinflection", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The shared task has two subtasks: task 1 consists of predicting a certain form f i [w l ], given the lemma w l and the target tag t i . For task 2, one or more source forms are given for each lemma (multi-source input). Thus, additional information about the way a lemma is inflected is known and can be leveraged.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morphological Reinflection", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We apply the following preprocessing methods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing Methods", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "String preprocessing. We determine for each language if it is predominantly prefixing or suffixing, using the same algorithm as the shared task baseline system (Cotterell et al., 2017a) . For prefixing languages, we invert all words. An example for the prefixing language Navajo is: chid\u00ed \u2192\u00eddihc New character handling. The source and target vocabularies for the languages are constructed using the respective training and development sets. Therefore, out-of-vocabulary symbols can appear in the test sets, resulting in symbols the model has no information about. In order to address this, we substitute such characters by a special NEW symbol and train the model on it by including it in the additional training samples we create, cf. \u00a74.", |
|
"cite_spans": [ |
|
{ |
|
"start": 160, |
|
"end": 185, |
|
"text": "(Cotterell et al., 2017a)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing Methods", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In the output, NEW is substituted back by the new characters in the input in order of appearance. An example from the German development data is:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing Methods", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Phlo\u00ebm \u2192 PhloNEWm Tag extension. Explicit information is usually handled better by machine learning methods than implicit information. Therefore, we search for optional subtag slots, in contrast to those that are always occupied by some value, e.g., an optional negation subtag, in contrast to the part-of-speech subtag which, for most languages, is always either Verb, Noun or Adjective, but never empty. For all optional subtags, we artificially introduce a negated form.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing Methods", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Additional source-target form pairs. We collect all forms belonging to the same lemma. We then add additional samples by constructing source-target combinations for other sources than the lemma, using the members of each paradigm. For the two samples lemma i \u2192 word 1 and lemma i \u2192 word 2 we can introduce the new samples word 1 \u2192 word 2 and word 2 \u2192 word 1 . 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Data Augmentation Methods", |
|
"sec_num": "4" |
|
}, |
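
{

"text": "The following Python sketch is an illustration only, not the authors' implementation: it builds such additional source-target samples from (lemma, form, tag) training triples; the triple format and all names are assumptions made for this example (the source and target tags would additionally be part of the model input).\n\nfrom collections import defaultdict\nfrom itertools import permutations\n\ndef augment_with_form_pairs(triples):\n    # triples: iterable of (lemma, form, tag) training samples.\n    # Returns new (source_form, source_tag, target_form, target_tag) samples\n    # built from all ordered pairs of forms that share a lemma.\n    by_lemma = defaultdict(list)\n    for lemma, form, tag in triples:\n        by_lemma[lemma].append((form, tag))\n    extra = []\n    for forms in by_lemma.values():\n        for (f1, t1), (f2, t2) in permutations(forms, 2):\n            extra.append((f1, t1, f2, t2))\n    return extra\n\n# Two samples of one paradigm yield word_1 -> word_2 and word_2 -> word_1.\nprint(augment_with_form_pairs([('gehen', 'gehe', 'V;1;SG;PRS'), ('gehen', 'ging', 'V;3;SG;PST')]))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Training Data Augmentation Methods",

"sec_num": "4"

},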
|
{ |
|
"text": "Autoencoding samples. We further create samples for a sequence autoencoding task, i.e., we add mappings of words to themselves, with a special copy tag A. No morphological tags are given. This is a way to multi-task train on autoencoding the input string and reinflection, as we maximize the joint log-likelihood", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Data Augmentation Methods", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "L(\u03b8) = (w l ,ts,tt)\u2208T log p \u03b8 (f t (w l ) | e(f s (w l ), t t )) (2) + w\u2208W log p \u03b8 (w | e(w))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Data Augmentation Methods", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "for the training data T , source and target tags t s and t t , a lemma w l and an encoding function e depending on \u03b8, as well as a set of strings W. We apply two variants: autoencoding the lemmata and forms from the original training set, or using random strings for this. Random strings are produced in the following way. We first construct all possible bigrams B from the vocabulary of the language. We then combine those with a random sequence of characters r of a random length between 1 and 4 in the following way:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Data Augmentation Methods", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "b 1 + b 2 + r + b 3 + b 4 for b i \u2208 B.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Data Augmentation Methods", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Constructing random strings like this has the positive side-effect that we can add a NEW to the vocabulary.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Data Augmentation Methods", |
|
"sec_num": "4" |
|
}, |
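
{

"text": "As a minimal sketch (not the original code), random strings of the form b_1 + b_2 + r + b_3 + b_4 could be generated as follows; treating NEW as a single symbol and the exact sampling choices are assumptions made for this example.\n\nimport random\n\ndef random_strings(symbols, n, seed=0):\n    # symbols: character vocabulary plus the special NEW symbol, each treated\n    # as one unit. Returns n artificial words as symbol lists of the form\n    # b1 + b2 + r + b3 + b4, with r a random sequence of length 1 to 4.\n    rng = random.Random(seed)\n    bigrams = [[a, b] for a in symbols for b in symbols]\n    out = []\n    for _ in range(n):\n        r = [rng.choice(symbols) for _ in range(rng.randint(1, 4))]\n        b1, b2, b3, b4 = (rng.choice(bigrams) for _ in range(4))\n        out.append(b1 + b2 + r + b3 + b4)\n    return out\n\nprint(random_strings(list('abcde') + ['NEW'], 3))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Training Data Augmentation Methods",

"sec_num": "4"

},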
|
{ |
|
"text": "Rule-based data generation. We imitate a rulebased system by, given a source form and a target form, defining the prefix (resp. suffix) of a word as the word minus the longest common suffix (resp. prefix). We then create an additional training example by generating a random string s and prepending (resp. appending) source and target prefixes (resp. suffixes) to s. For example, in German, we can find the following rule for the 2nd person singular form:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Data Augmentation Methods", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "*en \u2192 *st", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Data Augmentation Methods", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "From this we can create additional training instances like the following. We apply this procedure to all pairs of a source and a target tag that appear less than t times in train for a certain threshold t.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Data Augmentation Methods", |
|
"sec_num": "4" |
|
}, |
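
{

"text": "A minimal sketch of this rule-based generation for the suffixing case (illustrative only; the helper names are assumptions): the rule is the pair of residues after the longest common prefix, and it is applied to artificial stems such as the random strings above.\n\ndef suffix_rule(source, target):\n    # Longest common prefix; the residues form the rule *X -> *Y.\n    i = 0\n    while i < min(len(source), len(target)) and source[i] == target[i]:\n        i += 1\n    return source[i:], target[i:]\n\ndef rule_based_samples(source, target, stems):\n    # Apply the extracted rule to artificial stems to create new samples.\n    src_suf, tgt_suf = suffix_rule(source, target)\n    return [(stem + src_suf, stem + tgt_suf) for stem in stems]\n\nprint(suffix_rule('sagen', 'sagst'))  # ('en', 'st')\nprint(rule_based_samples('sagen', 'sagst', ['jfgdgf', 'Ahgg']))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Training Data Augmentation Methods",

"sec_num": "4"

},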
|
{ |
|
"text": "We apply the encoder-decoder network MED (Kann and Sch\u00fctze, 2016a), due to its success in last year's edition of the shared task (Cotterell et al., 2016 ). While we extend it by new training data augmentation methods and, for task 2, the additional algorithms described below, we do not make changes to the model's architecture. We will shortly describe MED and the shared task baseline system in this section.", |
|
"cite_spans": [ |
|
{ |
|
"start": 129, |
|
"end": 152, |
|
"text": "(Cotterell et al., 2016", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Architecture", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Encoder. The format of the input of the encoder is the same as in (Kann and Sch\u00fctze, 2016a) , but with a small modification to be able to handle unlabeled data: Given the set of morphological subtags M that each target tag is composed of (e.g., the tag 1SgPresInd contains the subtags 1, Sg, Pres and Ind), and the alphabet \u03a3 of the language of application, our input is of the form (A | M * ) \u03a3 * , i.e., it consists of either a sequence of subtags or the symbol A signaling that the input is not annotated and should be autoencoded, and (in both cases) the character sequence of the input word. All parts of the input are represented by embeddings. We encode the input x = x 1 , x 2 , . . . , x Tx using a bidirectional gated recurrent neural network (GRU) (Cho et al., 2014b) . We then concatenate the forward and backward hidden states to obtain the input h i for the decoder.", |
|
"cite_spans": [ |
|
{ |
|
"start": 66, |
|
"end": 91, |
|
"text": "(Kann and Sch\u00fctze, 2016a)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 759, |
|
"end": 778, |
|
"text": "(Cho et al., 2014b)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "MED", |
|
"sec_num": "5.1" |
|
}, |
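
{

"text": "As an illustration of this input format only (not MED's actual preprocessing code), the input sequence could be assembled as follows; the ';'-separated tag format follows the shared task examples, and the function name is an assumption.\n\ndef encode_input(word, tag=None, copy_tag='A'):\n    # Build the input sequence: either the subtags of the target tag or the\n    # copy symbol A, followed by the character sequence of the input word.\n    subtags = tag.split(';') if tag else [copy_tag]\n    return subtags + list(word)\n\nprint(encode_input('use', 'V;3;SG;PRS'))  # ['V', '3', 'SG', 'PRS', 'u', 's', 'e']\nprint(encode_input('use'))                # ['A', 'u', 's', 'e']",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "MED",

"sec_num": "5.1"

},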
|
{ |
|
"text": "Decoder. The decoder is a uni-directional attention-based GRU, defining a probability distribution over strings in \u03a3 * :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "MED", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "p(y | x) = Ty t=1 p(y t | y 1 , . . . , y t\u22121 , s t , c t ),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "MED", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "with s t being the decoder hidden state for time t and c t being a context vector, calculated using the encoder hidden states together with attention weights. A detailed description of the encoderdecoder model can be found in (Bahdanau et al., 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 226, |
|
"end": 249, |
|
"text": "(Bahdanau et al., 2015)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "MED", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "The shared task baseline system (BL) is wellsuited for low-resource settings. It first aligns each input and output string, and than extracts possible prefix or suffix substitution rules from the training data. At test time, it applies the most suitable one in the following way: Every input is searched for the longest contained prefix or suffix and the rule belonging to the affix and given target tag is applied to obtain the output. Whether prefixes or suffixes are used depends on the language and is determined using the training set. Figure 2 : Edit tree for the transformation from abgesagt \"canceled\" to absagen \"to cancel\". Each node contains the length of the parts before and after the respective LCS, e.g., the leftmost node contains the length of the parts before and after the LCS of abge and ab. The prefix sub indicates that the node is a substitution operation.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 541, |
|
"end": 549, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Baseline System", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "As our choice of important sources (CIS) algorithm is based strongly on edit trees (Chrupa\u0142a, 2008) , we will introduce them first.", |
|
"cite_spans": [ |
|
{ |
|
"start": 83, |
|
"end": 99, |
|
"text": "(Chrupa\u0142a, 2008)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Choice of Important Sources", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Edit trees. An edit tree e(\u03c3, \u03c4 ) is a way to specify a transformation between a source string \u03c3 and a target string \u03c4 (Chrupa\u0142a, 2008) . It is constructed by first determining the longest common substring (LCS) (Gusfield, 1997) of \u03c3 and \u03c4 and then modeling the prefix and suffix pairs of the LCS recursively. In the case of an empty LCS, e(\u03c3, \u03c4 ) corresponds to the substitution operation that replaces \u03c3 with \u03c4 . Figure 2 shows an example.", |
|
"cite_spans": [ |
|
{ |
|
"start": 119, |
|
"end": 135, |
|
"text": "(Chrupa\u0142a, 2008)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 212, |
|
"end": 228, |
|
"text": "(Gusfield, 1997)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 415, |
|
"end": 423, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Choice of Important Sources", |
|
"sec_num": "6" |
|
}, |
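
{

"text": "The following sketch (an approximation for illustration, not Chrupa\u0142a's exact formulation) builds such a recursive structure, storing the source-side prefix and suffix lengths around the LCS; two regular transformations of the same type then yield identical trees.\n\nfrom difflib import SequenceMatcher\n\ndef edit_tree(source, target):\n    # If the longest common substring (LCS) is empty, the node is a\n    # substitution of source by target; otherwise the node stores the lengths\n    # of the source parts before and after the LCS and recurses on the\n    # prefix pair and the suffix pair.\n    if source == '' and target == '':\n        return None\n    m = SequenceMatcher(None, source, target).find_longest_match(0, len(source), 0, len(target))\n    if m.size == 0:\n        return ('sub', source, target)\n    left = edit_tree(source[:m.a], target[:m.b])\n    right = edit_tree(source[m.a + m.size:], target[m.b + m.size:])\n    return (m.a, len(source) - m.a - m.size, left, right)\n\n# Regular 2nd person singular pairs share one edit tree (*en -> *st):\nprint(edit_tree('sagen', 'sagst') == edit_tree('fragen', 'fragst'))  # True",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Choice of Important Sources",

"sec_num": "6"

},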
|
{ |
|
"text": "The entire task of paradigm completion is built upon the notion that the members of a paradigm are not independent. However, for many languages, some slots of a paradigm are more dependent on each other: For example, gehen, gehe and ging are all forms of the same German paradigm, but when aiming to produce the 3rd person plural past tense form gingen, the task is easier when starting from the (more similar) form ging. In fact, in many cases, the entire paradigm is completely deterministic when the right paradigm slots are known. A set of forms that determines all other inflected forms is called principal parts. (Cotterell et al., 2017b) use this property of morphologically rich languages to induce topologies in order to jointly decode entire paradigms and to thus make use of all known forms. However, they suppose to be able to compute and use good estimates for the probabilities p(f i (w l )|f j (w l )) for source form f j (w l ) and target form f i (w l ), since they use at least 632 entire paradigms per part of speech and language for training. Using a minimum spanning tree, they approximate a solution to the maximum-a-posteriori Figure 3 : Overview of a fine-tuning setup. In our case, \"indomain\" refers to the partial paradigm to be completed; \"outof-domain\" refers to all other paradigms.", |
|
"cite_spans": [ |
|
{ |
|
"start": 619, |
|
"end": 644, |
|
"text": "(Cotterell et al., 2017b)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1150, |
|
"end": 1158, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "CIS.", |
|
"sec_num": null |
|
}, |
|
|
{ |
|
"text": "In order to be able to apply our approach to low-resource settings, we focus instead on finding the best source form for each target form in a language, and CIS works as follows. We calculate edit trees for each pair (f j (w l ), f i (w l )) for each lemma w l in the training data. We then count the number of different edit trees for each pair of source and target tag (t j , t i ) and build an importance list for each tag t i , giving higher priorities to source tags with lower counts. The intuition behind this is that the fewer different edit trees appear in the training set, the more deterministic the paradigm slot i is, given a certain source slot j.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CIS.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "At test time, we find the form from the given slots of the paradigm which has the highest importance score, and use it to generate the target form. Note that, as the lemma is always given, there will never be a need to use a worse source form than the lemma.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CIS.", |
|
"sec_num": null |
|
}, |
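
{

"text": "A compact sketch of this source selection (illustrative only; it reuses the edit_tree helper sketched above, and all names are assumptions):\n\nfrom collections import defaultdict\n\ndef build_importance(paradigm_pairs):\n    # paradigm_pairs: ((src_tag, src_form), (tgt_tag, tgt_form)) pairs taken\n    # from the same paradigms in the training data. Fewer distinct edit trees\n    # for a (src_tag, tgt_tag) pair means the target slot is more\n    # deterministic given that source slot.\n    trees = defaultdict(set)\n    for (src_tag, src_form), (tgt_tag, tgt_form) in paradigm_pairs:\n        trees[(src_tag, tgt_tag)].add(edit_tree(src_form, tgt_form))\n    scores = defaultdict(dict)\n    for (src_tag, tgt_tag), distinct in trees.items():\n        scores[tgt_tag][src_tag] = len(distinct)  # lower count = higher priority\n    return scores\n\ndef choose_source(scores, tgt_tag, available):\n    # available: dict src_tag -> src_form of the known slots; the lemma is\n    # always among them, so there is always a fallback source form.\n    best = min(available, key=lambda s: scores[tgt_tag].get(s, float('inf')))\n    return best, available[best]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "CIS.",

"sec_num": null

},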
|
{ |
|
"text": "For sequence-to-sequence models for neural machine translation, it has been shown that specialized models for a certain domain are able to obtain better performances than general ones (Luong and Manning, 2015). One way to perform such a domain adaptation is fine-tuning: a general model, which has been trained on out-of-domain data, is further trained on (newly) available indomain data, cf. Figure 3 . This brings the conditional probability p(y 1 , ..., y m |x 1 , ..., x n ) for an output sequence (y 1 , ..., y m ) given an input sequence (x 1 , ..., x n ) closer to the target distribution.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 393, |
|
"end": 401, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Fine-Tuning for Multi-Source Input", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Here, we propose to improve multi-source morphological reinflection by treating each paradigm as a separate domain and performing \"domain adaptation\" everytime a new paradigm should be completed by the model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Fine-Tuning for Multi-Source Input", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "In particular, we have one base model (for each setting and language), trained on all available training examples. The original training data corresponds to out-of-domain data in a domain adaptation setting. At test time, we construct for each partial paradigm P known all possible training examples in the way described in the paragraphs about additional source-target form pairs and autoencoding in \u00a74. Thus, for |P known | = n, we end up with (up to) n * (n \u2212 1) + N a in-domain samples for fine-tuning where N a is the number of autoencoding training samples. We then for each partial paradigm fine-tune the original base model on all examples constructed from P known , which match the in-domain data for domain adaptation. Thus, we end up with a different fine-tuned model for each partial paradigm in the test set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Fine-Tuning for Multi-Source Input", |
|
"sec_num": "7" |
|
}, |
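
{

"text": "A schematic sketch of this procedure (not the actual training code; train_fn stands in for whatever optimizer and loss the base model uses, and all names are assumptions):\n\nimport copy\n\ndef in_domain_samples(partial_paradigm, copy_tag='A'):\n    # partial_paradigm: dict tag -> known form (the lemma is always included).\n    # Builds up to n*(n-1) reinflection samples plus autoencoding samples,\n    # mirroring the augmentation of \u00a74 applied to a single paradigm.\n    items = list(partial_paradigm.items())\n    samples = []\n    for src_tag, src_form in items:\n        for tgt_tag, tgt_form in items:\n            if src_tag != tgt_tag:\n                samples.append((src_form, src_tag, tgt_form, tgt_tag))\n        samples.append((src_form, copy_tag, src_form, copy_tag))  # autoencode\n    return samples\n\ndef finetune_per_paradigm(base_model, test_paradigms, train_fn, epochs=25):\n    # One fine-tuned copy of the base model per partial paradigm.\n    tuned = {}\n    for pid, paradigm in test_paradigms.items():\n        model = copy.deepcopy(base_model)\n        train_fn(model, in_domain_samples(paradigm), epochs)\n        tuned[pid] = model\n    return tuned",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Fine-Tuning for Multi-Source Input",

"sec_num": "7"

},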
|
{ |
|
"text": "Our method is expected to perform best in a setting in which many forms of each paradigm are given as input, e.g., when n is big. Table 1 indicates for which language we would therefore expect could performance.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 130, |
|
"end": 137, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Fine-Tuning for Multi-Source Input", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Task1. For task 1, we apply MED*: MED in combination with all preprocessing methods mentioned in \u00a73 and the following data augmentations. We create additional source-target form pairs where possible and create autoencoding samples, random ones as well as from the original data. Further, we create 5 additional rule-based samples for each existing sample of all sourcetarget tag combinations that appear less than t = 10 times in the training set for a language.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Systems", |
|
"sec_num": "8.1" |
|
}, |
|
{ |
|
"text": "We employ ensembles of 5 MED* models, which are trained for 90 (low and medium) or 45 (high) epochs. Ensembling is done by majority voting.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Systems", |
|
"sec_num": "8.1" |
|
}, |
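
{

"text": "For illustration only, majority voting over the 5 ensemble members could look as follows (ties broken by member order; the names are assumptions):\n\nfrom collections import Counter\n\ndef majority_vote(predictions):\n    # predictions: one output string per ensemble member for a test instance.\n    return Counter(predictions).most_common(1)[0][0]\n\nprint(majority_vote(['gehst', 'gehst', 'gehest', 'gehst', 'gehest']))  # 'gehst'",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Systems",

"sec_num": "8.1"

},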
|
{ |
|
"text": "Task2. We again apply MED*. However, for task 2 we do not create rule-based samples. 2 Models for the low-resource, medium-resource and high-resource settings are trained for 45, 30 and 20 epochs, respectively. For task 2, we do not use ensembling.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Systems", |
|
"sec_num": "8.1" |
|
}, |
|
{ |
|
"text": "At test time, we preprocess each newly incoming paradigm in the same way as the training data, except for the creation of random copy samples. We then fine-tune the base model for each new paradigm according to \u00a77 for 25 additional epochs. Additionally, we choose the best source form for each required target tag and predict each inflected form for this input (MED*+FT+CIS).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Systems", |
|
"sec_num": "8.1" |
|
}, |
|
{ |
|
"text": "The limited amount of data makes it impossible to obtain competitive performance using MED* for some languages and settings (especially for languages with only few given slots per paradigm), even after applying all data augmentation methods described above. Thus, we apply the baseline model for those cases, but combine it with CIS (cf. \u00a76) to improve its performance (BL+CIS). We do not apply preprocessing or data augmentation methods for BL, as they would not influence its performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Systems", |
|
"sec_num": "8.1" |
|
}, |
|
{ |
|
"text": "Shared task submission. The best approach depends on both the language and the setting. Thus, our final submission for each case is obtained by either BL, BL+CIS, the MED* ensemble, or MED*+FT+CIS, selected using the accuracy on the development set. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Systems", |
|
"sec_num": "8.1" |
|
}, |
|
{ |
|
"text": "We use the same hyperparameters for all MED models, i.e., all languages, tasks and amounts of resources. In particular, we keep them fixed to the following. Encoder and decoder RNNs each have 100 hidden units and the embeddings size is 300. For training we use ADADELTA (Zeiler, 2012) with minibatch size 20. Following Le et al. (2015), we initialize all weights in the encoder, decoder and the embeddings except for the GRU weights in the decoder to the identity matrix. Biases are initialized to zero. We use dropout with a coefficient of 0.5. As this is the model we use to produce test results for the shared task, we report Task 1 Task 2 Low 100 535 Medium 994 2285 High 9825 8578 Table 3 : Average amount of training examples per task and resource quantity.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 629, |
|
"end": 704, |
|
"text": "Task 1 Task 2 Low 100 535 Medium 994 2285 High 9825 8578 Table 3", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "MED Hyperparameters", |
|
"sec_num": "8.2" |
|
}, |
|
{ |
|
"text": "the best numbers obtained on the development set during training (\"early stopping\"). We compare the 1-best accuracy of all systems, i.e., the percentage of predictions that match the true answer exactly.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "MED Hyperparameters", |
|
"sec_num": "8.2" |
|
}, |
|
{ |
|
"text": "The official shared task data consists of sets for 52 different languages, 2 tasks and 3 different settings with varying amount of resources. 3 An overview of the (averaged) amount of samples per task and setting is given in Table 3 . Development and test sets are the same for all settings for each respective task and language. The gold labels for the test set are not published yet, so the experiments in this paper are performed on the development set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 142, |
|
"end": 143, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 225, |
|
"end": 232, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "8.3" |
|
}, |
|
{ |
|
"text": "We compare our approaches to the official shared task baseline. Detailed results for task 1 and task 2 are shown in Table 2 and Table 4 , respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 116, |
|
"end": 135, |
|
"text": "Table 2 and Table 4", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "8.4" |
|
}, |
|
{ |
|
"text": "Task 1. Table 2 shows the results obtained by MED*, both for single models and ensembles. As can be seen, MED* already outperforms the baseline for the majority of languages in all settings; in average by 0.035, 0.157 and 0.151, respectively. MED*'s performance is worse for the low data quantity than for the others. This is an expected result, as neural networks are known to require a huge amount of training instances. Ensembling increases the final accuracy for all settings, by an average of 0.035 (low), 0.028 (medium) and 0.014 (high).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 8, |
|
"end": 15, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "8.4" |
|
}, |
|
{ |
|
"text": "Task 2. As can be seen in Table 4 , combining BL and CIS outperforms BL on its own for many languages, especially in the low-resource setting. The highest improvements for the low-, medium-and high-resource setting are for Hungarian (0.362), Latin (0.440) and Latin (0.429), respectively. For some languages, e.g., Catalan, Danish or Urdu, choosing a good source form seems to not be important. For a few languages, results even get worse. We will discuss some of those cases in \u00a79. Overall, however, we obtain 0.087 (low), 0.066 (medium) and 0.019 (high) improvement on average over all languages, which clearly shows the usefulness of CIS.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 26, |
|
"end": 33, |
|
"text": "Table 4", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "8.4" |
|
}, |
|
{ |
|
"text": "MED* on its own does not achieve competitive performance for task 2. We attribute this to the limited number of different lemmata given for training, resulting in an overfitting model, learning, e.g., to produce certain character combinations for certain tags. However, MED*+FT+CIS outperforms both BL as well as BL+CIS for many languages in the medium-and high-resource settings and even in some low-resource scenarios. Comparing the obtained accuracies with Table 1 , it gets obvious that languages with a higher amount of given source forms per paradigm achieve better results after fine-tuning, many times reaching a higher accuracy than BL, even in the low-resource setting. In contrast, fine-tuning works poorly for languages with \u2264 1.5 given source forms per paradigm. In total, using MED*+FT+CIS, we obtain an average improvement of 0.068 (low), 0.101 (medium) and 0.103 (high) over the baseline.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 460, |
|
"end": 467, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "8.4" |
|
}, |
|
{ |
|
"text": "Our submitted system obtained average accuracies of 0.4659 (low), 0.8264 (medium) and 0.947 (high) for task 1, and 0.6776 (low), 0.8202 (medium) and 0.8852 (high) for task 2, respectively. This corresponds to place 5 of 18, 3 of 19 and 7 of 15 for the high-, medium-and lowresource settings of task 1, respectively. Remarkably, the difference to the best system for the two higher settings is less than 0.01. Among 3 submissions for task 2, our system comes first. It beats the baseline by 17.16 (low), 15.54 (medium) and 10.84 (high).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Official Shared Task Evaluation", |
|
"sec_num": "8.5" |
|
}, |
|
{ |
|
"text": "Certain parts of our system do not perform as well for some languages as we would expect. In this section we will discuss those cases in more detail.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Remaining Challenges", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "CIS. For some languages, e.g., Danish or English, CIS does not influence the performance. This might be due to those languages not having paradigm slots that are regularly closer to certain slots than others.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Remaining Challenges", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "One other problem for the algorithm are training instances that consist of multiple separate words, e.g., the edit trees for \"ride a bike\" \u2192 \"riding a bike\" and \"hike\" \u2192 \"hiking\" are not the same, even though they should be. Such cases potentially confuse the algorithm. A solution could be to detect training examples which consist of more than one token and split them up, in order to just consider the inflecting word.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Remaining Challenges", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "Fine-tuning. For some settings and languages, e.g., Danish or Bokm\u00e5l, the fine-tuned model obtains a lower accuracy than the base MED* model. While this might seem strange at first, when comparing to Table 1 , it gets clear that this is mainly the case for languages where, besides the lemma, no forms of a paradigm are given. This results in the model being fine-tuned on autoencoding the lemma, and thus a strong bias to copy the input, which can hurt performance. A possible solution might be to apply a combination of fine-tuning and multi-domain training as proposed, e.g., by Chu et al. (2017) for neural machine translation. We leave respective experiments for future work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 582, |
|
"end": 599, |
|
"text": "Chu et al. (2017)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 200, |
|
"end": 207, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Remaining Challenges", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "We presented the LMU system for the CoNLL-SIGMORPHON 2017 shared task on universal morphological reinflection, which is based on an encoder-decoder network. We introduced two new methods for handling multi-source morphological reinflection: CIS, a source form selection algorithm based on edit trees and a fine-tuning approach similar in spirit to domain adaptation. On average over all participating languages, our approaches outperform the official shared task baseline for both tasks and all settings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "10" |
|
}, |
|
{ |
|
"text": "The respective source and target tags are part of the input, but omitted here for clarity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Using rule-based examples for training leads to worse performance of the fine-tuned system, even though the base system turns out to be better. Thus, we do not use it.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A list of all languages can be found inTables 2 and 4.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We would like to thank VolkswagenStiftung for supporting this research.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Improving sequence to sequence learning for morphological inflection generation: The biumit systems for the sigmorphon 2016 shared task for morphological reinflection", |
|
"authors": [ |
|
{ |
|
"first": "Roee", |
|
"middle": [], |
|
"last": "Aharoni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yonatan", |
|
"middle": [], |
|
"last": "Belinkov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "SIGMORPHON", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roee Aharoni, Yoav Goldberg, and Yonatan Belinkov. 2016. Improving sequence to sequence learning for morphological inflection generation: The biu- mit systems for the sigmorphon 2016 shared task for morphological reinflection. In SIGMORPHON.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Neural machine translation by jointly learning to align and translate", |
|
"authors": [ |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "On the properties of neural machine translation: Encoder-decoder approaches", |
|
"authors": [ |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bart", |
|
"middle": [], |
|
"last": "Van Merri\u00ebnboer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014a. On the proper- ties of neural machine translation: Encoder-decoder approaches. arXiv preprint 1409.1259 .", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bart", |
|
"middle": [], |
|
"last": "Van Merri\u00ebnboer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Alar G\u00fcl\u00e7ehre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, \u00c7 alar G\u00fcl\u00e7ehre, Dzmitry Bahdanau, Fethi Bougares, Hol- ger Schwenk, and Yoshua Bengio. 2014b. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Towards a machinelearning architecture for lexical functional grammar parsing", |
|
"authors": [ |
|
{ |
|
"first": "Grzegorz", |
|
"middle": [], |
|
"last": "Chrupa\u0142a", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Grzegorz Chrupa\u0142a. 2008. Towards a machine- learning architecture for lexical functional grammar parsing. Ph.D. thesis, Dublin City University.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "An empirical comparison of simple domain adaptation methods for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Chenhui", |
|
"middle": [], |
|
"last": "Chu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raj", |
|
"middle": [], |
|
"last": "Dabre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sadao", |
|
"middle": [], |
|
"last": "Kurohashi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chenhui Chu, Raj Dabre, and Sadao Kurohashi. 2017. An empirical comparison of simple domain adapta- tion methods for neural machine translation. arXiv preprint 1701.03214 .", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Jason Eisner, and Mans Hulden. 2017a. The CoNLL-SIGMORPHON 2017 shared task: Universal morphological reinflection in 52 languages", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christo", |
|
"middle": [], |
|
"last": "Kirov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Sylak-Glassman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G\u00e9raldine", |
|
"middle": [], |
|
"last": "Walther", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ekaterina", |
|
"middle": [], |
|
"last": "Vylomova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manaal", |
|
"middle": [], |
|
"last": "Faruqui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sandra", |
|
"middle": [], |
|
"last": "K\u00fcbler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "CoNLL-SIGMORPHON", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan Cotterell, Christo Kirov, John Sylak-Glassman, G\u00e9raldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sandra K\u00fcbler, David Yarowsky, Jason Eisner, and Mans Hulden. 2017a. The CoNLL-SIGMORPHON 2017 shared task: Universal morphological reinflection in 52 lan- guages. In CoNLL-SIGMORPHON.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Jason Eisner, and Mans Hulden. 2016. The SIGMORPHON 2016 shared taskmorphological reinflection", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christo", |
|
"middle": [], |
|
"last": "Kirov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Sylak-Glassman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "SIGMORPHON", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016. The SIGMORPHON 2016 shared task- morphological reinflection. In SIGMORPHON.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Neural graphical models over strings for principal parts morphological paradigm completion", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Sylak-Glassman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christo", |
|
"middle": [], |
|
"last": "Kirov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "EACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan Cotterell, John Sylak-Glassman, and Christo Kirov. 2017b. Neural graphical models over strings for principal parts morphological paradigm comple- tion. In EACL.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Morphological inflection generation using character sequence to sequence learning", |
|
"authors": [ |
|
{ |
|
"first": "Manaal", |
|
"middle": [], |
|
"last": "Faruqui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yulia", |
|
"middle": [], |
|
"last": "Tsvetkov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graham", |
|
"middle": [], |
|
"last": "Neubig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "NAACL-HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Manaal Faruqui, Yulia Tsvetkov, Graham Neubig, and Chris Dyer. 2016. Morphological inflection genera- tion using character sequence to sequence learning. In NAACL-HLT.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Algorithms on strings, trees and sequences: computer science and computational biology", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Gusfield", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Gusfield. 1997. Algorithms on strings, trees and sequences: computer science and computational bi- ology. Cambridge university press.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "MED: The LMU system for the SIGMORPHON 2016 shared task on morphological reinflection", |
|
"authors": [ |
|
{ |
|
"first": "Katharina", |
|
"middle": [], |
|
"last": "Kann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hinrich", |
|
"middle": [], |
|
"last": "Sch\u00fctze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Katharina Kann and Hinrich Sch\u00fctze. 2016a. MED: The LMU system for the SIGMORPHON 2016 shared task on morphological reinflection. In SIG- MORPHON.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Singlemodel encoder-decoder with explicit morphological representation for reinflection", |
|
"authors": [ |
|
{ |
|
"first": "Katharina", |
|
"middle": [], |
|
"last": "Kann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hinrich", |
|
"middle": [], |
|
"last": "Sch\u00fctze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Katharina Kann and Hinrich Sch\u00fctze. 2016b. Single- model encoder-decoder with explicit morphological representation for reinflection. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "A simple way to initialize recurrent networks of rectified linear units", |
|
"authors": [ |
|
{ |
|
"first": "Navdeep", |
|
"middle": [], |
|
"last": "Quoc V Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Jaitly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Hinton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Quoc V Le, Navdeep Jaitly, and Geoffrey E Hinton. 2015. A simple way to initialize recurrent networks of rectified linear units. CoRR abs/1504.00941.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Stanford neural machine translation systems for spoken language domains", |
|
"authors": [ |
|
{ |
|
"first": "Minh-Thang", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher D", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "IWSLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Minh-Thang Luong and Christopher D Manning. 2015. Stanford neural machine translation systems for spo- ken language domains. In IWSLT.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Sequence to sequence learning with neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc V", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "NIPS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural net- works. In NIPS.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "ADADELTA: An adaptive learning rate method", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Matthew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Zeiler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew D Zeiler. 2012. ADADELTA: An adaptive learning rate method. CoRR abs/1212.5701.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Multispace variational encoder-decoders for semisupervised labeled sequence transduction", |
|
"authors": [ |
|
{ |
|
"first": "Chunting", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graham", |
|
"middle": [], |
|
"last": "Neubig", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chunting Zhou and Graham Neubig. 2017. Multi- space variational encoder-decoders for semi- supervised labeled sequence transduction. arXiv preprint 1704.01691 .", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"uris": null, |
|
"text": "(jfgdgfen, V;2;SG;PRS) \u2192 jfgdgfst (Ahggen, V;2;SG;PRS) \u2192 Ahggst", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"uris": null, |
|
"text": "Comparison of the traditional view (left) and the result of CIS (right). Possible source forms in green, the target form in blue. Thickness of the arrows represents priorities of source forms. Most forms of the paradigm have been omitted because of space limitations.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"num": null, |
|
"uris": null, |
|
"text": "* MED*+ BL BL+ MED* MED*+ BL BL+ MED* MED*0.839 0.001 0.565 0.852 0.926 0.330 0.825 0.877 0.953 0.705 0.951 lower-sorbian 0.362 0.532 0.003 0.509 0.670 0.811 0.302 0.769 0.866 0.878 0.650 0.867 macedonian 0.396 0.562 0.001 0.367 0.832 0.858 0.175 0.740 0.942 0.964 0.749 0.876 navajo 0.306 0.404 0.008 0.313 0.385 0.502 0.088 0.517 0.408 0.593 0.282 0.650 northern-sami 0.314 0.485 0.000 0.243 0.499 0.841 0.028 0.758 0.562 0.905 0.201 0.912 Norwegian-nynorsk 0.439 0.445 0.127 0.122 0.604 0.604 0.452 0.341 0.610 0.579 0.560 0.555 persian 0.822 0.159 0.000 0.990 0.911 0.185 0.203 0.997 0.889 0.190 0.854 10.928 0.036 0.780 0.847 0.963 0.100 0.906 0.847 0.965 0.238 0.933 estonian 0.385 0.734 0.001 0.806 0.551 0.767 0.064 0.953 0.581 0.779 0.273 0.951 haida 0.554 0.810 0.000 0.937 0.802 0.849 0.002 00.439 0.045 0.375 0.424 0.493 0.137 0.592 0.474 0.530 0.411 0.692 khaling 0.247 0.495 0.011 0.973 0.546 0.627 0.279 0.992 0.840 0.659 0.638 0.996 kurmanji 0.633 0.648 0.003 0.449 0.790 0.798 0.279 0.695 0.875 0.844 0.679 0.878 latin 0.336 0.594 0.000 0.157 0.449 0.889 0.112 0.691 0.493 0.922 0.301 0.820 lithuanian 0.536 0.669 0.006 0.487 0.615 0.831 0.059 0.744 0.662 0.879 0.302 0.876 norwegian-bokmal 0.417 0.438 0.396 0.306 0.590 0.590 0.576 0.340 0.750 0.750 0", |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"content": "<table><tr><td>n \u2264 1.5 danish english norwegian-bokmal norwegian-nynorsk</td><td>1.5 < n < 10 arabic bengali bulgarian czech dutch estonian faroese finnish french georgian german hebrew hungarian icelandic irish kurmanji latin latvian lithuanian lower-sorbian macedonian navajo northern-sami polish romanian russian scottish-gaelic serbo-croatian slovak slovene swedish ukrainian</td><td>10 \u2264 n albanian armenian basque catalan haida hindi italian khaling persian portuguese quechua sorani spanish turkish urdu welsh</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Accuracies for task 1, for BL, MED* and MED* ensembles. Upper part: development languages; lower part: surprise languages.", |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Accuracies for task 2. All systems are described in the text. Upper part: development languages; lower part: surprise languages.", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |