{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:30:45.894912Z"
},
"title": "The NYU-CUBoulder Systems for SIGMORPHON 2020 Task 0 and Task 2",
"authors": [
{
"first": "Assaf",
"middle": [],
"last": "Singer",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "New York University",
"location": {
"country": "USA"
}
},
"email": ""
},
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Colorado",
"location": {
"settlement": "Boulder",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We describe the NYU-CUBoulder systems for the SIGMORPHON 2020 Task 0 on typologically diverse morphological inflection and Task 2 on unsupervised morphological paradigm completion. The former consists of generating morphological inflections from a lemma and a set of morphosyntactic features describing the target form. The latter requires generating entire paradigms for a set of given lemmas from raw text alone. We model morphological inflection as a sequenceto-sequence problem, where the input is the sequence of the lemma's characters with morphological tags, and the output is the sequence of the inflected form's characters. First, we apply a transformer model to the task. Second, as inflected forms share most characters with the lemma, we further propose a pointer-generator transformer model to allow easy copying of input characters. Our best performing system for Task 0 is placed 6th out of 23 systems. We further use our inflection systems as subcomponents of approaches for Task 2. Our best performing system for Task 2 is the 2nd best out of 7 submissions.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We describe the NYU-CUBoulder systems for the SIGMORPHON 2020 Task 0 on typologically diverse morphological inflection and Task 2 on unsupervised morphological paradigm completion. The former consists of generating morphological inflections from a lemma and a set of morphosyntactic features describing the target form. The latter requires generating entire paradigms for a set of given lemmas from raw text alone. We model morphological inflection as a sequenceto-sequence problem, where the input is the sequence of the lemma's characters with morphological tags, and the output is the sequence of the inflected form's characters. First, we apply a transformer model to the task. Second, as inflected forms share most characters with the lemma, we further propose a pointer-generator transformer model to allow easy copying of input characters. Our best performing system for Task 0 is placed 6th out of 23 systems. We further use our inflection systems as subcomponents of approaches for Task 2. Our best performing system for Task 2 is the 2nd best out of 7 submissions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In morphologically rich languages, a word's surface form reflects syntactic and semantic properties that are expressed by the word. For example, most English nouns have both singular and plural forms (e.g., robot/robots, process/processes), which are known as the inflected forms of the noun. Some languages display little inflection. In contrast, others have many inflections per base form or lemma: a Polish verb has nearly 100 inflected forms (Janecki, 2000) and an Archi verb has around 1.5 million (Kibrik, 1998) . Morphological inflection is the task of, given an input word -a lemma -together with morphosyntactic features defining the target form, gen-",
"cite_spans": [
{
"start": 446,
"end": 461,
"text": "(Janecki, 2000)",
"ref_id": "BIBREF14"
},
{
"start": 503,
"end": 517,
"text": "(Kibrik, 1998)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Inflected form hug V;PST hugged seel V;3;SG;PRS seels erating the indicated inflected form, cf. Figure 1 . Morphological inflection is a useful tool for many natural language processing tasks (Seeker and \u00c7 etinoglu, 2015; Cotterell et al., 2016b) , especially in morphologically rich languages where handling inflected forms can reduce data sparsity (Minkov et al., 2007) . The SIGMORPHON 2020 Shared Task consists of three separate tasks. We participate in Task 0 on typologically diverse morphological inflection (Vylomova et al., 2020) and Task 2 on unsupervised morphological paradigm completion . Task 0 consists of generating morphological inflections from a lemma and a set of morphosyntactic features describing the target form. For this task, we implement a pointergenerator transformer model, based on the vanilla transformer model (Vaswani et al., 2017) and the pointer-generator model (See et al., 2017) . After adding a copy mechanism to the transformer, it produces a final probability distribution as a combination of generating elements from its output vocabulary and copying elements -characters in our case -from the input. As most inflected forms derive their characters from the source lemma, the use of a mechanism for copying characters directly from the lemma has proven to be effective for morphological inflection generation, especially in the low resource setting (Aharoni and Goldberg, 2017; Makarov et al., 2017) .",
"cite_spans": [
{
"start": 193,
"end": 222,
"text": "(Seeker and \u00c7 etinoglu, 2015;",
"ref_id": "BIBREF25"
},
{
"start": 223,
"end": 247,
"text": "Cotterell et al., 2016b)",
"ref_id": "BIBREF9"
},
{
"start": 351,
"end": 372,
"text": "(Minkov et al., 2007)",
"ref_id": "BIBREF23"
},
{
"start": 516,
"end": 539,
"text": "(Vylomova et al., 2020)",
"ref_id": null
},
{
"start": 843,
"end": 865,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF27"
},
{
"start": 898,
"end": 916,
"text": "(See et al., 2017)",
"ref_id": "BIBREF24"
},
{
"start": 1391,
"end": 1419,
"text": "(Aharoni and Goldberg, 2017;",
"ref_id": "BIBREF0"
},
{
"start": 1420,
"end": 1441,
"text": "Makarov et al., 2017)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 96,
"end": 105,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Lemma Features",
"sec_num": null
},
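As an illustration of the sequence-to-sequence encoding described above, the sketch below builds the source sequence from the lemma's characters plus the morphosyntactic tags and the target sequence from the inflected form's characters. It is a hedged sketch: the function name encode_example and the exact tag handling are assumptions for illustration, not the authors' preprocessing code.

```python
# Hypothetical sketch of the input/output encoding for morphological inflection:
# source = lemma characters + morphosyntactic tags, target = inflected form characters.

def encode_example(lemma: str, tags: str, inflected: str):
    """Build source/target token sequences for one inflection example.

    `tags` is a UniMorph-style feature string such as "V;PST";
    each feature becomes one tag token appended after the character sequence.
    """
    source = list(lemma) + tags.split(";")
    target = list(inflected)
    return source, target

if __name__ == "__main__":
    src, tgt = encode_example("hug", "V;PST", "hugged")
    print(src)  # ['h', 'u', 'g', 'V', 'PST']
    print(tgt)  # ['h', 'u', 'g', 'g', 'e', 'd']
```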
{
"text": "For our submissions, we further increase the size of all training sets by performing multi-task train-ing on morphological inflection and morphological reinflection, i.e., the task of generating inflected forms from forms different from the lemma. For languages with small training sets, we also perform hallucination pretraining (Anastasopoulos and Neubig, 2019) , where we generate pseudo training instances for the task, based on suffixation and prefixation rules collected from the original dataset.",
"cite_spans": [
{
"start": 330,
"end": 363,
"text": "(Anastasopoulos and Neubig, 2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lemma Features",
"sec_num": null
},
{
"text": "For Task 2, participants are given raw text and a source file with lemmas. The objective is to generate the complete paradigms for all lemmas. Our systems for this task consist of a combination of the official baseline system (Jin et al., 2020) and our systems for Task 0. The baseline system finds inflected forms in the text, decides on the number of inflected forms per lemma, and produces pseudo training files for morphological inflection. Our inflection model then learns from these and, subsequently, generates all missing forms.",
"cite_spans": [
{
"start": 226,
"end": 244,
"text": "(Jin et al., 2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lemma Features",
"sec_num": null
},
{
"text": "SIGMORPHON and CoNLL-SIGMORPHON shared tasks. In recent years, the SIGMOR-PHON and CoNLL-SIGMORPHON shared tasks have promoted research on computational morphology, with a strong focus on morphological inflection. Research related to those shared tasks includes Kann and Sch\u00fctze (2016b) , who used an LSTM (Hochreiter and Schmidhuber, 1997) sequence-to-sequence model with soft attention (Bahdanau et al., 2015) and achieved the best result in the SIGMORPHON 2016 shared task (Kann and Sch\u00fctze, 2016a; Cotterell et al., 2016a) . Due to the often monotonic alignment between input and output, Aharoni and Goldberg (2017) proposed a model with hard monotonic attention. Based on this, Makarov et al. (2017) implemented a neural state-transition system which also used hard monotonic attention and achieved the best results for Task 1 of the SIGMORPHON 2017 shared task. In 2018, the best results were achieved by a revised version of the neural transducer, trained with imitation learning (Makarov and Clematide, 2018) . That model learned an alignment instead of maximizing the likelihood of gold action sequences given by a separate aligner.",
"cite_spans": [
{
"start": 262,
"end": 286,
"text": "Kann and Sch\u00fctze (2016b)",
"ref_id": "BIBREF19"
},
{
"start": 306,
"end": 340,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF13"
},
{
"start": 388,
"end": 411,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF3"
},
{
"start": 476,
"end": 501,
"text": "(Kann and Sch\u00fctze, 2016a;",
"ref_id": "BIBREF18"
},
{
"start": 502,
"end": 526,
"text": "Cotterell et al., 2016a)",
"ref_id": null
},
{
"start": 592,
"end": 619,
"text": "Aharoni and Goldberg (2017)",
"ref_id": "BIBREF0"
},
{
"start": 683,
"end": 704,
"text": "Makarov et al. (2017)",
"ref_id": "BIBREF22"
},
{
"start": 987,
"end": 1016,
"text": "(Makarov and Clematide, 2018)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Transformers. Transformers have produced state-of-the-art results on various tasks such as machine translation (Vaswani et al., 2017) language modeling (Al-Rfou et al., 2019 ), question answering (Devlin et al., 2019) and language understand-ing (Devlin et al., 2019) . There has been very little work on transformers for morphological inflection, with, to the best of our knowledge, Erdmann et al. (2020) being the only published paper. However, the widespread success of transformers in NLP leads us to believe that a transformer model could perform well on morphological inflection.",
"cite_spans": [
{
"start": 111,
"end": 133,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF27"
},
{
"start": 152,
"end": 173,
"text": "(Al-Rfou et al., 2019",
"ref_id": "BIBREF1"
},
{
"start": 196,
"end": 217,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 246,
"end": 267,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 384,
"end": 405,
"text": "Erdmann et al. (2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Pointer-generators. In addition to the transformer, the architecture of our model is also inspired by See et al. (2017) , who used a pointergenerator network for abstractive summarization. Their model could choose between generating a new element and copying an element from the input directly to the output. This copying of words from the source text via pointing (Vinyals et al., 2015) , improved the handling of out-of-vocabulary words. Copy mechanisms have also been used for other tasks, including morphological inflection (Sharma et al., 2018) . Transformers with copy mechanisms have been used for word-level tasks (Zhao et al., 2019) , but, as far as we know, never before on the character level.",
"cite_spans": [
{
"start": 102,
"end": 119,
"text": "See et al. (2017)",
"ref_id": "BIBREF24"
},
{
"start": 365,
"end": 387,
"text": "(Vinyals et al., 2015)",
"ref_id": "BIBREF28"
},
{
"start": 528,
"end": 549,
"text": "(Sharma et al., 2018)",
"ref_id": "BIBREF26"
},
{
"start": 622,
"end": 641,
"text": "(Zhao et al., 2019)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The SIGMORPHON 2020 Shared Task is composed of three tasks: Task 0 on typologically diverse morphological inflection (Vylomova et al., 2020) , Task 1 on multilingual grapheme-tophoneme conversion (Gorman et al., 2020) , and Task 2 on unsupervised morphological paradigm completion . We submit systems to Tasks 0 and 2.",
"cite_spans": [
{
"start": 117,
"end": 140,
"text": "(Vylomova et al., 2020)",
"ref_id": null
},
{
"start": 196,
"end": 217,
"text": "(Gorman et al., 2020)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SIGMORPHON 2020 Shared Task",
"sec_num": "3"
},
{
"text": "SIGMORPHON 2020 Task 0 focuses on morphological inflection in a set of typologically diverse languages. Different languages inflect differently, so it is not trivially clear that systems that work on some languages also perform well on others. For Task 0, systems need to generalize well to a large group of languages, including languages unseen during model development.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task 0: Typologically Diverse Morphological Inflection",
"sec_num": "3.1"
},
{
"text": "The task features 90 languages in total. 45 of them are development languages, coming from five families: Austronesian, Niger-Congo, Uralic, Oto-Manguean, and Indo-European. The remaining 45 are surprise languages, and many of those are from language families different from the development languages. Some languages have very small training sets, which makes them hard to model. For those cases, the organizers recommend a familybased multilingual approach to exploit similarities between related languages. While this might be effective, we believe that using multitask training in combination with hallucination pretraining can give the model enough information to learn the task well, while staying true to the specific structure of each individual language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task 0: Typologically Diverse Morphological Inflection",
"sec_num": "3.1"
},
{
"text": "Task 2 is a novel task, designed to encourage work on unsupervised methods for computational morphology. As morphological annotations are limited for many of the world's languages, the study of morphological generation in the low-resource setting is of great interest (Cotterell et al., 2018) . However, a different way to tackle the problem is by creating systems that are able to use data without annotations.",
"cite_spans": [
{
"start": 268,
"end": 292,
"text": "(Cotterell et al., 2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task 2: Unsupervised Morphological Paradigm Completion",
"sec_num": "3.2"
},
{
"text": "For Task 2, a tokenized Bible in each language is given to the participants, along with a list of lemmas. Participants should then produce complete paradigms for each lemma. As slots in the paradigm are not labeled with gold data paradigm slot descriptions, an evaluation metric called bestmatch accuracy was designed for this task. First, this metric matches predicted paradigm slots with gold slots in the way which leads to the highest overall accuracy. It then evaluates the correctness of individual inflected forms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task 2: Unsupervised Morphological Paradigm Completion",
"sec_num": "3.2"
},
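To make the matching step concrete, the following is a hedged sketch of a best-match accuracy computation, assuming Python with scipy's linear_sum_assignment (Hungarian algorithm) for the slot matching; it only illustrates the idea and is not the official shared task scorer.

```python
# Illustrative best-match accuracy: predicted paradigm slots are matched to gold
# slots so that the total number of correct forms is maximized, then accuracy is
# computed over all gold forms. This is one possible implementation, for illustration.
import numpy as np
from scipy.optimize import linear_sum_assignment

def best_match_accuracy(pred, gold):
    """pred, gold: dicts mapping slot name -> {lemma: inflected form}."""
    pred_slots, gold_slots = list(pred), list(gold)
    lemmas = sorted({l for slot in gold.values() for l in slot})
    # score[i, j] = number of correct forms if predicted slot i is matched to gold slot j
    score = np.zeros((len(pred_slots), len(gold_slots)))
    for i, p in enumerate(pred_slots):
        for j, g in enumerate(gold_slots):
            score[i, j] = sum(pred[p].get(l) == gold[g][l] for l in lemmas if l in gold[g])
    rows, cols = linear_sum_assignment(-score)  # maximize total correct forms
    total = sum(len(gold[g]) for g in gold_slots)
    return score[rows, cols].sum() / total

pred = {"slot_a": {"hug": "hugged", "walk": "walked"},
        "slot_b": {"hug": "hugs", "walk": "walks"}}
gold = {"V;PST": {"hug": "hugged", "walk": "walked"},
        "V;3;SG;PRS": {"hug": "hugs", "walk": "walks"}}
print(best_match_accuracy(pred, gold))  # 1.0
```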
{
"text": "In this section, we introduce our models for Tasks 0 and 2 and describe all approaches we use, such as multitask training, hallucination pretraining and ensembling. The code for our models is available online. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "4"
},
{
"text": "Our model is built on top of the transformer architecture (Vaswani et al., 2017) . It consists of an encoder and a decoder, each composed of a stack of layers. Each encoder layer consists, in turn, of a self-attention layer, followed by a fully connected layer. Decoder layers contain an additional interattention layer between the two.",
"cite_spans": [
{
"start": 58,
"end": 80,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer",
"sec_num": "4.1"
},
{
"text": "With inputs (x 1 , \u2022 \u2022 \u2022 , x T ) being a lemma's characters followed by tags representing the morphosyntactic features of the target form, the encoder processes the input sequence and outputs hidden states (h 1 , \u2022 \u2022 \u2022 , h T ). At generation step t, the decoder reads the previously generated sequence (y 1 , \u2022 \u2022 \u2022 , y t\u22121 ) to produce states (s 1 , \u2022 \u2022 \u2022 , s t\u22121 ). The last decoder state s t\u22121 is then passed through a linear layer followed by a softmax, to generate a probability distribution over the output vocabulary:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P vocab = softmax(V s t\u22121 + b)",
"eq_num": "(1)"
}
],
"section": "Transformer",
"sec_num": "4.1"
},
{
"text": "During training, the entire target sequence y 1 , \u2022 \u2022 \u2022 , y Ty is input to the decoder at once, along with a sequential mask to prevent positions from attending to subsequent positions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer",
"sec_num": "4.1"
},
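A minimal sketch of such a sequential (causal) mask, assuming PyTorch and the additive-mask convention used by torch.nn.Transformer (0 for allowed positions, -inf for blocked ones); this is an illustration, not the authors' training code.

```python
# Illustrative causal mask: position i may attend only to positions <= i.
import torch

def subsequent_mask(size: int) -> torch.Tensor:
    mask = torch.full((size, size), float("-inf"))
    return torch.triu(mask, diagonal=1)  # zeros on and below the diagonal, -inf above

print(subsequent_mask(4))
```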
{
"text": "The pointer-generator transformer allows for both generating characters from a fixed vocabulary, as well as copying from the source sequence via pointing (Vinyals et al., 2015) . This is managed by p genthe probability of generating as opposed to copying -which acts as a soft switch between the two actions. p gen is computed by passing a concatenation of the decoder state s t , the previously generated output y t\u22121 , and a context vector c t through a linear layer, followed by the sigmoid function.",
"cite_spans": [
{
"start": 154,
"end": 176,
"text": "(Vinyals et al., 2015)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pointer-Generator Transformer",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p gen = \u03c3(w[s t ; c t ; y t\u22121 ] + b)",
"eq_num": "(2)"
}
],
"section": "Pointer-Generator Transformer",
"sec_num": "4.2"
},
{
"text": "The context vector is computed as the weighted sum of the encoder hidden states",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pointer-Generator Transformer",
"sec_num": "4.2"
},
{
"text": "c t = T i=1 a t i h i (3) with attention weights a t 1 , \u2022 \u2022 \u2022 , a t T .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pointer-Generator Transformer",
"sec_num": "4.2"
},
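A hedged sketch of Equations 2 and 3, assuming PyTorch; the layer name w_gen, the tensor shapes, and the helper generation_probability are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative computation of the context vector c_t (attention-weighted sum of
# encoder states, Eq. 3) and the generation probability p_gen (Eq. 2).
import torch
import torch.nn as nn

d_model = 64
w_gen = nn.Linear(3 * d_model, 1)  # maps [s_t; c_t; y_{t-1}] to a scalar

def generation_probability(s_t, y_prev, enc_states, attn_weights):
    """s_t, y_prev: (d_model,); enc_states: (T, d_model); attn_weights: (T,)."""
    c_t = (attn_weights.unsqueeze(-1) * enc_states).sum(dim=0)   # context vector
    p_gen = torch.sigmoid(w_gen(torch.cat([s_t, c_t, y_prev])))  # soft switch
    return p_gen, c_t

T = 5
attn = torch.softmax(torch.randn(T), dim=0)
p_gen, c_t = generation_probability(torch.randn(d_model), torch.randn(d_model),
                                    torch.randn(T, d_model), attn)
print(float(p_gen))
```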
{
"text": "For each inflection example, let the extended vocabulary denote the union of the output vocabulary, and all characters appearing in the source lemma. We then use p gen , P vocab produced by the transformer, and the attention weights of the last decoder layer a t 1 , \u2022 \u2022 \u2022 , a t T to compute a distribution over the extended vocabulary: then i:x i =c a t i is zero. The ability to produce OOV characters is one of the primary advantages of pointer-generator models; by contrast models such as our vanilla transformer are restricted to their pre-set vocabulary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pointer-Generator Transformer",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (c) = p gen P vocab (c) + (1 \u2212 p gen )P copy (c), (4) with P copy (c) = i:x i =c a t i",
"eq_num": "(5)"
}
],
"section": "Pointer-Generator Transformer",
"sec_num": "4.2"
},
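The mixture over the extended vocabulary in Equations 4 and 5 can be sketched as follows, assuming PyTorch; the helper name extended_distribution and the index layout (output vocabulary first, OOV source characters appended) are assumptions made for illustration.

```python
# Illustrative mixture over the extended vocabulary (Equations 4 and 5):
# P(c) = p_gen * P_vocab(c) + (1 - p_gen) * sum of attention weights over the
# source positions whose character is c (a scatter-add of the copy distribution).
import torch

def extended_distribution(p_gen, p_vocab, attn_weights, src_ids, extended_size):
    """p_vocab: (V,) over the output vocabulary; src_ids: (T,) indices of the
    source characters in the extended vocabulary (extended_size >= V)."""
    p = torch.zeros(extended_size)
    p[: p_vocab.size(0)] = p_gen * p_vocab            # generation term
    p_copy = torch.zeros(extended_size)
    p_copy.scatter_add_(0, src_ids, attn_weights)     # copy term: sum a_i^t over x_i = c
    return p + (1.0 - p_gen) * p_copy

V, T, extended = 30, 6, 32                            # two OOV characters in the lemma
p_vocab = torch.softmax(torch.randn(V), dim=0)
attn = torch.softmax(torch.randn(T), dim=0)
src_ids = torch.tensor([3, 7, 30, 7, 31, 5])          # 30 and 31 are OOV indices
dist = extended_distribution(torch.tensor(0.7), p_vocab, attn, src_ids, extended)
print(dist.sum())                                     # ~1.0
```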
{
"text": "Some languages in Task 0 have small training sets, which makes them hard to model. In order to handle that, we perform multitask training, and, thereby, increase the amount of examples available for training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multitask Training",
"sec_num": "4.3"
},
{
"text": "Morphological reinflection. Morphological reinflection is a generalized version of the morphological inflection task, which consists of producing an inflected form for any given source form -i.e., not necessarily the lemma -, and target tag. For example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multitask Training",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(hugging; V;PST) \u2192 hugged.",
"eq_num": "(6)"
}
],
"section": "Multitask Training",
"sec_num": "4.3"
},
{
"text": "This is a more complex task, since a model needs to infer the underlying lemma of the source form in order to inflect it correctly to the desired form.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multitask Training",
"sec_num": "4.3"
},
{
"text": "Many morphological inflection datasets contain lemmas that are converted to several inflected forms. Treating separate instances for the same source lemma as independent is missing an opportunity to utilize the connection between the different inflected forms. We approach this by converting our morphological inflection training set into one for morphological reinflection as described in the following.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multitask Training",
"sec_num": "4.3"
},
{
"text": "From inflection to reinflection. Inflected forms of the same lemma are grouped together to sets of one or more (inflected form, morphological features) pairs. Then, for each set, we create new training instances by inflecting all forms to one another, as shown in Figure 2 . We also let the model inflect forms back to the lemma by adding the lemma as one of the inflected forms, marked with the synthetically generated LEMMA tag. forms in the paradigm, and, in that way, provides more training instances to our model.",
"cite_spans": [],
"ref_spans": [
{
"start": 264,
"end": 272,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multitask Training",
"sec_num": "4.3"
},
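A minimal sketch of this conversion (the helper to_reinflection and the data layout are hypothetical; the LEMMA tag is the synthetic tag described above):

```python
# Illustrative conversion of an inflection paradigm into reinflection training pairs:
# every form (including the lemma, marked with a synthetic LEMMA tag) is inflected
# into every other form of the same paradigm.
from itertools import permutations

def to_reinflection(lemma, inflections):
    """inflections: list of (inflected_form, tag) pairs for one lemma."""
    forms = [(lemma, "LEMMA")] + list(inflections)
    # (source form, target tag) -> target form, for all ordered pairs of distinct slots
    return [((src, tgt_tag), tgt) for (src, _), (tgt, tgt_tag) in permutations(forms, 2)]

pairs = to_reinflection("hug", [("hugged", "V;PST"), ("hugs", "V;3;SG;PRS")])
for (src, tag), tgt in pairs:
    print(f"{src} + {tag} -> {tgt}")
# e.g. hug + V;PST -> hugged, hugged + V;3;SG;PRS -> hugs, hugged + LEMMA -> hug, ...
```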
{
"text": "Another effective tool to improve training in the low-resource setting is data hallucination (Anastasopoulos and Neubig, 2019). Using hallucination, new pseudo-instances are generated for training, based on suffixation and prefixation rules collected from the original dataset. For languages with less than 1000 training instances, we pretrain our models on a hallucinated training set consisting of 10,000 instances, before training on the multitask training set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hallucination Pretraining",
"sec_num": "4.4"
},
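As a toy illustration of affix-rule-based hallucination (a strong simplification of Anastasopoulos and Neubig (2019); the helper names, the character inventory, and the single suffix rule are made up for this example and are not the authors' pipeline):

```python
# Toy sketch of data hallucination: extract a suffixation rule from a real training
# pair, then apply it to pseudo-stems built from the language's character inventory.
import os
import random

def suffix_rule(lemma, form):
    """Return (lemma_suffix, form_suffix) after stripping the longest common prefix."""
    i = len(os.path.commonprefix([lemma, form]))
    return lemma[i:], form[i:]

def hallucinate(rule, tag, alphabet, n=5, seed=0):
    rng = random.Random(seed)
    lemma_suf, form_suf = rule
    examples = []
    for _ in range(n):
        stem = "".join(rng.choice(alphabet) for _ in range(rng.randint(3, 6)))
        examples.append((stem + lemma_suf, tag, stem + form_suf))
    return examples

rule = suffix_rule("walk", "walked")       # ('', 'ed')
for lemma, tag, form in hallucinate(rule, "V;PST", "abdeiklmnorst"):
    print(lemma, tag, form)
```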
{
"text": "We submit 4 different systems for Task 0. NYU-CUBoulder-2 consists of one pointer-generator transformer model, and, for NYU-CUBoulder-4, we train one vanilla transformer. Those two are our simplest systems and can be seen as baselines for our other submissions. Because of the effects of random initialization in non-convex objective functions, we further use ensembling in combination with both architectures: NYU-CUBoulder-1 is an ensemble of three pointergenerator transformers, and NYU-CUBoulder-3 is an ensemble of five pointer-generator transformers. The final decision is made by majority voting. In case of a tie, the answer is chosen randomly among the most frequent predictions. Models participating in the ensembles are from different epochs during the same training run.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Submissions and Ensembling Strategies",
"sec_num": "4.5"
},
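Majority voting with random tie-breaking, as described above, can be sketched as follows (an illustrative helper, assuming one predicted string per ensemble member):

```python
# Illustrative majority-vote ensembling over the predictions of several models,
# with ties broken randomly among the most frequent predictions.
import random
from collections import Counter

def majority_vote(predictions, seed=0):
    """predictions: list of predicted strings, one per ensemble member."""
    counts = Counter(predictions)
    best = max(counts.values())
    tied = [form for form, c in counts.items() if c == best]
    return random.Random(seed).choice(tied)

print(majority_vote(["hugged", "hugged", "huged"]))   # 'hugged'
print(majority_vote(["hugged", "huged"]))             # random pick among the tie
```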
{
"text": "As previously stated, all systems are trained on the augmented multitask training sets, and systems trained on languages with less than 1000 training instances were pretrained on the hallucinated datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Submissions and Ensembling Strategies",
"sec_num": "4.5"
},
{
"text": "Our systems for Task 2 consist of a combination of the official baseline system (Jin et al., 2020) and our inflection systems for Task 0. The system is given raw text and a source file with lemmas, and generates the complete paradigm of each lemma. The baseline system finds inflected forms in the text, decides on the number of inflected forms per lemma, and produces pseudo training files for morphological inflection. Any inflections that the system has not found in the raw text are given as test instances. Our inflection model then learns from the files and, subsequently, generates all missing forms. We use the pointer-generator and vanilla transformers as our inflection models.",
"cite_spans": [
{
"start": 80,
"end": 98,
"text": "(Jin et al., 2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task 2: Model description",
"sec_num": "4.6"
},
{
"text": "For Task 2, we use ensembling for all submissions. NYU-CUBoulder-1 is an ensemble of six pointer-generator transformers, NYU-CUBoulder-2 is an ensemble of six vanilla transformers, and NYU-CUBoulder-3 is an ensemble of all twelve models. For all models in both tasks, we use the hyperparameters described in Table 1 . Baselines. This year, several baselines are provided for the task. The first system has also been used as a baseline in previous shared tasks on morphological reinflection (Cotterell et al., , 2018 . It is a non-neural system which first scans the dataset to extract suffix-or prefix-based lemmato-form transformations. Then, based on the morphological tag at inference time, it applies the most frequent suitable transformation to an input lemma to yield the output form . The other two baselines are neural models. One is a transformer (Vaswani et al., 2017; Wu et al., 2020) , and the second one is a hard-attention model (Wu and Cotterell, 2019) , which enforces strict monotonicity and learns a latent alignment while learning to transduce. To account for the low-resource settings for some languages, the organizers also employ two additional methods: constructing a multilingual model trained for all languages belonging to each language family (Kann et al., 2017) , and data augmentation using halluci- nation (Anastasopoulos and Neubig, 2019) . Four model types are trained for each neural architecture: a plain model, a family-multilingual model, a data augmented model, and an augmented familymultilingual model. Overall, there are nine baseline systems for each language. We compare our models to an oracle baseline by choosing the best score over all baseline systems for each language.",
"cite_spans": [
{
"start": 490,
"end": 515,
"text": "(Cotterell et al., , 2018",
"ref_id": null
},
{
"start": 856,
"end": 878,
"text": "(Vaswani et al., 2017;",
"ref_id": "BIBREF27"
},
{
"start": 879,
"end": 895,
"text": "Wu et al., 2020)",
"ref_id": "BIBREF12"
},
{
"start": 943,
"end": 967,
"text": "(Wu and Cotterell, 2019)",
"ref_id": "BIBREF30"
},
{
"start": 1270,
"end": 1289,
"text": "(Kann et al., 2017)",
"ref_id": "BIBREF16"
},
{
"start": 1336,
"end": 1369,
"text": "(Anastasopoulos and Neubig, 2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 308,
"end": 315,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Task 2: Model description",
"sec_num": "4.6"
},
{
"text": "Results. Our results for Task 0 are displayed in Table 2 . All four systems produce relatively similar results. NYU-CUBoulder-3, our five-model ensemble, performs best overall with 88.8% accuracy on average. We further look at the results for low-resource (< 1000 training examples) and highresource (>= 1000 training examples) languages separately. This way, we are able to see the advantage of the pointer-generator transformer in the low-resource setting, where all pointer-generator systems achieve an at least 0.9% higher accuracy than the vanilla transformer model. However, in the setting where training data is abundant, the effect of the copy mechanism vanishes, as NYU-CUBoulder-4 -our only vanilla transformer -achieved the best results for our high-resource languages.",
"cite_spans": [],
"ref_spans": [
{
"start": 49,
"end": 56,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Task 2: Model description",
"sec_num": "4.6"
},
{
"text": "Data. For Task 2, a tokenized Bible in each language is given to the participants, along with a list of lemmas. Participants are required to construct the paradigms for all given lemmas. Baselines. The baseline system for the task is composed of four components, eventually producing morphological paradigms (Jin et al., 2020) . The first three modules perform edit tree (Chrupala, 2020) retrieval, additional lemma retrieval from the corpus, and paradigm size discovery, using distributional information. After the first three steps, pseudo training and test files for morphological inflection are produced. Finally, the non-neural Task 0 baseline system or the neural transducer by Makarov and Clematide (2018) are used to create missing inflected forms.",
"cite_spans": [
{
"start": 308,
"end": 326,
"text": "(Jin et al., 2020)",
"ref_id": "BIBREF15"
},
{
"start": 371,
"end": 387,
"text": "(Chrupala, 2020)",
"ref_id": "BIBREF4"
},
{
"start": 684,
"end": 712,
"text": "Makarov and Clematide (2018)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task 2",
"sec_num": "5.2"
},
{
"text": "Results. Systems for Task 2 are evaluated using macro-averaged best-match accuracy (Jin et al., 2020) . Results are shown in in Table 3 . All three systems produce relatively similar results. NYU-CUBoulder-2, our vanilla transformer ensemble, performed slightly better overall with an average best-match accuracy of 18.02%. Since our system is close to the baseline models, it performs similarly, achieving slightly worse results. For Basque, our all-round ensemble NYU-CUBoulder-2 outperformed both baselines with a best-match accuracy of 00.07%, achieving the highest result in the shared task.",
"cite_spans": [
{
"start": 83,
"end": 101,
"text": "(Jin et al., 2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 128,
"end": 135,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Task 2",
"sec_num": "5.2"
},
{
"text": "As most inflected forms derive their characters from the source lemma, the use of a mechanism for copying characters directly from the lemma has proven to be effective for morphological inflection generation, especially in the low-resource setting (Aharoni and Goldberg, 2017; Makarov et al., 2017) . As all Task 0 datasets are fairly large, we further design a low-resource experiment to investigate the effectiveness of our model.",
"cite_spans": [
{
"start": 248,
"end": 276,
"text": "(Aharoni and Goldberg, 2017;",
"ref_id": "BIBREF0"
},
{
"start": 277,
"end": 298,
"text": "Makarov et al., 2017)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Low-resource Setting",
"sec_num": "5.3"
},
{
"text": "Data. We simulate a low-resource setting by sampling 100 instances from all languages that we already consider low-resource, i.e., all languages with less than 1000 training instances. We then keep their development and test sets unchanged. Overall, we perform this experiment on 21 languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Low-resource Setting",
"sec_num": "5.3"
},
{
"text": "Experimental setup. We train a pointergenerator transformer and a vanilla transformer on the modified datasets to examine the effects of the copy mechanism. We keep the hyperparameters unchanged, i.e., they are as mentioned in Table 1 . We use a majority-vote ensemble consisting of 5 individual models for each architecture.",
"cite_spans": [],
"ref_spans": [
{
"start": 227,
"end": 234,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Low-resource Setting",
"sec_num": "5.3"
},
{
"text": "Baseline. We additionally train the neural transducer by Makarov and Clematide (2018) , which has achieved the best results for the 2018 shared task in the low-resource setting (Cotterell et Model: 1 2 3 4 5 Copy Multitask Train Hallucination Table 5 : System components for the ablation study for Task 0. Each model is a transformer which contains a combination of the following components: copy mechanism, multitask training and hallucination pretraining. 2018). The neural transducer uses hard monotonic attention (Aharoni and Goldberg, 2017) and transduces the lemma into the inflected form by a sequence of explicit edit operations. It is trained with an imitation learning method (Makarov and Clematide, 2018) . We use this model as a reference for the state of the art in the low-resource setting.",
"cite_spans": [
{
"start": 57,
"end": 85,
"text": "Makarov and Clematide (2018)",
"ref_id": "BIBREF21"
},
{
"start": 177,
"end": 190,
"text": "(Cotterell et",
"ref_id": null
},
{
"start": 517,
"end": 545,
"text": "(Aharoni and Goldberg, 2017)",
"ref_id": "BIBREF0"
},
{
"start": 686,
"end": 715,
"text": "(Makarov and Clematide, 2018)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 243,
"end": 250,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Low-resource Setting",
"sec_num": "5.3"
},
{
"text": "Results. As seen in Table 4 , for the low-resource dataset, the pointer-generator transformer clearly outperforms the vanilla transformer by an average accuracy of 4.46%. For some languages, such as Chichicapan Zapotec, the difference is up to 14%. While the neural transducer achieves a higher accuracy, our model performs only 2.45% worse than this state-of-the-art model. 2 We are also able to observe the use of the copy mechanism for copying of OOV characters in the test sets of some languages.",
"cite_spans": [
{
"start": 375,
"end": 376,
"text": "2",
"ref_id": null
}
],
"ref_spans": [
{
"start": 20,
"end": 27,
"text": "Table 4",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Low-resource Setting",
"sec_num": "5.3"
},
{
"text": "Our systems use three components on top of the vanilla transformer: a copy mechanism, multitask training and hallucination pretraining. We further perform an ablation study to measure the contribution of each component to the overall system performance. For this, we additionally train five different systems with different combinations of components. A description of which component is used in which system for this ablation study is shown in Table 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 445,
"end": 452,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ablation Studies",
"sec_num": "6"
},
{
"text": "Copy mechanism. Comparing models 2 and 4, which are both trained on the original dataset, pretrained with hallucination and differ only by the use of the copy mechanism, we are able to see that adding this component slightly improves performance by 0.06\u22120.16%. When comparing models 1 and 3, the copy mechanism decreases performance slightly by 0.3% for the high-resource languages 2 We could probably obtain better results with appropriate hyperparameter tuning. and 0.11% overall, but increases performance for low-resource languages by 0.68%.",
"cite_spans": [
{
"start": 382,
"end": 383,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6.1"
},
{
"text": "Multitask training. Unlike the copy mechanism, multitask training actually consistently decreases the performance of the models. Looking at models 1 and 2, training the pointer-generator transformer on the multitask dataset decreases accuracy by 1.8 \u2212 2.03% for all three language groups. The same happens for the vanilla transformer with an accuracy decrease of 1.67 \u2212 2.32%. A possible explanation are the relatively large training sets provided for the shared task, as this method is more suitable for the low-resource setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6.1"
},
{
"text": "Hallucination pretraining. In order to examine the effect of hallucination pretraining on our submitted models, we now compare the pointergenerator transformers trained on the multitask data with and without hallucination pretraining (models 1 and 5). Hallucination pretraining shows to be helpful: it increases the accuracy on low-resource languages by 1.85%. The performance on the highresource languages is necessarily the same, as only models for low-resource languages are actually pretrained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6.1"
},
{
"text": "We presented the NYU-CUBoulder submissions for SIGMORPHON 2020 Task 0 and Task 2. We developed morphological inflection models, based on a transformer and a new model for the task, a pointer-generator transformer, which is a transformer-analogue of a pointer-generator model. For Task 0, we further added multitask training and hallucination pretraining. For Task 2, we combined our inflection models with additional components from the provided baseline to obtain a fully functional system for unsupervised morphological paradigm completion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We performed an ablation study to examine the effects of all components of our inflection system. Finally, we designed a low-resource experiment to show that using the copy mechanism on top of the vanilla transformer is beneficial if training sets are small, and achieved results close to a stateof-the-art model for low-resource morphological inflection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "https://github.com/AssafSinger94/ sigmorphon-2020-inflection",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the organizers of SIGMOR-PHON 2020 Task 0 and Task 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Morphological inflection generation with hard monotonic attention",
"authors": [
{
"first": "Roee",
"middle": [],
"last": "Aharoni",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2004--2015",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1183"
]
},
"num": null,
"urls": [],
"raw_text": "Roee Aharoni and Yoav Goldberg. 2017. Morphologi- cal inflection generation with hard monotonic atten- tion. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 -August 4, Vol- ume 1: Long Papers, pages 2004-2015.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Character-level language modeling with deeper self-attention",
"authors": [
{
"first": "Rami",
"middle": [],
"last": "Al-Rfou",
"suffix": ""
},
{
"first": "Dokook",
"middle": [],
"last": "Choe",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "Mandy",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
}
],
"year": 2019,
"venue": "The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019",
"volume": "",
"issue": "",
"pages": "3159--3166",
"other_ids": {
"DOI": [
"10.1609/aaai.v33i01.33013159"
]
},
"num": null,
"urls": [],
"raw_text": "Rami Al-Rfou, Dokook Choe, Noah Constant, Mandy Guo, and Llion Jones. 2019. Character-level lan- guage modeling with deeper self-attention. In The Thirty-Third AAAI Conference on Artificial Intelli- gence, AAAI 2019, The Thirty-First Innovative Ap- plications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Hon- olulu, Hawaii, USA, January 27 -February 1, 2019., pages 3159-3166.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Pushing the limits of low-resource morphological inflection",
"authors": [
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "984--996",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1091"
]
},
"num": null,
"urls": [],
"raw_text": "Antonios Anastasopoulos and Graham Neubig. 2019. Pushing the limits of low-resource morphological in- flection. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Nat- ural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 984- 996. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Towards a machine-learning architecture for lexical functional grammar parsing",
"authors": [
{
"first": "Grzegorz",
"middle": [],
"last": "Chrupala",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grzegorz Chrupala. 2020. Towards a machine-learning architecture for lexical functional grammar parsing.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The conllsigmorphon 2018 shared task: Universal morphological reinflection",
"authors": [
{
"first": "Katharina",
"middle": [],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "S",
"middle": [
"J"
],
"last": "Kann",
"suffix": ""
},
{
"first": "Garrett",
"middle": [],
"last": "Mielke",
"suffix": ""
},
{
"first": "Miikka",
"middle": [],
"last": "Nicolai",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Silfverberg",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the CoNLL SIG-MORPHON 2018 Shared Task: Universal Morphological Reinflection",
"volume": "",
"issue": "",
"pages": "1--27",
"other_ids": {
"DOI": [
"10.18653/v1/k18-3001"
]
},
"num": null,
"urls": [],
"raw_text": "McCarthy, Katharina Kann, S. J. Mielke, Garrett Nicolai, Miikka Silfverberg, David Yarowsky, Ja- son Eisner, and Mans Hulden. 2018. The conll- sigmorphon 2018 shared task: Universal morpholog- ical reinflection. In Proceedings of the CoNLL SIG- MORPHON 2018 Shared Task: Universal Morpho- logical Reinflection, Brussels, October 31 -Novem- ber 1, 2018, pages 1-27. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Conll-sigmorphon 2017 shared task: Universal morphological reinflection in 52 languages",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Christo",
"middle": [],
"last": "Kirov",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Sylak-Glassman",
"suffix": ""
},
{
"first": "G\u00e9raldine",
"middle": [],
"last": "Walther",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Vylomova",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Sandra",
"middle": [],
"last": "K\u00fcbler",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection",
"volume": "",
"issue": "",
"pages": "1--30",
"other_ids": {
"DOI": [
"10.18653/v1/K17-2001"
]
},
"num": null,
"urls": [],
"raw_text": "Ryan Cotterell, Christo Kirov, John Sylak-Glassman, G\u00e9raldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sandra K\u00fcbler, David Yarowsky, Jason Eisner, and Mans Hulden. 2017. Conll-sigmorphon 2017 shared task: Universal mor- phological reinflection in 52 languages. In Proceed- ings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection, Van- couver, BC, Canada, August 3-4, 2017, pages 1-30. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Jason Eisner, and Mans Hulden. 2016a. The SIGMORPHON 2016 shared taskmorphological reinflection",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Christo",
"middle": [],
"last": "Kirov",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Sylak-Glassman",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "10--22",
"other_ids": {
"DOI": [
"10.18653/v1/W16-2002"
]
},
"num": null,
"urls": [],
"raw_text": "Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016a. The SIGMORPHON 2016 shared task - morphological reinflection. In Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, Berlin, Germany, August 11, 2016, pages 10-22. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Morphological smoothing and extrapolation of word embeddings",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/p16-1156"
]
},
"num": null,
"urls": [],
"raw_text": "Ryan Cotterell, Hinrich Sch\u00fctze, and Jason Eisner. 2016b. Morphological smoothing and extrapolation of word embeddings. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. The Association for Computer Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/n19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Pa- pers), pages 4171-4186. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Ryan Cotterell, and Nizar Habash. 2020. The paradigm discovery problem",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Erdmann",
"suffix": ""
},
{
"first": "Micha",
"middle": [],
"last": "Elsner",
"suffix": ""
},
{
"first": "Shijie",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Erdmann, Micha Elsner, Shijie Wu, Ryan Cotterell, and Nizar Habash. 2020. The paradigm discovery problem. CoRR, abs/2005.01630.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The sigmorphon 2020 shared task on multilingual grapheme-to-phoneme conversion",
"authors": [
{
"first": "Kyle",
"middle": [],
"last": "Gorman",
"suffix": ""
},
{
"first": "F",
"middle": [
"E"
],
"last": "Lucas",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Ashby",
"suffix": ""
},
{
"first": "Arya",
"middle": [
"D"
],
"last": "Goyzueta",
"suffix": ""
},
{
"first": "Shijie",
"middle": [],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "You",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyle Gorman, Lucas F.E. Ashby, Aaron Goyzueta, Arya D. McCarthy, Shijie Wu, and Daniel You. 2020. The sigmorphon 2020 shared task on multilingual grapheme-to-phoneme conversion. In Proceedings of the 17th SIGMORPHON Workshop on Computa- tional Research in Phonetics, Phonology, and Mor- phology. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {
"DOI": [
"10.1162/neco.1997.9.8.1735"
]
},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "300 polish verbs. Barron's Educational Series",
"authors": [
{
"first": "Klara",
"middle": [],
"last": "Janecki",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klara Janecki. 2000. 300 polish verbs. Barron's Edu- cational Series.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Unsupervised morphological paradigm completion",
"authors": [
{
"first": "Huiming",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Liwei",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Yihui",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Arya",
"middle": [
"D"
],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huiming Jin, Liwei Cai, Yihui Peng, Chen Xia, Arya D. McCarthy, and Katharina Kann. 2020. Unsuper- vised morphological paradigm completion. In Pro- ceedings of the 58th Annual Meeting of the Associa- tion for Computational Linguistics. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "One-shot neural cross-lingual transfer for paradigm completion",
"authors": [
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1993--2003",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1182"
]
},
"num": null,
"urls": [],
"raw_text": "Katharina Kann, Ryan Cotterell, and Hinrich Sch\u00fctze. 2017. One-shot neural cross-lingual transfer for paradigm completion. In Proceedings of the 55th Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers), pages 1993-2003, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The SIGMORPHON 2020 shared task on unsupervised morphological paradigm completion",
"authors": [
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
},
{
"first": "Arya",
"middle": [
"D"
],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "Garrett",
"middle": [],
"last": "Nicolai",
"suffix": ""
},
{
"first": "Mans",
"middle": [],
"last": "Hulden",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katharina Kann, Arya D. McCarthy, Garrett Nico- lai, and Mans Hulden. 2020. The SIGMORPHON 2020 shared task on unsupervised morphological paradigm completion. In Proceedings of the 17th SIGMORPHON Workshop on Computational Re- search in Phonetics, Phonology, and Morphology. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "MED: the LMU system for the SIGMORPHON 2016 shared task on morphological reinflection",
"authors": [
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "62--70",
"other_ids": {
"DOI": [
"10.18653/v1/W16-2010"
]
},
"num": null,
"urls": [],
"raw_text": "Katharina Kann and Hinrich Sch\u00fctze. 2016a. MED: the LMU system for the SIGMORPHON 2016 shared task on morphological reinflection. In Pro- ceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, Berlin, Germany, August 11, 2016, pages 62-70.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Singlemodel encoder-decoder with explicit morphological representation for reinflection",
"authors": [
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katharina Kann and Hinrich Sch\u00fctze. 2016b. Single- model encoder-decoder with explicit morphological representation for reinflection. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 2: Short Papers.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The handbook of morphology",
"authors": [
{
"first": "Aleksandr",
"middle": [
"E"
],
"last": "Kibrik",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "455--476",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aleksandr E. Kibrik. 1998. The handbook of morphol- ogy. In Andrew Spencer and Arnold M. Zwicky, edi- tors, pages 455-476. Oxford: Blackwell Publishers.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "UZH at conll-sigmorphon 2018 shared task on universal morphological reinflection",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Makarov",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Clematide",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the CoNLL SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection",
"volume": "",
"issue": "",
"pages": "69--75",
"other_ids": {
"DOI": [
"10.18653/v1/k18-3008"
]
},
"num": null,
"urls": [],
"raw_text": "Peter Makarov and Simon Clematide. 2018. UZH at conll-sigmorphon 2018 shared task on universal morphological reinflection. In Proceedings of the CoNLL SIGMORPHON 2018 Shared Task: Univer- sal Morphological Reinflection, Brussels, October 31 -November 1, 2018, pages 69-75. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Align and copy: UZH at SIGMORPHON 2017 shared task for morphological reinflection",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Makarov",
"suffix": ""
},
{
"first": "Tatiana",
"middle": [],
"last": "Ruzsics",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Clematide",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Makarov, Tatiana Ruzsics, and Simon Clematide. 2017. Align and copy: UZH at SIGMORPHON 2017 shared task for morphological reinflection. CoRR, abs/1707.01355.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Generating complex morphology for machine translation",
"authors": [
{
"first": "Einat",
"middle": [],
"last": "Minkov",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Hisami",
"middle": [],
"last": "Suzuki",
"suffix": ""
}
],
"year": 2007,
"venue": "ACL 2007, Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Einat Minkov, Kristina Toutanova, and Hisami Suzuki. 2007. Generating complex morphology for machine translation. In ACL 2007, Proceedings of the 45th Annual Meeting of the Association for Computa- tional Linguistics, June 23-30, 2007, Prague, Czech Republic. The Association for Computational Lin- guistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Get to the point: Summarization with pointergenerator networks",
"authors": [
{
"first": "Abigail",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1073--1083",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1099"
]
},
"num": null,
"urls": [],
"raw_text": "Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer- generator networks. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 1073-1083.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A graphbased lattice dependency parser for joint morphological segmentation and syntactic analysis",
"authors": [
{
"first": "Wolfgang",
"middle": [],
"last": "Seeker",
"suffix": ""
},
{
"first": "\u00d6zlem",
"middle": [],
"last": "\u00c7etino\u011flu",
"suffix": ""
}
],
"year": 2015,
"venue": "Trans. Assoc. Comput. Linguistics",
"volume": "3",
"issue": "",
"pages": "359--373",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wolfgang Seeker and\u00d6zlem \u00c7 etinoglu. 2015. A graph- based lattice dependency parser for joint morpholog- ical segmentation and syntactic analysis. Trans. As- soc. Comput. Linguistics, 3:359-373.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "IIT(BHU)-IIITH at conllsigmorphon 2018 shared task on universal morphological reinflection",
"authors": [
{
"first": "Abhishek",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Ganesh",
"middle": [],
"last": "Katrapati",
"suffix": ""
},
{
"first": "Dipti Misra",
"middle": [],
"last": "Sharma",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the CoNLL SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection",
"volume": "",
"issue": "",
"pages": "105--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhishek Sharma, Ganesh Katrapati, and Dipti Misra Sharma. 2018. IIT(BHU)-IIITH at conll- sigmorphon 2018 shared task on universal morphological reinflection. In Proceedings of the CoNLL SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection, Brussels, October 31 -November 1, 2018, pages 105-111.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 Decem- ber 2017, Long Beach, CA, USA, pages 5998-6008.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Grammar as a foreign language",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Terry",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015",
"volume": "",
"issue": "",
"pages": "2773--2781",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey E. Hinton. 2015. Gram- mar as a foreign language. In Advances in Neu- ral Information Processing Systems 28: Annual Conference on Neural Information Processing Sys- tems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 2773-2781.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Miikka Silfverberg, and Mans Hulden. 2020. The SIG-MORPHON 2020 Shared Task 0: Typologically diverse morphological inflection",
"authors": [
{
"first": "Ekaterina",
"middle": [],
"last": "Vylomova",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "White",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Salesky",
"suffix": ""
},
{
"first": "Sabrina",
"middle": [
"J"
],
"last": "Mielke",
"suffix": ""
},
{
"first": "Shijie",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Edoardo",
"middle": [],
"last": "Ponti",
"suffix": ""
},
{
"first": "Rowan",
"middle": [],
"last": "Hall Maudslay",
"suffix": ""
},
{
"first": "Ran",
"middle": [],
"last": "Zmigrod",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Valvoda",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Toldova",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Tyers",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Klyachko",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Yegorov",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Krizhanovsky",
"suffix": ""
},
{
"first": "Paula",
"middle": [],
"last": "Czarnowska",
"suffix": ""
},
{
"first": "Irene",
"middle": [],
"last": "Nikkarinen",
"suffix": ""
},
{
"first": "Andrej",
"middle": [],
"last": "Krizhanovsky",
"suffix": ""
},
{
"first": "Tiago",
"middle": [],
"last": "Pimentel",
"suffix": ""
},
{
"first": "Lucas",
"middle": [],
"last": "Torroba Hennigen",
"suffix": ""
},
{
"first": "Christo",
"middle": [],
"last": "Kirov",
"suffix": ""
},
{
"first": "Garrett",
"middle": [],
"last": "Nicolai",
"suffix": ""
},
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "Hilaria",
"middle": [],
"last": "Cruz",
"suffix": ""
},
{
"first": "Eleanor",
"middle": [],
"last": "Chodroff",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Miikka",
"middle": [],
"last": "Silfverberg",
"suffix": ""
},
{
"first": "Mans",
"middle": [],
"last": "Hulden",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ekaterina Vylomova, Jennifer White, Elizabeth Salesky, Sabrina J. Mielke, Shijie Wu, Edoardo Ponti, Rowan Hall Maudslay, Ran Zmigrod, Joseph Valvoda, Svetlana Toldova, Francis Tyers, Elena Klyachko, Ilya Yegorov, Natalia Krizhanovsky, Paula Czarnowska, Irene Nikkarinen, Andrej Krizhanovsky, Tiago Pimentel, Lucas Torroba Hennigen, Christo Kirov, Garrett Nicolai, Ad- ina Williams, Antonios Anastasopoulos, Hilaria Cruz, Eleanor Chodroff, Ryan Cotterell, Miikka Silfverberg, and Mans Hulden. 2020. The SIG- MORPHON 2020 Shared Task 0: Typologically diverse morphological inflection. In Proceedings of the 17th SIGMORPHON Workshop on Compu- tational Research in Phonetics, Phonology, and Morphology.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Exact hard monotonic attention for character-level transduction",
"authors": [
{
"first": "Shijie",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019",
"volume": "1",
"issue": "",
"pages": "1530--1537",
"other_ids": {
"DOI": [
"10.18653/v1/p19-1148"
]
},
"num": null,
"urls": [],
"raw_text": "Shijie Wu and Ryan Cotterell. 2019. Exact hard mono- tonic attention for character-level transduction. In Proceedings of the 57th Conference of the Asso- ciation for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 1530-1537. Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Improving grammatical error correction via pre-training a copy-augmented architecture with unlabeled data",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kewei",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Ruoyu",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Jingming",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019",
"volume": "1",
"issue": "",
"pages": "156--165",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Zhao, Liang Wang, Kewei Shen, Ruoyu Jia, and Jingming Liu. 2019. Improving grammatical er- ror correction via pre-training a copy-augmented ar- chitecture with unlabeled data. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 156-165.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Morphological inflection examples in English. A lemma and features are mapped to an inflected form."
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "The dataset for Task 0 covers 90 languages in total: 45 development languages and 45 surprise languages. For details on the official dataset please refer toVylomova et al. (2020)."
},
"TABREF1": {
"content": "<table><tr><td>Hyperparameter</td><td>Value</td></tr><tr><td>Embedding dimension</td><td>256</td></tr><tr><td>Encoder layers</td><td>4</td></tr><tr><td>Decoder layers</td><td>4</td></tr><tr><td>Encoder hidden dimension</td><td>1024</td></tr><tr><td>Decoder hidden dimension</td><td>1024</td></tr><tr><td>Attention heads</td><td>4</td></tr></table>",
"num": null,
"type_str": "table",
"html": null,
"text": "The new training set fully utilizes the connections between different"
},
"TABREF2": {
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null,
"text": "The hyperparameters used in our inflection models for both Task 0 and Task 2."
},
"TABREF4": {
"content": "<table><tr><td>: Macro-averaged results over all languages</td></tr><tr><td>on the official development and test sets for Task 0.</td></tr><tr><td>Low=languages with less than 1000 train instances,</td></tr><tr><td>Other=all other languages, All=all languages.</td></tr></table>",
"num": null,
"type_str": "table",
"html": null,
"text": ""
},
"TABREF6": {
"content": "<table><tr><td>evaluation only, and do not have development sets.</td></tr><tr><td>The development languages are: Maltese, Persian,</td></tr><tr><td>Portuguese, Russian, Swedish. The test languages</td></tr><tr><td>are: Basque, Bulgarian, English, Finnish, German,</td></tr><tr><td>Kannada, Navajo, Spanish and Turkish.</td></tr></table>",
"num": null,
"type_str": "table",
"html": null,
"text": "Results for all test languages on the official test sets for Task 2."
},
"TABREF8": {
"content": "<table><tr><td>: Results on the official development data</td></tr><tr><td>for our low-resource experiment. Trm=Vanilla trans-</td></tr><tr><td>former, Trm-PG=Pointer-generator transformer, Base-</td></tr><tr><td>line=neural transducer by Makarov and Clematide</td></tr><tr><td>(2018).</td></tr></table>",
"num": null,
"type_str": "table",
"html": null,
"text": ""
},
"TABREF9": {
"content": "<table><tr><td>Model:</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td></tr><tr><td/><td colspan=\"3\">Development Set</td><td/><td/></tr><tr><td>Low</td><td>88.20</td><td/><td/><td/><td/></tr></table>",
"num": null,
"type_str": "table",
"html": null,
"text": "90.00 87.52 89.84 86.35 Other 90.63 92.66 90.93 92.60 90.63 All 90.02 92.04 90.13 91.96 89.63"
},
"TABREF10": {
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null,
"text": "Ablation study for Task 0; development set results, averaged over all languages. Low=languages with less than 1000 train instances, Other=all other languages, All=all languages."
}
}
}
}