{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:31:34.114256Z"
},
"title": "Training Strategies for Neural Multilingual Morphological Inflection",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Ek",
"suffix": "",
"affiliation": {
"laboratory": "Centre for Linguistic Theory and Studies in Probability",
"institution": "University of Gothenburg",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Jean-Philippe",
"middle": [],
"last": "Bernardy",
"suffix": "",
"affiliation": {
"laboratory": "Centre for Linguistic Theory and Studies in Probability",
"institution": "University of Gothenburg",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents the submission of team GUCLASP to SIGMORPHON 2021 Shared Task on Generalization in Morphological Inflection Generation. We develop a multilingual model for Morphological Inflection and primarily focus on improving the model by using various training strategies to improve accuracy and generalization across languages.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents the submission of team GUCLASP to SIGMORPHON 2021 Shared Task on Generalization in Morphological Inflection Generation. We develop a multilingual model for Morphological Inflection and primarily focus on improving the model by using various training strategies to improve accuracy and generalization across languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Morphological inflection is the task of transforming a lemma to its inflected form given a set of grammatical features (such as tense or person). Different languages have different strategies, or morphological processes such as affixation, circumfixation, or reduplication among others (Haspelmath and Sims, 2013) . These are all ways to make a lemma express some grammatical features. One way to characterize how languages express grammatical features is a spectrum of morphological typology ranging from agglutinative to isolating. In agglutinative languages, grammatical features are encoded as bound morphemes attached to the lemma, while in isolating languages each grammatical feature is represented as a lemma. Thus, languages in different parts of this spectrum will have different strategies for expressing information given by grammatical features.",
"cite_spans": [
{
"start": 286,
"end": 313,
"text": "(Haspelmath and Sims, 2013)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In recent years, statistical and neural models have been performing well on the task of morphological inflection (Smit et al., 2014; Kann and Sch\u00fctze, 2016; Makarov et al., 2017; Sharma et al., 2018) . We follow this tradition and implement a neural multilingual model for morphological inflection. In a multilingual system, a single model is developed to handle input from several different languages: we can give the model either a word in Evenk or Russian and it perform inflection. This is a challenging problem for several reasons. For many languages resources are scarce, and a multilingual system must balance the training signals from both high-resource and low-resource languages such that the model learns something about both. Additionally, different languages employ different morphological processes to inflect words. In addition to languages employing a variety of different morphological processes, different languages use different scripts (for example Arabic, Latin, or Cyrillic) , which can make it hard to transfer knowledge about one language to another. To account for these factors a model must learn to recognize the different morphological processes, the associated grammatical features, the script used, and map them to languages.",
"cite_spans": [
{
"start": 113,
"end": 132,
"text": "(Smit et al., 2014;",
"ref_id": "BIBREF17"
},
{
"start": 133,
"end": 156,
"text": "Kann and Sch\u00fctze, 2016;",
"ref_id": "BIBREF7"
},
{
"start": 157,
"end": 178,
"text": "Makarov et al., 2017;",
"ref_id": "BIBREF11"
},
{
"start": 179,
"end": 199,
"text": "Sharma et al., 2018)",
"ref_id": "BIBREF15"
},
{
"start": 969,
"end": 996,
"text": "Arabic, Latin, or Cyrillic)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we investigate how far these issues can be tackled using different training strategies, as opposed to focusing on model design. Of course, in the end, an optimal system will be a combination of a good model design and good training strategies. We employ an LSTM encoder-decoder architecture with attention, based on the architecture of Anastasopoulos and Neubig (2019), as our base model and consider the following training strategies:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Curriculum learning: We tune the order in which the examples are presented to the model based on the loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Multi-task learning: We predict the formal operations required to transform a lemma into its inflected form.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Language-wise label smoothing: We smooth the loss function to not penalize the model as much when it predicts a character from the correct language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Scheduled sampling: We use a probability distribution to determine whether to use the previous output or the gold as input when decoding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The data released cover 38 languages of varying typology, genealogy, grammatical features, scripts, and morphological processes. The data for the different languages vary greatly in size, from 138 examples (Ludic) to 100310 (Turkish). For the low-resourced languages 1 we extend the original dataset with hallucinated data (Anastasopoulos and Neubig, 2019) to train on. With respect to the work of Anastasopoulos and Neubig (2019), we make the following changes. We identify all subsequences of length 3 or more that overlap in the lemma and inflection. We then randomly sample one of them, denoted R, as the sequence to be replaced. For each language, we compile a set C L containing all (1,2,3,4)-grams in the language. We construct a string G to replace R with by uniformly sampling n-grams from C L and concatenating them G = cat(g 0 , ..., g m ) until we have a sequence whose length satisfy:",
"cite_spans": [
{
"start": 323,
"end": 356,
"text": "(Anastasopoulos and Neubig, 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "|R| \u2212 2 \u2264 |G| \u2264 |R| + 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "Additionally, we do not consider subsequences which include a phonological symbol. 2 A schematic of the hallucination process is shown in Figure 1 . Sampling n-grams instead of individual characters allow us to retain some of the orthographical information present in the language. We generate a set of 10 000 hallucinated examples for each of the low-resource languages.",
"cite_spans": [
{
"start": 83,
"end": 84,
"text": "2",
"ref_id": null
}
],
"ref_spans": [
{
"start": 138,
"end": 146,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
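To make the hallucination procedure above concrete, here is a minimal sketch in Python. It is not the authors' code; the function and parameter names (`shared_subsequences`, `hallucinate`, `ngram_set`) are illustrative, and `ngram_set` stands in for the language-specific set C_L of (1,2,3,4)-grams.

```python
import random

def shared_subsequences(lemma, inflection, min_len=3):
    """All substrings of the lemma (length >= min_len) that also occur in the inflection."""
    out = set()
    for i in range(len(lemma)):
        for j in range(i + min_len, len(lemma) + 1):
            if lemma[i:j] in inflection:
                out.add(lemma[i:j])
    return sorted(out)

def hallucinate(lemma, inflection, ngram_set, max_tries=50):
    candidates = shared_subsequences(lemma, inflection)
    if not candidates:
        return lemma, inflection             # nothing suitable to replace
    r = random.choice(candidates)            # the subsequence R to be replaced
    for _ in range(max_tries):
        g = ""
        while len(g) < len(r) - 2:
            g += random.choice(ngram_set)    # concatenate uniformly sampled n-grams
        if len(r) - 2 <= len(g) <= len(r) + 2:
            return lemma.replace(r, g), inflection.replace(r, g)
    return lemma, inflection                 # give up if the length bound is never met

# Toy usage; a real ngram_set would be compiled from all (1,2,3,4)-grams of the language.
print(hallucinate("valatas", "valate", ["ta", "ka", "l", "at", "va"]))
```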
{
"text": "In this section, the multilingual model and training strategies used are presented. 3 We employ a single model with shared parameters across all languages.",
"cite_spans": [
{
"start": 84,
"end": 85,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "To account for different languages in our model we prepend a language embedding to the input (similarly to Johnson et al. (2017) ; Raffel et al. (2019) ). To model inflection, we employ an encoder-decoder architecture with attention. The first layer in the model is comprised of an LSTM, which produces a contextual representation for each character in the lemma. We encode the tags using a self-attention module (equivalent to a 1-head transformer layer) (Vaswani et al., 2017) . This layer does not use any positional data: indeed the order of the tags does not matter (Anastasopoulos and Neubig, 2019) .",
"cite_spans": [
{
"start": 107,
"end": 128,
"text": "Johnson et al. (2017)",
"ref_id": "BIBREF6"
},
{
"start": 131,
"end": 151,
"text": "Raffel et al. (2019)",
"ref_id": "BIBREF14"
},
{
"start": 456,
"end": 478,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF18"
},
{
"start": 571,
"end": 604,
"text": "(Anastasopoulos and Neubig, 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "To generate inflections, we use an LSTM decoder with two attention modules. One attending to the lemma and one to the tags. For the lemma attention, we use a content-based attention module (Graves et al., 2014; Karunaratne et al., 2021) which uses cosine similarity as its scoring method. However, we found that only using content-based attention causes attention to be too focused on a single character, and mostly ignores contextual cues relevant for the generation.",
"cite_spans": [
{
"start": 189,
"end": 210,
"text": "(Graves et al., 2014;",
"ref_id": "BIBREF3"
},
{
"start": 211,
"end": 236,
"text": "Karunaratne et al., 2021)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "To remedy this, we combine the content-based attention with additive attention as follows, where superscript cb indicate content-based attention, add additive attention and k the key:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "a add = softmax(w tanh(W a k + W b h)) att add = T t=1 a add t h add t a cb = softmax(cos(k, h)) att cb = T t=1 a cb t h cb t att = W[att cb ; att add ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3.1"
},
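The following PyTorch sketch illustrates the combined attention above. It is not the authors' module; the class name and tensor shapes are illustrative, with `key` standing for the decoder state k and `h` for the encoder hidden states.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CombinedAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.W_a = nn.Linear(dim, dim, bias=False)   # additive attention: key projection
        self.W_b = nn.Linear(dim, dim, bias=False)   # additive attention: state projection
        self.w = nn.Linear(dim, 1, bias=False)       # additive attention: scoring vector
        self.W_out = nn.Linear(2 * dim, dim)         # combine [att_cb; att_add]

    def forward(self, key, h):
        # key: (batch, dim), h: (batch, T, dim)
        add_scores = self.w(torch.tanh(self.W_a(key).unsqueeze(1) + self.W_b(h))).squeeze(-1)
        a_add = F.softmax(add_scores, dim=-1)                   # (batch, T)
        att_add = torch.bmm(a_add.unsqueeze(1), h).squeeze(1)   # weighted sum of states

        cb_scores = F.cosine_similarity(key.unsqueeze(1), h, dim=-1)
        a_cb = F.softmax(cb_scores, dim=-1)
        att_cb = torch.bmm(a_cb.unsqueeze(1), h).squeeze(1)

        return self.W_out(torch.cat([att_cb, att_add], dim=-1))

att = CombinedAttention(dim=256)
out = att(torch.randn(4, 256), torch.randn(4, 10, 256))   # -> (4, 256)
```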
{
"text": "In addition to combining content-based attention and additive attention we also employ regularization on the attention modules such that for each decoding step, the attention is encouraged to distribute the attention weights a uniformly across 3 Our code is available here: https://github.com/adamlek/ multilingual-morphological-inflection/ the lemma and tag hidden states (Anastasopoulos and Neubig, 2019; Cohn et al., 2016) . We employ additive attention for the tags.",
"cite_spans": [
{
"start": 244,
"end": 245,
"text": "3",
"ref_id": null
},
{
"start": 373,
"end": 406,
"text": "(Anastasopoulos and Neubig, 2019;",
"ref_id": "BIBREF0"
},
{
"start": 407,
"end": 425,
"text": "Cohn et al., 2016)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3.1"
},
{
"text": "In each decoding step, we pass the gold or predicted character embedding to the decoding LSTM. We then take the output as the key and calculate attention over the lemma and tags. This representation is then passed to a two-layer perceptron with ReLU activations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3.1"
},
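A rough sketch of a single decoding step as described above. It reuses the CombinedAttention class from the previous sketch (for the tags as well, as a stand-in for the additive tag attention), assumes equal embedding and hidden dimensions for brevity, and is illustrative rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class DecoderStep(nn.Module):
    def __init__(self, vocab_size, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTMCell(dim, dim)
        self.lemma_att = CombinedAttention(dim)      # from the sketch above
        self.tag_att = CombinedAttention(dim)        # stand-in for additive tag attention
        self.out = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(),
                                 nn.Linear(dim, vocab_size))   # two-layer MLP over characters

    def forward(self, prev_char, state, lemma_h, tag_h):
        # prev_char: gold or predicted character ids, state: (h, c) of the decoder LSTM
        h, c = self.lstm(self.embed(prev_char), state)
        ctx = torch.cat([h, self.lemma_att(h, lemma_h), self.tag_att(h, tag_h)], dim=-1)
        return self.out(ctx), (h, c)                 # character logits, new decoder state
```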
{
"text": "Instead of predicting the characters in the inflected form, one can also predict the Levenshtein operations needed to transform the lemma into the inflected form; as shown by Makarov et al. (2017) .",
"cite_spans": [
{
"start": 175,
"end": 196,
"text": "Makarov et al. (2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-task learning",
"sec_num": "3.2"
},
{
"text": "A benefit of considering operations instead of characters needed to transform a lemma to its inflected form is that the script used is less of a factor, since by considering the operations only we abstract away from the script used. We find that making both predictions, as a multi-task setup, improves the performance of the system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-task learning",
"sec_num": "3.2"
},
{
"text": "The multi-task setup operates on the character level, thus for each contextual representation of a character we want to predict an operation among deletion (del), addition/insertion (add), substitution (sub) and copy (cp). Because add and del change the length, we predict two sets of operations, the lemma-reductions and the lemmaadditions. To illustrate, the Levenshtein operations for the word pair (valatas, ei valate) in Veps (uralic language related to Finnish) is shown in Figure 2 . In our setup, the task of lemma-reductions is performed by predicting the cp, del, and sub operations based on the encoded hidden states in the lemma. The task of lemma-additions then is performed by predicting the cp, add, and sub operations on the characters generated by the decoder. We use a single two-layer perceptron with ReLU activation to predict both lemma-reduction and lemma-additions. 4",
"cite_spans": [],
"ref_spans": [
{
"start": 480,
"end": 488,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Multi-task learning",
"sec_num": "3.2"
},
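The auxiliary targets can be derived from an edit-distance alignment of lemma and inflection. The sketch below (not the authors' code) uses Python's difflib alignment as a stand-in for a Levenshtein alignment to produce the two label sequences: lemma-reductions (cp/sub/del over lemma characters) and lemma-additions (cp/sub/add over inflection characters).

```python
from difflib import SequenceMatcher

def operation_labels(lemma, inflection):
    reductions, additions = [], []
    for op, i1, i2, j1, j2 in SequenceMatcher(None, lemma, inflection).get_opcodes():
        if op == "equal":
            reductions += ["cp"] * (i2 - i1)
            additions += ["cp"] * (j2 - j1)
        elif op == "replace":
            reductions += ["sub"] * (i2 - i1)
            additions += ["sub"] * (j2 - j1)
        elif op == "delete":
            reductions += ["del"] * (i2 - i1)
        elif op == "insert":
            additions += ["add"] * (j2 - j1)
    return reductions, additions

# e.g. the Veps pair from Figure 2:
print(operation_labels("valatas", "ei valate"))
```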
{
"text": "We employ a competence-based curriculum learning strategy (Liu et al., 2020; Platanios et al., 2019) . A competence curriculum learning strategy constructs a learning curriculum based on the competence of a model, and present examples which the model is deemed to be able to handle. The goal of this strategy is for the model to transfer or apply the information it acquires from the easy examples to the hard examples.",
"cite_spans": [
{
"start": 58,
"end": 76,
"text": "(Liu et al., 2020;",
"ref_id": "BIBREF9"
},
{
"start": 77,
"end": 100,
"text": "Platanios et al., 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Curriculum Learning",
"sec_num": "3.3"
},
{
"text": "To estimate an initial difficulty for an example we consider the character unigram log probability of the lemma and inflection. For a word (either the lemma or inflection) w = c 0 , ..., c K , the unigram log probability is given by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Curriculum Learning",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "log(P U (w)) = K k=0 log(p(c k ))",
"eq_num": "(1)"
}
],
"section": "Curriculum Learning",
"sec_num": "3.3"
},
{
"text": "To get a score for a lemma and inflection pair (henceforth (x, y)), we calculate it as the sum of the log probabilities of x and y:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Curriculum Learning",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "score(x, y) = P U (x) + P U (y)",
"eq_num": "(2)"
}
],
"section": "Curriculum Learning",
"sec_num": "3.3"
},
{
"text": "Note that here we do not normalize by the length of the inflection and lemma. This is because an additional factor in how difficult an example should be considered is its length, i.e. longer words are harder to model. We then sort the examples and use a cumulative density function (CDF) to map the unigram probabilities to a score in the range (0, 1], we denote the training set of pairs and their scores ((x, y), s) 0 , . . . , ((x, y), s) m , where m indicate the number of examples in the dataset, as D.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Curriculum Learning",
"sec_num": "3.3"
},
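A compact sketch of the initial difficulty scoring (illustrative names, not the authors' code): unigram log-probabilities are estimated from the training pairs, summed for lemma and inflection without length normalization, and mapped to (0, 1] by the empirical CDF (rank divided by the number of examples), with the easiest examples receiving the lowest scores.

```python
import math
from collections import Counter

def unigram_scores(pairs):
    counts = Counter(c for x, y in pairs for c in x + y)
    total = sum(counts.values())
    logp = {c: math.log(n / total) for c, n in counts.items()}

    def score(x, y):                       # log P_U(x) + log P_U(y)
        return sum(logp[c] for c in x) + sum(logp[c] for c in y)

    ranked = sorted(pairs, key=lambda p: score(*p), reverse=True)   # most probable (easiest) first
    m = len(ranked)
    return [((x, y), (i + 1) / m) for i, (x, y) in enumerate(ranked)]   # CDF score in (0, 1]

D = unigram_scores([("katt", "katten"), ("hus", "huset"), ("bil", "bilar")])
print(D)
```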
{
"text": "To select appropriate training examples from D we must estimate the competence c of our model. The competence of the model is estimated by a function of the number of training steps t taken:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Curriculum Learning",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c(t) = min 1, t 1 \u2212 c(1) 2 c(1) 2 + c(1) 2",
"eq_num": "(3)"
}
],
"section": "Curriculum Learning",
"sec_num": "3.3"
},
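For reference, a tiny sketch of this competence schedule, assuming the square-root competence function of Platanios et al. (2019) with T the number of steps to full competence (60 000 here) and an initial competence c(1); the value of c(1) below is an assumed placeholder, not taken from the paper.

```python
import math

def competence(t, T=60_000, c1=0.01):
    # Grows from c1 at t=0 to 1.0 at t=T, then stays at 1.0.
    return min(1.0, math.sqrt(t * (1 - c1 ** 2) / T + c1 ** 2))

print(competence(1), competence(30_000), competence(60_000), competence(100_000))
```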
{
"text": "During training, we employ a probabilistic approach to constructing batches from our corpus, we uniformly draw samples ((x, y), s) from the training set D such that the score s is lower than the model competence c(t). This ensures that for each training step, we only consider examples that the model can handle according to our curriculum schedule.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Curriculum Learning",
"sec_num": "3.3"
},
{
"text": "However, just because an example has low unigram probability doesn't ensure that the example is easy, as the example may contain frequent characters but also include rare morphological processes (or rare combinations of Levenshtein operations), to account for this we recompute the example scores at each training step. We sort the examples in each training step according to the decoding loss, then assign a new score to the examples in the range (0, 1] using a CDF function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Curriculum Learning",
"sec_num": "3.3"
},
{
"text": "We also have to take into account that as the model competence grows, \"easy\" (low loss or high unigram probability) examples will be included more often in the batches. To ensure that the model learns more from examples whose difficulty is close to its competence we compute a weight w for each example in the batch. We then scale the loss by dividing the score s by the model competence at the current time-step: weighted loss(x, y) = loss(x, y) \u00d7 score(x, y) c(t) (4) Because the value of our model competence is tied to a specific number of training steps, we develop a probabilistic strategy for sampling batches when the model has reached full competence. When the model reaches full competence we construct language weights by dividing the number of examples in a language by the total number of examples in the dataset and taking the inverse distribution as the language weights. Thus for each language, we get a value in the range (0, 1] where low-resource languages receive a higher weight. To construct a batch we continue by sampling examples, but now we only add an example if r \u223c \u03c1, where \u03c1 is a uniform Bernoulli distribution, is less than the language weight of the example. This strategy allows us to continue training our model after reaching full competence without neglecting the low-resource languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Curriculum Learning",
"sec_num": "3.3"
},
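A sketch of the two batching details described above: the loss weighting of equation (4), and the post-competence sampling that keeps an example only if a uniform draw falls below its language weight. This is illustrative, not the authors' code; in particular, normalizing the inverse counts by their maximum to land in (0, 1] is an assumption about how the inverse distribution is scaled.

```python
import random
from collections import Counter

def weighted_loss(loss, score, c_t):
    return loss * score / c_t                          # equation (4): scale loss by score / competence

def language_weights(langs):
    counts = Counter(langs)
    total = sum(counts.values())
    inv = {l: total / n for l, n in counts.items()}    # inverse of each language's data share
    z = max(inv.values())
    return {l: v / z for l, v in inv.items()}          # normalized into (0, 1]; rarest language gets 1.0

def sample_batch(examples, weights, batch_size):
    batch = []
    while len(batch) < batch_size:
        ex = random.choice(examples)
        if random.random() < weights[ex["lang"]]:      # r ~ Uniform(0, 1)
            batch.append(ex)
    return batch

ws = language_weights(["tur"] * 8 + ["lud"] * 2)
print(ws, len(sample_batch([{"lang": "tur"}, {"lang": "lud"}], ws, 4)))
```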
{
"text": "In total we train the model for 240 000 training steps, and consider the model to be fully competent after 60 000 training steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Curriculum Learning",
"sec_num": "3.3"
},
{
"text": "Commonly, when training an encoder-decoder RNN model, the input at time-step t is not the output from the decoder at t \u2212 1, but rather the gold data. It has been shown that models trained with this strategy may suffer at inference time. Indeed, they have never been exposed to a partially incorrect input in the training phase. We address this issue using scheduled sampling (Bengio et al., 2015) .",
"cite_spans": [
{
"start": 375,
"end": 396,
"text": "(Bengio et al., 2015)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Scheduled Sampling",
"sec_num": "3.4"
},
{
"text": "We implement a simple schedule for calculating the probability of using the gold characters or the model's prediction by using a global sample probability variable which is updated at each training step. We start with a probability \u03c1 of 100% to take the gold. At each training step, we decrease \u03c1 by 1 totalsteps . For each character, we take a sample from the Bernoulli distribution of parameter \u03c1 to determine the decision to make.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scheduled Sampling",
"sec_num": "3.4"
},
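A minimal sketch of this schedule (illustrative names): rho starts at 1.0, decreases by 1/total_steps each training step, and a per-character Bernoulli draw with parameter rho decides whether to feed the gold character or the model's previous prediction.

```python
import random

class ScheduledSampler:
    def __init__(self, total_steps):
        self.total_steps = total_steps
        self.rho = 1.0

    def step(self):                          # call once per training step
        self.rho = max(0.0, self.rho - 1.0 / self.total_steps)

    def use_gold(self):                      # call once per character during decoding
        return random.random() < self.rho

sampler = ScheduledSampler(total_steps=240_000)
for _ in range(120_000):
    sampler.step()
print(sampler.rho, sampler.use_gold())       # rho is ~0.5 halfway through training
```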
{
"text": "We use cross-entropy loss for the character generation loss and for the operation predictions tasks. Our final loss function consists of the character generation loss, the lemma-reduction, and the lemmaaddition losses summed. We use a cosine annealing learning rate scheduler (Loshchilov and Hutter, 2017) , gradually decreasing the learning rate. The hyperparameters used for training are presented in Table 1 : Hyperparameters used. As we use a probabilistic approach to training we report number of training steps rather than epochs. In total, the number of training steps we take correspond to about 35 epochs.",
"cite_spans": [
{
"start": 276,
"end": 305,
"text": "(Loshchilov and Hutter, 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 403,
"end": 410,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Training",
"sec_num": "3.5"
},
{
"text": "Language-wise Label smoothing We use language-wise label smoothing to calculate the loss. This means that we remove a constant \u03b1 from the probability of the correct character and distribute the same \u03b1 uniformly across the probabilities of the characters belonging to the language of the word. The motivation for doing label smoothing this way is that we know that all incorrect character predictions are not equally incorrect. the Latin or Cyrillic alphabet. A difficulty is that each language potentially uses a different set of characters. We calculate this set using the training set only-so it is important to make \u03b1 not too large, so that there is not a too big difference between characters seen in the training set and those not seen. Indeed, if there were, the model might completely exclude unseen characters from its test-time predictions. (We found that \u03b1 = 2.5% is a good value.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.5"
},
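A PyTorch sketch of language-wise label smoothing (illustrative, not the authors' implementation): alpha is taken from the gold character's probability mass and spread uniformly over the characters belonging to the word's language, given as a 0/1 vocabulary mask built from the training set, rather than over the whole vocabulary; alpha=0.025 matches the 2.5% reported above.

```python
import torch
import torch.nn.functional as F

def language_label_smoothing_loss(logits, target, lang_mask, alpha=0.025):
    # logits: (batch, vocab); target: (batch,); lang_mask: (batch, vocab), 1.0 for
    # characters belonging to the word's language.
    n_lang = lang_mask.sum(dim=-1, keepdim=True)
    smooth = alpha * lang_mask / n_lang                     # alpha spread over the language's characters
    gold_mass = torch.full_like(target, 1 - alpha, dtype=smooth.dtype).unsqueeze(-1)
    dist = smooth.scatter_add(-1, target.unsqueeze(-1), gold_mass)
    return -(dist * F.log_softmax(logits, dim=-1)).sum(-1).mean()

vocab, batch = 10, 4
logits = torch.randn(batch, vocab)
target = torch.randint(0, vocab, (batch,))
mask = torch.zeros(batch, vocab)
mask[:, :6] = 1.0    # pretend characters 0..5 belong to the word's language
print(language_label_smoothing_loss(logits, target, mask))
```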
{
"text": "The results from our system using the four straining strategies presented earlier are presented in Table 2 . Each language is evaluated by two metrics, exact match, and average Levenshtein distance. The average Levenshtein distance is on average, how many operations are required to transform the system's guess to the gold inflected form. One challenging aspect of this dataset for our model is balancing the information the model learns about low-and high-resource languages. We plot the accuracy the model achieved against the data available for that language in Figure 3 . We note that for all languages with roughly more than 30 000 examples our model performs well, achieving around 98% accuracy. When we consider languages that have around 10 000 natural examples and no hallucinated data the accuracy drops closer to round 50%. For the languages with hallucinated data, we would expect this trend to continue as the data is synthetic and does not take into account orthographic information as natural language examples do. That is, when constructing hallucinated examples, orthography is taken into account only indirectly because we consider n-grams instead of characters when finding the replacement sequence. However, we find that for many of the languages with hallucinated data the exact match accuracy is above 50%, but varies a lot depending on the language.",
"cite_spans": [],
"ref_spans": [
{
"start": 99,
"end": 106,
"text": "Table 2",
"ref_id": "TABREF4"
},
{
"start": 566,
"end": 574,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "Two of the worst languages in our model is Classical Syriac (syc) and Xibe (sjo). An issue with Classical Syriac is that the language uses a unique script, the Syriac abjad, which makes it difficult for the model to transfer information about operations and common character combinations/transformations into Classical Syriac from related languages such as Modern Standard Arabic (spoken in the region). For Xibe there is a similar story: it uses the Sibe alphabet which is a variant of Manchu script, which does not occur elsewhere in our dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "Our model process many languages simultaneously, thus it would be encouraging if the model also was able to find similarities between languages. To explore this we investigate whether the language embeddings learned by the model produce clusters of language families. A t-SNE plot of the language embeddings is shown in Figure 4 . The plot shows that the model can find some family resemblances between languages. For example, we have a Uralic cluster consisting of the languages Veps (vep), Olonets (olo), and Karelian (krl) which are all spoken in a region around Russia and Finland. However, Ludic (lud) and V\u00f5ro (vro) are not captured in this cluster, yet they are spoken in the same region.",
"cite_spans": [],
"ref_spans": [
{
"start": 320,
"end": 328,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Language similarities",
"sec_num": "5"
},
{
"text": "We can see that the model seem to separate language families somewhat depending on the script used. The Afro-Asiatic languages are split into two smaller clusters, one cluster containing the languages that use Standard Arabic (ara, afv and arz) script and one cluster that use Amharic and Hebrew (amh, heb) script. As mentioned earlier Classical Syriac uses its another script and seems to consequently appear in another part of the map.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language similarities",
"sec_num": "5"
},
{
"text": "In general, our model's language embeddings appear to learn some relationships between languages, but certainly not all of them. However, that we find some patterns in encouraging for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language similarities",
"sec_num": "5"
},
{
"text": "We note that during the development all of our training strategies showed a stronger performance for the task, except one: scheduled sampling. We hypothesize this is because the low-resource languages benefit from using the gold as input when predicting the next character, while high-resource languages do not need this as much. The model has seen more examples from high-resource languages and thus can model them better, which makes using the previous hidden state more reliable as input when predicting the next token. Indeed, the scheduled sampling degrade the overall performance by 3.04 percentage points, increasing our total average accuracy to 83.3 percentage points, primarily affecting low-resource languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scheduled Sampling",
"sec_num": "6"
},
{
"text": "We have presented a single multilingual model for morphological inflection in 38 languages enhanced with different training strategies: curriculum learning, multi-task learning, scheduled sampling and language-wise label smoothing. The results indicate that our model to some extent capture similarities between the input languages, however, languages that use different scripts appears problematic. A solution to this would be to employ transliteration (Murikinati et al., 2020) .",
"cite_spans": [
{
"start": 454,
"end": 479,
"text": "(Murikinati et al., 2020)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future Work",
"sec_num": "7"
},
{
"text": "In future work, we plan on exploring curriculum learning in more detail and move away from estimating the competence of our model linearly, and instead, estimate the competence using the accuracy on the batches. Another interesting line of work here is instead of scoring the examples by model loss alone, but combine it with insights from language acquisition and teaching, such as sorting lemmas based on their frequency in a corpus (Ionin and Wexler, 2002; Slabakova, 2010) .",
"cite_spans": [
{
"start": 435,
"end": 459,
"text": "(Ionin and Wexler, 2002;",
"ref_id": "BIBREF5"
},
{
"start": 460,
"end": 476,
"text": "Slabakova, 2010)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future Work",
"sec_num": "7"
},
{
"text": "We also plan to investigate language-wise label smoothing more closely, specifically how the value of \u03b1 should be fine-tuned with respect to the number of characters and languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future Work",
"sec_num": "7"
},
{
"text": "We consider languages with less than 10 000 training examples as low-resource in this paper.2 Thus inFigure 1a subsequence of length 2 is selected as the sequence to be replaced, since the larger subsequences would include the phonological symbol :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In the future, we'd like to experiment with including the representations of tags in the input to the operation classifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The research reported in this paper was supported by grant 2014-39 from the Swedish Research Council, which funds the Centre for Linguistic Theory and Studies in Probability (CLASP) in the Department of Philosophy, Linguistics, and Theory of Science at the University of Gothenburg.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Pushing the limits of low-resource morphological inflection",
"authors": [
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "984--996",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1091"
]
},
"num": null,
"urls": [],
"raw_text": "Antonios Anastasopoulos and Graham Neubig. 2019. Pushing the limits of low-resource morphological in- flection. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Nat- ural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 984- 996. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Scheduled sampling for sequence prediction with recurrent neural networks",
"authors": [
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "1171--1179",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for se- quence prediction with recurrent neural networks. In Advances in Neural Information Processing Sys- tems 28: Annual Conference on Neural Informa- tion Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 1171-1179.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Incorporating structural alignment biases into an attentional neural translation model",
"authors": [
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Cong Duy Vu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Vymolova",
"suffix": ""
},
{
"first": "Kaisheng",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": ""
}
],
"year": 2016,
"venue": "The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "876--885",
"other_ids": {
"DOI": [
"10.18653/v1/n16-1102"
]
},
"num": null,
"urls": [],
"raw_text": "Trevor Cohn, Cong Duy Vu Hoang, Ekaterina Vy- molova, Kaisheng Yao, Chris Dyer, and Gholamreza Haffari. 2016. Incorporating structural alignment biases into an attentional neural translation model. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, San Diego California, USA, June 12-17, 2016, pages 876-885. The Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Neural turing machines. CoRR",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Wayne",
"suffix": ""
},
{
"first": "Ivo",
"middle": [],
"last": "Danihelka",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. CoRR, abs/1410.5401.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Understanding morphology",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Haspelmath",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Sims",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Haspelmath and Andrea Sims. 2013. Under- standing morphology. Routledge.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Why is 'is' easier than '-s'?: acquisition of tense/agreement morphology by child second language learners of english",
"authors": [
{
"first": "Tania",
"middle": [],
"last": "Ionin",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Wexler",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "18",
"issue": "",
"pages": "95--136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tania Ionin and Kenneth Wexler. 2002. Why is 'is' eas- ier than '-s'?: acquisition of tense/agreement mor- phology by child second language learners of en- glish. Second language research, 18(2):95-136.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Google's multilingual neural machine translation system: Enabling zero-shot translation",
"authors": [
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Fernanda",
"middle": [],
"last": "Thorat",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Vi\u00e9gas",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Wattenberg",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Corrado",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "339--351",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi\u00e9gas, Martin Wattenberg, Greg Corrado, et al. 2017. Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339-351.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Med: The lmu system for the sigmorphon 2016 shared task on morphological reinflection",
"authors": [
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "62--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katharina Kann and Hinrich Sch\u00fctze. 2016. Med: The lmu system for the sigmorphon 2016 shared task on morphological reinflection. In Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 62-70.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Robust highdimensional memory-augmented neural networks",
"authors": [
{
"first": "Geethan",
"middle": [],
"last": "Karunaratne",
"suffix": ""
},
{
"first": "Manuel",
"middle": [],
"last": "Schmuck",
"suffix": ""
},
{
"first": "Manuel",
"middle": [
"Le"
],
"last": "Gallo",
"suffix": ""
},
{
"first": "Giovanni",
"middle": [],
"last": "Cherubini",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Benini",
"suffix": ""
},
{
"first": "Abu",
"middle": [],
"last": "Sebastian",
"suffix": ""
},
{
"first": "Abbas",
"middle": [],
"last": "Rahimi",
"suffix": ""
}
],
"year": 2021,
"venue": "Nature communications",
"volume": "12",
"issue": "1",
"pages": "1--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geethan Karunaratne, Manuel Schmuck, Manuel Le Gallo, Giovanni Cherubini, Luca Benini, Abu Sebastian, and Abbas Rahimi. 2021. Robust high- dimensional memory-augmented neural networks. Nature communications, 12(1):1-12.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Norm-based curriculum learning for neural machine translation",
"authors": [
{
"first": "Xuebo",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Houtim",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Derek",
"middle": [
"F"
],
"last": "Wong",
"suffix": ""
},
{
"first": "Lidia",
"middle": [
"S"
],
"last": "Chao",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "2020",
"issue": "",
"pages": "427--436",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.41"
]
},
"num": null,
"urls": [],
"raw_text": "Xuebo Liu, Houtim Lai, Derek F. Wong, and Lidia S. Chao. 2020. Norm-based curriculum learning for neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Com- putational Linguistics, ACL 2020, Online, July 5- 10, 2020, pages 427-436. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "SGDR: stochastic gradient descent with warm restarts",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Loshchilov",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Hutter",
"suffix": ""
}
],
"year": 2017,
"venue": "5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Loshchilov and Frank Hutter. 2017. SGDR: stochastic gradient descent with warm restarts. In 5th International Conference on Learning Repre- sentations, ICLR 2017, Toulon, France, April 24- 26, 2017, Conference Track Proceedings. OpenRe- view.net.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Align and copy: UZH at SIGMORPHON 2017 shared task for morphological reinflection",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Makarov",
"suffix": ""
},
{
"first": "Tatiana",
"middle": [],
"last": "Ruzsics",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Clematide",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the CoNLL SIGMORPHON 2017",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/K17-2004"
]
},
"num": null,
"urls": [],
"raw_text": "Peter Makarov, Tatiana Ruzsics, and Simon Clematide. 2017. Align and copy: UZH at SIGMORPHON 2017 shared task for morphological reinflection. In Proceedings of the CoNLL SIGMORPHON 2017",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Transliteration for cross-lingual morphological inflection",
"authors": [
{
"first": "Nikitha",
"middle": [],
"last": "Murikinati",
"suffix": ""
},
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "189--197",
"other_ids": {
"DOI": [
"10.18653/v1/2020.sigmorphon-1.22"
]
},
"num": null,
"urls": [],
"raw_text": "Nikitha Murikinati, Antonios Anastasopoulos, and Gra- ham Neubig. 2020. Transliteration for cross-lingual morphological inflection. In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 189-197, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Competence-based curriculum learning for neural machine translation",
"authors": [
{
"first": "Otilia",
"middle": [],
"last": "Emmanouil Antonios Platanios",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Stretcu",
"suffix": ""
},
{
"first": "Barnabas",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Tom M",
"middle": [],
"last": "Poczos",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1903.09848"
]
},
"num": null,
"urls": [],
"raw_text": "Emmanouil Antonios Platanios, Otilia Stretcu, Gra- ham Neubig, Barnabas Poczos, and Tom M Mitchell. 2019. Competence-based curriculum learning for neural machine translation. arXiv preprint arXiv:1903.09848.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Exploring the limits of transfer learning with a unified text-to-text transformer",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter J",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.10683"
]
},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text trans- former. arXiv preprint arXiv:1910.10683.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "IIT(BHU)-IIITH at CoNLL-SIGMORPHON 2018 shared task on universal morphological reinflection",
"authors": [
{
"first": "Abhishek",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Ganesh",
"middle": [],
"last": "Katrapati",
"suffix": ""
},
{
"first": "Dipti Misra",
"middle": [],
"last": "Sharma",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the CoNLL-SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection",
"volume": "",
"issue": "",
"pages": "105--111",
"other_ids": {
"DOI": [
"10.18653/v1/K18-3013"
]
},
"num": null,
"urls": [],
"raw_text": "Abhishek Sharma, Ganesh Katrapati, and Dipti Misra Sharma. 2018. IIT(BHU)-IIITH at CoNLL- SIGMORPHON 2018 shared task on universal mor- phological reinflection. In Proceedings of the CoNLL-SIGMORPHON 2018 Shared Task: Uni- versal Morphological Reinflection, pages 105-111, Brussels. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "What is easy and what is hard to acquire in a second language?",
"authors": [
{
"first": "Roumyana",
"middle": [],
"last": "Slabakova",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roumyana Slabakova. 2010. What is easy and what is hard to acquire in a second language?",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Morfessor 2.0: Toolkit for statistical morphological segmentation",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Smit",
"suffix": ""
},
{
"first": "Sami",
"middle": [],
"last": "Virpioja",
"suffix": ""
},
{
"first": "Stig-Arne",
"middle": [],
"last": "Gr\u00f6nroos",
"suffix": ""
},
{
"first": "Mikko",
"middle": [],
"last": "Kurimo",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "21--24",
"other_ids": {
"DOI": [
"10.3115/v1/e14-2006"
]
},
"num": null,
"urls": [],
"raw_text": "Peter Smit, Sami Virpioja, Stig-Arne Gr\u00f6nroos, and Mikko Kurimo. 2014. Morfessor 2.0: Toolkit for statistical morphological segmentation. In Proceed- ings of the 14th Conference of the European Chap- ter of the Association for Computational Linguistics, EACL 2014, April 26-30, 2014, Gothenburg, Swe- den, pages 21-24. The Association for Computer Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4- 9, 2017, Long Beach, CA, USA, pages 5998-6008.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "A example of the data hallucination process. The sequence R = ki is replace by G = tk.",
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"num": null,
"text": "Levenshtein operations mapped to characters in the lemma and inflection.",
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"num": null,
"text": "Number of examples (green indicate natural and blue hallucinated examples, left x-axis) plotted against the exact match accuracy (right x-axis) of our system on the development data (blue) and the test data (red).",
"type_str": "figure",
"uris": null
},
"FIGREF3": {
"num": null,
"text": ": t-SNE plot of the language embeddings. Different colors indicate different language families.",
"type_str": "figure",
"uris": null
},
"TABREF1": {
"type_str": "table",
"content": "<table><tr><td>HYPERPARAMETER</td><td>VALUE</td></tr><tr><td>Batch Size</td><td>256</td></tr><tr><td>Embedding dim</td><td>128</td></tr><tr><td>Hidden dim</td><td>256</td></tr><tr><td>Training steps</td><td>240000</td></tr><tr><td>Steps for full competence</td><td>60000</td></tr><tr><td>Initial LR</td><td>0.001</td></tr><tr><td>Min LR</td><td>0.0000001</td></tr><tr><td>Smoothing-\u03b1</td><td>2.5%</td></tr></table>",
"num": null,
"text": "",
"html": null
},
"TABREF4": {
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "Results on the development data.",
"html": null
}
}
}
}