{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:31:29.925655Z"
},
"title": "Ensemble Self-Training for Low-Resource Languages: Grapheme-to-Phoneme Conversion and Morphological Inflection",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Yu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sprachverarbeitung University of Stuttgart",
"location": {
"country": "Germany"
}
},
"email": ""
},
{
"first": "Ngoc",
"middle": [
"Thang"
],
"last": "Vu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sprachverarbeitung University of Stuttgart",
"location": {
"country": "Germany"
}
},
"email": ""
},
{
"first": "Jonas",
"middle": [],
"last": "Kuhn",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sprachverarbeitung University of Stuttgart",
"location": {
"country": "Germany"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present an iterative data augmentation framework, which trains and searches for an optimal ensemble and simultaneously annotates new training data in a self-training style. We apply this framework on two SIG-MORPHON 2020 shared tasks: graphemeto-phoneme conversion and morphological inflection. With very simple base models in the ensemble, we rank the first and the fourth in these two tasks. We show in the analysis that our system works especially well on lowresource languages. The system is available at",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We present an iterative data augmentation framework, which trains and searches for an optimal ensemble and simultaneously annotates new training data in a self-training style. We apply this framework on two SIG-MORPHON 2020 shared tasks: graphemeto-phoneme conversion and morphological inflection. With very simple base models in the ensemble, we rank the first and the fourth in these two tasks. We show in the analysis that our system works especially well on lowresource languages. The system is available at",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The vast majority of languages in the world have very few annotated dataset available for training natural language processing models, if at all. Dealing with the low-resource languages has sparked much interest in the NLP community (Garrette et al., 2013; Agi\u0107 et al., 2016; Zoph et al., 2016) .",
"cite_spans": [
{
"start": 233,
"end": 256,
"text": "(Garrette et al., 2013;",
"ref_id": "BIBREF7"
},
{
"start": 257,
"end": 275,
"text": "Agi\u0107 et al., 2016;",
"ref_id": "BIBREF0"
},
{
"start": 276,
"end": 294,
"text": "Zoph et al., 2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "When annotation is difficult to obtain, data augmentation is a common practice to increase training data size with reasonable quality to feed to powerful models (Ragni et al., 2014; Bergmanis et al., 2017; Silfverberg et al., 2017) . For example, the data hallucination method by Anastasopoulos and Neubig (2019) automatically creates non-existing \"words\" to augment morphological inflection data, which alleviates the label bias problem in the generation model. However, the data created by such method can only help regularize the model, but cannot be viewed as valid words of a language.",
"cite_spans": [
{
"start": 161,
"end": 181,
"text": "(Ragni et al., 2014;",
"ref_id": "BIBREF14"
},
{
"start": 182,
"end": 205,
"text": "Bergmanis et al., 2017;",
"ref_id": "BIBREF4"
},
{
"start": 206,
"end": 231,
"text": "Silfverberg et al., 2017)",
"ref_id": "BIBREF15"
},
{
"start": 280,
"end": 312,
"text": "Anastasopoulos and Neubig (2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Orthogonal to the data augmentation approach, another commonly used method to boost model performance without changing the architecture is ensembling, i.e., by training several models of the same kind and selecting the output by majority voting. It has been shown that a key to the success of ensembling is the diversity of the base models (Surdeanu and Manning, 2010) , since models with different inductive biases are less likely to make the same mistake.",
"cite_spans": [
{
"start": 340,
"end": 368,
"text": "(Surdeanu and Manning, 2010)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we pursue a combination of both directions, by developing a framework to search for the optimal ensemble and simultaneously annotate unlabeled data. The proposed method is an iterative process, which uses an ensemble of heterogeneous models to select and annotate unlabeled data based on the agreement of the ensemble, and use the annotated data to train new models, which are in turn potential members of the new ensemble. The ensemble is a subset of all trained models that maximizes the accuracy on the development set, and we use a genetic algorithm to find such combination of models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This approach can be viewed as a type of selftraining (Yarowsky, 1995; Clark et al., 2003) , but instead of using the confidence of one model, we use the agreement of many models to annotate new data. The key difference is that the model diversity in the ensemble can alleviate the confirmation bias of typical self-training approaches.",
"cite_spans": [
{
"start": 54,
"end": 70,
"text": "(Yarowsky, 1995;",
"ref_id": null
},
{
"start": 71,
"end": 90,
"text": "Clark et al., 2003)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We apply the framework on two of the SIGMOR-PHON 2020 Shared Tasks: grapheme-to-phoneme conversion and morphological inflection (Vylomova et al., 2020) . Our system rank the first in the former and the fourth in the latter.",
"cite_spans": [
{
"start": 128,
"end": 151,
"text": "(Vylomova et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While analyzing the contribution of each component of our framework, we found that the data augmentation method does not significantly improve the results for languages with medium or large training data in the shared tasks, i.e., the advantage of our system mainly comes from the massive ensemble of a variety of base models. However, when we simulate the low-resource scenario or consider only the low-resource languages, the benefit of data augmentation becomes prominent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Ensemble Self-Training Framework",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section we describe the details of our framework. It is largely agnostic to the type of supervised learning task, while in this work we apply it on two sequence generation tasks: morphological inflection and grapheme-to-phoneme conversion. The required component includes one or more types of base models and large amount of unlabeled data. Ideally, the base models should be simple and fast to train with reasonable performance, and as diverse as possible, i.e., models with different architectures are better than the same architecture with different random seeds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General Workflow",
"sec_num": "2.1"
},
{
"text": "The workflow is described in Algorithm 1. Initially, we have the original training data L 0 , unlabeled data U , and several base model types T 1...k . In each iteration n, there are two major steps: (1) ensemble training and (2) data augmentation. In the ensemble training step, we train each base model type on the current training data L n to obtain the models m 1...k n , and add them into the model pool (line 4-8). We then search for an optimal subset of the models from the pool as the current ensemble, based on its performance on the development set (line 9). In the data augmentation step, we sample a batch of unlabeled data (line 10), then use the ensemble to predict and select a subset of the instances based on the agreement among the models (line 11). The selected data are then aggregated into the training set for later iterations (line 12-13).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General Workflow",
"sec_num": "2.1"
},
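{
"text": "To make the loop concrete, the following is a minimal Python sketch of Algorithm 1, not the released implementation: the helpers train, search_ensemble, select_data, and aggregate_data are hypothetical placeholders for the components sketched in the following subsections, and the batch size of 20,000 follows the setting reported in Section 2.3.\n\nimport random\n\ndef ensemble_self_training(labeled, dev, unlabeled, model_types, n_iters=5):\n    # labeled/dev: lists of (source, target) pairs; unlabeled: list of sources\n    current, extra = list(labeled), []\n    pool, ensemble = [], []\n    for n in range(n_iters):\n        # (1) ensemble training: one new model per base type on the current data\n        pool.extend(train(t, current) for t in model_types)\n        ensemble = search_ensemble(pool, dev)\n        # (2) data augmentation: let the ensemble annotate a batch of unlabeled items\n        batch = random.sample(unlabeled, min(20000, len(unlabeled)))\n        new_data = select_data(ensemble, batch)\n        current, extra = aggregate_data(labeled, extra, new_data)\n        covered = {src for src, _ in new_data}\n        unlabeled = [x for x in unlabeled if x not in covered]\n    return ensemble, current",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General Workflow",
"sec_num": "2.1"
},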
{
"text": "Simply using all the models as the ensemble would be not only slow but also inaccurate, since too many inferior models might even mislead the ensemble, therefore searching for the optimal combination is needed. However, an exact search is not feasible, since the number of combinations grows exponentially. We use the genetic algorithm for heterogeneous ensemble search largely following Haque et al. (2016). In the preliminary experiments, the genetic algorithm consistently finds better ensembles than random sampling or using all models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Search",
"sec_num": "2.2"
},
{
"text": "We use a binary encoding such as 0100101011 to represent an ensemble combination (denoted as an individual in genetic algorithms), where each bit encodes whether to use one particular model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Search",
"sec_num": "2.2"
},
{
"text": "As we aim to maximizing the prediction accuracy of the ensemble, we define the fitness score of an individual as the accuracy on the development",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Search",
"sec_num": "2.2"
},
{
"text": "Algorithm 1 Ensemble Self-Training (EST)\n1: function EST(L, U, T)\nRequire: labeled data L\nRequire: unlabeled data U\nRequire: tools T\n2: Initial data L_0 = L\n3: Model pool M = \u2205\n4: for n : 0...N do\n5:     for t_k \u2208 T do\n6:         m_n^k = TRAIN(t_k, L_n)\n7:         M = M \u222a {m_n^k}\n8:     end for\n9:     E = SEARCHENSEMBLE(M)\n10:    Sample u \u223c U\n11:    l = SELECTDATA(E, u)\n12:    L_{n+1} = AGGREGATEDATA(L_n, l)\n13:    U = U \u2212 l\n14: end for\n15: return E, L_{N+1}\n16: end function",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Search",
"sec_num": "2.2"
},
{
"text": "Initially, we generate 100 random individuals into a pool, which is maintained at the size of 100. Whenever a new individual enters the pool, the individual with the lowest fitness score will be removed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Search",
"sec_num": "2.2"
},
{
"text": "Each new individual is created through three steps: parent selection, crossover, and mutation. Both parents are selected in a tournament style, in which we sample 10 individuals from the pool, and take the one with the highest fitness score. In the crossover process, we take each bit randomly from one parent with a rate of 60%, and 40% from the other. In the mutation process, we flip each bit of the child with a probability of 1%. To ensure the efficiency of the ensemble, we also limit the number of models in the combination to 20: if a newly evolved combination exceeds 20 models, we randomly reduce the number to 20 before evaluating the fitness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Search",
"sec_num": "2.2"
},
{
"text": "In each search, we evolve 100,000 individuals, and return the one with the highest fitness score. Since the data size is relatively small, the ensemble search procedure typically only takes a few seconds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Search",
"sec_num": "2.2"
},
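{
"text": "A compact Python sketch of this genetic search under the stated settings (pool of 100 individuals, tournament size 10, 60/40 crossover, 1% mutation, at most 20 active models, 100,000 evolved individuals); the majority-vote fitness and the predict(source) method of the base models are simplifying assumptions rather than details given in the paper.\n\nimport random\nfrom collections import Counter\n\ndef majority_vote(models, source):\n    # each model is assumed to expose predict(source) -> target string; ties broken arbitrarily\n    preds = [m.predict(source) for m in models]\n    return Counter(preds).most_common(1)[0][0]\n\ndef fitness(bits, models, dev):\n    # fitness of an individual = accuracy of the encoded ensemble on the development set\n    active = [m for b, m in zip(bits, models) if b]\n    if not active:\n        return 0.0\n    return sum(majority_vote(active, s) == t for s, t in dev) / len(dev)\n\ndef search_ensemble(models, dev, evolved=100000, pool_size=100, max_models=20):\n    def cap(bits):\n        # randomly deactivate models beyond the limit of 20 active ones\n        on = [i for i, b in enumerate(bits) if b]\n        for i in random.sample(on, max(0, len(on) - max_models)):\n            bits[i] = 0\n        return bits\n    def rand_bits():\n        return cap([random.randint(0, 1) for _ in models])\n    pool = [(b, fitness(b, models, dev)) for b in (rand_bits() for _ in range(pool_size))]\n    for _ in range(evolved):\n        pa = max(random.sample(pool, 10), key=lambda x: x[1])[0]   # tournament selection\n        pb = max(random.sample(pool, 10), key=lambda x: x[1])[0]\n        child = [a if random.random() < 0.6 else b for a, b in zip(pa, pb)]   # 60/40 crossover\n        child = cap([1 - b if random.random() < 0.01 else b for b in child])  # 1% mutation\n        pool.append((child, fitness(child, models, dev)))\n        pool.remove(min(pool, key=lambda x: x[1]))   # keep the pool at 100 individuals\n    return [m for b, m in zip(max(pool, key=lambda x: x[1])[0], models) if b]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Search",
"sec_num": "2.2"
},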
{
"text": "In each iteration, we use the current optimal ensemble to predict a batch of new data, and select a subset as additional data to train models in the next iteration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Selection and Aggregation",
"sec_num": "2.3"
},
{
"text": "There are various heuristics to select new data, with two major principles to consider: (1) one should prefer the instances with higher agreement among the models, since they are more likely to be correct; (2) instances with unanimous agreement might be too trivial and does not provide much new information to train the models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Selection and Aggregation",
"sec_num": "2.3"
},
{
"text": "To strike a balance between the two considerations, we first rank the data by the agreement, but only take at most half of the instances with unanimous agreement as new annotated data. Concretely, we sample 20,000 instances to predict, and use at most 3,600 instances as new data if their predictions have over 80% agreement, among which, at most 1,800 instances have 100% agreement. Note that we chose the data size of 3,600 because it is the training data size in the grapheme-to-phoneme conversion task, and we used the same setting for the morphological inflection task without tuning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Selection and Aggregation",
"sec_num": "2.3"
},
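{
"text": "A Python sketch of this selection heuristic with the numbers given above (at most 3,600 new instances with over 80% agreement, of which at most half, i.e., 1,800, may be unanimous); the batch of 20,000 candidates is assumed to be sampled by the caller, and the exact tie-breaking is our assumption.\n\nfrom collections import Counter\n\ndef select_data(ensemble, batch, max_new=3600, min_agree=0.8):\n    scored = []\n    for src in batch:\n        preds = [m.predict(src) for m in ensemble]\n        label, votes = Counter(preds).most_common(1)[0]\n        scored.append((votes / len(ensemble), src, label))\n    scored.sort(reverse=True)   # rank by agreement, highest first\n    unanimous = [(s, l) for a, s, l in scored if a == 1.0]\n    partial = [(s, l) for a, s, l in scored if min_agree < a < 1.0]\n    # at most half of the selected data may come from unanimous predictions\n    return (unanimous[:max_new // 2] + partial)[:max_new]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Selection and Aggregation",
"sec_num": "2.3"
},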
{
"text": "There are also different ways to aggregate the new data. One could simply accumulate all the selected data, resulting in much larger training data in the later iterations, which might slow down the training process and dilute the original data too much. Alternatively, one could append only the selected data from the current iteration to the original data, which might limit the potential of the models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Selection and Aggregation",
"sec_num": "2.3"
},
{
"text": "Again, we took the middle path, in which we keep half of all additional data from the previous iteration together with the selected data in the current iteration. For example, there are 3600 additional instances produced in iteration 0, 3600/2 + 3600 = 5400 in iteration 1, 5400/2 + 3600 = 6300 in iteration 2, and the size eventually converges to 3600 \u00d7 2 = 7200.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Selection and Aggregation",
"sec_num": "2.3"
},
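{
"text": "The aggregation rule can be written as a small recurrence; the sketch below assumes the additional data are plain lists and that the half to keep is chosen at random, and the trailing comment shows how the extra-data size approaches twice the per-iteration batch.\n\nimport random\n\ndef aggregate_data(original, prev_extra, new_extra):\n    # keep half of the previous additional data (a random half is our assumption)\n    kept = random.sample(prev_extra, len(prev_extra) // 2)\n    extra = kept + new_extra\n    return original + extra, extra\n\n# size of the additional data per iteration, assuming 3600 newly selected instances each time\nsizes = [3600]\nfor _ in range(5):\n    sizes.append(sizes[-1] // 2 + 3600)\n# sizes == [3600, 5400, 6300, 6750, 6975, 7087], approaching 3600 * 2 = 7200",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Selection and Aggregation",
"sec_num": "2.3"
},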
{
"text": "3 Grapheme-to-Phoneme Conversion",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Selection and Aggregation",
"sec_num": "2.3"
},
{
"text": "We first apply our framework on the grapheme-tophoneme conversion task , which includes 15 languages from the WikiPron project (Lee et al., 2020 ) with a diverse typological spectrum: Armenian (arm), Bulgarian (bul), French (fre), Georgian (geo), Hindi (hin), Hungarian (hun), Icelandic (ice), Korean (kor), Lithuanian (lit), Modern Greek (gre), Adyghe (ady), Dutch (dut), Japanese hiragana (jpn), Romanian (rum), and Vietnamese (vie).",
"cite_spans": [
{
"start": 127,
"end": 144,
"text": "(Lee et al., 2020",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task and Data",
"sec_num": "3.1"
},
{
"text": "As preprocessing, we romanize the scripts of Japanese and Korean, 12 which show improvements in preliminary experiments. The reason is that the Japanese Hiragana and Korean Hangul characters are both syllabic, in which one grapheme typically corresponds to multiple phonemes, and by romanizing them (1) the alphabet size is reduced, and (2) the length ratio of the source and target sequences are much closer to 1:1, which empirically improve the quality of the alignment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task and Data",
"sec_num": "3.1"
},
{
"text": "As unlabeled data, we use word frequency lists, 3 which are mostly extracted from OpenSubtitles (Lison and Tiedemann, 2016). For the two languages we did not find in OpenSubtitles, Adyghe is obtained from the corpus by Arkhangelskiy and Lander (2016), 4 and Georgian is obtained from several text corpora. 56 Since the word lists are automatically extracted from various sources with different methods and quality, we filter them by the alphabet of the training set of each language, and keep at most 100,000 most frequent words.",
"cite_spans": [
{
"start": 306,
"end": 308,
"text": "56",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task and Data",
"sec_num": "3.1"
},
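{
"text": "A Python sketch of this filtering step; the input format (one word per line, optionally followed by a count, sorted by descending frequency as in the FrequencyWords lists) is an assumption about the word lists rather than something specified by the task.\n\ndef load_unlabeled(freq_file, train_words, max_words=100000):\n    # alphabet of the training set: every grapheme occurring in a training word\n    alphabet = {ch for w in train_words for ch in w}\n    kept = []\n    with open(freq_file, encoding='utf-8') as f:\n        for line in f:\n            parts = line.split()\n            if not parts:\n                continue\n            word = parts[0]\n            if all(ch in alphabet for ch in word):\n                kept.append(word)\n            if len(kept) >= max_words:\n                break\n    return kept",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task and Data",
"sec_num": "3.1"
},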
{
"text": "As the framework desires the models to be as diverse as possible to maximize its benefit, we employ four different types of base models with different inductive biases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3.2"
},
{
"text": "The first type is the Finite-State-Transducer (FST) baseline by Lee et al. (2020) , based on the pair n-gram model (Novak et al., 2016) .",
"cite_spans": [
{
"start": 64,
"end": 81,
"text": "Lee et al. (2020)",
"ref_id": "BIBREF10"
},
{
"start": 115,
"end": 135,
"text": "(Novak et al., 2016)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3.2"
},
{
"text": "The other three types are all variants of Seq2Seq models, where we use the same BiLSTM encoder to encode the input grapheme sequence. The first one is a vanilla Seq2Seq model with attention (attn), similar to Luong et al. (2015) , where the decoder applies attention on the encoded input and use the attended input vector to predict the output phonemes.",
"cite_spans": [
{
"start": 209,
"end": 228,
"text": "Luong et al. (2015)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3.2"
},
{
"text": "The second one is a hard monotonic attention model (mono), similar to Aharoni and Goldberg (2017) , where the decoder uses a pointer to select the input vector to make a prediction: either produc-ing a phoneme, or moving the pointer to the next position. The monotonic alignment of the input and output is obtained with the Chinese Restaurant Process following Sudoh et al. (2013) , which is provided in the baseline model of the SIGMORPHON 2016 Shared Task (Cotterell et al., 2016) .",
"cite_spans": [
{
"start": 70,
"end": 97,
"text": "Aharoni and Goldberg (2017)",
"ref_id": "BIBREF1"
},
{
"start": 361,
"end": 380,
"text": "Sudoh et al. (2013)",
"ref_id": "BIBREF16"
},
{
"start": 458,
"end": 482,
"text": "(Cotterell et al., 2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3.2"
},
{
"text": "The third one is essentially a hybrid of hard monotonic attention model and tagging model (tag), i.e., for each grapheme we predict a short sequence of phonemes that is aligned to it. It relies on the same monotonic alignment for training. This model is different from the previous one in that it can potentially alleviate the error propagation problem, since the short sequences are nonautoregressive and independent of each other, much like tagging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3.2"
},
{
"text": "For each of the three models, we further create a reversed variant, where we reverse the input sequence and subsequently the output sequence. On average, the best model types are the tagging models of both directions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3.2"
},
{
"text": "Since we need to train many base models, we keep their sizes at a minimal level: the LSTM encoder and decoder both have one layer, all dimensions are 128, and no beam search is used. As a result, each base model has about 0.3M parameters and takes less than 10 minutes to train on a single CPU core.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3.2"
},
{
"text": "With the ensemble self-training framework, we train 14 base models at each iteration: FST models with 3-grams and 7-grams (fst-3, fst-7), two instances for each direction of the attention model (attn-l2r, attn-r2l), hard monotonic model (mono-l2r, mono-r2l), and tagging model (tag-l2r, tag-r2l). Table 1 shows the number of iterations when the optimal ensemble is found and the number of models it contains, as well as the Word Error Rate (WER) and Phone Error Rate (PER) on the test set, in comparison to the Seq2Seq baseline provided by the organizer. Generally, our system outperforms the strong baseline in 13 out of 15 languages, and the gap for Korean is especially large, due to the romanization in our preprocessing. For three languages (Hungarian, Japanese, and Lithuanian), the best ensemble is in the 0-th iteration, which means the augmented data for them is not helpful at all. WER of 13.8 and PER of 2.76. However, a large ensemble of simple models is not exactly comparable with other single-model systems, and it is thus difficult to derive a conclusion from the evaluation alone. We are more interested in understanding how much of the improvement comes from the ensemble and its model diversity and how much from the data augmentation process. For this purpose, we run our framework in two additional scenarios. In the first scenario, we reduce the diversity of the models (denoted as -diversity), where we only use the base model tag-l2r and tag-r2l, which performs the best among others, but keep the same number of models trained in each iteration as before. In the second scenario, we do not perform data augmentation (denoted as -augmentation), i.e., all models are trained on the same original training data in each iteration. Table 2 shows the WER on the development set of the default scenario and the two experimental scenarios. For each scenario, we show the average WER of all models and the WER of the ensemble from the initial iteration and the best iteration.",
"cite_spans": [],
"ref_spans": [
{
"start": 297,
"end": 304,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1752,
"end": 1759,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3.3"
},
{
"text": "We can observe three trends in the 4 13.8 12.7 17.3 16.8 12.7 11.3 19.9 19.8 12.9 11.8 hin 9.7 9.0 4.0 3.6 8.1 6.9 4.2 4.0 9.7 9.3 4.0 3.6 hun 4.5 4.5 2.0 2.0 4.0 3.9 2.4 2.2 4.7 4.7 2.4 2.4 ice 15.3 14.1 6.4 5.6 11.9 11.4 5.6 5.3 14.8 14.6 6.2 5.6 jpn 8.0 8.0 6.0 6.0 7.7 7.7 6.2 6.2 8.0 8.0 5.8 5.8 kor 25.9 23.4 16.2 14.4 20.9 20.7 16.9 16.0 25.9 25.6 16.4 14. age model performance and the ensemble performance, which clearly demonstrates the benefit of the ensemble. (2) In the -diversity scenario, the average model performance is better than the default scenario, but the ensemble performance is worse than the default scenario, which demonstrates the importance of the model diversity. (3) The average model performance in the default scenario has clear improvement as opposed to the random fluctuation in the -augmentation scenario, which means that the data augmentation can indeed benefit some individual models. However, to our surprise and disappointment, the ensemble performance of the -augmentation scenario is even slightly better than the default scenario, which casts a shadow over the data augmentation method in this framework. As our framework is designed for low-resource languages, and the data size of 3,600 in the task is already beyond low-resource, we therefore experiment in a simulated low-resource scenario. 7 For each language, we randomly sample 200 instances as the new training data, while ensuring that all graphemes and phonemes in the training 7 Consider the Swadesh list (Swadesh, 1950) with only 100-200 basic concepts/words, which could be thought of as a typical low-resource scenario. In the WikiPron collection, more than 20% of the 165 languages have less than 200 words. data appear at least once. Table 3 shows the WER of the default andaugment scenario in the low-resource experiment. Similar to the previous experiment, the ensemble greatly reduces errors of individual models. More importantly, the individual models benefit significantly from the augmented data (from 54.2 to 35.5), and the final ensemble further reduces the error rate to 25.2. The WER in the default scenario is much better than the -augment scenario (25.2 vs 29.2), which means that the data augmentation is indeed beneficial when the training data is scarce.",
"cite_spans": [
{
"start": 1339,
"end": 1340,
"text": "7",
"ref_id": null
},
{
"start": 1482,
"end": 1483,
"text": "7",
"ref_id": null
},
{
"start": 1510,
"end": 1525,
"text": "(Swadesh, 1950)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 1744,
"end": 1751,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3.3"
},
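{
"text": "One simple way to draw such a coverage-constrained sample is the greedy cover-then-fill sketch below; this strategy is our assumption rather than the exact procedure used, and it presupposes that 200 instances are enough to cover the symbol inventory of the language.\n\nimport random\n\ndef sample_low_resource(data, k=200, seed=0):\n    # data: list of (graphemes, phonemes) pairs, each a sequence of symbols\n    rng = random.Random(seed)\n    def symbols(pair):\n        graphemes, phonemes = pair\n        return set(graphemes) | set(phonemes)\n    needed = set().union(*(symbols(x) for x in data))\n    shuffled = list(data)\n    rng.shuffle(shuffled)\n    sample, covered = [], set()\n    for x in shuffled:   # first pass: cover every grapheme and phoneme at least once\n        if symbols(x) - covered:\n            sample.append(x)\n            covered |= symbols(x)\n        if covered == needed:\n            break\n    for x in shuffled:   # second pass: fill up with further random instances\n        if len(sample) >= k:\n            break\n        if x not in sample:\n            sample.append(x)\n    return sample[:k]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3.3"
},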
{
"text": "We also apply our framework on the morphological inflection task (Vylomova et al., 2020) , where the input is a combination of lemmata and morphological tags according to the UniMorph schema (Sylak-Glassman et al., 2015) , and the output is the inflected word forms. There are 90 languages with various data sizes, ranging from around 100 to 100,000.",
"cite_spans": [
{
"start": 65,
"end": 88,
"text": "(Vylomova et al., 2020)",
"ref_id": null
},
{
"start": 191,
"end": 220,
"text": "(Sylak-Glassman et al., 2015)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task and Data",
"sec_num": "4.1"
},
{
"text": "As unlabeled data for the augmentation process, we simply recombine the lemmata and morphological tags of the same category in the training set (i.e., Table 3 : WER on the development set for the simulated low-resource experiment in the scenarios with and without data augmentation. In each scenario, we show the average model performance and the ensemble performance in the first iteration and the best iteration. a verb lemma only combines with all morphological tags for verbs), with a maximum size of 100,000 for each language. For many languages, however, the recombination is as scarce as the original data since they are from (almost) complete inflection paradigms of a few lemmata. In total, we obtained 1,422,617 instances, which is slightly smaller than the training set with 1,574,004 instances. Since the additional data come directly from the original training data, we consider it the restricted setting, where no external data sources or cross-lingual methods are used.",
"cite_spans": [],
"ref_spans": [
{
"start": 151,
"end": 158,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Task and Data",
"sec_num": "4.1"
},
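{
"text": "A Python sketch of this recombination; reading the part of speech as the first feature of the UniMorph tag string and skipping combinations that already occur in the training data are our assumptions about the details.\n\nimport random\nfrom collections import defaultdict\n\ndef recombine(train, max_size=100000, seed=0):\n    # train: list of (lemma, tags, form) triples; tags is a UniMorph string such as 'V;PST;3;SG'\n    lemmas, tagsets = defaultdict(set), defaultdict(set)\n    for lemma, tags, _ in train:\n        pos = tags.split(';')[0]   # assumed: the part of speech is the first UniMorph feature\n        lemmas[pos].add(lemma)\n        tagsets[pos].add(tags)\n    seen = {(lemma, tags) for lemma, tags, _ in train}\n    combos = [(lemma, tags) for pos in lemmas\n              for lemma in lemmas[pos] for tags in tagsets[pos]\n              if (lemma, tags) not in seen]\n    random.Random(seed).shuffle(combos)\n    return combos[:max_size]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task and Data",
"sec_num": "4.1"
},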
{
"text": "Due to our late start in this task, we only implemented two types of base models, paired with leftto-right and right-to-left generation order. The first type is a Seq2Seq model with soft attention, very similar to the one in the grapheme-to-phoneme conversion task, except that an additional BiLSTM is used to encode the morphological tags. The second type is a hard monotonic attention model, also similar as before, but instead of using the alignment with the Chinese Restaurant Process, we use Levenshtein edit scripts to obtain the target sequence, since the input and the output share the same alphabet. At each step, the model either outputs a character from the alphabet, or copies the currently pointed input character, or advances the input pointer to the next position. In total, we train 8 models per iteration, i.e., two models with different random seeds for each variant. The hyperparameters are largely the same as in the previous task, and each model has about 0.5M parameters. Table 4 compares the average test accuracy between our system (IMS-00-0) and the systems of the winning teams as well as the baselines. The baselines include a hard monotonic attention model with latent alignment (Wu and Cotterell, 2019) and a carefully tuned transformer (Vaswani et al., 2017; Wu et al., 2020) , noted as mono and trm. They are additionally trained with augmented data by Anastasopoulos and Neubig (2019), noted as mono-aug and trm-aug. On average, our system ranks the fourth among the participating teams and the third in the restricted setting (without external data source or cross-lingual methods). It outperforms the hard monotonic attention baseline, but not the transformer baseline. More details on the systems and their comparisons are described in Vylomova et al. (2020) . Compared to the previous task, we used fewer base models, in terms of both number and diversity, which partly explains the relatively lower ranking.",
"cite_spans": [
{
"start": 1266,
"end": 1288,
"text": "(Vaswani et al., 2017;",
"ref_id": "BIBREF20"
},
{
"start": 1289,
"end": 1305,
"text": "Wu et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 1771,
"end": 1793,
"text": "Vylomova et al. (2020)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 994,
"end": 1001,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Models",
"sec_num": "4.2"
},
{
"text": "In this task, the data size ranges across several magnitude for different languages. We thus analyze the performance difference of our system against the two baselines with their own data augmentation Figure 1 : Performance difference between our system and the two baselines with data augmentation, with respect to the training data size.",
"cite_spans": [],
"ref_spans": [
{
"start": 201,
"end": 209,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.3"
},
{
"text": "(mono-aug and trm-aug) with respect to the original training data size, as illustrated in Figure 1 . We removed the trivial cases in which both models achieved 100% accuracy. Clearly, our system performs better for languages with smaller training data size, while losing to the powerful baseline models when the data size is large. This again demonstrates the benefit of our framework for low-resource languages.",
"cite_spans": [],
"ref_spans": [
{
"start": 90,
"end": 98,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.3"
},
{
"text": "We also mark the major language families to see whether they play a role in the performance difference, since different inductive biases might work differently on particular language families. For example, the right-to-left generation order might work better on languages with inflectional prefixes. However, we could not find any convincing patterns regarding language families in the plot, i.e., there is not a language family in the data set where our model always performs better or worse than the baseline. The only exception is the Austronesian family, where our system generally outperforms the baselines, but they all have relatively small data size, which is a more probable explanation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.3"
},
{
"text": "Note that our augmentation method is theoretically orthogonal to the hallucination method (Anastasopoulos and Neubig, 2019), and could be combined to further improve the performance of the baseline models for low-resource languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.3"
},
{
"text": "We present an ensemble self-training framework and apply it on two sequence-to-sequence generation tasks: grapheme-to-phoneme conversion and morphological inflection. Our framework includes an improved self-training method by optimizing and utilizing the ensemble to obtain more reliable training data, which shows clear advantage on lowresource languages. The optimal ensemble search method with the genetic algorithm easily accommodates the inductive biases of different model architectures for different languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "As a potential future direction, we could incorporate the framework into the scenario of active learning to reduce annotator workload, i.e., by suggesting plausible predictions to minimize the need of correction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "https://pypi.org/project/pykakasi/ 2 https://pypi.org/project/ hangul-romanize/ 3 https://github.com/hermitdave/ FrequencyWords/ 4 https://github.com/timarkh/ uniparser-grammar-adyghe 5 https://github.com/akalongman/ geo-words 6 Georgian is actually in OpenSubtitles, but we accidentally missed it because of a confusion with the language code.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was in part supported by funding from the Ministry of Science, Research and the Arts of the State of Baden-W\u00fcrttemberg (MWK), within the CLARIN-D research project.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Multilingual projection for parsing truly low-resource languages",
"authors": [
{
"first": "Zeljko",
"middle": [],
"last": "Agi\u0107",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "Johannsen",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "301--312",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeljko Agi\u0107, Anders Johannsen, Barbara Plank, H\u00e9ctor Mart\u00ednez Alonso, Natalie Schluter, and An- ders S\u00f8gaard. 2016. Multilingual projection for parsing truly low-resource languages. Transactions of the Association for Computational Linguistics, 4:301-312.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Morphological inflection generation with hard monotonic attention",
"authors": [
{
"first": "Roee",
"middle": [],
"last": "Aharoni",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2004--2015",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1183"
]
},
"num": null,
"urls": [],
"raw_text": "Roee Aharoni and Yoav Goldberg. 2017. Morphologi- cal inflection generation with hard monotonic atten- tion. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 2004-2015, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Pushing the limits of low-resource morphological inflection",
"authors": [
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "984--996",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1091"
]
},
"num": null,
"urls": [],
"raw_text": "Antonios Anastasopoulos and Graham Neubig. 2019. Pushing the limits of low-resource morphological in- flection. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 984-996, Hong Kong, China. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Developing a Polysynthetic Language Corpus: Problems and Solutions",
"authors": [
{
"first": "Timofey",
"middle": [],
"last": "Arkhangelskiy",
"suffix": ""
},
{
"first": "Yury",
"middle": [],
"last": "Lander",
"suffix": ""
}
],
"year": 2016,
"venue": "Computational Linguistics and Intellectual Technologies: Proceedings of the International Conference",
"volume": "",
"issue": "",
"pages": "40--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timofey Arkhangelskiy and Yury Lander. 2016. Devel- oping a Polysynthetic Language Corpus: Problems and Solutions. In Computational Linguistics and In- tellectual Technologies: Proceedings of the Interna- tional Conference \"Dialogue 2016\", pages 40-49.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Training data augmentation for low-resource morphological inflection",
"authors": [
{
"first": "Toms",
"middle": [],
"last": "Bergmanis",
"suffix": ""
},
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection",
"volume": "",
"issue": "",
"pages": "31--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toms Bergmanis, Katharina Kann, Hinrich Sch\u00fctze, and Sharon Goldwater. 2017. Training data aug- mentation for low-resource morphological inflection. In Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection, pages 31-39.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bootstrapping POS-taggers using unlabelled data",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Curran",
"suffix": ""
},
{
"first": "Miles",
"middle": [],
"last": "Osborne",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003",
"volume": "",
"issue": "",
"pages": "49--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Clark, James Curran, and Miles Osborne. 2003. Bootstrapping POS-taggers using unlabelled data. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 49-55.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The SIGMORPHON 2016 shared Task-Morphological reinflection",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Christo",
"middle": [],
"last": "Kirov",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Sylak-Glassman",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "10--22",
"other_ids": {
"DOI": [
"10.18653/v1/W16-2002"
]
},
"num": null,
"urls": [],
"raw_text": "Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016. The SIGMORPHON 2016 shared Task- Morphological reinflection. In Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphol- ogy, pages 10-22, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Real-world semi-supervised learning of postaggers for low-resource languages",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Garrette",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Mielens",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "583--592",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Garrette, Jason Mielens, and Jason Baldridge. 2013. Real-world semi-supervised learning of pos- taggers for low-resource languages. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 583-592.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The SIGMOR-PHON 2020 shared task on multilingual graphemeto-phoneme conversion",
"authors": [
{
"first": "Kyle",
"middle": [],
"last": "Gorman",
"suffix": ""
},
{
"first": "Lucas",
"middle": [
"F",
"E"
],
"last": "Ashby",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Goyzueta",
"suffix": ""
},
{
"first": "Shijie",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "You",
"suffix": ""
}
],
"year": 2020,
"venue": "SIGMORPHON",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyle Gorman, Lucas F.E. Ashby, Aaron Goyzueta, Shi- jie Wu, and Daniel You. 2020. The SIGMOR- PHON 2020 shared task on multilingual grapheme- to-phoneme conversion. In SIGMORPHON.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Heterogeneous Ensemble Combination Search Using Genetic Algorithm for Class Imbalanced Data Classification",
"authors": [
{
"first": "Nasimul",
"middle": [],
"last": "Mohammad Nazmul Haque",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Noman",
"suffix": ""
},
{
"first": "Pablo",
"middle": [],
"last": "Berretta",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Moscato",
"suffix": ""
}
],
"year": 2016,
"venue": "PloS one",
"volume": "11",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad Nazmul Haque, Nasimul Noman, Regina Berretta, and Pablo Moscato. 2016. Heterogeneous Ensemble Combination Search Using Genetic Al- gorithm for Class Imbalanced Data Classification. PloS one, 11(1):e0146116.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Massively multilingual pronunciation mining with WikiPron",
"authors": [
{
"first": "Jackson",
"middle": [
"L"
],
"last": "Lee",
"suffix": ""
},
{
"first": "F",
"middle": [
"E"
],
"last": "Lucas",
"suffix": ""
},
{
"first": "M",
"middle": [
"Elizabeth"
],
"last": "Ashby",
"suffix": ""
},
{
"first": "Yeonju",
"middle": [],
"last": "Garza",
"suffix": ""
},
{
"first": "Sean",
"middle": [],
"last": "Lee-Sikka",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Arya",
"middle": [
"D"
],
"last": "Wong",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gorman",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "4216--4221",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jackson L. Lee, Lucas F.E. Ashby, M. Elizabeth Garza, Yeonju Lee-Sikka, Sean Miller, Alan Wong, Arya D. McCarthy, and Kyle Gorman. 2020. Massively multilingual pronunciation mining with WikiPron. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4216-4221, Mar- seille.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "OpenSubti-tles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles",
"authors": [
{
"first": "Pierre",
"middle": [],
"last": "Lison",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "923--929",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pierre Lison and J\u00f6rg Tiedemann. 2016. OpenSubti- tles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 923-929.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Effective approaches to attention-based neural machine translation",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1412--1421",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1166"
]
},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natu- ral Language Processing, pages 1412-1421, Lis- bon, Portugal. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Phonetisaurus: exploring grapheme-tophoneme conversion with joint n-gram models in the WFST framework",
"authors": [
{
"first": "Josef",
"middle": [
"R"
],
"last": "Novak",
"suffix": ""
},
{
"first": "Nobuaki",
"middle": [],
"last": "Minematsu",
"suffix": ""
},
{
"first": "Keikichi",
"middle": [],
"last": "Hirose",
"suffix": ""
}
],
"year": 2016,
"venue": "Natural Language Engineering",
"volume": "22",
"issue": "6",
"pages": "907--938",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Josef R. Novak, Nobuaki Minematsu, and Keikichi Hi- rose. 2016. Phonetisaurus: exploring grapheme-to- phoneme conversion with joint n-gram models in the WFST framework. Natural Language Engineering, 22(6):907-938.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Data augmentation for low resource languages",
"authors": [
{
"first": "Anton",
"middle": [],
"last": "Ragni",
"suffix": ""
},
{
"first": "Kate",
"middle": [
"M"
],
"last": "Knill",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Shakti",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rath",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mark",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gales",
"suffix": ""
}
],
"year": 2014,
"venue": "Fifteenth Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anton Ragni, Kate M Knill, Shakti P Rath, and Mark JF Gales. 2014. Data augmentation for low resource languages. In Fifteenth Annual Conference of the International Speech Communication Associ- ation.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Data augmentation for morphological reinflection",
"authors": [
{
"first": "Miikka",
"middle": [],
"last": "Silfverberg",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Wiemerslage",
"suffix": ""
},
{
"first": "Ling",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lingshuang Jack",
"middle": [],
"last": "Mao",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection",
"volume": "",
"issue": "",
"pages": "90--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miikka Silfverberg, Adam Wiemerslage, Ling Liu, and Lingshuang Jack Mao. 2017. Data augmentation for morphological reinflection. In Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Univer- sal Morphological Reinflection, pages 90-99.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Noise-aware character alignment for bootstrapping statistical machine transliteration from bilingual corpora",
"authors": [
{
"first": "Katsuhito",
"middle": [],
"last": "Sudoh",
"suffix": ""
},
{
"first": "Shinsuke",
"middle": [],
"last": "Mori",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "204--209",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katsuhito Sudoh, Shinsuke Mori, and Masaaki Nagata. 2013. Noise-aware character alignment for boot- strapping statistical machine transliteration from bilingual corpora. In Proceedings of the 2013 Con- ference on Empirical Methods in Natural Language Processing, pages 204-209, Seattle, Washington, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Ensemble models for dependency parsing: Cheap and good?",
"authors": [
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "649--652",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihai Surdeanu and Christopher D. Manning. 2010. Ensemble models for dependency parsing: Cheap and good? In Human Language Technologies: The 2010 Annual Conference of the North Ameri- can Chapter of the Association for Computational Linguistics, pages 649-652, Los Angeles, California. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Salish internal relationships",
"authors": [
{
"first": "Morris",
"middle": [],
"last": "Swadesh",
"suffix": ""
}
],
"year": 1950,
"venue": "International Journal of American Linguistics",
"volume": "16",
"issue": "4",
"pages": "157--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morris Swadesh. 1950. Salish internal relation- ships. International Journal of American Linguis- tics, 16(4):157-167.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A language-independent feature schema for inflectional morphology",
"authors": [
{
"first": "John",
"middle": [],
"last": "Sylak-Glassman",
"suffix": ""
},
{
"first": "Christo",
"middle": [],
"last": "Kirov",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Que",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "674--680",
"other_ids": {
"DOI": [
"10.3115/v1/P15-2111"
]
},
"num": null,
"urls": [],
"raw_text": "John Sylak-Glassman, Christo Kirov, David Yarowsky, and Roger Que. 2015. A language-independent fea- ture schema for inflectional morphology. In Pro- ceedings of the 53rd Annual Meeting of the Associ- ation for Computational Linguistics and the 7th In- ternational Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 674- 680, Beijing, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Adina Williams",
"authors": [
{
"first": "Ekaterina",
"middle": [],
"last": "Vylomova",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "White",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Salesky",
"suffix": ""
},
{
"first": "Sabrina",
"middle": [
"J"
],
"last": "Mielke",
"suffix": ""
},
{
"first": "Shijie",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Edoardo",
"middle": [],
"last": "Ponti",
"suffix": ""
},
{
"first": "Rowan",
"middle": [],
"last": "Hall Maudslay",
"suffix": ""
},
{
"first": "Ran",
"middle": [],
"last": "Zmigrod",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Valvoda",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Toldova",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Tyers",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Klyachko",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Yegorov",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Krizhanovsky",
"suffix": ""
},
{
"first": "Paula",
"middle": [],
"last": "Czarnowska",
"suffix": ""
},
{
"first": "Irene",
"middle": [],
"last": "Nikkarinen",
"suffix": ""
},
{
"first": "Andrej",
"middle": [],
"last": "Krizhanovsky",
"suffix": ""
},
{
"first": "Tiago",
"middle": [],
"last": "Pimentel",
"suffix": ""
},
{
"first": "Lucas",
"middle": [],
"last": "Torroba Hennigen",
"suffix": ""
},
{
"first": "Christo",
"middle": [],
"last": "Kirov",
"suffix": ""
},
{
"first": "Garrett",
"middle": [],
"last": "Nicolai",
"suffix": ""
}
],
"year": null,
"venue": "Miikka Silfverberg, and Mans Hulden. 2020. The SIG-MORPHON 2020 Shared Task",
"volume": "0",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ekaterina Vylomova, Jennifer White, Elizabeth Salesky, Sabrina J. Mielke, Shijie Wu, Edoardo Ponti, Rowan Hall Maudslay, Ran Zmigrod, Joseph Valvoda, Svetlana Toldova, Francis Tyers, Elena Klyachko, Ilya Yegorov, Natalia Krizhanovsky, Paula Czarnowska, Irene Nikkarinen, Andrej Krizhanovsky, Tiago Pimentel, Lucas Torroba Hennigen, Christo Kirov, Garrett Nicolai, Ad- ina Williams, Antonios Anastasopoulos, Hilaria Cruz, Eleanor Chodroff, Ryan Cotterell, Miikka Silfverberg, and Mans Hulden. 2020. The SIG- MORPHON 2020 Shared Task 0: Typologically",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"num": null,
"type_str": "table",
"text": "",
"html": null,
"content": "<table/>"
},
"TABREF2": {
"num": null,
"type_str": "table",
"text": "ensemble average ensemble init best init best init best init best init best init best ady 28.9 27.7 22.4 21.6 26.6 27.2 22.9 22.2 28.7 28.1 22.7 20.9 arm 18.8 17.4 13.1 11.3 16.1 15.4 12.2 11.8 18.7 18.1 12.2 10.7",
"html": null,
"content": "<table><tr><td>default</td><td>-diversity</td><td>-augment</td></tr><tr><td colspan=\"3\">average ensemble average bul 36.8 36.2 25.3 20.0 35.5 35.5 27.6 23.8 37.3 36.1 24.2 18.7</td></tr><tr><td colspan=\"3\">dut 19.5 18.8 11.8 10.4 18.5 18.8 12.2 10.9 19.7 19.6 11.6 9.8</td></tr><tr><td colspan=\"3\">fre 15.1 15.7 6.0 5.6 13.2 13.6 6.7 6.2 15.6 15.2 7.1 5.1</td></tr><tr><td colspan=\"3\">geo 26.9 26.7 20.2 17.8 26.6 25.1 20.7 18.4 27.0 26.8 19.6 16.7</td></tr><tr><td>gre 20.1 18.</td><td/><td/></tr><tr><td/><td/><td>(1) In</td></tr><tr><td/><td colspan=\"2\">all scenarios, there is a large gap between the aver-</td></tr></table>"
},
"TABREF4": {
"num": null,
"type_str": "table",
"text": "WER on the development set in the three scenarios (default, reduced diversity, and without data augmentation). In each scenario, we show the average model performance and the ensemble performance in the first iteration and the best iteration.",
"html": null,
"content": "<table/>"
},
"TABREF7": {
"num": null,
"type_str": "table",
"text": "Evaluation on the test set of the morphological inflection task, comparing our system to three winning systems and four baselines.",
"html": null,
"content": "<table/>"
}
}
}
}