{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:34:21.558060Z"
},
"title": "Fine-Tuning MT systems for Robustness to Second-Language Speaker Variations",
"authors": [
{
"first": "Md",
"middle": [
"Mahfuz",
"Ibn"
],
"last": "Alam",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "George Mason University",
"location": {}
},
"email": ""
},
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "George Mason University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The performance of neural machine translation (NMT) systems only trained on a single language variant degrades when confronted with even slightly different language variations. With this work, we build upon previous work to explore how to mitigate this issue. We show that fine-tuning using naturally occurring noise along with pseudo-references (i.e. \"corrected\" non-native inputs translated using the baseline NMT system) is a promising solution towards systems robust to such type of input variations. We focus on four translation pairs, from English to Spanish, Italian, French, and Portuguese, with our system achieving improvements of up to 3.1 BLEU points compared to the baselines, establishing a new stateof-the-art on the JFLEG-ES dataset. 1",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "The performance of neural machine translation (NMT) systems only trained on a single language variant degrades when confronted with even slightly different language variations. With this work, we build upon previous work to explore how to mitigate this issue. We show that fine-tuning using naturally occurring noise along with pseudo-references (i.e. \"corrected\" non-native inputs translated using the baseline NMT system) is a promising solution towards systems robust to such type of input variations. We focus on four translation pairs, from English to Spanish, Italian, French, and Portuguese, with our system achieving improvements of up to 3.1 BLEU points compared to the baselines, establishing a new stateof-the-art on the JFLEG-ES dataset. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Neural machine translation (NMT) approaches have aided the machine translation field in achieving great advances in the recent years, starting with encoder-decoder models with attention (Bahdanau et al., 2014; Luong et al., 2015) , to transformers using self-attention (Vaswani et al., 2018) , to massively multilingual models that yield large improvements even in low-resource settings (Aharoni et al., 2019; Zhang et al., 2020) .",
"cite_spans": [
{
"start": 186,
"end": 209,
"text": "(Bahdanau et al., 2014;",
"ref_id": "BIBREF3"
},
{
"start": 210,
"end": 229,
"text": "Luong et al., 2015)",
"ref_id": "BIBREF29"
},
{
"start": 269,
"end": 291,
"text": "(Vaswani et al., 2018)",
"ref_id": "BIBREF43"
},
{
"start": 387,
"end": 409,
"text": "(Aharoni et al., 2019;",
"ref_id": "BIBREF0"
},
{
"start": 410,
"end": 429,
"text": "Zhang et al., 2020)",
"ref_id": "BIBREF47"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Despite these very encouraging developments, the list of shortcomings of NMT is also quite vast (Koehn and Knowles, 2017) , and one of the most crucial shortcomings is the lack of robustness to source-side noise. 2 When confronted with inputs that are even slightly different from the inputs that the models were trained on, the quality of the outputs significantly degrades. This observation has been confirmed for noise due to typos or character scrambling (Belinkov and Bisk, 2018) , due to faulty speech recognition (Heigold et al., 2018) , or due to naturally-occurring errors by second-language non-native speakers .",
"cite_spans": [
{
"start": 96,
"end": 121,
"text": "(Koehn and Knowles, 2017)",
"ref_id": "BIBREF24"
},
{
"start": 459,
"end": 484,
"text": "(Belinkov and Bisk, 2018)",
"ref_id": "BIBREF4"
},
{
"start": 520,
"end": 542,
"text": "(Heigold et al., 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, this issue can particularly degrade the user experience for millions of potential users. For example, the number of non-native English speakers is three times larger than the number of native English speakers (c.f. around 1 billion for the former and about 300 million for the latter). Had one had access to large amounts of data for all different language varieties, it would be straightforward to train variety-specific MT models. Such data, though, are of course scarce.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we work on addressing this particular shortcoming, in an attempt to make NMT systems more robust to source-side variations that nonnative speakers produce. Since English is the language with the largest amount of second-language learners and non-native speakers, we only focus on MT systems translating out of English, but we point out that such work is urgently needed for other colonial languages (i.e. French, Spanish) or majority languages (such as Russian, Mandarin, or Hindi) that are taking over minority ones. 3 The main difference of our approach compared to previous work is that we do not attempt to synthesize different types of noise, but rather use naturally-occurring texts, as produced by nonnative speakers. We utilize grammar error correction corpora and produce pseudo-references, which we then use to fine-tune a NMT system with a goal of increasing its robustness to such sourceside noise. In our view, our approach has two main advantages and a single disadvantage over previous works. First, the types of realistic \"non-native-like\" noise that can be synthesized are limited, covering among others typos or simple morphological or syntactic mistakes (Belinkov and Bisk, 2018; Cheng et al., 2018; Tan et al., 2020, et alia) but not covering the interplay between all these or any other higher level issues (e.g. word choice). Our approach has the potential to handle a larger spectrum of language variation, as it appears in naturally occurring data. Second, our choice of fine-tuning, rather than training from scratch as previous works have opted for, leads to lower training times and lower compute needed for similar improvements on robustness. The main drawback of our approach lies in the need for corrected (or \"normalized\") versions of \"noisy\" non-native sentences, but we take solace in the fact that at least for the majority of the high-resource languages (such as English, French, German, Russian, or Chinese) such datasets already exist. Very briefly, our contributions are summarized here:",
"cite_spans": [
{
"start": 532,
"end": 533,
"text": "3",
"ref_id": null
},
{
"start": 1187,
"end": 1212,
"text": "(Belinkov and Bisk, 2018;",
"ref_id": "BIBREF4"
},
{
"start": 1213,
"end": 1232,
"text": "Cheng et al., 2018;",
"ref_id": "BIBREF9"
},
{
"start": 1233,
"end": 1259,
"text": "Tan et al., 2020, et alia)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We show that fine-tuning a pre-trained system on noisy source-side data along with pseudoreferences is a viable approach towards NMT robustness to grammar errors and input from non-native speakers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We show that fine-tuning of a multilingual NMT system on several languages is also advisable, yielding better performance for a subset of the languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We also discuss the potential of achieving zero-shot robustness, as long as catastrophic forgetting issues can be overcome.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our work is inspired by and combines two lines of research: (1) robustness studies in NMT and (2) data augmentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Robust NMT Making robust models for NMT has recently gained popularity, with Shared Tasks organized in the Conference of Machine Translation (Li et al., 2019) and several solutions put forth (Berard et al., 2019; Helcl et al., 2019; Post and Duh, 2019; Zhou et al., 2019; Zheng et al., 2019, et alia) . ; Karpukhin et al. Tan et al. (2020) extend to more NLP tasks. While these approaches are indeed meritorious and indeed improve a model's robustness, we argue that one needs to use natural noise instead, on account of two phenomena. The first is language change: the different variations that the models will have to contend with are not static, but rather constantly changing at an ever-increasing pace. Second, and perhaps a partial direct consequence of the first point, one cannot rely on synthetic examples to properly capture the wide variety of naturallyoccurring variations. Besides, if one could properly model noise creation, they could also similarly model the inverse problem adequately, namely remove said noise, in which case a noise-removing preprocessing step would be most likely suffiecient to tackle the issue.",
"cite_spans": [
{
"start": 141,
"end": 158,
"text": "(Li et al., 2019)",
"ref_id": "BIBREF26"
},
{
"start": 191,
"end": 212,
"text": "(Berard et al., 2019;",
"ref_id": "BIBREF5"
},
{
"start": 213,
"end": 232,
"text": "Helcl et al., 2019;",
"ref_id": "BIBREF18"
},
{
"start": 233,
"end": 252,
"text": "Post and Duh, 2019;",
"ref_id": "BIBREF36"
},
{
"start": 253,
"end": 271,
"text": "Zhou et al., 2019;",
"ref_id": "BIBREF49"
},
{
"start": 272,
"end": 300,
"text": "Zheng et al., 2019, et alia)",
"ref_id": null
},
{
"start": 322,
"end": 339,
"text": "Tan et al. (2020)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "On working with real-world noise, the approach of Michel and Neubig (2018) is the most similar to ours. They collected \"noisy\" English, French, and Japanese sentences from Reddit, created translations, and split their dataset (MTNT) into train, development, and test, ranging from 5 to 36 thousand training examples. To build robust NMT systems, they first train a model on standard clean data and then fine-tune it on the training portion of MTNT using techniques from domain adaptation. The main difference between this worthy effort and our approach is two-fold. First, our approach does not require gold translations of the noisy inputs, which can be expensive and hard to collect, but we instead rely on the abundance of corrected second-language learner data (which we use to create pseudo-references, see \u00a73). Second, we attest that Reddit language translation is much closer to a domain adaptation scenario, and includes additional noise types that are not pertinent to non-native language translation such as emoji, Reddit jargon such us \"upvote\" or \"gild\", and internet slang such as \"tbh\" and \"smh\". 4 On working with pseudo-references, the approach of Cheng et al. (2019a) is the most similar to ours. They use ASR corpora to create synthetic ASR-induced noise and try to make NMT system more robust to this type of noise. As speech-to-transcription-to-translation datasets are very costly to produce, they use standard speechto-transcription datasets instead. They translate the gold transcription data set to get translation pseudoreferences. Then they jointly train the model on noisy source transcription using the pseudoreference translations as the target.",
"cite_spans": [
{
"start": 50,
"end": 74,
"text": "Michel and Neubig (2018)",
"ref_id": "BIBREF31"
},
{
"start": 1111,
"end": 1112,
"text": "4",
"ref_id": null
},
{
"start": 1164,
"end": 1184,
"text": "Cheng et al. (2019a)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Data Augmentation Data augmentation techniques have become increasingly popular for MT and other NLP tasks, from back-translation of monolingual data (Sennrich et al., 2016) to counterfactual augmentation to address gender bias issues (Zmigrod et al., 2019, et alia) . For our purposes, we will focus on data augmentation techniques aimed at increasing NMT robustness.",
"cite_spans": [
{
"start": 150,
"end": 173,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF37"
},
{
"start": 235,
"end": 266,
"text": "(Zmigrod et al., 2019, et alia)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Simple perturbations typically used include the infusion of character-level noise (e.g. character scrambling (Heigold et al., 2018) or typos (Belinkov and Bisk, 2018)) or word order scrambling (Sperber et al., 2017) . Cheng et al. (2018 Cheng et al. ( , 2019b propose a gradient based method to attack the translation model with adversarial source examples, but there's not guarantee that the adversarial attack results in realistic noise (Michel et al., 2019a) . add specific types of errors (such as subject-verb agreement or determiner errors) on the source-side of parallel data, while Tan et al. (2020) specifically perturb the inflectional morphology of words to create adversarial examples and show that adversarial finetuning them for a single epoch significantly improves robustness. Our work is highly motivated from these last two works, but instead of creating synthetic perturbed adversarial examples we use real noisy examples.",
"cite_spans": [
{
"start": 109,
"end": 131,
"text": "(Heigold et al., 2018)",
"ref_id": "BIBREF17"
},
{
"start": 193,
"end": 215,
"text": "(Sperber et al., 2017)",
"ref_id": "BIBREF38"
},
{
"start": 218,
"end": 236,
"text": "Cheng et al. (2018",
"ref_id": "BIBREF9"
},
{
"start": 237,
"end": 259,
"text": "Cheng et al. ( , 2019b",
"ref_id": "BIBREF8"
},
{
"start": 439,
"end": 461,
"text": "(Michel et al., 2019a)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our goal is to achieve robustness to source-side variations that are similar to the mistakes that nonnative English speakers make. To do so, we will utilize state-of-the-art pretrained systems and finetune them using pseudo-references over corpora that include real-world noise. The general outline of our approach is straightforward:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning for Robustness",
"sec_num": "3"
},
{
"text": "1. Start with a English-to-X NMT system pretrained on any available data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning for Robustness",
"sec_num": "3"
},
{
"text": "dataset, which provides tuples (x,x) of original and corrected sentences. 3. Translate the corrected sentences obtaining pseudo-references\u1ef9 = NMT(x).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Obtain an English Grammar Error Correction",
"sec_num": "2."
},
{
"text": "4. Fine-tune the NMT system on (x,\u1ef9) pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Obtain an English Grammar Error Correction",
"sec_num": "2."
},
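{
"text": "A minimal sketch of this pipeline, in Python, with hypothetical helper names ('translate' stands for the pre-trained English-to-X system and 'finetune' for standard supervised fine-tuning; neither is part of a specific library):\n\n# Sketch of steps 2-4 (hypothetical helpers, not a specific API).\ndef build_finetuning_data(gec_pairs, translate):\n    # gec_pairs: tuples (x, x_hat) of original and corrected English sentences\n    data = []\n    for x, x_hat in gec_pairs:\n        y_tilde = translate(x_hat)  # step 3: pseudo-reference from the corrected input\n        data.append((x, y_tilde))   # step 4 trains on (noisy source, pseudo-reference)\n    return data\n\n# finetuned = finetune(pretrained_model, build_finetuning_data(gec_corpus, translate))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Obtain an English Grammar Error Correction",
"sec_num": "2."
},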
{
"text": "Notation Throughout this work, we use the notation of Anastasopoulos (2019) to denote different types of data:",
"cite_spans": [
{
"start": 54,
"end": 75,
"text": "Anastasopoulos (2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Obtain an English Grammar Error Correction",
"sec_num": "2."
},
{
"text": "\u2022 x: the original, noisy, potentially ungrammatical English sentence. Its tokens will be denoted as x i . \u2022x: the English sentence with the correction annotations applied to the original sentence x, which is deemed fluent and grammatical. Again, its tokens will be denoted asx i . \u2022\u1ef9: the output of the NMT system whenx is provided as input (tokens:\u1ef9 j ). This will be our pseudo-reference for fine-tuning or evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Obtain an English Grammar Error Correction",
"sec_num": "2."
},
{
"text": "For the sake of readability, we use the terms grammatical errors, noise, or edits interchangeably. In the context of this work, they will all denote the annotated grammatical errors in the source sentences (x).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Obtain an English Grammar Error Correction",
"sec_num": "2."
},
{
"text": "Data There are many publicly available corpora for non-native English that are annotated with corrections, which have been widely used for the Grammar Error Correction tasks (Bryant et al., 2019) . We specifically use NUCLE (Dahlmeier et al., 2013) , FCE (Yannakoudakis et al., 2011) , and Lang-8 (Tajiri et al., 2012) for creating the pseudo-references. For evaluation we use the JF-LEG dataset (Napoles et al., 2017) and its accompanying Spanish translations in the JFLEG-ES dataset .",
"cite_spans": [
{
"start": 174,
"end": 195,
"text": "(Bryant et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 224,
"end": 248,
"text": "(Dahlmeier et al., 2013)",
"ref_id": "BIBREF10"
},
{
"start": 255,
"end": 283,
"text": "(Yannakoudakis et al., 2011)",
"ref_id": "BIBREF46"
},
{
"start": 297,
"end": 318,
"text": "(Tajiri et al., 2012)",
"ref_id": "BIBREF39"
},
{
"start": 396,
"end": 418,
"text": "(Napoles et al., 2017)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Obtain an English Grammar Error Correction",
"sec_num": "2."
},
{
"text": "The NUS Corpus of Learner English (NUCLE) contains essays written by Singaporean students. It is generally considered the main benchmark for GEC. This dataset consists of 21.3K sentences. The First Certificate in English corpus (FCE) is also made of essays, written by learners who were sitting the English as Second or Other Language (ESOL) examinations. We use the publicly available version, which includes 17.6K sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Obtain an English Grammar Error Correction",
"sec_num": "2."
},
{
"text": "Lang-8 is a slightly different dataset than the previous two datasets. This dataset was built from user-provided corrections in an online learner forum. In comparison to the others, this dataset is much larger, consisting of 149.5K sentences. However, this datasets' error domain is very versatile. It does not consist any test and validation set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Obtain an English Grammar Error Correction",
"sec_num": "2."
},
{
"text": "The JHU FLuency-Extended GUG corpus (JFLEG) is a small corpus of only 1.3K sentences, intended only for evaluation. It has an unique character that is different from other datasets, as it contains correction annotations that include extended fluency edits rather than just minimal grammatical ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Obtain an English Grammar Error Correction",
"sec_num": "2."
},
{
"text": "The JFLEG corpus was translated into Spanish by to create the JFLEG-ES corpus, which provides gold-standard Spanish translations for every JFLEG sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Obtain an English Grammar Error Correction",
"sec_num": "2."
},
{
"text": "Evaluation In cases where we have access to human references, we can simply evaluate with reference-based metrics (e.g. BLEU (Papineni et al., 2002) ). Unfortunately, we only have references for the JFLEG-ES dataset in Spanish.",
"cite_spans": [
{
"start": 125,
"end": 148,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Obtain an English Grammar Error Correction",
"sec_num": "2."
},
{
"text": "For all other datasets and languages, we treat the translations of the corrected clean English sources as pseudo-references, and use the metrics from (Anastasopoulos, 2019) : Robustness Score (RB), f-BLEU, f-METEOR, and Noise Ratio (NR).",
"cite_spans": [
{
"start": 150,
"end": 172,
"text": "(Anastasopoulos, 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Obtain an English Grammar Error Correction",
"sec_num": "2."
},
{
"text": "Robustness Score (RB) is defined as the percentage of translations of noisy sentences that are exactly the same as the translation of the respective corrected sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Obtain an English Grammar Error Correction",
"sec_num": "2."
},
{
"text": "f-BLEU and f-METEOR are slight modification of the popular BLEU and METEOR metrics. The only difference is that they use pseudoreferences instead of true human-created ones, and hence are referred to as faux BLEU and faux ME-TEOR. In our case, the pseudo-references are the translations of the corrected sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Obtain an English Grammar Error Correction",
"sec_num": "2."
},
{
"text": "Target-Source Noise Ratio (NR) is the ratio between the target-and source-side BLEU score between noisy and corrected sentences. All other measures do not take into consideration how large are the source-side differences. The intuition behind this metric is that if there is minimal perturbation d(x,x) on the input side then there should be minimal reflection on the target side perturbation d(y,\u1ef9) as well. NR is computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Obtain an English Grammar Error Correction",
"sec_num": "2."
},
{
"text": "NR(x,x, y,\u1ef9) = d(y,\u1ef9) d(x,x) = 100 \u2212 BLEU(y,\u1ef9) 100 \u2212 BLEU(x,x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Obtain an English Grammar Error Correction",
"sec_num": "2."
},
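{
"text": "These metrics can be sketched as follows, assuming the sacrebleu Python package; the exact implementation of (Anastasopoulos, 2019) may differ:\n\nimport sacrebleu\n\ndef robustness_score(noisy_out, clean_out):\n    # RB: % of noisy-input translations identical to the clean-input translation\n    same = sum(a == b for a, b in zip(noisy_out, clean_out))\n    return 100.0 * same / len(noisy_out)\n\ndef f_bleu(noisy_out, clean_out):\n    # faux-BLEU: BLEU of the noisy-input outputs against the pseudo-references\n    return sacrebleu.corpus_bleu(noisy_out, [clean_out]).score\n\ndef noise_ratio(noisy_src, clean_src, noisy_out, clean_out):\n    # NR = (100 - BLEU(y, y_tilde)) / (100 - BLEU(x, x_hat));\n    # NR < 1 means the system reduces the source-side noise in its output\n    tgt = sacrebleu.corpus_bleu(noisy_out, [clean_out]).score\n    src = sacrebleu.corpus_bleu(noisy_src, [clean_src]).score\n    return (100.0 - tgt) / (100.0 - src)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Obtain an English Grammar Error Correction",
"sec_num": "2."
},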
{
"text": "We name our models in a way that is convenient to understand. Our models are named as such: dataset language; e.g. the NUCLE ES model will refer to the model fine-tuned on the NUCLE dataset for Spanish language. We will overload the naming convention to also refer to datasets in the same way, e.g. the NUCLE ES dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
{
"text": "Experimental Details All data are tokenized and true-cased using the Moses tools (Koehn et al., 2007) .We use the SentencePiece (Kudo and Richardson, 2018) toolkit to split the sentences into sub-words. We use the unigram language model algorithm of the toolkit with 65,000 operations. We filter the fine-tuning dataset so that sentence length is capped at 80 words.",
"cite_spans": [
{
"start": 81,
"end": 101,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF23"
},
{
"start": 128,
"end": 155,
"text": "(Kudo and Richardson, 2018)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
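{
"text": "For illustration, the subword preprocessing could look roughly as follows (a sketch: we read the 65,000 operations as the SentencePiece vocabulary size, and the file paths are hypothetical):\n\nimport sentencepiece as spm\n\n# Train a unigram-LM SentencePiece model on Moses-tokenized, true-cased text.\nspm.SentencePieceTrainer.train(\n    input='train.tok.tc.en',   # hypothetical path\n    model_prefix='spm_unigram',\n    vocab_size=65000,\n    model_type='unigram',\n)\n\nsp = spm.SentencePieceProcessor(model_file='spm_unigram.model')\npieces = sp.encode('last month , I needed to buy a digital camera .', out_type=str)\n\n# Length filter: keep fine-tuning pairs whose sentences have at most 80 words.\nkeep = lambda src, tgt: len(src.split()) <= 80 and len(tgt.split()) <= 80",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},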
{
"text": "Target Side Creation Given the recent success and promise of massively multilingual systems (Johnson et al., 2016; Firat et al., 2016) , we use as our original model the OPUS-MT multilingual Romance model 5 (Tiedemann and Thottingal, 2020), trained using Marian NMT (Junczys-Dowmunt et al., 2018) within the HuggingFace's Transformers library (Wolf et al., 2019) . For every dataset we pass source sentences (both original and corrected versions) and obtain target side sentences. Then we use the corrected target side sentences as our ground truth for fine-tuning the same model.",
"cite_spans": [
{
"start": 92,
"end": 114,
"text": "(Johnson et al., 2016;",
"ref_id": "BIBREF19"
},
{
"start": 115,
"end": 134,
"text": "Firat et al., 2016)",
"ref_id": "BIBREF13"
},
{
"start": 343,
"end": 362,
"text": "(Wolf et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
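{
"text": "A minimal sketch of the pseudo-reference generation with this model, assuming the Transformers MarianMT interface (the multilingual OPUS-MT models select the target language with a '>>xx<<' prefix token on the source side):\n\nfrom transformers import MarianMTModel, MarianTokenizer\n\nname = 'Helsinki-NLP/opus-mt-en-ROMANCE'\ntokenizer = MarianTokenizer.from_pretrained(name)\nmodel = MarianMTModel.from_pretrained(name)\n\ndef pseudo_references(corrected_sentences, lang='es'):\n    # Prefix each corrected English source with the target-language token,\n    # then translate; the outputs become the fine-tuning targets.\n    batch = ['>>%s<< %s' % (lang, s) for s in corrected_sentences]\n    inputs = tokenizer(batch, return_tensors='pt', padding=True, truncation=True)\n    outputs = model.generate(**inputs)\n    return tokenizer.batch_decode(outputs, skip_special_tokens=True)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},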
{
"text": "We use a transformer architecture as they have shown to be much superior to recurrent architectures. We use Hug-ginFace's Transformers' BartForConditionalGeneration as our model and tokenizer. This model uses 12 layers, 16 attention heads, the embedding dimension is 1024, and positional feed-forward dimension is 4096. Dropout is set to 0.1. We use the same learning rate schedule as in (Vaswani et al., 2017) with 500 warm-up steps but only decay the learning rate until it reaches 3 * 10 \u22125 . We fine-tune our models on a V100 GPU for a maximum of 100 epochs (although best validation set performance is reached around 20 to 25 epochs). For testing we use the model with the best performance on the validation dataset. Our validation check interval is set to 0.2.",
"cite_spans": [
{
"start": 388,
"end": 410,
"text": "(Vaswani et al., 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer Model Details",
"sec_num": null
},
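{
"text": "The learning-rate schedule described above can be sketched as follows (d_model = 1024 as in the model description; the 3 × 10⁻⁵ floor is our reading of the clipped decay):\n\ndef learning_rate(step, d_model=1024, warmup=500, floor=3e-5):\n    # Inverse-square-root schedule of Vaswani et al. (2017) with 500 warm-up\n    # steps; the decay is clipped so the rate never drops below 3e-5.\n    step = max(step, 1)\n    lr = d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)\n    return max(lr, floor)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer Model Details",
"sec_num": null
},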
{
"text": "We use METEOR (Denkowski and Lavie, 2014) to calculate the f-METEOR scores. We calculate BLEU and f-BLEU scores using Sacrebleu (Post, 2018) . We compute statistical significance with paired bootstrap resampling (Koehn, 2004) .",
"cite_spans": [
{
"start": 14,
"end": 41,
"text": "(Denkowski and Lavie, 2014)",
"ref_id": "BIBREF11"
},
{
"start": 128,
"end": 140,
"text": "(Post, 2018)",
"ref_id": "BIBREF35"
},
{
"start": 212,
"end": 225,
"text": "(Koehn, 2004)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": null
},
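{
"text": "A minimal sketch of the paired bootstrap test, again assuming the sacrebleu package (the sample count and the 0.95 threshold are conventional choices, not taken from the paper):\n\nimport random\nimport sacrebleu\n\ndef paired_bootstrap(sys_a, sys_b, refs, n_samples=1000, seed=0):\n    # Paired bootstrap resampling (Koehn, 2004): resample the test set with\n    # replacement and count how often system A beats system B on BLEU.\n    rng = random.Random(seed)\n    n, wins = len(refs), 0\n    for _ in range(n_samples):\n        ids = [rng.randrange(n) for _ in range(n)]\n        a = sacrebleu.corpus_bleu([sys_a[i] for i in ids], [[refs[i] for i in ids]]).score\n        b = sacrebleu.corpus_bleu([sys_b[i] for i in ids], [[refs[i] for i in ids]]).score\n        wins += a > b\n    return wins / n_samples  # system A is deemed significantly better if > 0.95",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": null
},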
{
"text": "Results on English-Spanish We first discuss the results on the JFLEG-ES test set, which is the only dataset with human gold references. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": null
},
{
"text": "Source (original) it has some problems that it can effect to humens. OPUS-MT output tiene algunos problemas que puede afectar a los humens. 47 Finetuned output tiene algunos problemas que pueden afectar a los humanos. 89 Reference esto tiene algunos problemas que pueden afectar a los humanos.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Sentence BLEU",
"sec_num": null
},
{
"text": "Source (original) last month, I needed to buy digtal-camera. OPUS-MT output el mes pasado, necesitaba comprar digtal-camera.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Sentence BLEU",
"sec_num": null
},
{
"text": "26 Finetuned output el mes pasado, necesitaba comprar una c\u00e1mara digital. 66 Reference el mes pasado necesitaba comprar una c\u00e1mara digital. Table 2 : Examples (cherry-picked) of sentences with high BLEU improvement after fine-tuning (English-Spanish on the the JFLEG-ES dataset). summarized in Table 1 . The \"Clean\" column refers to an average BLEU score over the four versions of source-side corrections provided by the JFLEG dataset, the \"Noisy\" column reports results with the original sentences as input, and the last column presents the difference (\u2206) between the two.",
"cite_spans": [],
"ref_spans": [
{
"start": 140,
"end": 147,
"text": "Table 2",
"ref_id": null
},
{
"start": 294,
"end": 301,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "System Sentence BLEU",
"sec_num": null
},
{
"text": "The first thing to note is that the multilingual OPUS-MT model outperforms the previously published results of Anastasopoulos et al. (2019) by more than 2 BLEU points on both clean and noisy settings. This is unsurprising, if one considers that the OPUS-MT model has been trained on an order of magnitude more English-Spanish data (about 25x), and it has in addition been trained on other related Romance languages. However, we should also note that the difference of the two models is imbalanced: the improvement from all these additional data is +2.3 BLEU points when evaluated on clean data, but only +1.2 BLEU points when evaluated on the noisy pairs. This outlines the importance of evaluating MT systems not only on clean data but also on other language variants. Although imbalanced these improvements are significant and hence we treat our multilingual OPUS-MT model as the baseline in all following discussions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Sentence BLEU",
"sec_num": null
},
{
"text": "Fine-tuning on individual datasets yields inconsistent results, with the BLEU score changing from -2.5 to +0.16. The highest drop is observed when fine-tuning on FCE; this is reasonable as JFLEG and FCE include errors on quite different domains (Napoles et al., 2017) . This ablation allows us to identify Lang-8 as perhaps the most appropriate single dataset for this kind of tasks, most likely due to its size and diversity of errors and domains.",
"cite_spans": [
{
"start": 245,
"end": 267,
"text": "(Napoles et al., 2017)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Sentence BLEU",
"sec_num": null
},
{
"text": "Using all available datasets, however, is significantly better. We find that our model performs particularly well when fine-tuned on pseudoreferences from all corpora (the \"all noisy in Spanish\" model (sixth row) that is a concatenation of all the datasets in Spanish). We observe a 3.1 BLEU improvement for noisy data, while suffering a small decrease (0.8 BLEU points) on clean data. The dif- Adaptation significantly increases the Robustness percentage as well as f-BLEU. *statistically significantly better than the \"original\" baseline, with p < 0.05.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Sentence BLEU",
"sec_num": null
},
{
"text": "ferent datasets cover different types of errors and domains, and as a result the fine-tuning process does not get biased by a single type of domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Sentence BLEU",
"sec_num": null
},
{
"text": "The second-to-last row (\"all noisy in all four languages\") reports results when pseudo-references in all four experimental MT directions (EN to ES,FR,IT,PT) are used in the fine-tuning process of our multilingual model. In this case, we still observe 1 BLEU improvement over the noisy data, compared to the baseline, but the performance on clean data is further degraded.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Sentence BLEU",
"sec_num": null
},
{
"text": "Last, it was crucial to examine whether the improvements we obtained are due to our fine-tuned models becoming indeed more robust to errors, as opposed to adapting to the domain and other characteristics of the datasets we train and evaluate on. In the last row (\"all clean in Spanish\") we present the results following fine-tuning the models with the corrected sentences as inputs. 6 We confirm that indeed our model improves slightly on the clean data, but its performance does not improve on the noisy inputs. Hence, we can conclude that the effect of domain adaptation is minimal, and our fine-tuned model has indeed learn to deal with non-standard inputs. Table 2 displays a couple of sentences where our fine-tuned system produces more fluent outputs that then pre-trained system, properly han-dling the source-side noise. The mistakes in the English source sentence and the MT outputs are highlighted with red italics. In the first example (top) our system can handle the typo \"digtal\" and correctly translate it as \"digital.\" In the second example (bottom) our system, in addition to handling the typo \"humens\", it also correctly inflects the verb \"pueden\" (third person, plural) to agree with its subject, while the pretrained model produces an ungrammatical Spanish output (the verb \"puede\" is in third person singular and does not agree with its subject).",
"cite_spans": [
{
"start": 383,
"end": 384,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 661,
"end": 668,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "System Sentence BLEU",
"sec_num": null
},
{
"text": "Results on other ROMANCE language In this section we report results obtained with the model fine-tuned on pseudo-references from all datasets for each of the four languages, as they were consistently better than any single-dataset fine-tuning approach. Table 3 presents the scores with all four evaluation metrics on all four En-to-X translation directions. For each language, we compare three models: the pre-trained one, one fine-tuned on pseudo-references for the respective language only, and one fine-tuned on all four languages simultaneously.",
"cite_spans": [],
"ref_spans": [
{
"start": 253,
"end": 260,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "System Sentence BLEU",
"sec_num": null
},
{
"text": "As we don't have human references for the other languages except Spanish, we use the Robustness Score, faux-BLEU, faux-METEOR, and Target-Source Noise Ratio metrics. As showcased by Table 3 , in every language our approach yields a minimum of 7 f-BLEU points improvement over the original system when trained on that single language. In Italian and French, the improvement is particularly significant of at least 25 f-BLEU points. We have also listed the f-BLEU for clean test set which shows that the f-BLEU score decreased giving us assurance that the model is learning to be robust on noisy data. Also 75-80 BLEU score is still very significant.",
"cite_spans": [],
"ref_spans": [
{
"start": 182,
"end": 189,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "System Sentence BLEU",
"sec_num": null
},
{
"text": "The f-METEOR scores indicate similar trends. It is worth noting, though, that the differences between the original and the fine-tuned systems are less pronounced. We attribute this difference to the fact that most output differences are generally small local changes (e.g. on the inflection of a verb or a noun), which METEOR's paraphrase matching considers to be quite similar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Sentence BLEU",
"sec_num": null
},
{
"text": "The Robustness Scores (RB) are also revealing: when the original system only returned the same output for the potentially noisy original sentence and the corrected one about 10% of the time, after fine-tuning all systems return the same outputs more than 24% of the time, reaching a RB score of more than 43% for French.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Sentence BLEU",
"sec_num": null
},
{
"text": "The Noise Ratio (NR) allows us to inspect if we actually manage to create a system that reduces the noise or not. An NR of less than 1 means that indeed our system reduces the source-side noise in its output, while a NR higher than one implies that the system amplifies the source-side differences (the lower NR the better). The pre-trained system consistently produces an NR of around 1, meaning that even though it does not amplify noise, it also does not reduce it. In comparison, our adapted models manage to reduce the source-side noise, with scores significantly lower than 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Sentence BLEU",
"sec_num": null
},
{
"text": "Can we achieve zero-shot robustness? An intriguing question that arose during our experiments, was whether one could fine-tune a multilingual system for robustness on only one language (e.g. Spanish) and consequently make the system more robust not only in that language but also in the other languages supported by the system. This avenue would significantly increase the value of not only our approach but also of the original multilingual systems: perhaps the community might eventually have access to large collections of true reference translations of non-native English, which would allow us to train systems robust to such sourceside variations. Such datasets are unlikely to be available in multiple languages, though, hence the need for a way to improve a multilingual system's robustness using single-language data. We attempt a first step towards this direction, by evaluating on English-Spanish the systems that we fine-tuned solely on English-Italian, English-French, and English-Portuguese. Unfortunately, as outlined in Table 4 , this simple approach does not work out-of-the-box. Fine-tuning on a single language pair leads to catastrophic forgetting (French, 1999) of the multilingual abilities of the system. This is a phenomenon commonly observed in continued learning or fine-tuning scenarios (Goodfellow et al., 2013) as well as on MT domain adaptation scenarios in particular (Freitag and Al-Onaizan, 2016) , for the mitigation of which several approaches have been proposed (Lopez-Paz and Ranzato, 2017; Thompson et al., 2019; Michel et al., 2019b, et alia) . As this research direction is beyond the scope of this paper, we leave the application of such approaches for future work.",
"cite_spans": [
{
"start": 1167,
"end": 1181,
"text": "(French, 1999)",
"ref_id": "BIBREF15"
},
{
"start": 1398,
"end": 1428,
"text": "(Freitag and Al-Onaizan, 2016)",
"ref_id": "BIBREF14"
},
{
"start": 1497,
"end": 1526,
"text": "(Lopez-Paz and Ranzato, 2017;",
"ref_id": "BIBREF28"
},
{
"start": 1527,
"end": 1549,
"text": "Thompson et al., 2019;",
"ref_id": "BIBREF41"
},
{
"start": 1550,
"end": 1580,
"text": "Michel et al., 2019b, et alia)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1035,
"end": 1042,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "System Sentence BLEU",
"sec_num": null
},
{
"text": "In this work, we studied the effect of fine-tuning a NMT model using real source-side noise paired with pseudo-references obtained by translating Grammar Error Correction corpora. We confirmed previous works on the utility of training with source-side noise, as it leads to models more robust to non-native English inputs, but also showed that instead of using synthetically-induced noise, we can (a) use real-user data with pseudo-references and (b) fine-tune a pre-trained system, rather than training from scratch. We will release all pseudoreferences and our code upon acceptance. Our approach of fine-tuning a pre-trained system with pseudo-references approach has particular appealing advantages (less training time, no need for costly translation references) and it improves the robustness of MT systems significantly on all language pairs we tested.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "For future work, we will explore ways to integrate strategies for avoiding catastrophic forgetting, in order to achieve multilingual robustness without needing to fine-tune a multilingual model on all interested languages, as well as incorporating robustness rewards through reinforcement learning in the fine-tuning process. In addition, we will investigate how the quality of the pseudo-references affects the downstream results, and we also plan to explore the trade-off between language-specific and multilingual fine-tuning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "All datasets and code are publicly available here:https://github.com/mahfuzibnalam/ finetuning_for_robustness.2 This is not to say that non-neural statistical approaches did not suffer from the same drawbacks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "That is also not to say that robustness is not necessary for low-resource languages; to the contrary! We just focus on high-resource settings first as they are the ones that have the potential to affect a larger number of downstream users.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "\"tbh\" stands for \"to be honest\" and \"smh\" for \"shake my head\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "name: Helsinki-NLP/opus-mt-en-ROMANCE",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This would amount to a straightforward case of selftraining, since it the target outputs were produced by the model itself prior to fine-tuning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors want to thank the reviewers for their insightful comments. The first author was funded by a George Mason Computer Science Department's 2020 PhD Research Initiation Award.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Massively multilingual neural machine translation",
"authors": [
{
"first": "Roee",
"middle": [],
"last": "Aharoni",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "3874--3884",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1388"
]
},
"num": null,
"urls": [],
"raw_text": "Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Compu- tational Linguistics: Human Language Technolo- gies, Volume 1 (Long and Short Papers), pages 3874-3884, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "An analysis of sourceside grammatical errors in NMT",
"authors": [
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "213--223",
"other_ids": {
"DOI": [
"10.18653/v1/W19-4822"
]
},
"num": null,
"urls": [],
"raw_text": "Antonios Anastasopoulos. 2019. An analysis of source- side grammatical errors in NMT. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 213-223, Florence, Italy. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Neural machine translation of text from non-native speakers",
"authors": [
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "Alison",
"middle": [],
"last": "Lui",
"suffix": ""
},
{
"first": "Toan",
"middle": [
"Q"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "3070--3080",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1311"
]
},
"num": null,
"urls": [],
"raw_text": "Antonios Anastasopoulos, Alison Lui, Toan Q. Nguyen, and David Chiang. 2019. Neural machine translation of text from non-native speakers. In Pro- ceedings of the 2019 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol- ume 1 (Long and Short Papers), pages 3070-3080, Minneapolis, Minnesota. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. Cite arxiv:1409.0473Comment: Accepted at ICLR 2015 as oral presentation.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Synthetic and natural noise both break neural machine translation",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Bisk",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine transla- tion. In Proc. ICLR.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Naver labs Europe's systems for the WMT19 machine translation robustness task",
"authors": [
{
"first": "Alexandre",
"middle": [],
"last": "Berard",
"suffix": ""
},
{
"first": "Ioan",
"middle": [],
"last": "Calapodescu",
"suffix": ""
},
{
"first": "Claude",
"middle": [],
"last": "Roux",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "526--532",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5361"
]
},
"num": null,
"urls": [],
"raw_text": "Alexandre Berard, Ioan Calapodescu, and Claude Roux. 2019. Naver labs Europe's systems for the WMT19 machine translation robustness task. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 526-532, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The BEA-2019 shared task on grammatical error correction",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Bryant",
"suffix": ""
},
{
"first": "Mariano",
"middle": [],
"last": "Felice",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "\u00d8istein",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "52--75",
"other_ids": {
"DOI": [
"10.18653/v1/W19-4406"
]
},
"num": null,
"urls": [],
"raw_text": "Christopher Bryant, Mariano Felice, \u00d8istein E. An- dersen, and Ted Briscoe. 2019. The BEA-2019 shared task on grammatical error correction. In Pro- ceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 52-75, Florence, Italy. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Breaking the data barrier: Towards robust speech translation via adversarial stability training",
"authors": [
{
"first": "Qiao",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Meiyuan",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Yaqian",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Jin",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Yitao",
"middle": [],
"last": "Duan",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qiao Cheng, Meiyuan Fang, Yaqian Han, Jin Huang, and Yitao Duan. 2019a. Breaking the data barrier: Towards robust speech translation via adversarial sta- bility training.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Robust neural machine translation with doubly adversarial inputs",
"authors": [
{
"first": "Yong",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yong Cheng, Lu Jiang, and Wolfgang Macherey. 2019b. Robust neural machine translation with doubly ad- versarial inputs. CoRR, abs/1906.02443.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Towards robust neural machine translation",
"authors": [
{
"first": "Yong",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Zhaopeng",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Fandong",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1756--1766",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1163"
]
},
"num": null,
"urls": [],
"raw_text": "Yong Cheng, Zhaopeng Tu, Fandong Meng, Junjie Zhai, and Yang Liu. 2018. Towards robust neural machine translation. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1756- 1766, Melbourne, Australia. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Building a large annotated corpus of learner English: The NUS corpus of learner English",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Dahlmeier",
"suffix": ""
},
{
"first": "Siew Mei",
"middle": [],
"last": "Hwee Tou Ng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "22--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Dahlmeier, Hwee Tou Ng, and Siew Mei Wu. 2013. Building a large annotated corpus of learner English: The NUS corpus of learner English. In Proceedings of the Eighth Workshop on Innova- tive Use of NLP for Building Educational Applica- tions, pages 22-31, Atlanta, Georgia. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Meteor universal: Language specific translation evaluation for any target language",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Denkowski",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "376--380",
"other_ids": {
"DOI": [
"10.3115/v1/W14-3348"
]
},
"num": null,
"urls": [],
"raw_text": "Michael Denkowski and Alon Lavie. 2014. Meteor uni- versal: Language specific translation evaluation for any target language. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 376-380, Baltimore, Maryland, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "On adversarial examples for character-level neural machine translation",
"authors": [
{
"first": "Javid",
"middle": [],
"last": "Ebrahimi",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Lowd",
"suffix": ""
},
{
"first": "Dejing",
"middle": [],
"last": "Dou",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Javid Ebrahimi, Daniel Lowd, and Dejing Dou. 2018. On adversarial examples for character-level neural machine translation. CoRR, abs/1806.09030.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Multi-way, multilingual neural machine translation with a shared attention mechanism",
"authors": [
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "866--875",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1101"
]
},
"num": null,
"urls": [],
"raw_text": "Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016. Multi-way, multilingual neural machine trans- lation with a shared attention mechanism. In Pro- ceedings of the 2016 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 866-875, San Diego, California. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Fast domain adaptation for neural machine translation",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Freitag",
"suffix": ""
},
{
"first": "Yaser",
"middle": [],
"last": "Al-Onaizan",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1612.06897"
]
},
"num": null,
"urls": [],
"raw_text": "Markus Freitag and Yaser Al-Onaizan. 2016. Fast domain adaptation for neural machine translation. arXiv:1612.06897.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Catastrophic forgetting in connectionist networks",
"authors": [
{
"first": "M",
"middle": [],
"last": "Robert",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "French",
"suffix": ""
}
],
"year": 1999,
"venue": "Trends in cognitive sciences",
"volume": "3",
"issue": "4",
"pages": "128--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert M French. 1999. Catastrophic forgetting in con- nectionist networks. Trends in cognitive sciences, 3(4):128-135.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "An empirical investigation of catastrophic forgetting in gradientbased neural networks",
"authors": [
{
"first": "J",
"middle": [],
"last": "Ian",
"suffix": ""
},
{
"first": "Mehdi",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Da",
"middle": [],
"last": "Mirza",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian J Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville, and Yoshua Bengio. 2013. An empirical investigation of catastrophic forgetting in gradient- based neural networks.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "How robust are characterbased word embeddings in tagging and MT against wrod scramlbing or randdm nouse?",
"authors": [
{
"first": "Georg",
"middle": [],
"last": "Heigold",
"suffix": ""
},
{
"first": "Stalin",
"middle": [],
"last": "Varanasi",
"suffix": ""
},
{
"first": "G\u00fcnter",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Van Genabith",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 13th Conference of the Association for Machine Translation in the Americas",
"volume": "1",
"issue": "",
"pages": "68--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Georg Heigold, Stalin Varanasi, G\u00fcnter Neumann, and Josef van Genabith. 2018. How robust are character- based word embeddings in tagging and MT against wrod scramlbing or randdm nouse? In Proceedings of the 13th Conference of the Association for Ma- chine Translation in the Americas (Volume 1: Re- search Papers), pages 68-80, Boston, MA. Associa- tion for Machine Translation in the Americas.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "CUNI system for the WMT19 robustness task",
"authors": [
{
"first": "Jind\u0159ich",
"middle": [],
"last": "Helcl",
"suffix": ""
},
{
"first": "Jind\u0159ich",
"middle": [],
"last": "Libovick\u00fd",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Popel",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "539--543",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5364"
]
},
"num": null,
"urls": [],
"raw_text": "Jind\u0159ich Helcl, Jind\u0159ich Libovick\u00fd, and Martin Popel. 2019. CUNI system for the WMT19 robustness task. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 539-543, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Google's multilingual neural machine translation system: Enabling zero-shot translation",
"authors": [
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Thorat",
"suffix": ""
},
{
"first": "Fernanda",
"middle": [
"B"
],
"last": "Vi\u00e9gas",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Wattenberg",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Macduff",
"middle": [],
"last": "Hughes",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Tho- rat, Fernanda B. Vi\u00e9gas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's multilingual neural machine translation system: Enabling zero-shot translation. CoRR, abs/1611.04558.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Marian: Fast neural machine translation in c++",
"authors": [
{
"first": "Marcin",
"middle": [],
"last": "Junczys-Dowmunt",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Grundkiewicz",
"suffix": ""
},
{
"first": "Tomasz",
"middle": [],
"last": "Dwojak",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Neckermann",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Seide",
"suffix": ""
},
{
"first": "Ulrich",
"middle": [],
"last": "Germann",
"suffix": ""
},
{
"first": "Alham",
"middle": [],
"last": "Fikri Aji",
"suffix": ""
},
{
"first": "Nikolay",
"middle": [],
"last": "Bogoychev",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.00344"
]
},
"num": null,
"urls": [],
"raw_text": "Marcin Junczys-Dowmunt, Roman Grundkiewicz, Tomasz Dwojak, Hieu Hoang, Kenneth Heafield, Tom Neckermann, Frank Seide, Ulrich Germann, Alham Fikri Aji, Nikolay Bogoychev, et al. 2018. Marian: Fast neural machine translation in c++. arXiv:1804.00344.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Training on synthetic noise improves robustness to natural noise in machine translation",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Karpukhin",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vladimir Karpukhin, Omer Levy, Jacob Eisenstein, and Marjan Ghazvininejad. 2019. Training on synthetic noise improves robustness to natural noise in ma- chine translation. CoRR, abs/1902.01509.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Statistical significance tests for machine translation evaluation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "388--395",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceed- ings of the 2004 Conference on Empirical Meth- ods in Natural Language Processing, pages 388- 395, Barcelona, Spain. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Moses: Open source toolkit for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Constantin",
"suffix": ""
},
{
"first": "Evan",
"middle": [],
"last": "Herbst",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions",
"volume": "",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond\u0159ej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the As- sociation for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Ses- sions, pages 177-180, Prague, Czech Republic. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Six challenges for neural machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Knowles",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Workshop on Neural Machine Translation",
"volume": "",
"issue": "",
"pages": "28--39",
"other_ids": {
"DOI": [
"10.18653/v1/W17-3204"
]
},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn and Rebecca Knowles. 2017. Six chal- lenges for neural machine translation. In Proceed- ings of the First Workshop on Neural Machine Trans- lation, pages 28-39, Vancouver. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "66--71",
"other_ids": {
"DOI": [
"10.18653/v1/D18-2012"
]
},
"num": null,
"urls": [],
"raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Findings of the first shared task on machine translation robustness",
"authors": [
{
"first": "Xian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Pino",
"suffix": ""
},
{
"first": "Hassan",
"middle": [],
"last": "Sajjad",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "91--102",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5303"
]
},
"num": null,
"urls": [],
"raw_text": "Xian Li, Paul Michel, Antonios Anastasopoulos, Yonatan Belinkov, Nadir Durrani, Orhan Firat, Philipp Koehn, Graham Neubig, Juan Pino, and Has- san Sajjad. 2019. Findings of the first shared task on machine translation robustness. In Proceedings of the Fourth Conference on Machine Translation (Vol- ume 2: Shared Task Papers, Day 1), pages 91-102, Florence, Italy. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Robust neural machine translation with joint textual and phonetic embedding",
"authors": [
{
"first": "Hairong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Mingbo",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Zhongjun",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hairong Liu, Mingbo Ma, Liang Huang, Hao Xiong, and Zhongjun He. 2018. Robust neural machine translation with joint textual and phonetic embed- ding. CoRR, abs/1810.06729.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Gradient episodic memory for continual learning",
"authors": [
{
"first": "David",
"middle": [],
"last": "Lopez-Paz",
"suffix": ""
},
{
"first": "Marc'Aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "6467--6476",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Lopez-Paz and Marc'Aurelio Ranzato. 2017. Gradient episodic memory for continual learning. In Advances in neural information processing systems, pages 6467-6476.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Effective approaches to attention-based neural machine translation",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Hieu Pham, and Christo- pher D. Manning. 2015. Effective approaches to attention-based neural machine translation. CoRR, abs/1508.04025.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "On evaluation of adversarial perturbations for sequence-to-sequence models",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "Xian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Pino",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "3103--3114",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1314"
]
},
"num": null,
"urls": [],
"raw_text": "Paul Michel, Xian Li, Graham Neubig, and Juan Pino. 2019a. On evaluation of adversarial perturbations for sequence-to-sequence models. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3103-3114, Minneapolis, Minnesota. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "MTNT: A testbed for machine translation of noisy text",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Michel and Graham Neubig. 2018. MTNT: A testbed for machine translation of noisy text. CoRR, abs/1809.00388.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Regularizing trajectories to mitigate catastrophic forgetting",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "Elisabeth",
"middle": [],
"last": "Salesky",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Michel, Elisabeth Salesky, and Graham Neubig. 2019b. Regularizing trajectories to mitigate catas- trophic forgetting. Preprint.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "JFLEG: A fluency corpus and benchmark for grammatical error correction",
"authors": [
{
"first": "Courtney",
"middle": [],
"last": "Napoles",
"suffix": ""
},
{
"first": "Keisuke",
"middle": [],
"last": "Sakaguchi",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Tetreault",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "229--234",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Courtney Napoles, Keisuke Sakaguchi, and Joel Tetreault. 2017. JFLEG: A fluency corpus and benchmark for grammatical error correction. In Pro- ceedings of the 15th Conference of the European Chapter of the Association for Computational Lin- guistics: Volume 2, Short Papers, pages 229-234, Valencia, Spain. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {
"DOI": [
"10.3115/1073083.1073135"
]
},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "A call for clarity in reporting BLEU scores",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
"volume": "",
"issue": "",
"pages": "186--191",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6319"
]
},
"num": null,
"urls": [],
"raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186- 191, Brussels, Belgium. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "JHU 2019 robustness task system description",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "552--558",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5366"
]
},
"num": null,
"urls": [],
"raw_text": "Matt Post and Kevin Duh. 2019. JHU 2019 robust- ness task system description. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 552-558, Flo- rence, Italy. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Improving neural machine translation models with monolingual data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "86--96",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1009"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation mod- els with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Toward robust neural machine translation for noisy input sequences",
"authors": [
{
"first": "Matthias",
"middle": [],
"last": "Sperber",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Niehues",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. IWSLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthias Sperber, Jan Niehues, and Alex Waibel. 2017. Toward robust neural machine translation for noisy input sequences. In Proc. IWSLT.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Tense and aspect error correction for ESL learners using global context",
"authors": [
{
"first": "Toshikazu",
"middle": [],
"last": "Tajiri",
"suffix": ""
},
{
"first": "Mamoru",
"middle": [],
"last": "Komachi",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "198--202",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toshikazu Tajiri, Mamoru Komachi, and Yuji Mat- sumoto. 2012. Tense and aspect error correction for ESL learners using global context. In Proceed- ings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), pages 198-202, Jeju Island, Korea. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "It's morphin' time! Combating linguistic discrimination with inflectional perturbations",
"authors": [
{
"first": "Samson",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2920--2935",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.263"
]
},
"num": null,
"urls": [],
"raw_text": "Samson Tan, Shafiq Joty, Min-Yen Kan, and Richard Socher. 2020. It's morphin' time! Combating linguistic discrimination with inflectional perturba- tions. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 2920-2935, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Overcoming catastrophic forgetting during domain adaptation of neural machine translation",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Thompson",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Gwinnup",
"suffix": ""
},
{
"first": "Huda",
"middle": [],
"last": "Khayrallah",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2062--2068",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1209"
]
},
"num": null,
"urls": [],
"raw_text": "Brian Thompson, Jeremy Gwinnup, Huda Khayrallah, Kevin Duh, and Philipp Koehn. 2019. Overcoming catastrophic forgetting during domain adaptation of neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long and Short Papers), pages 2062-2068, Minneapolis, Min- nesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "OPUS-MT -Building open translation services for the World",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
},
{
"first": "Santhosh",
"middle": [],
"last": "Thottingal",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 22nd Annual Conferenec of the European Association for Machine Translation (EAMT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Tiedemann and Santhosh Thottingal. 2020. OPUS-MT -Building open translation services for the World. In Proceedings of the 22nd Annual Con- ferenec of the European Association for Machine Translation (EAMT), Lisbon, Portugal.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Tensor2Tensor for neural machine translation",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Brevdo",
"suffix": ""
},
{
"first": "Francois",
"middle": [],
"last": "Chollet",
"suffix": ""
},
{
"first": "Aidan",
"middle": [],
"last": "Gomez",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Nal",
"middle": [],
"last": "Kalchbrenner",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Sepassi",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 13th Conference of the Association for Machine Translation in the Americas",
"volume": "1",
"issue": "",
"pages": "193--199",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Samy Bengio, Eugene Brevdo, Fran- cois Chollet, Aidan Gomez, Stephan Gouws, Llion Jones, \u0141ukasz Kaiser, Nal Kalchbrenner, Niki Par- mar, Ryan Sepassi, Noam Shazeer, and Jakob Uszko- reit. 2018. Tensor2Tensor for neural machine trans- lation. In Proceedings of the 13th Conference of the Association for Machine Translation in the Ameri- cas (Volume 1: Research Papers), pages 193-199, Boston, MA. Association for Machine Translation in the Americas.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.03771"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R'emi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingface's trans- formers: State-of-the-art natural language process- ing. arXiv:1910.03771.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "A new dataset and method for automatically grading ESOL texts",
"authors": [
{
"first": "Helen",
"middle": [],
"last": "Yannakoudakis",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Medlock",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "180--189",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Helen Yannakoudakis, Ted Briscoe, and Ben Medlock. 2011. A new dataset and method for automatically grading ESOL texts. In Proceedings of the 49th An- nual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 180-189, Portland, Oregon, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Improving massively multilingual neural machine translation and zero-shot translation",
"authors": [
{
"first": "Biao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1628--1639",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.148"
]
},
"num": null,
"urls": [],
"raw_text": "Biao Zhang, Philip Williams, Ivan Titov, and Rico Sen- nrich. 2020. Improving massively multilingual neu- ral machine translation and zero-shot translation. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 1628- 1639, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Robust machine translation with domain sensitive pseudo-sources: Baidu-OSU WMT19 MT robustness shared task system report",
"authors": [
{
"first": "Renjie",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Hairong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Mingbo",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Baigong",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "559--564",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5367"
]
},
"num": null,
"urls": [],
"raw_text": "Renjie Zheng, Hairong Liu, Mingbo Ma, Baigong Zheng, and Liang Huang. 2019. Robust machine translation with domain sensitive pseudo-sources: Baidu-OSU WMT19 MT robustness shared task sys- tem report. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Pa- pers, Day 1), pages 559-564, Florence, Italy. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Improving robustness of neural machine translation with multi-task learning",
"authors": [
{
"first": "Shuyan",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Xiangkai",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Yingqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "565--571",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5368"
]
},
"num": null,
"urls": [],
"raw_text": "Shuyan Zhou, Xiangkai Zeng, Yingqi Zhou, Antonios Anastasopoulos, and Graham Neubig. 2019. Im- proving robustness of neural machine translation with multi-task learning. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 565-571, Flo- rence, Italy. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Counterfactual data augmentation for mitigating gender stereotypes in languages with rich morphology",
"authors": [
{
"first": "Ran",
"middle": [],
"last": "Zmigrod",
"suffix": ""
},
{
"first": "Sabrina",
"middle": [
"J"
],
"last": "Mielke",
"suffix": ""
},
{
"first": "Hanna",
"middle": [],
"last": "Wallach",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1651--1661",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1161"
]
},
"num": null,
"urls": [],
"raw_text": "Ran Zmigrod, Sabrina J. Mielke, Hanna Wallach, and Ryan Cotterell. 2019. Counterfactual data augmen- tation for mitigating gender stereotypes in languages with rich morphology. In Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics, pages 1651-1661, Florence, Italy. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF2": {
"html": null,
"num": null,
"text": "Translation quality (BLEU scores) on the JFLEG ES data-set. \u2020: average over 4 corrected sentences as input to the translation model. *statistically significantly better than the baseline, with p < 0.05.",
"type_str": "table",
"content": "<table/>"
},
"TABREF4": {
"html": null,
"num": null,
"text": "",
"type_str": "table",
"content": "<table/>"
},
"TABREF6": {
"html": null,
"num": null,
"text": "Simple finetuning on only a single language leads to catastrophic forgetting of the other languages, as the low translation quality (BLEU scores) on the JF-LEG ES data-set show.",
"type_str": "table",
"content": "<table/>"
}
}
}
}