{
"paper_id": "K19-1028",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:05:10.563370Z"
},
"title": "Automatically Extracting Challenge Sets for Non-local Phenomena in Neural Machine Translation",
"authors": [
{
"first": "Leshem",
"middle": [],
"last": "Choshen",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We show that the state-of-the-art Transformer MT model is not biased towards monotonic reordering (unlike previous recurrent neural network models), but that nevertheless, long-distance dependencies remain a challenge for the model. Since most dependencies are short-distance, common evaluation metrics will be little influenced by how well systems perform on them. We therefore propose an automatic approach for extracting challenge sets replete with long-distance dependencies, and argue that evaluation using this methodology provides a complementary perspective on system performance. To support our claim, we compile challenge sets for English-German and German-English, which are much larger than any previously released challenge set for MT. The extracted sets are large enough to allow reliable automatic evaluation, which makes the proposed approach a scalable and practical solution for evaluating MT performance on the long-tail of syntactic phenomena. 1",
"pdf_parse": {
"paper_id": "K19-1028",
"_pdf_hash": "",
"abstract": [
{
"text": "We show that the state-of-the-art Transformer MT model is not biased towards monotonic reordering (unlike previous recurrent neural network models), but that nevertheless, long-distance dependencies remain a challenge for the model. Since most dependencies are short-distance, common evaluation metrics will be little influenced by how well systems perform on them. We therefore propose an automatic approach for extracting challenge sets replete with long-distance dependencies, and argue that evaluation using this methodology provides a complementary perspective on system performance. To support our claim, we compile challenge sets for English-German and German-English, which are much larger than any previously released challenge set for MT. The extracted sets are large enough to allow reliable automatic evaluation, which makes the proposed approach a scalable and practical solution for evaluating MT performance on the long-tail of syntactic phenomena. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The assumption that proximate source words are more likely to correspond to proximate target words has often been introduced as a bias (henceforth, locality bias) into statistical MT systems (Brown et al., 1993; Koehn et al., 2003; Chiang, 2005) . While reordering phenomena, abundant for some language pairs, violate this simplifying assumption, it has often proved to be a useful inductive bias in practice, especially when complemented with targeted techniques for addressing non-monotonic translation (e.g., Och, 2002; Chiang, 2005) . For example, if an adjective precedes a noun in one language and modifies it syntactically, it is likely that their corresponding words will appear close to each other in the translation; i.e., they may not be immediately adjacent or even in the same order in the translation, but it is unlikely that they will be arbitrarily distant from one another.",
"cite_spans": [
{
"start": 191,
"end": 211,
"text": "(Brown et al., 1993;",
"ref_id": "BIBREF9"
},
{
"start": 212,
"end": 231,
"text": "Koehn et al., 2003;",
"ref_id": "BIBREF36"
},
{
"start": 232,
"end": 245,
"text": "Chiang, 2005)",
"ref_id": "BIBREF12"
},
{
"start": 512,
"end": 522,
"text": "Och, 2002;",
"ref_id": "BIBREF45"
},
{
"start": 523,
"end": 536,
"text": "Chiang, 2005)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the era of Neural Machine Translation (NMT), such biases are implicitly introduced by the sequential nature of the LSTM architecture (Bahdanau et al., 2015, see \u00a72) . The influential Transformer model (Vaswani et al., 2017) replaces the sequential LSTMs with self-attention, which does not seem to possess this bias. We show that the default implementation of the Transformer does retain some bias, but that this bias can be alleviated by using learned positional embeddings (\u00a73).",
"cite_spans": [
{
"start": 136,
"end": 167,
"text": "(Bahdanau et al., 2015, see \u00a72)",
"ref_id": null
},
{
"start": 204,
"end": 226,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF62"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Long-distance dependencies (LDD) between words and phrases present a long-standing problem for MT (Sennrich, 2016) , as they are generally more difficult to detect (indeed, they pose an ongoing challenge for parsing as well (Xu et al., 2009) ), and often result in non-monotonic translation if the target differs from the source in terms of its word order and lexicalization patterns. The Transformer's indifference to the absolute position of the tokens raises the question of whether long-distance dependencies are still an open problem.",
"cite_spans": [
{
"start": 98,
"end": 114,
"text": "(Sennrich, 2016)",
"ref_id": "BIBREF49"
},
{
"start": 224,
"end": 241,
"text": "(Xu et al., 2009)",
"ref_id": "BIBREF64"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We address this question by proposing an automatic method to compile challenge sets for evaluating system performance on LDD ( \u00a74). We distinguish between two main LDD types: (1) reordering LDD, namely cases where source and target words largely correspond to one another but are ordered differently; (2) lexical LDD, where the way a word or a contiguous expression on the target side is translated is dependent on non-adjacent words on the source side.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We define a methodology for extracting both LDD types. For reordering LDD, we build on Birch (2011) , whereas for lexical LDD we compile a list of linguistic phenomena that yield LDD, and use a dependency parser to find instances of these phenomena in the source side of a parallel corpus. As a test case, we apply this method to construct challenge sets ( \u00a74.2) for German-English and English-German. The approach can be easily scaled to other languages for which a good enough parser exists.",
"cite_spans": [
{
"start": 87,
"end": 99,
"text": "Birch (2011)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Experimenting with both RNN and self-attention NMT architectures, we find that although the latter presents no locality bias, LDD remain challenging. Moreover, lexical LDD become increasingly challenging with their distance, suggesting that syntactic distance remains an important determinant of performance in state-of-the-art (SoTA) NMT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We conclude that evaluating LDD using targeted challenge sets gives a detailed picture of MT performance, and underscores challenges the field has yet to fully address. As particular types of LDD are not frequent enough to significantly affect coarse-grained measures, such as BLEU (Papineni et al., 2002) or TER (Snover et al., 2006) , our evaluation approach provides a complementary perspective on system performance.",
"cite_spans": [
{
"start": 282,
"end": 305,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF47"
},
{
"start": 313,
"end": 334,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF54"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A common architecture for text-to-text generation tasks is the (Bi)LSTM encoder-decoder (Bahdanau et al., 2015) . This architecture consists of several LSTM layers for the encoder and the decoder and a thin attention layer connecting them. LSTM is a recurrent network with a state vector it updates. At every step, it discards some of the current and past information and aggregates the rest into the state. Any information about the past comes from this state, which is a learned \"summary\" of the previous states (cf. Greff et al., 2017) . Hence, for information to reach a certain prediction step, it should be stored and then kept throughout the intermediate steps (tokens). While theoretically information could be kept indefinitely (Hochreiter and Schmidhuber, 1997) , practical evidence shows that LSTM performance decreases with the distance between the trigger and the prediction (Linzen et al., 2016), and that LSTMs have difficulties generalizing over sequence lengths (Suzgun et al., 2018) .",
"cite_spans": [
{
"start": 88,
"end": 111,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF3"
},
{
"start": 519,
"end": 538,
"text": "Greff et al., 2017)",
"ref_id": "BIBREF21"
},
{
"start": 737,
"end": 771,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF26"
},
{
"start": 889,
"end": 910,
"text": "(Linzen et al., 2016;",
"ref_id": "BIBREF38"
},
{
"start": 980,
"end": 1001,
"text": "(Suzgun et al., 2018)",
"ref_id": "BIBREF59"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Long-distance Dependencies in MT",
"sec_num": "2.1"
},
{
"text": "Despite being affected by absolute distances between syntactically dependent tokens (Linzen et al., 2016) , LSTMs tend to learn structural information to a certain extent, even without being explicitly instructed to do so (Gulordava et al., 2018) . Futrell and Levy (2018) discuss linguistic phenomena similar to those we discuss in \u00a74.2, and show that LSTM encoder-decoder systems handle them better than previous N-gram based systems, despite being profoundly affected by distance.",
"cite_spans": [
{
"start": 84,
"end": 105,
"text": "(Linzen et al., 2016)",
"ref_id": "BIBREF38"
},
{
"start": 221,
"end": 245,
"text": "(Gulordava et al., 2018)",
"ref_id": "BIBREF22"
},
{
"start": 248,
"end": 271,
"text": "Futrell and Levy (2018)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Long-distance Dependencies in MT",
"sec_num": "2.1"
},
{
"text": "Transformer (Vaswani et al., 2017 ) models are also encoder-decoder, but instead of LSTMs, they use self-attention. Self-attention is based on gating all outputs of the previous layer as inputs for the current one; put differently, it aggregates all the input in one step. This approach makes information from all parts of the input sequence equally reachable. While this is not the only architecture with such attributes (van den Oord et al., 2016), we focus on it due to its SoTA results for MT (Lakew et al., 2018) . The Transformer's use of self-attention inspired other works in related fields (Devlin et al., 2018) , some of which attributed their performance gains to the model's ability to capture long-range context (M\u00fcller et al., 2018) .",
"cite_spans": [
{
"start": 12,
"end": 33,
"text": "(Vaswani et al., 2017",
"ref_id": "BIBREF62"
},
{
"start": 497,
"end": 517,
"text": "(Lakew et al., 2018)",
"ref_id": "BIBREF37"
},
{
"start": 599,
"end": 620,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF16"
},
{
"start": 725,
"end": 746,
"text": "(M\u00fcller et al., 2018)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Long-distance Dependencies in MT",
"sec_num": "2.1"
},
{
"text": "As the Transformer does not aggregate input sequentially, token positions must be represented through other means. For that purpose, the embedding of each input token W is concatenated with an embedding of its position in the source sentence, P. While positional embeddings can generally be any vectors, two implementations are commonly used (Tebbifakhr et al., 2018; Guo et al., 2018) : learned positional embeddings (learnedPEs; P is randomly initialized), and sine positional embeddings (SinePEs) defined as:",
"cite_spans": [
{
"start": 342,
"end": 367,
"text": "(Tebbifakhr et al., 2018;",
"ref_id": "BIBREF60"
},
{
"start": 368,
"end": 385,
"text": "Guo et al., 2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Long-distance Dependencies in MT",
"sec_num": "2.1"
},
{
"text": "P_{(pos, 2i)} = \\sin(pos / 10000^{2i/dim}), \\quad P_{(pos, 2i+1)} = \\cos(pos / 10000^{2i/dim})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Long-distance Dependencies in MT",
"sec_num": "2.1"
},
{
"text": "where dim is the dimension of the embedding. Vaswani et al. (2017) report that they see no benefit in learnedPEs, and hence use SinePEs, which require far fewer parameters.",
"cite_spans": [
{
"start": 45,
"end": 66,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF62"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Long-distance Dependencies in MT",
"sec_num": "2.1"
},
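{
"text": "As an illustration of the SinePEs defined above, here is a minimal sketch (in NumPy; the helper name sine_pe is ours, not from the paper or any library):\n\nimport numpy as np\n\ndef sine_pe(max_len, dim):\n    # One row per position; even columns use sin, odd columns use cos,\n    # with wavelengths forming a geometric progression in the column index.\n    pos = np.arange(max_len)[:, None]          # (max_len, 1)\n    i = np.arange(0, dim, 2)[None, :]          # (1, dim // 2)\n    angles = pos / np.power(10000.0, i / dim)  # (max_len, dim // 2)\n    pe = np.zeros((max_len, dim))\n    pe[:, 0::2] = np.sin(angles)\n    pe[:, 1::2] = np.cos(angles)\n    return pe\n\nprint(sine_pe(18, 512).shape)  # (18, 512)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Long-distance Dependencies in MT",
"sec_num": "2.1"
},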
{
"text": "Most of the dependencies between words are short. Short-distance linguistic dependencies include some of the most common phenomena in language, such as determination, modification by an adjective and compounding. For example, 62% of the dependencies in the standard UD EWT training set (Silveira et al., 2014) are between tokens that are up to one word apart. It stands to reason that the locality bias is useful in these cases. Nevertheless, as system quality improves, rarer, more challenging dependencies become a priority, and languages present countless long-distance reordering phenomena (Deng and Xue, 2017) . One example is subject-verb agreement, where a correct translation requires that the verb is inflected according to the headword of the subject (e.g., in English \"dogs that ..., bark\", while \"a dog that ..., barks\"). When translating such cases, a locality bias may impede performance, by biasing the model not to attend to both the subject's head and the main verb (which may be arbitrarily distant), thereby preventing it from correctly inflecting the main verb.",
"cite_spans": [
{
"start": 286,
"end": 309,
"text": "(Silveira et al., 2014)",
"ref_id": "BIBREF53"
},
{
"start": 605,
"end": 625,
"text": "(Deng and Xue, 2017)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Long-distance Dependencies in MT",
"sec_num": "2.1"
},
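{
"text": "The 62% statistic above can be recomputed directly from the CoNLL-U file of the UD EWT training set; a minimal sketch (plain Python; the file name is a placeholder, and we read 'up to one word apart' as a head-dependent index difference of at most 2):\n\ndef short_dependency_rate(conllu_path, max_dist=2):\n    # Fraction of dependency arcs whose head and dependent indices differ\n    # by at most max_dist (1 = adjacent, 2 = one word in between).\n    short, total = 0, 0\n    with open(conllu_path, encoding='utf-8') as f:\n        for line in f:\n            if not line.strip() or line.startswith('#'):\n                continue\n            cols = line.rstrip('\\n').split('\\t')\n            if not cols[0].isdigit():  # skip multiword tokens and empty nodes\n                continue\n            head = int(cols[6])\n            if head == 0:  # the root arc has no surface distance\n                continue\n            total += 1\n            if abs(int(cols[0]) - head) <= max_dist:\n                short += 1\n    return short / total\n\n# print(short_dependency_rate('en_ewt-ud-train.conllu'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Long-distance Dependencies in MT",
"sec_num": "2.1"
},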
{
"text": "Due to the benefits of the locality bias, it featured prominently in statistical MT, including in the IBM models, where alignments are constrained not to cross too much (Brown et al., 1993) , and in predicting probabilities of reorderings (Koehn et al., 2003; Chiang, 2005) . Difficulties in handling LDD have motivated the development of syntax-based MT (Yamada and Knight, 2001) , which can effectively represent reordering at the phrase level, such as when translating between VSO and SOV languages. However, syntax-based MT models remain limited in their ability to map between arbitrarily different word orders (Sun et al., 2009; Xiong et al., 2012) . For example, reorderings that violate the assumption that the trees form contiguous phrases would be difficult for most such models to capture. In the next section (\u00a73) we show that the Transformer, when implemented with learnedPEs, presents no locality bias, and hence can, in principle, learn dependencies between any two positions of the source, and use them at any step during decoding.",
"cite_spans": [
{
"start": 169,
"end": 189,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF9"
},
{
"start": 239,
"end": 259,
"text": "(Koehn et al., 2003;",
"ref_id": "BIBREF36"
},
{
"start": 260,
"end": 273,
"text": "Chiang, 2005)",
"ref_id": "BIBREF12"
},
{
"start": 355,
"end": 380,
"text": "(Yamada and Knight, 2001)",
"ref_id": "BIBREF65"
},
{
"start": 614,
"end": 632,
"text": "(Sun et al., 2009;",
"ref_id": "BIBREF58"
},
{
"start": 633,
"end": 652,
"text": "Xiong et al., 2012)",
"ref_id": "BIBREF63"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Long-distance Dependencies in MT",
"sec_num": "2.1"
},
{
"text": "With major improvements in system performance, crude assessments of performance are becoming less satisfying, i.e., evaluation metrics do not indicate how MT systems perform on important challenges for the field (Isabelle and Kuhn, 2018) . String-similarity metrics against a reference are known to capture only partial and coarse-grained aspects of the task (Callison-Burch et al., 2006) , but are still the common practice in various text generation tasks. However, their opaqueness and the difficulty of interpreting them have led to efforts to improve evaluation measures so that they will better reflect the requirements of the task (Anderson et al., 2016; Sulem et al., 2018; Choshen and Abend, 2018b) , and to increased interest in defining more interpretable and telling measures (Lo and Wu, 2011; Hodosh et al., 2013; Choshen and Abend, 2018a) .",
"cite_spans": [
{
"start": 232,
"end": 257,
"text": "(Isabelle and Kuhn, 2018)",
"ref_id": "BIBREF30"
},
{
"start": 368,
"end": 397,
"text": "(Callison-Burch et al., 2006)",
"ref_id": "BIBREF10"
},
{
"start": 635,
"end": 658,
"text": "(Anderson et al., 2016;",
"ref_id": "BIBREF1"
},
{
"start": 659,
"end": 678,
"text": "Sulem et al., 2018;",
"ref_id": "BIBREF57"
},
{
"start": 679,
"end": 704,
"text": "Choshen and Abend, 2018b)",
"ref_id": "BIBREF14"
},
{
"start": 786,
"end": 803,
"text": "(Lo and Wu, 2011;",
"ref_id": "BIBREF41"
},
{
"start": 804,
"end": 824,
"text": "Hodosh et al., 2013;",
"ref_id": "BIBREF27"
},
{
"start": 825,
"end": 850,
"text": "Choshen and Abend, 2018a)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MT Evaluation",
"sec_num": "2.2"
},
{
"text": "A promising path forward is complementing string-similarity evaluation with linguistically meaningful challenge sets. Such sets have the advantage of being interpretable: they test for specific phenomena that are important for humans and are crucial for language understanding. Interpretability also means that evaluation artefacts are more likely to be detected earlier. So far, such challenge sets have been constructed for French-English (Isabelle et al., 2017; Isabelle and Kuhn, 2018) and English-Swedish (Ahrenberg, 2018). 2 Previous challenge sets were compiled by manually searching corpora for specific phenomena of interest (e.g., yes-no questions, which are formulated differently in English and French). These corpora are carefully constructed but small (ten examples per phenomenon), which means that evaluation must be done manually as well.",
"cite_spans": [
{
"start": 436,
"end": 459,
"text": "(Isabelle et al., 2017;",
"ref_id": "BIBREF29"
},
{
"start": 460,
"end": 484,
"text": "Isabelle and Kuhn, 2018)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MT Evaluation",
"sec_num": "2.2"
},
{
"text": "As our methodology extracts sentences automatically based on parser output, we are able to compile much larger challenge sets, which allows us to apply standard MT measures to each subcorpus corresponding to a specific phenomenon. The methodology is, therefore, more flexible, and can be straightforwardly adapted to accommodate future advances in MT evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MT Evaluation",
"sec_num": "2.2"
},
{
"text": "In this section we show that encoder-decoder models based on BiLSTM with attention (see \u00a72) do exhibit a locality bias, but that the Transformer, whose encoder is based on self-attention, and in which token position is encoded only through learnedPEs, does not present any such bias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Locality in SoTA NMT",
"sec_num": "3"
},
{
"text": "In order to test whether an NMT system presents a locality bias in a controlled environment, we examine a setting of arbitrary absolute order of the source-side tokens. In this case, systems that are predisposed towards monotonic decoding are likely to present lower performance, while systems that have no predisposition as to the order of the target-side tokens relative to the source-side tokens are not expected to show any change in performance. In order to create a controlled setting, where source-side token order is arbitrary, we extract fixed-length sentences, and apply the same permutation to all of them. We then train systems with the permuted source-side data (and the same target-side data), and compare results to a control condition where no permutation is applied.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3.1"
},
{
"text": "Concretely, we experiment on a German-English setting, extracting all sentences of the most common length (18) from the WMT2015 (Bojar et al., 2015) training data. This results in 130,983 sentences, of which we hold out 1,000 sentences for testing. It is comparable in training set size to a low-resource language setting.",
"cite_spans": [
{
"start": 128,
"end": 148,
"text": "(Bojar et al., 2015)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3.1"
},
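{
"text": "A sketch of this extraction step, assuming tokenized, line-aligned source and target files (file names are placeholders, and the paper does not specify how the 1,000 held-out sentences were chosen; here we simply take the last 1,000):\n\nSRC, TGT, LEN = 'train.de', 'train.en', 18\n\nwith open(SRC, encoding='utf-8') as fs, open(TGT, encoding='utf-8') as ft:\n    # Keep only pairs whose source side has exactly LEN tokens.\n    pairs = [(s, t) for s, t in zip(fs, ft) if len(s.split()) == LEN]\n\ntrain, test = pairs[:-1000], pairs[-1000:]\nprint(len(pairs))  # 130,983 in the paper's WMT2015 German-English setting",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3.1"
},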
{
"text": "We set a fixed permutation \u03c3 : [18] \u2192 [18] and train systems on four versions of the training data (settings): (1) REGULAR, to be used for control; (2) PERMUTED source-side, in which we apply \u03c3 over all source-side tokens; (3) PERPOSEMB, where the positional embeddings of the source-side tokens are permuted; and (4) REVERSED, where tokens are input in reverse order.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3.1"
},
{
"text": "We apply a single fixed permutation \u03c3 to the source-side tokens. We did not find any property that would deem this permutation special (examining, e.g., its decomposition into cycles). We therefore assume that similar results will hold for other \u03c3s as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3.1"
},
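{
"text": "A sketch of the PERMUTED condition and of the cycle-decomposition check mentioned above (the \u03c3 below is an arbitrary stand-in; the specific permutation used in the paper was not recovered from the parse):\n\nimport random\n\nrandom.seed(0)\nsigma = list(range(18))\nrandom.shuffle(sigma)  # a fixed sigma : [18] -> [18]\n\ndef permute_source(tokens, sigma):\n    # Position j of the permuted sentence holds the token from position sigma[j];\n    # the same sigma is applied to every (fixed-length) source sentence.\n    assert len(tokens) == len(sigma)\n    return [tokens[sigma[j]] for j in range(len(sigma))]\n\ndef cycles(sigma):\n    # Decompose sigma into cycles, to check it has no special structure.\n    seen, out = set(), []\n    for start in range(len(sigma)):\n        j, cyc = start, []\n        while j not in seen:\n            seen.add(j)\n            cyc.append(j)\n            j = sigma[j]\n        if cyc:\n            out.append(cyc)\n    return out",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3.1"
},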
{
"text": "We train a Transformer model, optimizing with Adam (Kingma and Ba, 2015). We set the embedding size to 512, a dropout rate of 0.1, 6 stacked layers in both the encoder and the decoder, and 8 attention heads. We use tokenization, truecasing and BPE as preprocessing, following the protocol of (Yang et al., 2018) .",
"cite_spans": [
{
"start": 293,
"end": 312,
"text": "(Yang et al., 2018)",
"ref_id": "BIBREF66"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3.1"
},
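{
"text": "The hyperparameters above, as a sketch (instantiating them in PyTorch is our assumption; the paper does not state which Transformer implementation was used):\n\nimport torch\nfrom torch import nn\n\nmodel = nn.Transformer(\n    d_model=512,           # embedding size\n    nhead=8,               # attention heads\n    num_encoder_layers=6,  # stack layers in the encoder ...\n    num_decoder_layers=6,  # ... and in the decoder\n    dropout=0.1,\n)\noptimizer = torch.optim.Adam(model.parameters())",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3.1"
},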
{
"text": "We experiment both with learnedPEs, and with SinePEs. We train the BiLSTM model using the Nematus implementation (Sennrich et al., 2017b) , and use their supplied scripts for preprocessing, training and testing, changing only the datasets used. For all models, we report the highest BLEU score on the test data for any epoch during training, and perform early stopping after 10 consecutive epochs without improvement.",
"cite_spans": [
{
"start": 113,
"end": 137,
"text": "(Sennrich et al., 2017b)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3.1"
},
{
"text": "In the Transformer with learnedPEs, 5 repetitions were done in the REGULAR setting and 5 in PERMUTED, with 1 repetition each for PERPOSEMB and REVERSED. Formally, in PERPOSEMB, if the source sentence is (t_1, ..., t_{18}), then the input to the Transformer is [W(t_1); P_{\u03c3(1)}], ..., [W(t_{18}); P_{\u03c3(18)}].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3.1"
},
{
"text": "In addition, we trained the BiLSTM model and the Transformer with SinePEs, both in the REGULAR condition and in PERMUTED; each was trained once.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3.1"
},
{
"text": "Table 1 presents our results. We find that Nematus BiLSTM suffers substantially from permuting the source-side tokens, but that the Transformer does not exhibit a locality bias. Indeed, for learnedPEs in all settings (REGULAR, PERMUTED, REVERSED and PERPOSEMB), BLEU scores are essentially the same. We also find that the common practice of using fixed SinePEs does introduce some bias, as attested by the small performance drop between REGULAR and PERMUTED. Like Vaswani et al. (2017) , we find that in the REGULAR setting, learnedPEs are not superior in performance to SinePEs, despite having more expressive power. However, our results suggest that the decision between learnedPEs and SinePEs is not without consequences: learnedPEs are preferable if a locality bias is undesired (this is potentially the case for highly divergent language pairs).",
"cite_spans": [
{
"start": 466,
"end": 487,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF62"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.2"
},
{
"text": "Finding that Transformers do not present a locality bias has implications for how to construct their input in MT settings, as well as in other tasks that use self-attention encoders, such as image captioning (You et al., 2016) . It is common practice to augment the source-side with globally-applicable information, e.g., the target language in multilingual MT (Johnson et al., 2017) . Having no locality bias implies this additional information can be added at any fixed point in the sequence fed to a Transformer, provided that the positional embeddings do not themselves introduce such a bias. This is not the case with BiLSTMs, which often require introducing the same information at each input token for it to be effectively used by the system (Yao et al., 2017; Rennie et al., 2017) .",
"cite_spans": [
{
"start": 207,
"end": 225,
"text": "(You et al., 2016)",
"ref_id": "BIBREF68"
},
{
"start": 360,
"end": 382,
"text": "(Johnson et al., 2017)",
"ref_id": "BIBREF33"
},
{
"start": 755,
"end": 773,
"text": "(Yao et al., 2017;",
"ref_id": "BIBREF67"
},
{
"start": 774,
"end": 794,
"text": "Rennie et al., 2017)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "3.3"
},
{
"text": "One of the stated motivations of the Transformer model is to effectively tackle long-distance dependencies, which are \"a key challenge in many sequence transduction tasks\" (Vaswani et al., 2017) .",
"cite_spans": [
{
"start": 172,
"end": 194,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF62"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "LDD Challenge Sets",
"sec_num": "4"
},
{
"text": "Our results from the previous section show that fixed reordering patterns are indeed completely transparent to Transformers. This, however, still leaves the question of how Transformers handle linguistic reordering patterns, which may involve varying distances between dependent tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LDD Challenge Sets",
"sec_num": "4"
},
{
"text": "We propose a method for scalably compiling challenge sets to support fine-grained MT evaluation for different types of LDD. We address two main types:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4.1"
},
{
"text": "Reordering LDD are cases where the words on the two sides of the parallel corpus largely correspond to one another, but are ordered differently. These cases may require attending to source words in a highly non-monotonic order, but the generation of each target word is localized to a specific region in the source sentence. For example, in English-German, the verb in a subordinate clause appears in final position in the German target, while the verb in the English source appears right after the subject.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4.1"
},
{
"text": "Consider \"The man that is sitting on the chair\", and the corresponding German \"Der Mann, der auf dem Stuhl sitzt\" (lit. the man, that on the chair sits). While the verb is placed at different clause positions in the two cases, the words mostly have direct correspondents. Our methodology follows Birch (2011) in detecting such phenomena based on alignment. Concretely, we extract a word alignment between corresponding sentences, and collect all sentences that include a pair of aligned words in the source and target sides, whose indices have a difference of at least d \u2208 N.",
"cite_spans": [
{
"start": 296,
"end": 308,
"text": "Birch (2011)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4.1"
},
{
"text": "Lexical LDD are cases where the translation of a single word or phrase is determined by nonadjacent words on the source side. This requires attending to two or more regions that can be arbitrarily distant from one another. Several phenomena, such as light verbs (Isabelle and Kuhn, 2018) , are known from the linguistic and MT literature to yield lexical LDD. Our methodology takes a predefined set of such phenomena, and defines rules for detecting each of them over dependency parses of the source-side. See \u00a74.2 for the list of phenomena we experiment on in this paper.",
"cite_spans": [
{
"start": 262,
"end": 287,
"text": "(Isabelle and Kuhn, 2018)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4.1"
},
{
"text": "Focusing on LDD, we restrict ourselves to instances where the absolute distance between the head word and its dependent is at least d \u2208 N. Selecting a large enough d makes it unlikely that the extracted phenomena are memorized as a phrase with a specific meaning (e.g., encoding \"make the whole thing up\" [d = 3] as a phrase, rather than as a discontiguous phrase \"make ... up\" with an argument \"the whole thing\"). This increases the probability that such cases, if translated correctly, reflect the MT systems' ability to recognize that such discontiguous units are likely to be translated as a single piece.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4.1"
},
{
"text": "We note that, by extracting the challenge set based on syntactic parses, we by no means assume these representations are internally represented by the MT systems in any way, or that such a representation is required for success in correctly translating such constructions. The extraction method is merely a way of finding phenomena we have reason to believe are difficult to translate, and meaningful for language understanding. We use Universal Dependencies (UD; Nivre et al., 2016) as a syntactic representation, due to its cross-lingual consistency (about 90 languages are supported so far), which allows research on difficult LDD phenomena that recur across languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4.1"
},
{
"text": "Our extraction methods resemble previous challenge set approaches (Isabelle et al., 2017; Isabelle and Kuhn, 2018; Ahrenberg, 2018) in using linguistically motivated sets of sentence pairs to assess translation quality. However, as our extraction method is fully automatic, it allows for the compilation of much larger challenge sets over many language pairs. The challenge sets we extract contain hundreds or thousands of pairs (\u00a74.2). Their size allows applying any automatic MT evaluation measure, and is thus a much more scalable solution than the manual inspection commonly done in challenge set approaches.",
"cite_spans": [
{
"start": 66,
"end": 89,
"text": "(Isabelle et al., 2017;",
"ref_id": "BIBREF29"
},
{
"start": 90,
"end": 114,
"text": "Isabelle and Kuhn, 2018;",
"ref_id": "BIBREF30"
},
{
"start": 115,
"end": 131,
"text": "Ahrenberg, 2018)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4.1"
},
{
"text": "On the other hand, an automatic methodology has the side-effect of being noisier, and of not necessarily selecting the most representative sentences for each phenomenon. For instance, befinden sich (lit. to determine) includes a verb and a reflexive pronoun, which do not necessarily appear contiguously in German. However, as befinden always appears with the reflexive sich, it might not pose a challenge to NMT systems, which can essentially ignore the reflexive pronoun upon translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4.1"
},
{
"text": "Next, we discuss the compilation of German-English and English-German corpora. We select these pairs, as they are among the most studied in MT, and comparatively high results are obtained for them (Bojar et al., 2017) . Hence, they are more likely to benefit from a fine-grained analysis.",
"cite_spans": [
{
"start": 197,
"end": 217,
"text": "(Bojar et al., 2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Test Case on Extracting Sets",
"sec_num": "4.2"
},
{
"text": "For the reordering LDD corpus, we align each pair of source and target sentences using FastAlign (Dyer et al., 2013) and collect all sentences with at least one pair of source-side and target-side tokens whose indices have a difference of at least d = 5. For example:",
"cite_spans": [
{
"start": 89,
"end": 108,
"text": "(Dyer et al., 2013)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Test Case on Extracting Sets",
"sec_num": "4.2"
},
{
"text": "Source: W\u00e4re es ein gro\u00dfer Misserfolg, nicht den Titel in der Ligue 1 zu gewinnen, wie dies in der letzten Saison der Fall war? Gloss: Would-be it a big failure, not the title in the Ligue 1 to win, as this in the last season the case was? Target: In Ligue 1, would not winning the title, like last season, be a big failure?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Test Case on Extracting Sets",
"sec_num": "4.2"
},
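{
"text": "A sketch of this selection step, assuming FastAlign's output format of space-separated 'i-j' source-target index pairs, one sentence per line (function names are ours):\n\ndef has_long_reordering(align_line, d=5):\n    # True if any aligned pair (i, j) satisfies |i - j| >= d.\n    for pair in align_line.split():\n        i, j = map(int, pair.split('-'))\n        if abs(i - j) >= d:\n            return True\n    return False\n\ndef extract_reordering_ldd(align_path, d=5):\n    # Indices of sentence pairs to keep for the reordering LDD challenge set.\n    with open(align_path, encoding='utf-8') as f:\n        return [k for k, line in enumerate(f) if has_long_reordering(line, d)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Test Case on Extracting Sets",
"sec_num": "4.2"
},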
{
"text": "We extract lexical LDD using simple rules over source-side parse trees, parsed with UDPipe (Straka and Strakov\u00e1, 2017) . For a sentence to be selected, at least one word should separate the detected pair of words. We picked several well-known challenging constructions for translation that involve discontiguous phrases: reflexive verbs, verb-particle constructions and preposition stranding. We note that while these constructions often yield lexical LDDs, and are thus expected to be challenging on average, some of their instances can be translated literally (e.g., amuse oneself is translated to am\u00fcsieren sich).",
"cite_spans": [
{
"start": 91,
"end": 118,
"text": "(Straka and Strakov\u00e1, 2017)",
"ref_id": "BIBREF56"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Test Case on Extracting Sets",
"sec_num": "4.2"
},
{
"text": "Reflexive Verbs. Prototypically, reflexivity is the case where the subject and object corefer. Reflexive pronouns in English end with self or selves (e.g., yourselves) and in German include sich, dich, mich and uns among others. However, reflexive pronouns can often change the meaning of a verb unpredictably, and may thus lead to different translations for non-reflexive instances of a verb, compared to reflexive ones. For example, abheben in German means taking off (as of a plane), but sich abheben means standing out. Similarly, in the example below, dr\u00e4ngte sich translates to intrude, while dr\u00e4ngte normally translates to pushed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Test Case on Extracting Sets",
"sec_num": "4.2"
},
{
"text": "A source sentence is said to include a reflexive verb if one of its tokens is parsed with a reflexive morphological feature (refl=yes). Phrasal Verbs are verbs that are made up of a verb and a particle (or several particles), which may change the meaning of the verb unpredictably. Examples of English phrasal verbs include run into (in the sense of meet) and give in, and in German they include examples such as einladen (invite), consisting morphologically of the particle ein and the verb laden (load). A source sentence is said to include a phrasal verb if a particle dependent (UD labels compound:prt or prt) exists in the parse. trat in itself means stepped, but in the extracted example below, trat ... entgegen translates to received. Preposition Stranding is the case where a preposition does not appear adjacent to the object it refers to. In English, it will often appear at the end of the sentence or a clause, for example, The banana she stepped on or The boy I read the book to. Preposition stranding is common in English and in other languages such as the Scandinavian languages or Dutch (Hornstein and Weinberg, 1981) . However, in German it is not part of the standard written language (Beermann and Ik-Han, 2005) , although it does (rarely) appear (Fanselow, 1983) . We therefore extract this challenge set only with English as the source side. [Table 3: Sizes of the lexical LDD corpora. Challenge sets are partitioned (in order of appearance) by language pair, phenomenon type, and the minimal distance between the head and the dependent. The phenomenon appears in the source. Statistics for the newstest2013 corpora with minimal distance \u2265 1 are in the rightmost column; the rest are on Books.] While preposition stranding is often regarded as a syntactic phenomenon, we consider it here a lexical LDD, since the translation of prepositions",
"cite_spans": [
{
"start": 1100,
"end": 1130,
"text": "(Hornstein and Weinberg, 1981)",
"ref_id": "BIBREF28"
},
{
"start": 1199,
"end": 1226,
"text": "(Beermann and Ik-Han, 2005)",
"ref_id": "BIBREF4"
},
{
"start": 1262,
"end": 1278,
"text": "(Fanselow, 1983)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 1507,
"end": 1514,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Test Case on Extracting Sets",
"sec_num": "4.2"
},
{
"text": "(and in some cases their accompanying verbs) is dependent on the prepositional object, which, in the case of preposition stranding, may be distant from the preposition itself. For example, translating the car we looked for into German usually uses the verb suchen (search), while translating the car we looked at does not. Translating prepositions is difficult in general (Hashemi and Hwa, 2014), but preposition stranding is especially so, as there is no adjacent object to assist disambiguation. A source sentence is said to include preposition stranding if it contains two nodes with an edge of type obl (oblique), or a subcategory thereof, between them, and the UD POS tag of the dependent is adposition (ADP).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Test Case on Extracting Sets",
"sec_num": "4.2"
},
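{
"text": "A sketch of the three detectors over a UDPipe CoNLL-U parse (plain Python; we assume UD's Reflex=Yes spelling of the reflexive feature, and count d as the number of words separating head and dependent, so d=1 requires at least one intervening word):\n\ndef detect_lexical_ldd(sentence_rows, d=1):\n    # sentence_rows: the CoNLL-U column lists of one parsed source sentence.\n    found = []\n    for cols in sentence_rows:\n        if not cols[0].isdigit():  # skip multiword-token and empty-node lines\n            continue\n        tid, upos, feats = int(cols[0]), cols[3], cols[5]\n        head, deprel = int(cols[6]), cols[7]\n        if head == 0 or abs(tid - head) - 1 < d:\n            continue\n        if 'Reflex=Yes' in feats:                       # reflexive verb\n            found.append(('reflexive', tid, head))\n        if deprel in ('compound:prt', 'prt'):           # phrasal (particle) verb\n            found.append(('phrasal', tid, head))\n        if deprel.startswith('obl') and upos == 'ADP':  # stranded preposition\n            found.append(('prep_stranding', tid, head))\n    return found",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Test Case on Extracting Sets",
"sec_num": "4.2"
},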
{
"text": "We turn to evaluate SoTA NMT performance on the extracted challenge sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.3"
},
{
"text": "Experimental Setup. We trained the Transformer on the WMT2015 training data (Bojar et al., 2015) ; for parameters, see \u00a73.1. For Nematus we used the non-ensemble pre-trained model from (Sennrich et al., 2017a) . Each of the test sets, either baseline or challenge sets, used a maximum of 10k sentences per set for the Transformer and 1k for Nematus. 4 Two parallel corpora were used for extracting the challenge sets. One is newstest2013 (Bojar et al., 2015) from the news domain, which is commonly used as a development set for English-German. The other is the relatively unused Books corpus (Tiedemann, 2012) from the more challenging domain of literary translation. The corpora are of sizes 51K and 3K respectively. For lexical LDD, we took the distance (d) between the relevant words to be at least 1, meaning there is at least one word separating them. See Tables 2, 3 for the sizes of the extracted corpora.",
"cite_spans": [
{
"start": 72,
"end": 92,
"text": "(Bojar et al., 2015)",
"ref_id": "BIBREF8"
},
{
"start": 180,
"end": 204,
"text": "(Sennrich et al., 2017a)",
"ref_id": "BIBREF50"
},
{
"start": 362,
"end": 363,
"text": "4",
"ref_id": null
},
{
"start": 450,
"end": 470,
"text": "(Bojar et al., 2015)",
"ref_id": "BIBREF8"
},
{
"start": 603,
"end": 620,
"text": "(Tiedemann, 2012)",
"ref_id": "BIBREF61"
}
],
"ref_spans": [
{
"start": 872,
"end": 883,
"text": "Tables 2, 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.3"
},
{
"text": "For evaluation, we use the MOSES implementation of BLEU (Papineni et al., 2002; Koehn et al., 2007) , and for reordering LDD, also RIBES (Isozaki et al., 2010) , which focuses on reordering. RIBES measures the correlation of n-gram ranks between the output and the reference, computed over n-grams that appear exactly once in both.",
"cite_spans": [
{
"start": 56,
"end": 79,
"text": "(Papineni et al., 2002;",
"ref_id": "BIBREF47"
},
{
"start": 80,
"end": 99,
"text": "Koehn et al., 2007)",
"ref_id": "BIBREF35"
},
{
"start": 137,
"end": 159,
"text": "(Isozaki et al., 2010)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.3"
},
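{
"text": "A simplified sketch of the rank-correlation idea behind RIBES (this is not the official implementation): Kendall's tau over the positions of unigrams that appear exactly once in both the output and the reference, as described above:\n\nfrom collections import Counter\n\ndef unigram_rank_correlation(hyp, ref):\n    # Restrict to tokens that occur exactly once on each side.\n    hc, rc = Counter(hyp), Counter(ref)\n    shared = [w for w in hyp if hc[w] == 1 and rc[w] == 1]\n    ranks = [ref.index(w) for w in shared]  # reference positions, in output order\n    n = len(ranks)\n    if n < 2:\n        return 0.0\n    total = n * (n - 1) // 2\n    concordant = sum(ranks[i] < ranks[j]\n                     for i in range(n) for j in range(i + 1, n))\n    return (2.0 * concordant - total) / total  # Kendall's tau in [-1, 1]\n\nprint(unigram_rank_correlation('a b c d'.split(), 'a c b d'.split()))  # 0.666...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.3"
},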
{
"text": "Manual Validation. To assess the ability of our procedure to extract relevant LDDs, we manually analyzed over 180 source German sentences extracted from Books. [Table 5: The effect of dependency distance for lexical LDDs on SoTA performance. Results are in BLEU over the Books challenge sets. Columns correspond to the minimum distance, where All does not restrict distance (control). The rightmost column presents the Spearman correlation of each phenomenon's score with the minimum distance used. All correlations but one are highly negative, implying that distance has a negative effect on performance.]",
"cite_spans": [],
"ref_spans": [
{
"start": 169,
"end": 176,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.3"
},
{
"text": "We also analyzed 81 English sentences, including all the instances extracted from News and 45 extracted from Books; instances are evenly distributed between the phenomena and distances of exactly 1, 2 or 5. We find that 85% of the German sentences, 87% of the English News sentences and 86% of the English Books ones indeed contain the target phenomenon. For details of the manual evaluation of the extraction procedure, see Appendix 1. Results. Comparison of the overall BLEU scores of the NMT models (Table 4) against their performance on the challenge sets shows that the phenomena are challenging for both models. Both in the small development set of newstest2013 and in the large set of Books, the challenge subparts are more challenging across the board. For reordering LDD, we further apply RIBES and find a similar trend: the RIBES score is lower for the reorder challenge set than for the baseline (see Table 6 ). In order to confirm that the distance between the head and the dependent (the \"length\" of the dependency) is related to the observed performance drop in the case of lexical LDD, we partition each of the challenge sets according to their length (d), and compare the results to a control condition, where all instances of the phenomena listed in \u00a74.2 are extracted, including non-LDD instances, i.e., sentences where the head and the dependent are adjacent. System performance on the sliced challenge sets (Table 5) shows that performance indeed decreases with d. Results thus indicate that it is not only the presence of the phenomena that makes these sets challenging, but that the challenge increases with the distance.",
"cite_spans": [],
"ref_spans": [
{
"start": 491,
"end": 500,
"text": "(Table 4)",
"ref_id": "TABREF5"
},
{
"start": 891,
"end": 898,
"text": "Table 6",
"ref_id": "TABREF8"
},
{
"start": 1403,
"end": 1412,
"text": "(Table 5)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.3"
},
{
"text": "We validate this main finding using manual annotation of German-English cases. Using two annotators (with high agreement between them; \u03ba=0.79), we find that the decrease in performance with d is replicated. We measure how many of the detected lexical LDD are correctly translated, ignoring the rest of the source and output, as done in manual challenge set approaches. We find that 60%, 54% and 38% of the cases are translated correctly for d \u2208 {1, 2, 5}, respectively. This suggests that the extracted phenomena and the distance indeed pose a challenge, and that the automatic metric we use shows the correct trend in these cases. See Appendix 2 for details.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.3"
},
{
"text": "Discussion. Interestingly, these results hold true for the Transformer despite its indifference to the absolute word order. Therefore, word distance in itself is not what makes such phenomena challenging, contrary to what one might expect from the definition of LDD. It seems, then, that these phenomena are especially challenging due to their non-standard linguistic structure (e.g., syntactic and lexical structure), and the varying distances at which LDD manifest themselves. The models, therefore, seem to be unable to learn the linguistic structure underlying these phenomena, which may motivate more explicit modelling of linguistic biases into NMT models, as proposed by, e.g., Eriguchi et al. (2017) and Song et al. (2019) .",
"cite_spans": [
{
"start": 681,
"end": 703,
"text": "Eriguchi et al. (2017)",
"ref_id": "BIBREF18"
},
{
"start": 708,
"end": 726,
"text": "Song et al. (2019)",
"ref_id": "BIBREF55"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.3"
},
{
"text": "We note that our experiments were not designed to compare the performance of BiLSTM and self-attention models. We therefore do not see the Transformer's inferior performance on Books, relative to Nematus, as an indication of the general ability of this model in out-of-domain settings. What is evident from the results is that translating Books is a challenge in itself, probably due to the register of the language and the presence of frequent non-literal translations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.3"
},
{
"text": "A potential confound is that performance might change with the length of the source in BiLSTMs (Carpuat et al., 2013; Murray and Chiang, 2018) , while in Transformers it was reported to increase with length. Length is generally greater in the challenge sets than in the full test set, and generally increases with d; if anything, this would manifest as a decrease of performance by length. To assess whether our corpora are challenging due to a length bias, we randomly sample from Books 1,000 corpora with 1,000, 100 and 10 sentences each. The correlations between their corresponding average lengths and the Transformer's BLEU scores on them were 0.06, 0.09 and 0.03, respectively. While this suggests length is not a strong predictor of performance, to verify that the difficulty is not a result of the distribution of lengths in the challenge sets, we conduct another experiment.",
"cite_spans": [
{
"start": 95,
"end": 117,
"text": "(Carpuat et al., 2013;",
"ref_id": "BIBREF11"
},
{
"start": 118,
"end": 142,
"text": "Murray and Chiang, 2018)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.3"
},
{
"text": "For each challenge set and each value of d (0-3), we sample 100 corpora. For each sentence in a given challenge set, we sample a sentence that differs from it in length by no more than 1. This results in a corpus with a similar length distribution, but sampled from the overall population of Books sentences. Results show that the BLEU score of the challenge sets in all German-English cases is lower than that of any randomly sampled corpus. 5 In the English-German cases, trends are similar, albeit less pronounced. This may be due to the low number of long English sentences, which leads to more homogeneous samples. Overall, results suggest that length is extremely unlikely to be the only cause of the observed trends.",
"cite_spans": [
{
"start": 433,
"end": 434,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.3"
},
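{
"text": "A sketch of the length-matched control described above (names are ours; pool maps a source length to the Books sentence pairs of that length):\n\nimport random\n\ndef length_matched_sample(challenge_lens, pool):\n    # For each challenge sentence, draw a Books pair whose source length\n    # differs by at most 1, matching the challenge set's length profile.\n    sample = []\n    for n in challenge_lens:\n        candidates = pool.get(n, []) + pool.get(n - 1, []) + pool.get(n + 1, [])\n        if candidates:\n            sample.append(random.choice(candidates))\n    return sample\n\n# Repeated 100 times per challenge set and per value of d, then scored with BLEU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.3"
},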
{
"text": "As NMT system performance is constantly improving, more reliable methods for identifying and classifying their failures are needed. Much research effort is therefore devoted to developing more fine-grained and interpretable evaluation methods, including challenge-set approaches. In this paper, we showed that, using a UD parser, it is possible to extract challenge sets that are large enough to allow scalable MT evaluation of important and challenging phenomena.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "An accumulating body of research is devoted to the ability of modern neural architectures such as LSTMs (Linzen et al., 2016) and pretrained embeddings (Hewitt and Manning, 2019; Liu et al., 2019; Jawahar et al., 2019) to represent linguistic features. This paper contributes to this literature in confirming that the Transformer model can indeed be made indifferent to the absolute order of the words, but also shows that this does not entail that the model can overcome the difficulties of LDD in naturalistic data. We may then carefully conclude that, despite the remarkable feats of current NMT models, inducing linguistic structure in its more evasive and challenging instances is still beyond the reach of state-of-the-art NMT, which motivates exploring more linguistically-informed models.",
"cite_spans": [
{
"start": 104,
"end": 125,
"text": "(Linzen et al., 2016)",
"ref_id": "BIBREF38"
},
{
"start": 152,
"end": 178,
"text": "(Hewitt and Manning, 2019;",
"ref_id": "BIBREF25"
},
{
"start": 179,
"end": 196,
"text": "Liu et al., 2019;",
"ref_id": "BIBREF39"
},
{
"start": 197,
"end": 218,
"text": "Jawahar et al., 2019)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Our extracted challenge sets and codebase are found in https://github.com/borgr/auto_challenge_sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In WMT 2019, English-German phenomena were tested with a new corpus, using both human and automatic evaluation. It is not possible, however, to use this evaluation outside the competition (Avramidis et al., 2019).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We subsample a smaller test set for Nematus, since the most competitive model for the language pair requires Theano. As Theano has been deprecated for two years now, it cannot run on our GPUs, which entails long inference times.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Most sampled corpora actually had better scores than the baseline. We believe this is because very short sentences, which are mostly noise, are never sampled.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by the Israel Science Foundation (grant no. 929/17)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "6"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A challenge set for english-swedish machine translation",
"authors": [
{
"first": "Lars",
"middle": [],
"last": "Ahrenberg",
"suffix": ""
}
],
"year": 2018,
"venue": "SLTC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lars Ahrenberg. 2018. A challenge set for english- swedish machine translation. In SLTC.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Spice: Semantic propositional image caption evaluation",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "Basura",
"middle": [],
"last": "Fernando",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Gould",
"suffix": ""
}
],
"year": 2016,
"venue": "European Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "382--398",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2016. Spice: Semantic propo- sitional image caption evaluation. In European Conference on Computer Vision, pages 382-398. Springer.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Linguistic evaluation of german-english machine translation using a test suite",
"authors": [
{
"first": "Eleftherios",
"middle": [],
"last": "Avramidis",
"suffix": ""
},
{
"first": "Vivien",
"middle": [],
"last": "Macketanz",
"suffix": ""
},
{
"first": "Ursula",
"middle": [],
"last": "Strohriegel",
"suffix": ""
},
{
"first": "Hans",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation. Conference on Machine Translation (WMT-2019), located at The 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eleftherios Avramidis, Vivien Macketanz, Ursula Strohriegel, and Hans Uszkoreit. 2019. Linguis- tic evaluation of german-english machine transla- tion using a test suite. In Proceedings of the Fourth Conference on Machine Translation. Conference on Machine Translation (WMT-2019), located at The 57th Annual Meeting of the Association for Com- putational Linguistics, August 1-2, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Preposition stranding and locative adverbs in german",
"authors": [
{
"first": "Dorothee",
"middle": [],
"last": "Beermann",
"suffix": ""
},
{
"first": "Lars",
"middle": [],
"last": "Ik-Han",
"suffix": ""
}
],
"year": 2005,
"venue": "Organizing Grammar",
"volume": "86",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dorothee Beermann and Lars Ik-Han. 2005. Preposi- tion stranding and locative adverbs in german. Or- ganizing Grammar, 86:31.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Reordering metrics for statistical machine translation",
"authors": [
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandra Birch. 2011. Reordering metrics for statis- tical machine translation. Ph.D. thesis, The Univer- sity of Edinburgh.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Hume: Human ucca-based evaluation of machine translation",
"authors": [
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1264--1274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandra Birch, Omri Abend, Ond\u0159ej Bojar, and Barry Haddow. 2016. Hume: Human ucca-based evaluation of machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Nat- ural Language Processing, pages 1264-1274.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Findings of the 2017 conference on machine translation (wmt17)",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Rajen",
"middle": [],
"last": "Chatterjee",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Shujian",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Varvara",
"middle": [],
"last": "Logacheva",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Second Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "169--214",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Shujian Huang, Matthias Huck, Philipp Koehn, Qun Liu, Varvara Logacheva, et al. 2017. Findings of the 2017 confer- ence on machine translation (wmt17). In Proceed- ings of the Second Conference on Machine Transla- tion, pages 169-214.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Findings of the 2015 workshop on statistical machine translation",
"authors": [
{
"first": "Ondrej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Rajen",
"middle": [],
"last": "Chatterjee",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Hokamp",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Varvara",
"middle": [],
"last": "Logacheva",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Carolina",
"middle": [],
"last": "Scarton",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ondrej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Matthias Huck, Chris Hokamp, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Carolina Scarton, Lucia Specia, and Marco Turchi. 2015. Findings of the 2015 workshop on statistical machine translation. In WMT@EMNLP.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The mathematics of statistical machine translation: Parameter estimation",
"authors": [
{
"first": "Peter",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational linguistics",
"volume": "19",
"issue": "2",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F Brown, Vincent J Della Pietra, Stephen A Della Pietra, and Robert L Mercer. 1993. The mathemat- ics of statistical machine translation: Parameter esti- mation. Computational linguistics, 19(2):263-311.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Re-evaluation the role of bleu in machine translation research",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Miles",
"middle": [],
"last": "Osborne",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2006,
"venue": "EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Callison-Burch, Miles Osborne, and Philipp Koehn. 2006. Re-evaluation the role of bleu in ma- chine translation research. In EACL.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Proceedings of ssst-7, seventh workshop on syntax, semantics and structure in statistical translation",
"authors": [
{
"first": "Marine",
"middle": [],
"last": "Carpuat",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marine Carpuat, Lucia Specia, and Dekai Wu. 2013. Proceedings of ssst-7, seventh workshop on syntax, semantics and structure in statistical translation. In EMNLP 2014.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A hierarchical phrase-based model for statistical machine translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "263--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Pro- ceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 263-270. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Automatic metric validation for grammatical error correction",
"authors": [
{
"first": "Leshem",
"middle": [],
"last": "Choshen",
"suffix": ""
},
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leshem Choshen and Omri Abend. 2018a. Automatic metric validation for grammatical error correction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Referenceless measure of faithfulness for grammatical error correction",
"authors": [
{
"first": "Leshem",
"middle": [],
"last": "Choshen",
"suffix": ""
},
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leshem Choshen and Omri Abend. 2018b. Reference- less measure of faithfulness for grammatical error correction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Translation divergences in chinese-english machine translation: An empirical investigation",
"authors": [
{
"first": "Dun",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2017,
"venue": "Computational Linguistics",
"volume": "43",
"issue": "3",
"pages": "521--565",
"other_ids": {
"DOI": [
"10.1162/COLI_a_00292"
]
},
"num": null,
"urls": [],
"raw_text": "Dun Deng and Nianwen Xue. 2017. Translation diver- gences in chinese-english machine translation: An empirical investigation. Computational Linguistics, 43(3):521-565.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. CoRR, abs/1810.04805.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A simple, fast, and effective reparameterization of ibm model 2",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Chahuneau",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "644--648",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Dyer, Victor Chahuneau, and Noah A Smith. 2013. A simple, fast, and effective reparameteriza- tion of ibm model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644-648.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Learning to parse and translate improves neural machine translation",
"authors": [
{
"first": "Akiko",
"middle": [],
"last": "Eriguchi",
"suffix": ""
},
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2017,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akiko Eriguchi, Yoshimasa Tsuruoka, and Kyunghyun Cho. 2017. Learning to parse and translate improves neural machine translation. In ACL.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Zu einigen problemen von kasus, rektion und bindung in der deutschen syntax",
"authors": [
{
"first": "Gisbert",
"middle": [],
"last": "Fanselow",
"suffix": ""
}
],
"year": 1983,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gisbert Fanselow. 1983. Zu einigen problemen von ka- sus, rektion und bindung in der deutschen syntax. Universit\u00e4t Konstanz.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Do rnns learn human-like abstract word order preferences? CoRR",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Futrell",
"suffix": ""
},
{
"first": "Roger",
"middle": [
"P"
],
"last": "Levy",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Futrell and Roger P. Levy. 2018. Do rnns learn human-like abstract word order preferences? CoRR, abs/1811.01866.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Lstm: A search space odyssey",
"authors": [
{
"first": "Klaus",
"middle": [],
"last": "Greff",
"suffix": ""
},
{
"first": "Rupesh",
"middle": [
"K"
],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Koutn\u00edk",
"suffix": ""
},
{
"first": "Bas",
"middle": [
"R"
],
"last": "Steunebrink",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 2017,
"venue": "IEEE transactions on neural networks and learning systems",
"volume": "28",
"issue": "",
"pages": "2222--2232",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klaus Greff, Rupesh K Srivastava, Jan Koutn\u00edk, Bas R Steunebrink, and J\u00fcrgen Schmidhuber. 2017. Lstm: A search space odyssey. IEEE transactions on neu- ral networks and learning systems, 28(10):2222- 2232.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Colorless green recurrent networks dream hierarchically",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Gulordava",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2018,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In NAACL-HLT.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A comparison of mt errors and esl errors",
"authors": [
{
"first": "Homa",
"middle": [
"B"
],
"last": "Hashemi",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Hwa",
"suffix": ""
}
],
"year": 2014,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "2696--2700",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Homa B Hashemi and Rebecca Hwa. 2014. A com- parison of mt errors and esl errors. In LREC, pages 2696-2700.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A structural probe for finding syntax in word representations",
"authors": [
{
"first": "John",
"middle": [],
"last": "Hewitt",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4129--4138",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1419"
]
},
"num": null,
"urls": [],
"raw_text": "John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word repre- sentations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129-4138, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Framing image description as a ranking task: Data, models and evaluation metrics",
"authors": [
{
"first": "Micah",
"middle": [],
"last": "Hodosh",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of Artificial Intelligence Research",
"volume": "47",
"issue": "",
"pages": "853--899",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Micah Hodosh, Peter Young, and Julia Hockenmaier. 2013. Framing image description as a ranking task: Data, models and evaluation metrics. Journal of Ar- tificial Intelligence Research, 47:853-899.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Case theory and preposition stranding",
"authors": [
{
"first": "Norbert",
"middle": [],
"last": "Hornstein",
"suffix": ""
},
{
"first": "Amy",
"middle": [],
"last": "Weinberg",
"suffix": ""
}
],
"year": 1981,
"venue": "Linguistic inquiry",
"volume": "12",
"issue": "1",
"pages": "55--91",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Norbert Hornstein and Amy Weinberg. 1981. Case the- ory and preposition stranding. Linguistic inquiry, 12(1):55-91.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A challenge set approach to evaluating machine translation",
"authors": [
{
"first": "Pierre",
"middle": [],
"last": "Isabelle",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "George",
"middle": [
"F"
],
"last": "Foster",
"suffix": ""
}
],
"year": 2017,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pierre Isabelle, Colin Cherry, and George F. Foster. 2017. A challenge set approach to evaluating ma- chine translation. In EMNLP.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A challenge set for french-> english machine translation",
"authors": [
{
"first": "Pierre",
"middle": [],
"last": "Isabelle",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Kuhn",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1806.02725"
]
},
"num": null,
"urls": [],
"raw_text": "Pierre Isabelle and Roland Kuhn. 2018. A challenge set for french-> english machine translation. arXiv preprint arXiv:1806.02725.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Automatic evaluation of translation quality for distant language pairs",
"authors": [
{
"first": "Hideki",
"middle": [],
"last": "Isozaki",
"suffix": ""
},
{
"first": "Tsutomu",
"middle": [],
"last": "Hirao",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Katsuhito",
"middle": [],
"last": "Sudoh",
"suffix": ""
},
{
"first": "Hajime",
"middle": [],
"last": "Tsukada",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "944--952",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hideki Isozaki, Tsutomu Hirao, Kevin Duh, Katsuhito Sudoh, and Hajime Tsukada. 2010. Automatic eval- uation of translation quality for distant language pairs. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Process- ing, pages 944-952. Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "What does BERT learn about the structure of language",
"authors": [
{
"first": "Ganesh",
"middle": [],
"last": "Jawahar",
"suffix": ""
},
{
"first": "Beno\u00eet",
"middle": [],
"last": "Sagot",
"suffix": ""
},
{
"first": "Djam\u00e9",
"middle": [],
"last": "Seddah",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3651--3657",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ganesh Jawahar, Beno\u00eet Sagot, and Djam\u00e9 Seddah. 2019. What does BERT learn about the structure of language? In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 3651-3657, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Google's multilingual neural machine translation system: Enabling zero-shot translation",
"authors": [
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Thorat",
"suffix": ""
},
{
"first": "Fernanda",
"middle": [
"B"
],
"last": "Vi\u00e9gas",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Wattenberg",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Macduff",
"middle": [],
"last": "Hughes",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "339--351",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Tho- rat, Fernanda B. Vi\u00e9gas, Martin Wattenberg, Gre- gory S. Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine transla- tion system: Enabling zero-shot translation. Trans- actions of the Association for Computational Lin- guistics, 5:339-351.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Moses: Open source toolkit for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Ondrej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Constantin",
"suffix": ""
},
{
"first": "Evan",
"middle": [],
"last": "Herbst",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions",
"volume": "",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the As- sociation for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Ses- sions, pages 177-180.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Statistical phrase-based translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"Josef"
],
"last": "Och",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology",
"volume": "1",
"issue": "",
"pages": "48--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computa- tional Linguistics on Human Language Technology- Volume 1, pages 48-54. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "A comparison of transformer and recurrent neural networks on multilingual neural machine translation",
"authors": [
{
"first": "Surafel",
"middle": [
"Melaku"
],
"last": "Lakew",
"suffix": ""
},
{
"first": "Mauro",
"middle": [],
"last": "Cettolo",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
}
],
"year": 2018,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Surafel Melaku Lakew, Mauro Cettolo, and Marcello Federico. 2018. A comparison of transformer and recurrent neural networks on multilingual neural machine translation. In COLING.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Assessing the ability of lstms to learn syntaxsensitive dependencies",
"authors": [
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Dupoux",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "521--535",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of lstms to learn syntax- sensitive dependencies. Transactions of the Associ- ation for Computational Linguistics, 4:521-535.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Linguistic knowledge and transferability of contextual representations",
"authors": [
{
"first": "Nelson",
"middle": [
"F"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1073--1094",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1112"
]
},
"num": null,
"urls": [],
"raw_text": "Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019. Lin- guistic knowledge and transferability of contextual representations. In Proceedings of the 2019 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long and Short Pa- pers), pages 1073-1094, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Lstms exploit linguistic attributes of data",
"authors": [
{
"first": "Nelson",
"middle": [
"F"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Chenhao",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2018,
"venue": "Rep4NLP@ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nelson F. Liu, Omer Levy, Roy Schwartz, Chenhao Tan, and Noah A. Smith. 2018. Lstms exploit lin- guistic attributes of data. In Rep4NLP@ACL.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Meant: an inexpensive, high-accuracy, semi-automatic metric for evaluating translation utility via semantic frames",
"authors": [
{
"first": "Chi-Kiu",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "220--229",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chi-kiu Lo and Dekai Wu. 2011. Meant: an inexpen- sive, high-accuracy, semi-automatic metric for eval- uating translation utility via semantic frames. In Proceedings of the 49th Annual Meeting of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies-Volume 1, pages 220-229. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "A large-scale test set for the evaluation of context-aware pronoun translation in neural machine translation",
"authors": [
{
"first": "Mathias",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Annette",
"middle": [],
"last": "Rios",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Voita",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
}
],
"year": 2018,
"venue": "WMT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mathias M\u00fcller, Annette Rios, Elena Voita, and Rico Sennrich. 2018. A large-scale test set for the evalu- ation of context-aware pronoun translation in neural machine translation. In WMT.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Correcting length bias in neural machine translation",
"authors": [
{
"first": "Kenton",
"middle": [],
"last": "Murray",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2018,
"venue": "WMT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenton Murray and David Chiang. 2018. Correcting length bias in neural machine translation. In WMT.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Universal dependencies v1: A multilingual treebank collection",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Hajic",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Silveira",
"suffix": ""
},
{
"first": "Reut",
"middle": [],
"last": "Tsarfaty",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Zeman",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. of LREC",
"volume": "",
"issue": "",
"pages": "1659--1666",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Marie-Catherine de Marneffe, Filip Gin- ter, Yoav Goldberg, Jan Hajic, Christopher D. Man- ning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal dependencies v1: A multilingual treebank collection. In Proc. of LREC, pages 1659- 1666.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Statistical machine translation: from single-word models to alignment templates",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och. 2002. Statistical machine transla- tion: from single-word models to alignment tem- plates. Ph.D. thesis, Bibliothek der RWTH Aachen.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Wavenet: A generative model for raw audio",
"authors": [
{
"first": "A\u00e4ron",
"middle": [],
"last": "Van Den Oord",
"suffix": ""
},
{
"first": "Sander",
"middle": [],
"last": "Dieleman",
"suffix": ""
},
{
"first": "Heiga",
"middle": [],
"last": "Zen",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Simonyan",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Nal",
"middle": [],
"last": "Kalchbrenner",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"W"
],
"last": "Senior",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
}
],
"year": 2016,
"venue": "SSW",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A\u00e4ron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew W. Senior, and Koray Kavukcuoglu. 2016. Wavenet: A generative model for raw audio. In SSW.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting on association for computational linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting on association for compu- tational linguistics, pages 311-318. Association for Computational Linguistics.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Self-critical sequence training for image captioning",
"authors": [
{
"first": "Steven",
"middle": [
"J"
],
"last": "Rennie",
"suffix": ""
},
{
"first": "Etienne",
"middle": [],
"last": "Marcheret",
"suffix": ""
},
{
"first": "Youssef",
"middle": [],
"last": "Mroueh",
"suffix": ""
},
{
"first": "Jerret",
"middle": [],
"last": "Ross",
"suffix": ""
},
{
"first": "Vaibhava",
"middle": [],
"last": "Goel",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "7008--7024",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven J Rennie, Etienne Marcheret, Youssef Mroueh, Jerret Ross, and Vaibhava Goel. 2017. Self-critical sequence training for image captioning. In Proceed- ings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7008-7024.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "How grammatical is characterlevel neural machine translation. Assessing MT quality with contrastive translation pairs",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich. 2016. How grammatical is charac- terlevel neural machine translation. Assessing MT quality with contrastive translation pairs. CoRR, abs/1612.04629.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "The university of edinburgh's neural mt systems for wmt17",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Currey",
"suffix": ""
},
{
"first": "Ulrich",
"middle": [],
"last": "Germann",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
},
{
"first": "Antonio",
"middle": [
"Valerio"
],
"last": "Miceli Barone",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Williams",
"suffix": ""
}
],
"year": 2017,
"venue": "WMT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Alexandra Birch, Anna Currey, Ulrich Germann, Barry Haddow, Kenneth Heafield, An- tonio Valerio Miceli Barone, and Philip Williams. 2017a. The university of edinburgh's neural mt sys- tems for wmt17. In WMT.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Nematus: a toolkit for neural machine translation",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Hitschler",
"suffix": ""
},
{
"first": "Marcin",
"middle": [],
"last": "Junczys-Dowmunt",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "L\u00e4ubli",
"suffix": ""
},
{
"first": "Antonio",
"middle": [
"Valerio"
],
"last": "Miceli Barone",
"suffix": ""
},
{
"first": "Jozef",
"middle": [],
"last": "Mokry",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Nadejde",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "65--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Orhan Firat, Kyunghyun Cho, Alexan- dra Birch, Barry Haddow, Julian Hitschler, Marcin Junczys-Dowmunt, Samuel L\u00e4ubli, Antonio Valerio Miceli Barone, Jozef Mokry, and Maria Nadejde. 2017b. Nematus: a toolkit for neural machine trans- lation. In Proceedings of the Software Demonstra- tions of the 15th Conference of the European Chap- ter of the Association for Computational Linguistics, pages 65-68, Valencia, Spain. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), volume 1, pages 1715-1725.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "A gold standard dependency corpus for English",
"authors": [
{
"first": "Natalia",
"middle": [],
"last": "Silveira",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Miriam",
"middle": [],
"last": "Connor",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Natalia Silveira, Timothy Dozat, Marie-Catherine de Marneffe, Samuel Bowman, Miriam Connor, John Bauer, and Christopher D. Manning. 2014. A gold standard dependency corpus for English. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC- 2014).",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "A study of translation edit rate with targeted human annotation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Snover",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Linnea",
"middle": [],
"last": "Micciulla",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of association for machine translation in the Americas",
"volume": "200",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Lin- nea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of association for machine transla- tion in the Americas, volume 200.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Semantic neural machine translation using amr",
"authors": [
{
"first": "Linfeng",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhiguo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jinsong",
"middle": [],
"last": "Su",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1902.07282"
]
},
"num": null,
"urls": [],
"raw_text": "Linfeng Song, Daniel Gildea, Yue Zhang, Zhiguo Wang, and Jinsong Su. 2019. Semantic neural machine translation using amr. arXiv preprint arXiv:1902.07282.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Tokenizing, pos tagging, lemmatizing and parsing ud 2.0 with udpipe",
"authors": [
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
},
{
"first": "Jana",
"middle": [],
"last": "Strakov\u00e1",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "88--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Milan Straka and Jana Strakov\u00e1. 2017. Tokenizing, pos tagging, lemmatizing and parsing ud 2.0 with udpipe. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Univer- sal Dependencies, pages 88-99, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Semantic structural evaluation for text simplification",
"authors": [
{
"first": "Elior",
"middle": [],
"last": "Sulem",
"suffix": ""
},
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2018,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elior Sulem, Omri Abend, and Ari Rappoport. 2018. Semantic structural evaluation for text simplifica- tion. In NAACL-HLT.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "A noncontiguous tree sequence alignment-based model for statistical machine translation",
"authors": [
{
"first": "Jun",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chew Lim",
"middle": [],
"last": "Tan",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "2",
"issue": "",
"pages": "914--922",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun Sun, Min Zhang, and Chew Lim Tan. 2009. A non- contiguous tree sequence alignment-based model for statistical machine translation. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Vol- ume 2-Volume 2, pages 914-922. Association for Computational Linguistics.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "On evaluating the generalization of lstm models in formal languages",
"authors": [
{
"first": "Mirac",
"middle": [],
"last": "Suzgun",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Stuart",
"middle": [
"M"
],
"last": "Shieber",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1811.01001"
]
},
"num": null,
"urls": [],
"raw_text": "Mirac Suzgun, Yonatan Belinkov, and Stuart M Shieber. 2018. On evaluating the generalization of lstm models in formal languages. arXiv preprint arXiv:1811.01001.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Multi-source transformer with combined losses for automatic post editing",
"authors": [
{
"first": "Amirhossein",
"middle": [],
"last": "Tebbifakhr",
"suffix": ""
},
{
"first": "Ruchit",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
}
],
"year": 2018,
"venue": "WMT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amirhossein Tebbifakhr, Ruchit Agrawal, Matteo Ne- gri, and Marco Turchi. 2018. Multi-source trans- former with combined losses for automatic post edit- ing. In WMT.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "Parallel data, tools and interfaces in opus",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2012,
"venue": "Lrec",
"volume": "2012",
"issue": "",
"pages": "2214--2218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Tiedemann. 2012. Parallel data, tools and inter- faces in opus. In Lrec, volume 2012, pages 2214- 2218.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, pages 5998-6008.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "Modeling the translation of predicate-argument structure for smt",
"authors": [
{
"first": "Deyi",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Haizhou",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers",
"volume": "1",
"issue": "",
"pages": "902--911",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deyi Xiong, Min Zhang, and Haizhou Li. 2012. Mod- eling the translation of predicate-argument structure for smt. In Proceedings of the 50th Annual Meet- ing of the Association for Computational Linguis- tics: Long Papers-Volume 1, pages 902-911. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "Using a dependency parser to improve smt for subject-object-verb languages",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jaeho",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Ringgaard",
"suffix": ""
},
{
"first": "Franz",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of human language technologies: The 2009 annual conference of the North American chapter of the association for computational linguistics",
"volume": "",
"issue": "",
"pages": "245--253",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Xu, Jaeho Kang, Michael Ringgaard, and Franz Och. 2009. Using a dependency parser to improve smt for subject-object-verb languages. In Proceed- ings of human language technologies: The 2009 an- nual conference of the North American chapter of the association for computational linguistics, pages 245-253. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF65": {
"ref_id": "b65",
"title": "A syntaxbased statistical translation model",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenji Yamada and Kevin Knight. 2001. A syntax- based statistical translation model. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF66": {
"ref_id": "b66",
"title": "Improving neural machine translation with conditional sequence generative adversarial nets",
"authors": [
{
"first": "Zhen",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Feng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1346--1355",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1122"
]
},
"num": null,
"urls": [],
"raw_text": "Zhen Yang, Wei Chen, Feng Wang, and Bo Xu. 2018. Improving neural machine translation with condi- tional sequence generative adversarial nets. In Pro- ceedings of the 2018 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol- ume 1 (Long Papers), pages 1346-1355. Association for Computational Linguistics.",
"links": null
},
"BIBREF67": {
"ref_id": "b67",
"title": "Boosting image captioning with attributes",
"authors": [
{
"first": "Ting",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Yingwei",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Yehao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Zhaofan",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Mei",
"suffix": ""
}
],
"year": 2017,
"venue": "IEEE International Conference on Computer Vision (ICCV)",
"volume": "",
"issue": "",
"pages": "4904--4912",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ting Yao, Yingwei Pan, Yehao Li, Zhaofan Qiu, and Tao Mei. 2017. Boosting image captioning with at- tributes. 2017 IEEE International Conference on Computer Vision (ICCV), pages 4904-4912.",
"links": null
},
"BIBREF68": {
"ref_id": "b68",
"title": "Image captioning with semantic attention",
"authors": [
{
"first": "Quanzeng",
"middle": [],
"last": "You",
"suffix": ""
},
{
"first": "Hailin",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Zhaowen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Jiebo",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "4651--4659",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quanzeng You, Hailin Jin, Zhaowen Wang, Chen Fang, and Jiebo Luo. 2016. Image captioning with seman- tic attention. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4651-4659.",
"links": null
},
"BIBREF69": {
"ref_id": "b69",
"title": "Accelerating neural transformer via an average attention network",
"authors": [
{
"first": "Biao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Deyi",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Jinsong",
"middle": [],
"last": "Su",
"suffix": ""
}
],
"year": 2018,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Biao Zhang, Deyi Xiong, and Jinsong Su. 2018. Accel- erating neural transformer via an average attention network. In ACL.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "For example: Source: [...] es ertragen zu m\u00fcssen, da\u00df eine unsympathische Fremde sich unaufh\u00f6rlich in ihren Familienkreis dr\u00e4ngte. Target: [...] to see an uncongenial alien permanently intruded on her own family group."
},
"FIGREF2": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "For example: Source: [...] ich trat ihm in wahnsinniger Wut entgegen. Target: [...] I received him in frantic sort."
},
"TABREF2": {
"text": "Sizes of reordering and baseline corpora.",
"content": "<table><tr><td/><td/><td/><td colspan=\"2\">Min Distance</td><td/></tr><tr><td/><td>Phenomena</td><td>All</td><td>\u22651</td><td>\u22652</td><td>\u22653</td><td>News</td></tr><tr><td>De\u2192En</td><td>Particle Reflexive</td><td colspan=\"5\">8,361 13,207 8,122 5,598 4,226 281 7,584 6,261 4,780 232</td></tr><tr><td/><td>Particle</td><td>4,636</td><td>786</td><td>111</td><td>36</td><td>17</td></tr><tr><td>En\u2192De</td><td>Reflexive</td><td>3,225</td><td colspan=\"2\">1,188 460</td><td>274</td><td>11</td></tr><tr><td/><td colspan=\"2\">Preposition Stranding 682</td><td>191</td><td>85</td><td>40</td><td>8</td></tr></table>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF5": {
"text": "",
"content": "<table><tr><td>: BLEU scores on the challenge sets. Mini-</td></tr><tr><td>mum distance between head and dependent d \u2265 1.</td></tr><tr><td>A clear, consistent drop from the Baseline (full cor-</td></tr><tr><td>pus) score is observed in all cases. The top part of</td></tr><tr><td>the table corresponds to German-to-English (De\u2192En)</td></tr><tr><td>sets, and bottom part to English-to-German (En\u2192De)</td></tr><tr><td>sets. Within each part, rows correspond to various</td></tr><tr><td>linguistic phenomena (second column), including re-</td></tr><tr><td>ordering LDD (Reorder), Verb-Particle Constructions</td></tr><tr><td>(Particle), Reflexive Verbs (Reflexive) and Preposition</td></tr><tr><td>Stranding. Columns correspond to the models (Tran-</td></tr><tr><td>former/Nematus), and the domains (Books/News).</td></tr></table>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF8": {
"text": "RIBES scores on the reordering LDD challenge sets. Sentences extracted as being challenging to reorder are harder for the Transformer (lower score). This trend is consistent with our experiments with BLEU. First column indicates the source language.",
"content": "<table/>",
"html": null,
"num": null,
"type_str": "table"
}
}
}
}