{
"paper_id": "R19-1013",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:01:13.753314Z"
},
"title": "Naive Regularizers for Low-Resource Neural Machine Translation",
"authors": [
{
"first": "Meriem",
"middle": [],
"last": "Beloucif",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e4t Hamburg",
"location": {
"settlement": "Hamburg",
"country": "Germany"
}
},
"email": ""
},
{
"first": "Ana",
"middle": [
"Valeria"
],
"last": "Gonzalez",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Copenhagen",
"location": {
"settlement": "Copenhagen",
"country": "Denmark"
}
},
"email": ""
},
{
"first": "Marcel",
"middle": [],
"last": "Bollmann",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Copenhagen",
"location": {
"settlement": "Copenhagen",
"country": "Denmark"
}
},
"email": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Copenhagen",
"location": {
"settlement": "Copenhagen",
"country": "Denmark"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Neural machine translation models have little inductive bias, which can be a disadvantage in low-resource scenarios. They require large volumes of data and often perform poorly when limited data is available. We show that using naive regularization methods, based on sentence length, punctuation and word frequencies, to penalize translations that are very different from the input sentences, consistently improves the translation quality across multiple low-resource languages. We experiment with 12 language pairs, varying the training data size between 17k to 230k sentence pairs. Our best regularizer achieves an average increase of 1.5 BLEU score and 1.0 TER score across all the language pairs. For example, we achieve a BLEU score of 26.70 on the IWSLT15 English-Vietnamese translation task simply by using relative differences in punctuation as a regularizer.",
"pdf_parse": {
"paper_id": "R19-1013",
"_pdf_hash": "",
"abstract": [
{
"text": "Neural machine translation models have little inductive bias, which can be a disadvantage in low-resource scenarios. They require large volumes of data and often perform poorly when limited data is available. We show that using naive regularization methods, based on sentence length, punctuation and word frequencies, to penalize translations that are very different from the input sentences, consistently improves the translation quality across multiple low-resource languages. We experiment with 12 language pairs, varying the training data size between 17k to 230k sentence pairs. Our best regularizer achieves an average increase of 1.5 BLEU score and 1.0 TER score across all the language pairs. For example, we achieve a BLEU score of 26.70 on the IWSLT15 English-Vietnamese translation task simply by using relative differences in punctuation as a regularizer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "One of the major challenges when training neural networks is overfitting. Overfitting is what happens when a neural network in part memorizes the training data rather than learning to generalize from it. To prevent this, neural machine translation (NMT) models are typically trained with an L 1 or L 2 penalty, dropout, momentum, or other general-purpose regularizers. Generalpurpose regularizers and large volumes of training data have enabled us to train flexible, expressive neural machine translation architectures that have provided a new state of the art in machine translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For low-resource language pairs, however, where large volumes of training data are not available, neural machine translation has come with diminishing returns (Koehn and Knowles, 2017) . The general-purpose regularizers do not provide enough inductive bias to enable generalization, it seems. This is an area of active research, and other work has explored multi-task learning (Firat et al., 2016; Dong et al., 2015) , zero-shot learning (Johnson et al., 2016) , and unsupervised machine translation (Gehring et al., 2017) to resolve the data bottleneck. In this paper, we consider a fully complementary, but much simpler alternative: naive, linguistically motivated regularizers that penalize the output sentences of translation models departing heavily from simple characteristics of the input sentences.",
"cite_spans": [
{
"start": 159,
"end": 184,
"text": "(Koehn and Knowles, 2017)",
"ref_id": "BIBREF13"
},
{
"start": 377,
"end": 397,
"text": "(Firat et al., 2016;",
"ref_id": "BIBREF7"
},
{
"start": 398,
"end": 416,
"text": "Dong et al., 2015)",
"ref_id": "BIBREF6"
},
{
"start": 438,
"end": 460,
"text": "(Johnson et al., 2016)",
"ref_id": "BIBREF10"
},
{
"start": 500,
"end": 522,
"text": "(Gehring et al., 2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The proposed regularizers are based on three surface properties of sentences: their length (measured as number of tokens), their amount of punctuation (measured as number of punctuation signs), and the frequencies of their words (as measured on external corpora). While there are languages that do not make use of punctuation (e.g., Lao and Thai), in general these three properties are roughly preserved across translations into most languages. If we translate a sentence such as (1), for example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) That dog is a Chinook.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "it is relatively safe to assume that a good translation will be short, contain at most one dot, and contain at least one relatively frequent word (for dog) and at least one relatively infrequent word (for Chinook). This assumption is the main motivation for our work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Contributions Our contribution is three-fold: (a) We propose three relatively na\u00efve, yet linguistically motivated, regularization methods for machine translation with low-resource languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Two of the regularizers are derived directly from the input, without relying on any additional linguistic resources. This makes them adequate for low-resource settings, where the availability of linguistic resources can generally not assumed. Our third regularizer (frequency) only assumes access to unlabeled data. (b) We show that regularizing a standard NMT architecture using naive regularization methods consistently improves machine translation quality across multiple low-resource languages, also compared to using more standard methods such as dropout. We also show that combining these regularizers leads to further improvements. (c) Finally, we present examples and analysis showing how the more linguistically motivated regularizers we propose, help low-resource machine translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "End-to-end neural machine translation is based on encoder-decoder architectures Luong et al., 2015a Luong et al., , 2017 , in which a source sentence",
"cite_spans": [
{
"start": 80,
"end": 99,
"text": "Luong et al., 2015a",
"ref_id": "BIBREF16"
},
{
"start": 100,
"end": 120,
"text": "Luong et al., , 2017",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "x = (x 1 , x 2 , ..., x n )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "is encoded into a vector (or a weighted average over a sequence of vectors) z = (z 1 , z 2 , ..., z n ). The hidden state representing z is then fed to the transducer (also called decoder) which generates translations, noted as y = (y 1 , y 2 , ..., y m ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Neural machine translation has achieved stateof-the-art performance for various language pairs (Luong et al., 2015a; Sennrich et al., 2015; Luong and Manning, 2016; Neubig, 2015; Vaswani et al., 2017) , especially when trained on large volumes of parallel data, i.e., millions of parallel sentences (also called bi-sentences), humanly translated or validated. Such amounts of training data, however, are difficult to obtain for low-resource languages such as Slovene or Vietnamese, and in their absence, neural machine translation is known to come with diminishing returns, suffering from overfitting (Koehn and Knowles, 2017) .",
"cite_spans": [
{
"start": 95,
"end": 116,
"text": "(Luong et al., 2015a;",
"ref_id": "BIBREF16"
},
{
"start": 117,
"end": 139,
"text": "Sennrich et al., 2015;",
"ref_id": "BIBREF23"
},
{
"start": 140,
"end": 164,
"text": "Luong and Manning, 2016;",
"ref_id": "BIBREF15"
},
{
"start": 165,
"end": 178,
"text": "Neubig, 2015;",
"ref_id": "BIBREF20"
},
{
"start": 179,
"end": 200,
"text": "Vaswani et al., 2017)",
"ref_id": "BIBREF27"
},
{
"start": 601,
"end": 626,
"text": "(Koehn and Knowles, 2017)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In order to avoid overfitting, NMT models are often trained with L 1 or L 2 regularization, as well as other forms of regularization such as momentum training or dropout (Srivastava et al., 2014; Miceli Barone et al., 2017) . However, these regularization methods are very general and do not carry any language specific information.",
"cite_spans": [
{
"start": 170,
"end": 195,
"text": "(Srivastava et al., 2014;",
"ref_id": "BIBREF25"
},
{
"start": 196,
"end": 223,
"text": "Miceli Barone et al., 2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "On the other hand, it has been shown that transfer learning approaches using out of domain data, such as the European Parliament data 1 , to regularize the learning helps improve the translation quality (Miceli Barone et al., 2017) . This approach produces good results, but it is not applicable in low-resource settings because it requires large amounts of data in the language of interest. To the best of our knowledge, our work is the first to introduce naive, linguistically motivated regularization methods such as sentence length, punctuation and word frequency.",
"cite_spans": [
{
"start": 203,
"end": 231,
"text": "(Miceli Barone et al., 2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In order to show the impact that our regularizers have on the translation quality, we use an off-the-shelf NMT system described by Luong et al. (2017) as our baseline. The model consists of two multi-layer recurrent neural networks (RNNs), one that encodes the source language and one that decodes onto the target language. For the encoder cell, we use a single Long Short-Term Memory (LSTM) layer (Hochreiter and Schmidhuber, 1997) and output the hidden state, which then gets passed to the decoder cell.",
"cite_spans": [
{
"start": 131,
"end": 150,
"text": "Luong et al. (2017)",
"ref_id": "BIBREF14"
},
{
"start": 398,
"end": 432,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "3.1"
},
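To make the architecture concrete, here is a minimal PyTorch sketch of an LSTM encoder-decoder of this kind; it is illustrative only (class and parameter names are not from the authors' code) and it omits the attention mechanism used in the actual baseline.

import torch.nn as nn

class Seq2Seq(nn.Module):
    # Toy LSTM encoder-decoder: encode the source, decode with teacher forcing.
    def __init__(self, src_vocab, tgt_vocab, dim=512):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True)
        self.decoder = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        _, state = self.encoder(self.src_emb(src_ids))            # encode the source sentence
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)   # decode from the final encoder state
        return self.out(dec_out)                                  # logits over the target vocabulary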
{
"text": "We train our models to minimize the crossentropy loss and back-propagate the loss to update the parameters of our model. We update network weights using Adam optimization (Kingma and Ba, 2014), which calculates the exponential moving average of the gradient and squared gradient, and combines the advantages of AdaGrad and RMSProp. For the purpose of comparison, we set the dropout to 0.2, similar to Luong et al. (2015b) .",
"cite_spans": [
{
"start": 401,
"end": 421,
"text": "Luong et al. (2015b)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "3.1"
},
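For reference, a toy, framework-free Python sketch of the Adam update described above (Kingma and Ba, 2014), operating on flat lists of parameters and gradients; the actual models are trained with a deep-learning framework's built-in Adam implementation, so this is illustrative only.

def adam_step(params, grads, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    # One Adam update; t is the 1-based step count, m and v are running moments.
    for i, g in enumerate(grads):
        m[i] = b1 * m[i] + (1 - b1) * g        # moving average of the gradient
        v[i] = b2 * v[i] + (1 - b2) * g * g    # moving average of the squared gradient
        m_hat = m[i] / (1 - b1 ** t)           # bias correction
        v_hat = v[i] / (1 - b2 ** t)
        params[i] -= lr * m_hat / (v_hat ** 0.5 + eps)
    return params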
{
"text": "To apply our new regularizers, we add each regularizer to the loss function during the training of the NMT model (Luong et al., 2015a; Luong and Manning, 2016; Luong et al., 2017 ). Since we aim to minimize the cross-entropy loss, this means that we favor training instances which have a low penalty from the regularizers (e.g., a small length difference). Importantly, we do not use dropout in this scenario, as we want to contrast our naive, but linguistically motivated signals with a traditional, but not linguistically motivated regularization method, i.e., dropout.",
"cite_spans": [
{
"start": 113,
"end": 134,
"text": "(Luong et al., 2015a;",
"ref_id": "BIBREF16"
},
{
"start": 135,
"end": 159,
"text": "Luong and Manning, 2016;",
"ref_id": "BIBREF15"
},
{
"start": 160,
"end": 178,
"text": "Luong et al., 2017",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Regularized NMT",
"sec_num": "3.2"
},
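A minimal plain-Python sketch of how the penalties described in this section could be combined with the per-sentence training loss; the function and argument names are hypothetical, and the unweighted sum mirrors the fact that no tuned weighting term is used here.

def regularized_loss(cross_entropy, src_tokens, out_tokens, regularizers):
    # cross_entropy: the standard NMT loss for this sentence pair (a float here);
    # regularizers: callables such as the length, punctuation and frequency penalties below.
    penalty = sum(reg(src_tokens, out_tokens) for reg in regularizers)
    return cross_entropy + penalty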
{
"text": "Furthermore, we do not explore alternative ways for adding regularizers to the loss function here (other alternatives could be to have a weighted penalty which is then tuned to find the best penalty and added to the loss function for testing). The main purpose of this work is to study the effect of naive linguistically motivated regularizers and show that they can improve translation quality; we leave it to future work to find the optimal configuration of regularizers that maximizes the overall translation quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularized NMT",
"sec_num": "3.2"
},
{
"text": "NMT models have shown to suffer \"the curse of sentence length\", and it has been hypothesized that this is due to a lack of representation at the decoder level Pouget-Abadie et al., 2014) . Our proposed sentence-length-based regularizer penalizes relative differences between the input and the MT output lengths during the training of the NMT model:",
"cite_spans": [
{
"start": 159,
"end": 186,
"text": "Pouget-Abadie et al., 2014)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Length-Based Regularizer",
"sec_num": "4.1"
},
{
"text": "reg length = |l 0 \u2212 l 1 | (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Length-Based Regularizer",
"sec_num": "4.1"
},
{
"text": "Here, l 0 and l 1 represent the input sentence and the MT output sentence lengths, respectively, as measured by the number of words (not to be confused with L 1 and L 2 regularization methods).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Length-Based Regularizer",
"sec_num": "4.1"
},
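A minimal Python sketch of the length penalty in Eq. (1), assuming whitespace-tokenized sentences; the helper name is illustrative, not the authors' implementation.

def length_penalty(src_tokens, out_tokens):
    # reg_length = |l_0 - l_1|: absolute difference in token counts
    return abs(len(src_tokens) - len(out_tokens))

# Example: a 6-token input against a 5-token output gives a penalty of 1.
print(length_penalty("That dog is a Chinook .".split(), "Ce chien est un Chinook".split()))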
{
"text": "Note that this regularizer is different from the word penalty feature in phrase-based machine translation (Zens and Ney, 2004) , which only penalizes the target sentence length. The relative difference between the input and the MT output sentence lengths is also used as a feature in Marie and Fujita (2018) .",
"cite_spans": [
{
"start": 106,
"end": 126,
"text": "(Zens and Ney, 2004)",
"ref_id": "BIBREF29"
},
{
"start": 284,
"end": 307,
"text": "Marie and Fujita (2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Length-Based Regularizer",
"sec_num": "4.1"
},
{
"text": "The punctuation-based regularizer penalizes training instances whenever the amount of punctuation marks in the input sentence differs from the amount in the MT output sentence. It is computed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Punctuation-Based Regularizer",
"sec_num": "4.2"
},
{
"text": "reg punct = |p 0 \u2212 p 1 | (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Punctuation-Based Regularizer",
"sec_num": "4.2"
},
{
"text": "Here, p 0 and p 1 is the total number of punctuation marks in the input and the MT output sentence, respectively. Unfortunately, the only available methods to generate more efficient NMT models have included data intensive methods such as sentence alignment . Some very early research done in alignment used simple methodologies such as punctuation-based alignment (Chuang et al., 2004) . Our second regularizer is based on this simple idea, as it penalizes training instances where the quantities of punctuation marks differ between input and MT output sentences. Example (2) is taken from the training set of the French-English translation task:",
"cite_spans": [
{
"start": 365,
"end": 386,
"text": "(Chuang et al., 2004)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Punctuation-Based Regularizer",
"sec_num": "4.2"
},
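A minimal Python sketch of the punctuation penalty in Eq. (2). Counting tokens that consist only of characters from string.punctuation is an assumption, since the paper does not list the exact punctuation inventory it counts.

import string

def punctuation_penalty(src_tokens, out_tokens):
    def count_punct(tokens):
        # number of tokens made up entirely of punctuation characters
        return sum(1 for t in tokens if t and all(c in string.punctuation for c in t))
    # reg_punct = |p_0 - p_1|
    return abs(count_punct(src_tokens) - count_punct(out_tokens))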
{
"text": "( 2) We note that the punctuation in the French input sentence matches the punctuation of the desired English reference. However, during an early training step, the NMT model translates the input to a sequence containing six times the number of punctuation marks in the input sentence, which is obviously incorrect. Our punctuation regularizer further penalizes examples like this one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Punctuation-Based Regularizer",
"sec_num": "4.2"
},
{
"text": "Our last regularizer is based on the distribution of word frequencies between the source and the target sentences. Generally speaking, if the source sentence contains an uncommon word, we assume that its translation in the target language is also uncommon. The intuition behind this regularizer is that if the source sentence contains one uncommon word and three common words, then its accurate translation should contain similar word frequencies. The example below is extracted from the English-French translation task:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frequency-Based Regularizer",
"sec_num": "4.3"
},
{
"text": "( is to calculate how different the MT output sentence is from the source input in terms of vocabulary distribution. For instance, the frequency of using the word chauve-souris in French is almost similar to the frequency of using its English translation bat in English. The same could be applied for the more frequent words such as et in French and its English translation and.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frequency-Based Regularizer",
"sec_num": "4.3"
},
{
"text": "We start by computing the frequency vectors \u2212 \u2192 v in and \u2212 \u2192 v out , containing the frequency for every word w i in the input and MT output sentence, respectively:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frequency-Based Regularizer",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2212 \u2192 v = f (w 1 ), . . . , f (w n )",
"eq_num": "(3)"
}
],
"section": "Frequency-Based Regularizer",
"sec_num": "4.3"
},
{
"text": "To calculate the word frequencies f (w) for each language, we use the Wikipedia database 2 as an external resource. Table 1 contains the size of the datasets (in number of words) used to estimate these. We note that there is considerably more data for English and French than for e.g. Vietnamese (cf. Table. 1); we discuss the effect that this might have on the results in Sec. 6.",
"cite_spans": [],
"ref_spans": [
{
"start": 116,
"end": 123,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Frequency-Based Regularizer",
"sec_num": "4.3"
},
{
"text": "We interpret the resulting frequency vectors \u2212 \u2192 v as distributions, for which we now calculate the Kullback-Leibler (KL) divergence to obtain our regularization term:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frequency-Based Regularizer",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "reg freq = D KL ( \u2212 \u2192 v in , \u2212 \u2192 v out )",
"eq_num": "(4)"
}
],
"section": "Frequency-Based Regularizer",
"sec_num": "4.3"
},
{
"text": "Essentially, this regularizer penalizes translations if their word frequency distributions diverge too strongly from those of the source sentence. 4shows an input sentence and its MT output, for which we would compute the frequency vectors as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frequency-Based Regularizer",
"sec_num": "4.3"
},
{
"text": "\u2212 \u2192 v in = f ('it'), f ('was'), . . . ,f ('neck') \u2212 \u2192 v out = f ('c'\u00e9tait'), f ('une'), . . . ,f ('cou') 5 Experiments 5.1 Data",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frequency-Based Regularizer",
"sec_num": "4.3"
},
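A minimal Python sketch of the frequency penalty in Eqs. (3)-(4). Here freq_src and freq_tgt are hypothetical dictionaries of relative word frequencies (estimated from Wikipedia, as described above); the epsilon smoothing, padding to equal length, and renormalization are implementation choices that the paper does not specify.

import math

def frequency_penalty(src_tokens, out_tokens, freq_src, freq_tgt, eps=1e-9):
    # Look up f(w) for every token; unseen words fall back to a small epsilon.
    p = [freq_src.get(t.lower(), eps) for t in src_tokens]
    q = [freq_tgt.get(t.lower(), eps) for t in out_tokens]
    # KL divergence compares vectors of equal length, so pad the shorter one.
    n = max(len(p), len(q))
    p += [eps] * (n - len(p))
    q += [eps] * (n - len(q))
    # Renormalize so both vectors can be read as distributions.
    sp, sq = sum(p), sum(q)
    p = [x / sp for x in p]
    q = [x / sq for x in q]
    # reg_freq = D_KL(v_in, v_out)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))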
{
"text": "The purpose of our experiments is to show that signals such as sentence length, punctuation or word frequency help improve the translation quality of a standard neural machine translation architecture. To that effect, we experiment with 12 translation tasks, translating from English to six low-resource languages, and vice versa.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frequency-Based Regularizer",
"sec_num": "4.3"
},
{
"text": "The six languages represent the following language families: Slavic, Romance, Germanic, and Austro-Asian. We further vary the size of the training data to test how our regularization methods affect the quality of the MT output in different setups. Table 2 contains the size of the training, development and test set for every language pair. Note that the training sets vary considerably in size, from 17k sentence pairs for Slovene to almost 233k for French.",
"cite_spans": [],
"ref_spans": [
{
"start": 248,
"end": 255,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Frequency-Based Regularizer",
"sec_num": "4.3"
},
{
"text": "The data is from the International Workshop on Spoken Language Translation (IWSLT), except for Russian, Slovene and Vietnamese which are from IWSLT 2015, the data for the remaining translation tasks is from IWSLT 2017 (Cettolo et al., 2012) .",
"cite_spans": [
{
"start": 218,
"end": 240,
"text": "(Cettolo et al., 2012)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Frequency-Based Regularizer",
"sec_num": "4.3"
},
{
"text": "Preprocessing The purpose of our experiments is to learn how to efficiently translate low-resource languages. For that purpose, we do not use any advanced preprocessing for any of our translation tasks except tokenization where we use the script from the Moses toolkit (Koehn et al., 2007) . We also set the maximum sentence length to 70 tokens and the vocabulary size to 50k.",
"cite_spans": [
{
"start": 269,
"end": 289,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Frequency-Based Regularizer",
"sec_num": "4.3"
},
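A small plain-Python sketch of the corpus restrictions described above (sentences capped at 70 tokens, vocabulary capped at the 50k most frequent types); build_vocab is a hypothetical helper, and Moses tokenization itself is not reproduced here.

from collections import Counter

def build_vocab(tokenized_sentences, max_len=70, vocab_size=50000):
    counts = Counter()
    for sent in tokenized_sentences:       # each sentence is a list of tokens
        if len(sent) <= max_len:           # drop over-long sentences
            counts.update(sent)
    # keep the most frequent tokens as the model vocabulary
    return {tok for tok, _ in counts.most_common(vocab_size)}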
{
"text": "We use the attention-based model described in Luong et al. (2015b) . Our model is composed of two LSTM layers each of which has 512-dimensional units and embeddings; we also use a mini-batch size of 128. Adding an attention mechanism in neural machine translation helps to encode relevant parts of the source sentence when learning the model. We propose to add additional regularizers on top of the attention-based model at each translation step.",
"cite_spans": [
{
"start": 46,
"end": 66,
"text": "Luong et al. (2015b)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Details",
"sec_num": "5.2"
},
{
"text": "We have noticed that the convergence highly depends on the language pairs involved. While our baseline model is identical to the NMT model described by Luong et al. (2015b) , we deviate from their training procedure by continuing the training until convergence, which for us took 15 epochs instead of the 12 epochs described by the authors. The convergence in our case is measured by the models having no improvements on the development set over five epochs. Table 3 shows that our baseline is +1.5 BLEU points better than the scores reported by Luong et al. (2015b) . On top of that, our length-based and punctuation-based models produce a statistically significant improvement over the baseline (+0.5 BLEU points).",
"cite_spans": [
{
"start": 152,
"end": 172,
"text": "Luong et al. (2015b)",
"ref_id": "BIBREF17"
},
{
"start": 546,
"end": 566,
"text": "Luong et al. (2015b)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 459,
"end": 466,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training Details",
"sec_num": "5.2"
},
{
"text": "We train all our models automatically until convergence. In Table 4 , we report the number of epochs it took to converge by translation task when translating to/from English. We note that except for Czech and Slovene, which converged the quickest, most of the translation tasks took between 15k and 20k steps to converge.",
"cite_spans": [],
"ref_spans": [
{
"start": 60,
"end": 67,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Training Details",
"sec_num": "5.2"
},
{
"text": "In order to show that the naive regularizers which we propose in this paper significantly boost the translation quality, we test the machine translation output using the toolkit MultEval defined in Clark et al. (2011) . In this paper, we report the results using three commonly used metrics: the n- Table 3 : Baseline vs. our proposed models on the English-Vietnamese translation task, using the same dataset as Luong et al. (2015b) . The results in bold represent statistically significant results compared to the baseline according to Mul-tEval (Clark et al., 2011) . gram based metrics BLEU (Papineni et al., 2002) and METEOR (Banerjee and Lavie, 2005) , as well as the error-rate based metric TER (Snover et al., 2006) . The evaluation metric BLEU (Papineni et al., 2002) is based on n-gram matching between the input and the output, whereas the error-rate based metric TER (Snover et al., 2006) measures how many edits are needed so that the machine translation resembles the man-made reference. Table 5 : Contrasting our three proposed models to the baseline (NMT; Luong et al., 2017) across 12 translation tasks. We evaluate all the models using BLEU, METEOR and TER. The bold values represent the models that show statistically significant improvements over the baseline (p < 0.001; Clark et al., 2011) . Note that for BLEU and METEOR, higher is better, while for TER, lower is better. All regularization schemes almost consistently lead to improvements, with the punctuation-based regularizer achieving the highest gains.",
"cite_spans": [
{
"start": 198,
"end": 217,
"text": "Clark et al. (2011)",
"ref_id": "BIBREF5"
},
{
"start": 412,
"end": 432,
"text": "Luong et al. (2015b)",
"ref_id": "BIBREF17"
},
{
"start": 547,
"end": 567,
"text": "(Clark et al., 2011)",
"ref_id": "BIBREF5"
},
{
"start": 594,
"end": 617,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF21"
},
{
"start": 629,
"end": 655,
"text": "(Banerjee and Lavie, 2005)",
"ref_id": "BIBREF1"
},
{
"start": 701,
"end": 722,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF24"
},
{
"start": 752,
"end": 775,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF21"
},
{
"start": 878,
"end": 899,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF24"
},
{
"start": 1291,
"end": 1310,
"text": "Clark et al., 2011)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 299,
"end": 306,
"text": "Table 3",
"ref_id": null
},
{
"start": 1001,
"end": 1008,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "baseline across almost all language pairs for all models and across all metrics. We obtain statistically significant results for almost all translation tasks for at least one regularization method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6.1"
},
{
"text": "More specifically, the punctuation regularizer outperforms all the other models on all translation tasks except for French-English and English-French. For the latter, we observe that the word frequency regularizer is better than the other systems. This could be explained by the fact that the English vocabulary has many words borrowed from French, which makes the word frequency regularizer a better signal than punctuation or sentence length for this specific task. It also could be due to the fact that both English and French have the largest vocabulary for training the word frequencies (cf. Table 1 ; English has around 80M words and French has around 50M words, whereas all other languages have much less data).",
"cite_spans": [],
"ref_spans": [
{
"start": 597,
"end": 604,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6.1"
},
{
"text": "The most challenging translation tasks are Slovene-English and English-Slovene, especially in terms of error rate. The results show that with 17k sentence pairs as a training set, it becomes more challenging to efficiently learn anything. The results we obtained are between 2 and 5 BLEU points when translating from English. The Slovene output contained many nontranslated words. Specifically, this task greatly suffers when using the word frequency regularizer, with an error increase of about 10 TER points from English to Slovene. We do not observe such losses for the Czech-English and English-Czech transla-tion tasks, even though the vocabulary size for estimating the word frequencies is lower for Czech. We hypothesize that this is due to the Czech training set being seven times larger than the Slovene one. We hypothesize that this is due to the fact that for Slovene we only have 17K sentence pairs for the training step; whereas for Czech, we have 122K sentence pairs, which helped control the model compared to Slovene.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6.1"
},
{
"text": "One case where the punctuation regularizer succeeds consistently is on the English-German and German-English translation tasks, with an error reduction of about 1 TER point. This reflects the similarity in punctuation between these languages. Although we also observe improvements using the other regularization methods, e.g. the length-based method, these are not statistically significant here as calculated by MultEval (Clark et al., 2011) . Table 3 shows the BLEU scores of seven different systems including the one where we combine our three regularizers on the English-Vietnamese translation task. The combined regularizer does not only produce a statistically significant improvement of almost 1-BLEU point over the attention based baseline, but it also outperforms all the other regularizers achieving a BLEU score of 27.23.",
"cite_spans": [
{
"start": 422,
"end": 442,
"text": "(Clark et al., 2011)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 445,
"end": 452,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6.1"
},
{
"text": "The punctuation regularizer outperforms the baseline in most cases, and all of our regularization methods show statistically significant improvements in at least one language. Below we present examples, extracted from the test data, of how each of the regularization methods affects the output in comparison to the baseline model. The purpose of the examples is to show how each objective function in the learning component affects the performance component.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Examples",
"sec_num": "7"
},
{
"text": "The frequency-based regularization method penalizes cases where the distribution of the target vocabulary greatly differs from the source vocabulary. We have noted a significant improvement for this specific regularizer when translating from French to English and vice-versa. Examples (5) and (6) show how this regularizer is improving the translation output. More precisely, entour\u00e9s in French is almost as frequent as surrounded in English, which is a word that our model with frequency-based regularization translates correctly, while the baseline does not. Additionally, in Example (6), our model has a better fluency and adequacy than the baseline since it not only correctly translates l'int\u00e9r\u00eat to interest, but also correctly produces of all instead of in all, as in the baseline output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frequency-Based Regularizer",
"sec_num": "7.1"
},
{
"text": "The punctuation-based regularization performs best in the German-English and English-German translation tasks. This regularizer penalizes cases where the difference in the number of punctuation between the source and the target sentences is particularly large. As seen in Example (7), simply introducing this bias into a translation model leads to an output which more closely matches the punctuation of the source and target sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Punctuation-Based Regularizer",
"sec_num": "7.2"
},
{
"text": "(7) IN Und die Antwort , glaube ich , ist ja .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Punctuation-Based Regularizer",
"sec_num": "7.2"
},
{
"text": "[ \" F = T \u2207 S\u03c4 \" ] . Was Sie gerade sehen , ist wahrscheinlich die beste Entsprechung zu E = mc 2 f\u00fcr Intelligenz , die ich gesehen habe . REF And the answer , I believe , is yes .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Punctuation-Based Regularizer",
"sec_num": "7.2"
},
{
"text": "[ \" F = T \u2207 S\u03c4 \" ] What you're seeing is probably the closest equivalent to an E = mc 2 for intelligence that I've seen . BASE And the answer , I think , is yes . PUNC And the answer , I think , is yes .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Punctuation-Based Regularizer",
"sec_num": "7.2"
},
{
"text": "[ \" R = T T <unk> \" ] What you're looking at is probably the best <unk> <unk> <unk> of intelligence that I've seen .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Punctuation-Based Regularizer",
"sec_num": "7.2"
},
{
"text": "The baseline MT output completely fails to cap-ture anything from the input except for the first part up to \". . . is yes.\" Our punctuation-based model, however, manages to capture most parts of the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Punctuation-Based Regularizer",
"sec_num": "7.2"
},
{
"text": "Finally, the length-based regularization method leads to noticeable improvements in the Czech-English and English-Czech translation tasks. Example (8) shows that introducing an input sentence length bias led to an MT output that is much closer to the reference than the baseline. The input sentence consists of 12 tokens (including punctuation), the baseline output consists of 10 tokens, while our length based regularization model preserves the length of 12 tokens. ",
"cite_spans": [
{
"start": 139,
"end": 150,
"text": "Example (8)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Length-Based Regularizer",
"sec_num": "7.3"
},
{
"text": "The Slovene dataset is our smallest with about 17k sentence pairs for training. Despite the low amount of resources available in Slovene, we found that introducing very naive linguistic biases into our machine translation models actually leads to subtle differences that result in an output closer to the reference, not only lexically, but also semantically. In Example (9), we compare the output of the frequency based system against the baseline for the Slovene to English translation: In this particular case, the frequency based regularization model takes care of the translation of the word what, and although the word so is not translated, the overall meaning of the source is preserved. Example (10) shows another case of how the output of the frequency-based regularization system actually shows overall improvements in an extremely low-resource language. The output of our system is semantically closer to the reference than the baseline output, up to the word educate. In addition, the system preserves a similar length as the source sentence. 11 Finally, Example (11) shows a low-resource case where our system manages to make subtle changes in order to reach the correct translation, whereas the baseline system does not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General Improvements",
"sec_num": "7.4"
},
{
"text": "We have shown that using naive regularization methods based on sentence length, punctuation, and word frequency consistently improves the translation quality in twelve low-resource translation tasks. The improvement is consistent across multiple language pairs and is not dependent on the language family. We have reported and discussed examples demonstrating why and how each regularizer is improving the translation quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "Our proposed approach shows that even naive, but linguistically motivated, regularizers help improve the translation quality when training NMT models. We believe this shows the usefulness of using task-related regularizers for improving neural models, and opens the door for future work to exploit these regularization methods in an even more efficient manner by experimenting with different ways of combining the regularizers with the loss function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "http://www.statmt.org/europarl/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR abs/1409.0473. http://arxiv.org/abs/1409.0473.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Meteor: An automatic metric for mt evaluation with improved correlation with human judgments",
"authors": [
{
"first": "Satanjeev",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "65--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Satanjeev Banerjee and Alon Lavie. 2005. Me- teor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrin- sic and Extrinsic Evaluation Measures for Ma- chine Translation and/or Summarization. Associa- tion for Computational Linguistics, pages 65-72. https://www.aclweb.org/anthology/W05-0909.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Wit 3 : Web inventory of transcribed and translated talks",
"authors": [
{
"first": "Mauro",
"middle": [],
"last": "Cettolo",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Girardi",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 16 th Conference of the European Association for Machine Translation (EAMT)",
"volume": "",
"issue": "",
"pages": "261--268",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mauro Cettolo, Christian Girardi, and Marcello Fed- erico. 2012. Wit 3 : Web inventory of transcribed and translated talks. In Proceedings of the 16 th Con- ference of the European Association for Machine Translation (EAMT). Trento, Italy, pages 261-268.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "On the properties of neural machine translation: Encoderdecoder approaches",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation",
"volume": "",
"issue": "",
"pages": "103--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merri\u00ebnboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014. On the prop- erties of neural machine translation: Encoder- decoder approaches. In Proceedings of SSST- 8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation. Association for Computational Linguistics, pages 103-111. https://www.aclweb.org/anthology/W14-4012.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Bilingual sentence alignment based on punctuation statistics and lexicon",
"authors": [
{
"first": "Jian-Cheng",
"middle": [],
"last": "Thomas C Chuang",
"suffix": ""
},
{
"first": "Tracy",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Wen-Chie",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Jason",
"middle": [
"S"
],
"last": "Shei",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2004,
"venue": "International Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "224--232",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas C Chuang, Jian-Cheng Wu, Tracy Lin, Wen- Chie Shei, and Jason S Chang. 2004. Bilingual sentence alignment based on punctuation statistics and lexicon. In International Conference on Natu- ral Language Processing. Springer, pages 224-232.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Better hypothesis testing for statistical machine translation: Controlling for optimizer instability",
"authors": [
{
"first": "Jonathan",
"middle": [
"H"
],
"last": "Clark",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "176--181",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan H. Clark, Chris Dyer, Alon Lavie, and Noah A. Smith. 2011. Better hypothesis testing for statistical machine translation: Controlling for opti- mizer instability. In Proceedings of the 49th Annual Meeting of the Association for Computational Lin- guistics: Human Language Technologies. Associa- tion for Computational Linguistics, pages 176-181. https://www.aclweb.org/anthology/P11-2031.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Multi-task learning for multiple language translation",
"authors": [
{
"first": "Daxiang",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Dianhai",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1723--1732",
"other_ids": {
"DOI": [
"10.3115/v1/P15-1166"
]
},
"num": null,
"urls": [],
"raw_text": "Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-task learning for multiple language translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th Interna- tional Joint Conference on Natural Language Pro- cessing (Volume 1: Long Papers). Association for Computational Linguistics, pages 1723-1732. https://doi.org/10.3115/v1/P15-1166.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Multi-way, multilingual neural machine translation with a shared attention mechanism",
"authors": [
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Orhan Firat, KyungHyun Cho, and Yoshua Ben- gio. 2016. Multi-way, multilingual neu- ral machine translation with a shared at- tention mechanism. CoRR abs/1601.01073.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Convolutional sequence to sequence learning",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Gehring",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Yarats",
"suffix": ""
},
{
"first": "Yann",
"middle": [
"N"
],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas Gehring, Michael Auli, David Grangier, De- nis Yarats, and Yann N. Dauphin. 2017. Con- volutional sequence to sequence learning. CoRR abs/1705.03122. http://arxiv.org/abs/1705.03122.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735-1780.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Google's multilingual neural machine translation system: Enabling zero-shot translation",
"authors": [
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Thorat",
"suffix": ""
},
{
"first": "Fernanda",
"middle": [
"B"
],
"last": "Vi\u00e9gas",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Wattenberg",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Macduff",
"middle": [],
"last": "Hughes",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Tho- rat, Fernanda B. Vi\u00e9gas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's multilingual neural machine translation system: Enabling zero-shot translation. CoRR abs/1611.04558. http://arxiv.org/abs/1611.04558.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR abs/1412.6980. http://arxiv.org/abs/1412.6980.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Moses: Open source toolkit for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Constantin",
"suffix": ""
},
{
"first": "Evan",
"middle": [],
"last": "Herbst",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond\u0159ej Bojar, Alexan- dra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine trans- lation. In Proceedings of the 45th Annual Meet- ing of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions. Association for Computational Lin- guistics, Prague, Czech Republic, pages 177-180. https://www.aclweb.org/anthology/P07-2045.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Six challenges for neural machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Knowles",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Workshop on Neural Machine Translation. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "28--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn and Rebecca Knowles. 2017. Six challenges for neural machine transla- tion. In Proceedings of the First Workshop on Neural Machine Translation. Association for Computational Linguistics, pages 28-39. https://www.aclweb.org/anthology/W17-3204.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Neural machine translation (seq2seq) tutorial",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Brevdo",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Eugene Brevdo, and Rui Zhao. 2017. Neural machine translation (seq2seq) tutorial. https://github.com/tensorflow/nmt.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Achieving open vocabulary neural machine translation with hybrid word-character models",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1054--1063",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1100"
]
},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong and Christopher D. Manning. 2016. Achieving open vocabulary neural ma- chine translation with hybrid word-character mod- els. In Proceedings of the 54th Annual Meet- ing of the Association for Computational Lin- guistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 1054-1063. https://doi.org/10.18653/v1/P16-1100.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Effective approaches to attentionbased neural machine translation",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1412--1421",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1166"
]
},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015a. Effective approaches to attention- based neural machine translation. In Proceed- ings of the 2015 Conference on Empirical Meth- ods in Natural Language Processing. Association for Computational Linguistics, pages 1412-1421. https://doi.org/10.18653/v1/D15-1166.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Effective approaches to attention-based neural machine translation",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Hieu Pham, and Christo- pher D. Manning. 2015b. Effective approaches to attention-based neural machine translation. CoRR abs/1508.04025. http://arxiv.org/abs/1508.04025.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A smorgasbord of features to combine phrase-based and neural machine translation",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Marie",
"suffix": ""
},
{
"first": "Atsushi",
"middle": [],
"last": "Fujita",
"suffix": ""
}
],
"year": 2018,
"venue": "AMTA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin Marie and Atsushi Fujita. 2018. A smorgas- bord of features to combine phrase-based and neural machine translation. In AMTA.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Regularization techniques for fine-tuning in neural machine translation",
"authors": [
{
"first": "Antonio",
"middle": [],
"last": "Valerio Miceli",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Barone",
"suffix": ""
},
{
"first": "Ulrich",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Germann",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sennrich",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1490--1495",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antonio Valerio Miceli Barone, Barry Haddow, Ul- rich Germann, and Rico Sennrich. 2017. Regu- larization techniques for fine-tuning in neural ma- chine translation. In Proceedings of the 2017 Con- ference on Empirical Methods in Natural Language Processing. Association for Computational Lin- guistics, Copenhagen, Denmark, pages 1490-1495. https://www.aclweb.org/anthology/D17-1156.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "lamtram: A toolkit for language and translation modeling using neural networks",
"authors": [
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graham Neubig. 2015. lamtram: A toolkit for lan- guage and translation modeling using neural net- works. https://github.com/neubig/lamtram.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "BLEU: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for au- tomatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. https://www.aclweb.org/anthology/P02-1040.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Overcoming the curse of sentence length for neural machine translation using automatic segmentation",
"authors": [
{
"first": "Jean",
"middle": [],
"last": "Pouget-Abadie",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Syntax, Semantics and Structure in Statistical Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean Pouget-Abadie, Dzmitry Bahdanau, Bart van Mer- ri\u00ebnboer, Kyunghyun Cho, and Yoshua Bengio. 2014. Overcoming the curse of sentence length for neural machine translation using automatic segmen- tation. Syntax, Semantics and Structure in Statistical Translation page 78.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. CoRR abs/1508.07909.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A study of translation edit rate with targeted human annotation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Snover",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Linnea",
"middle": [],
"last": "Micciulla",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 7th Conference of the Association for Machine Translation in the Americas. The Association for Machine Translation in the Americas",
"volume": "",
"issue": "",
"pages": "223--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Lin- nea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of the 7th Conference of the Associ- ation for Machine Translation in the Americas. The Association for Machine Translation in the Ameri- cas, pages 223-231. http://mt-archive.info/AMTA- 2006-Snover.pdf.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Dropout: A simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Machine Learning Research",
"volume": "15",
"issue": "",
"pages": "1929--1958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15:1929-1958.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc Le. 2014. Se- quence to sequence learning with neural networks. In NIPS.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Atten- tion is all you need. CoRR abs/1706.03762.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Chinese semantic role labeling with bidirectional recurrent neural networks",
"authors": [
{
"first": "Zhen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Tingsong",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Baobao",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Zhifang",
"middle": [],
"last": "Sui",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1626--1631",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1186"
]
},
"num": null,
"urls": [],
"raw_text": "Zhen Wang, Tingsong Jiang, Baobao Chang, and Zhi- fang Sui. 2015. Chinese semantic role labeling with bidirectional recurrent neural networks. In Proceed- ings of the 2015 Conference on Empirical Meth- ods in Natural Language Processing. Association for Computational Linguistics, pages 1626-1631. https://doi.org/10.18653/v1/D15-1186.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Improvements in phrase-based statistical machine translation",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "257--264",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Zens and Hermann Ney. 2004. Improvements in phrase-based statistical machine translation. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT- NAACL 2004. Association for Computational Lin- guistics, Boston, Massachusetts, USA, pages 257- 264. https://www.aclweb.org/anthology/N04-1033.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "(5) IN 90 % de notre temps entour\u00e9s par l'architecture . REF That's 90 percent of our time surrounded by architecture . BASE <unk> percent of our time via architecture . FREQ <unk> percent of our time surrounded by architecture . (6) IN D\u00e9bloquer ce potentiel est dans l'int\u00e9r\u00eat de chacun d'entre nous . REF Unlocking this potential is in the interest of every single one of us . BASE <unk> that potential is in all of us . FREQ <unk> that potential is in the interest of all of us ."
},
"TABREF0": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "IN Pas parce qu'ils sont moins bons, pas parce qu'ils sont moins travailleurs. REF And it's not because they're less smart, and it's not because they're less diligent. OUT And . . . . . . . . . . . . ."
},
"TABREF1": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td/><td colspan=\"2\">Languages #Words</td></tr><tr><td/><td>Czech</td><td>1.7M</td></tr><tr><td/><td>English</td><td>85.57M</td></tr><tr><td/><td>French</td><td>55.72M</td></tr><tr><td/><td>German</td><td>35.47M</td></tr><tr><td/><td>Russian</td><td>2.5M</td></tr><tr><td/><td>Slovene</td><td>1.45M</td></tr><tr><td/><td>Vietnamese</td><td>3.5M</td></tr><tr><td>Table 1:</td><td colspan=\"2\">The size of the Wikipedia dumps</td></tr><tr><td colspan=\"3\">(#words) used to calculate word frequencies for</td></tr><tr><td colspan=\"2\">each language.</td></tr></table>",
"text": "3) IN But now there is a bold new solution to get us out of this mess.REF Mais il exist une solution audacieusepour nous en sortir. OUT Mais maintenant il y a une solution pour nous en sortir.The English sentence contains the frequent word there and the less frequent word bold. The French output sentence is acceptable, but it is not accurate since the English word bold (audacieuse in the reference translation) was omitted in the output. During training, the frequency regularizer penalizes such cases that have a big divergence between the word frequencies in the input and output sentences.The purpose of our frequency-based regularizer"
},
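The caption above describes the intuition behind the frequency regularizer. As a rough illustration only, here is a minimal Python sketch of one plausible penalty of this kind; it is not the paper's lamtram-based implementation, and the frequency dictionaries (e.g. built from the Wikipedia dumps in Table 1), the log-frequency averaging, and all function names are assumptions introduced for this example.

```python
# Hypothetical sketch of a frequency-based penalty: it compares the average
# corpus log-frequency of the source tokens with that of the hypothesis
# tokens and returns their absolute difference, which could be added to the
# training loss with a small weight. `freq_src` / `freq_tgt` are assumed to
# be word->count dictionaries built from monolingual data (e.g. Wikipedia).

import math
from typing import Dict, List


def avg_log_frequency(tokens: List[str], freq: Dict[str, int]) -> float:
    """Mean log-frequency of a sentence; unseen words count as frequency 1."""
    if not tokens:
        return 0.0
    return sum(math.log(freq.get(t, 1) + 1.0) for t in tokens) / len(tokens)


def frequency_penalty(src_tokens: List[str],
                      hyp_tokens: List[str],
                      freq_src: Dict[str, int],
                      freq_tgt: Dict[str, int]) -> float:
    """Penalty grows when source and hypothesis diverge in how frequent their
    words are on average, e.g. when a rare source word such as 'bold' is
    dropped and only frequent words survive in the output."""
    return abs(avg_log_frequency(src_tokens, freq_src)
               - avg_log_frequency(hyp_tokens, freq_tgt))


# Usage sketch: the regularized objective would then be something like
#   loss = cross_entropy + lambda_freq * frequency_penalty(src, hyp, f_en, f_fr)
# where lambda_freq is a small, tuned hyperparameter.
```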
"TABREF3": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>: The size of the training data in sentence</td></tr><tr><td>pairs. To test our proposed models, we experi-</td></tr><tr><td>ment by translating to/from English for every non-</td></tr><tr><td>English language.</td></tr><tr><td>OUT C'\u00e9tait une femme forte portant une</td></tr><tr><td>fourrure autour du cou</td></tr><tr><td>Example</td></tr></table>",
"text": ""
},
"TABREF6": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "Number of steps it took until the models stopped improving for all the translation tasks."
},
"TABREF7": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td/><td>System</td><td/><td/><td colspan=\"2\">Languages</td><td/><td/></tr><tr><td/><td/><td colspan=\"6\">Czech French German Russian Slovene Vietnamese</td></tr><tr><td/><td>Baseline</td><td>14.01</td><td>32.13</td><td>22.07</td><td>12.87</td><td>5.60</td><td>26.43</td></tr><tr><td>EN\u2192Lang</td><td>Length Punct</td><td>14.65 14.98</td><td>32.32 32.79</td><td>21.64 22.89</td><td>12.81 13.06</td><td>4.98 5.64</td><td>26.77 26.71</td></tr><tr><td/><td colspan=\"2\">Frequency 14.75</td><td>33.47</td><td>22.14</td><td>13.50</td><td>1.95</td><td>26.12</td></tr><tr><td/><td>Baseline</td><td>21.32</td><td>31.51</td><td>24.41</td><td>15.39</td><td>8.85</td><td>24.94</td></tr><tr><td>Lang\u2192EN</td><td>Length Punct</td><td>21.83 21.96</td><td>31.09 32.43</td><td>24.56 25.17</td><td>15.29 16.36</td><td>9.05 9.63</td><td>25.87 25.32</td></tr><tr><td/><td colspan=\"2\">Frequency 21.88</td><td>32.26</td><td>24.87</td><td>15.90</td><td>9.18</td><td>24.35</td></tr><tr><td/><td/><td/><td colspan=\"2\">(a) BLEU</td><td/><td/><td/></tr><tr><td/><td>Baseline</td><td>17.62</td><td>51.11</td><td>40.47</td><td>16.12</td><td>26.52</td><td>11.46</td></tr><tr><td>EN\u2192Lang</td><td>Length Punct</td><td>18.41 18.43</td><td>51.10 51.67</td><td>39.93 41.18</td><td>16.80 16.77</td><td>27.03 27.00</td><td>12.01 12.30</td></tr><tr><td/><td colspan=\"2\">Frequency 18.16</td><td>52.10</td><td>40.57</td><td>16.79</td><td>26.95</td><td>12.29</td></tr><tr><td/><td>Baseline</td><td>24.66</td><td>31.77</td><td>27.23</td><td>20.63</td><td>16.28</td><td>28.11</td></tr><tr><td>Lang\u2192EN</td><td>Length Punct</td><td>25.07 25.10</td><td>31.55 32.31</td><td>27.11 27.75</td><td>20.65 21.45</td><td>15.95 17.05</td><td>28.71 28.48</td></tr><tr><td/><td colspan=\"2\">Frequency 25.27</td><td>32.16</td><td>27.43</td><td>20.80</td><td>16.85</td><td>27.86</td></tr><tr><td/><td/><td/><td colspan=\"2\">(b) METEOR</td><td/><td/><td/></tr><tr><td/><td>Baseline</td><td>62.64</td><td>49.21</td><td>57.17</td><td>70.17</td><td>77.20</td><td>54.29</td></tr><tr><td>EN\u2192Lang</td><td>Length Punct</td><td>62.18 61.69</td><td>48.96 48.57</td><td>57.90 57.24</td><td>70.85 70.04</td><td>79.51 77.02</td><td>53.93 54.03</td></tr><tr><td/><td colspan=\"2\">Frequency 62.46</td><td>48.87</td><td>57.63</td><td>69.40</td><td>87.20</td><td>54.99</td></tr><tr><td/><td>Baseline</td><td>57.06</td><td>46.42</td><td>53.31</td><td>63.62</td><td>72.46</td><td>53.66</td></tr><tr><td>Lang\u2192EN</td><td>Length Punct</td><td>55.68 56.29</td><td>46.44 45.37</td><td>53.29 52.31</td><td>63.31 62.24</td><td>72.54 72.11</td><td>52.74 53.51</td></tr><tr><td/><td colspan=\"2\">Frequency 57.32</td><td>45.55</td><td>52.75</td><td>62.10</td><td>75.73</td><td>54.72</td></tr><tr><td/><td/><td/><td colspan=\"2\">(c) TER</td><td/><td/><td/></tr></table>",
"text": "shows the results for all language pairs and all metrics. We observe an improvement over the"
},
"TABREF10": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "(10) IN Imeti mora\u0161 otroke , da pre\u017eivi\u0161 . REF You need to have children to survive . BASE Well you have the kids that you need to educate . FREQ You have to have kids to educate ."
},
"TABREF11": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "IN Mi smo tu na vrhu . REF We are here on top . BASE What we are at the top . FREQ We are here at the top ."
}
}
}
}