{
"paper_id": "N18-1031",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:54:39.748198Z"
},
"title": "Improving Lexical Choice in Neural Machine Translation",
"authors": [
{
"first": "Toan",
"middle": [
"Q"
],
"last": "Nguyen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Engineeering University of Notre Dame",
"location": {}
},
"email": "[email protected]"
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Engineeering University of Notre Dame",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We explore two solutions to the problem of mistranslating rare words in neural machine translation. First, we argue that the standard output layer, which computes the inner product of a vector representing the context with all possible output word embeddings, rewards frequent words disproportionately, and we propose to fix the norms of both vectors to a constant value. Second, we integrate a simple lexical module which is jointly trained with the rest of the model. We evaluate our approaches on eight language pairs with data sizes ranging from 100k to 8M words, and achieve improvements of up to +4.3 BLEU, surpassing phrasebased translation in nearly all settings. 1",
"pdf_parse": {
"paper_id": "N18-1031",
"_pdf_hash": "",
"abstract": [
{
"text": "We explore two solutions to the problem of mistranslating rare words in neural machine translation. First, we argue that the standard output layer, which computes the inner product of a vector representing the context with all possible output word embeddings, rewards frequent words disproportionately, and we propose to fix the norms of both vectors to a constant value. Second, we integrate a simple lexical module which is jointly trained with the rest of the model. We evaluate our approaches on eight language pairs with data sizes ranging from 100k to 8M words, and achieve improvements of up to +4.3 BLEU, surpassing phrasebased translation in nearly all settings. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Neural network approaches to machine translation Bahdanau et al., 2015; Luong et al., 2015a; Gehring et al., 2017) are appealing for their single-model, end-to-end training process, and have demonstrated competitive performance compared to earlier statistical approaches (Koehn et al., 2007; Junczys-Dowmunt et al., 2016) . However, there are still many open problems in NMT (Koehn and Knowles, 2017) . One particular issue is mistranslation of rare words. For example, consider the Uzbek sentence:",
"cite_spans": [
{
"start": 49,
"end": 71,
"text": "Bahdanau et al., 2015;",
"ref_id": "BIBREF1"
},
{
"start": 72,
"end": 92,
"text": "Luong et al., 2015a;",
"ref_id": "BIBREF18"
},
{
"start": 93,
"end": 114,
"text": "Gehring et al., 2017)",
"ref_id": "BIBREF4"
},
{
"start": 271,
"end": 291,
"text": "(Koehn et al., 2007;",
"ref_id": "BIBREF12"
},
{
"start": 292,
"end": 321,
"text": "Junczys-Dowmunt et al., 2016)",
"ref_id": "BIBREF10"
},
{
"start": 375,
"end": 400,
"text": "(Koehn and Knowles, 2017)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Source: Ammo muammolar hali ko'p, deydi amerikalik olim Entoni Fauchi. Reference: But still there are many problems, says American scientist Anthony Fauci. Baseline NMT: But there is still a lot of problems, says James Chan.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "At the position where the output should be Fauci, the NMT model's top three candidates are Chan, Fauci, and Jenner. All three surnames occur in the training data with reference to immunologists: Fauci is the director of the National Institute of Allergy and Infectious Diseases, Margaret (not James) Chan is the former director of the World Health Organization, and Edward Jenner invented smallpox vaccine. But Chan is more frequent in the training data than Fauci, and James is more frequent than either Anthony or Margaret.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Because NMT learns word representations in continuous space, it tends to translate words that \"seem natural in the context, but do not reflect the content of the source sentence\" (Arthur et al., 2016) . This coincides with other observations that NMT's translations are often fluent but lack accuracy (Wang et al., 2017b; Wu et al., 2016) .",
"cite_spans": [
{
"start": 179,
"end": 200,
"text": "(Arthur et al., 2016)",
"ref_id": "BIBREF0"
},
{
"start": 301,
"end": 321,
"text": "(Wang et al., 2017b;",
"ref_id": "BIBREF29"
},
{
"start": 322,
"end": 338,
"text": "Wu et al., 2016)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Why does this happen? At each time step, the model's distribution over output words e is p(e) \u221d exp W e \u2022h + b e",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "where W e and b e are a vector and a scalar depending only on e, andh is a vector depending only on the source sentence and previous output words. We propose two modifications to this layer. First, we argue that the term W e \u2022h, which measures how well e fits into the contexth, favors common words disproportionately, and show that it helps to fix the norm of both vectors to a constant. Second, we add a new term representing a more direct connection from the source sentence, which allows the model to better memorize translations of rare words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Below, we describe our models in more detail. Then we evaluate our approaches on eight language pairs, with training data sizes ranging from 100k words to 8M words, and show improvements of up to +4.3 BLEU, surpassing phrasebased translation in nearly all settings. Finally, we provide some analysis to better understand why our modifications work well. ha-en tu-en hu-en untied embeddings 17.2 11.5 26.5 tied embeddings 17.4 13.8 26.5 don't normalizeh t 18.6 14.2 27.1 normalizeh t 20.5 16.1 28.8 Table 1 : Preliminary experiments show that tying target embeddings with output layer weights performs as well as or better than the baseline, and that normalizingh is better than not normalizingh. All numbers are BLEU scores on development sets, scored against tokenized references.",
"cite_spans": [],
"ref_spans": [
{
"start": 498,
"end": 505,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given a source sequence f = f 1 f 2 \u2022 \u2022 \u2022 f m , the goal of NMT is to find the target sequence e = e 1 e 2 \u2022 \u2022 \u2022 e n that maximizes the objective function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "2"
},
{
"text": "log p(e | f ) = n t=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "2"
},
{
"text": "log p(e t | e <t , f ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "2"
},
{
"text": "We use the global attentional model with general scoring function and input feeding by Luong et al. (2015a) . We provide only a very brief overview of this model here. It has an encoder, an attention, and a decoder. The encoder converts the words of the source sentence into word embeddings, then into a sequence of hidden states. The decoder generates the target sentence word by word with the help of the attention. At each time step t, the attention calculates a set of attention weights a t (s). These attention weights are used to form a weighted average of the encoder hidden states to form a context vector c t . From c t and the hidden state of the decoder are computed the attentional hidden stateh t . Finally, the predicted probability distribution of the t'th target word is:",
"cite_spans": [
{
"start": 87,
"end": 107,
"text": "Luong et al. (2015a)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(e t | e <t , f ) = softmax(W oh t + b o ).",
"eq_num": "(1)"
}
],
"section": "Neural Machine Translation",
"sec_num": "2"
},
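{
"text": "As an illustration of equation (1), the following minimal numpy sketch computes the output distribution from an attentional hidden state; the vocabulary size, dimensions, and variable names are illustrative only, not taken from any actual implementation.\n\nimport numpy as np\n\ndef softmax(z):\n    e = np.exp(z - z.max())  # subtract the max for numerical stability\n    return e / e.sum()\n\ndef output_distribution(h_tilde, W_o, b_o):\n    # h_tilde: attentional hidden state, shape (d,)\n    # W_o: output weight matrix, shape (V, d); row e is the output embedding W_e\n    # b_o: output bias, shape (V,)\n    logits = W_o @ h_tilde + b_o  # logit for word e is W_e . h_tilde + b_e\n    return softmax(logits)\n\nrng = np.random.default_rng(0)\nV, d = 1000, 512\np = output_distribution(rng.normal(size=d), rng.normal(size=(V, d)), np.zeros(V))\nassert np.isclose(p.sum(), 1.0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "2"
},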
{
"text": "The rows of the output layer's weight matrix W o can be thought of as embeddings of the output vocabulary, and sometimes are in fact tied to the embeddings in the input layer, reducing model size while often achieving similar performance (Inan et al., 2017; Press and Wolf, 2017) . We verified this claim on some language pairs and found out that this approach usually performs better than without tying, as seen in Table 1 . For this reason, we always tie the target embeddings and W o in all of our models.",
"cite_spans": [
{
"start": 238,
"end": 257,
"text": "(Inan et al., 2017;",
"ref_id": "BIBREF8"
},
{
"start": 258,
"end": 279,
"text": "Press and Wolf, 2017)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 416,
"end": 423,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "2"
},
{
"text": "The output word distribution (1) can be written as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalization",
"sec_num": "3"
},
{
"text": "p(e) \u221d exp W e h cos \u03b8 W e ,h + b e ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalization",
"sec_num": "3"
},
{
"text": "where W e is the embedding of e, b e is the e'th component of the bias b o , and \u03b8 W e ,h is the angle between W e andh. We can intuitively interpret the terms as follows. The term h has the effect of sharpening or flattening the distribution, reflecting whether the model is more or less certain in a particular context. The cosine similarity cos \u03b8 W e ,h measures how well e fits into the context. The bias b e controls how much the word e is generated; it is analogous to the language model in a log-linear translation model (Och and Ney, 2002) .",
"cite_spans": [
{
"start": 528,
"end": 547,
"text": "(Och and Ney, 2002)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Normalization",
"sec_num": "3"
},
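{
"text": "This decomposition is simply the standard identity for a dot product; a small numpy check with arbitrary illustrative vectors:\n\nimport numpy as np\n\nrng = np.random.default_rng(1)\nW_e = rng.normal(size=300)\nh_tilde = rng.normal(size=300)\n\n# The logit W_e . h_tilde factors exactly into ||W_e|| * ||h_tilde|| * cos(theta).\ncos_theta = (W_e @ h_tilde) / (np.linalg.norm(W_e) * np.linalg.norm(h_tilde))\nassert np.isclose(W_e @ h_tilde, np.linalg.norm(W_e) * np.linalg.norm(h_tilde) * cos_theta)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalization",
"sec_num": "3"
},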
{
"text": "Finally, W e also controls how much e is generated. Figure 1 shows that it generally correlates with frequency. But because it is multiplied by cos \u03b8 W e ,h , it has a stronger effect on words whose embeddings have direction similar toh, and less effect or even a negative effect on words in other directions. We hypothesize that the result is that the model learns W e that are disproportionately large.",
"cite_spans": [],
"ref_spans": [
{
"start": 52,
"end": 60,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Normalization",
"sec_num": "3"
},
{
"text": "For example, returning to the example from Section 1, these terms are: Observe that cos \u03b8 W e ,h and even b e both favor the correct output word Fauci, whereas W e favors the more frequent, but incorrect, word Chan. The most frequently-mentioned immunologist trumps other immunologists. To solve this issue, we propose to fix the norm of all target word embeddings to some value r. Followingthe weight normalization approach of Salimans and Kingma (2016) , we reparameterize W e as r v e v e , but keep r fixed. A similar argument could be made for h t : because a large h t sharpens the distribution, causing frequent words to more strongly dominate rare words, we might want to limit it as well. We compared both approaches on a development set and found that replacingh t in equation 1 ",
"cite_spans": [
{
"start": 428,
"end": 454,
"text": "Salimans and Kingma (2016)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Normalization",
"sec_num": "3"
},
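{
"text": "A minimal numpy sketch of the fixed-norm output layer described in this section (forward pass only; the value r = 5 and all variable names are illustrative, and the unconstrained parameters v_e would be learned in practice):\n\nimport numpy as np\n\ndef softmax(z):\n    e = np.exp(z - z.max())\n    return e / e.sum()\n\ndef fixnorm_distribution(h_tilde, V_emb, b_o, r=5.0):\n    # V_emb: unconstrained parameters v_e, one row per output word, shape (V, d).\n    # Reparameterize each output embedding as W_e = r * v_e / ||v_e|| (norm fixed to r).\n    W = r * V_emb / np.linalg.norm(V_emb, axis=1, keepdims=True)\n    # Likewise rescale the attentional hidden state to norm r.\n    h = r * h_tilde / np.linalg.norm(h_tilde)\n    # Each logit is now r^2 * cos(theta) + b_e, so embedding norms no longer favor frequent words.\n    return softmax(W @ h + b_o)\n\nrng = np.random.default_rng(2)\np = fixnorm_distribution(rng.normal(size=512), rng.normal(size=(1000, 512)), np.zeros(1000))\nassert np.isclose(p.sum(), 1.0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalization",
"sec_num": "3"
},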
{
"text": "The attentional hidden stateh contains information not only about the source word(s) corresponding to the current target word, but also the contexts of those source words and the preceding context of the target word. This could make the model prone to generate a target word that fits the context but doesn't necessarily correspond to the source word(s). Count-based statistical models, by contrast, don't have this problem, because they simply don't model any of this context. Arthur et al. (2016) try to alleviate this issue by integrating a count-based lexicon into an NMT system. However, this lexicon must be trained separately using GIZA++ (Och and Ney, 2003) , and its parameters form a large, sparse array, which can be difficult to store in GPU memory. We propose instead to use a simple feedforward neural network (FFNN) that is trained jointly with the rest of the NMT model to generate a target word based directly on the source word(s). Let f s (s = 1, . . . , m) be the embeddings of the source words. We use the attention weights to form a tokens vocab layers \u00d710 6",
"cite_spans": [
{
"start": 478,
"end": 498,
"text": "Arthur et al. (2016)",
"ref_id": "BIBREF0"
},
{
"start": 646,
"end": 665,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Translation",
"sec_num": "4"
},
{
"text": "\u00d710 3 num/size ta-en 0.2/0.1 4.0/3.4 1/512 ur-en 0.2/0.2 4.2/4.2 1/512 ha-en 0.8/0.8 10.6/10.4 2/512 tu-en 0.8/1.1 21.1/13.3 2/512 uz-en 1.5/1.9 29.8/17.4 2/512 hu-en 2.0/2.3 27.3/15.7 2/512 en-vi 2.1/2.6 17.0/7.7 2/512 en-ja (BTEC) 3.6/5.0 17.8/21.8 4/768 en-ja (KFTT) 7.8/8.0 48.2/49.1 4/768 weighted average of the embeddings (not the hidden states, as in the main model) to give an average source-word embedding at each decoding time step t:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Translation",
"sec_num": "4"
},
{
"text": "f t = tanh s a t (s) f s .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Translation",
"sec_num": "4"
},
{
"text": "Then we use a one-hidden-layer FFNN with skip connections (He et al., 2016) :",
"cite_spans": [
{
"start": 58,
"end": 75,
"text": "(He et al., 2016)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Translation",
"sec_num": "4"
},
{
"text": "h t = tanh(W f t ) + f t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Translation",
"sec_num": "4"
},
{
"text": "and combine its output with the decoder output to get the predictive distribution over output words at time step t:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Translation",
"sec_num": "4"
},
{
"text": "p(y t | y <t , x) = softmax(W oh t + b o + W h t + b ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Translation",
"sec_num": "4"
},
{
"text": "For the same reasons that were given in Section 3 for normalizingh t and the rows of W o t , we normalize h t and the rows of W as well. Note, however, that we do not tie the rows of W with the word embeddings; in preliminary experiments, we found this to yield worse results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Translation",
"sec_num": "4"
},
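{
"text": "Putting the equations of this section together, the following is a minimal numpy sketch of the lexical module's forward pass. Shapes and names are illustrative; the normalization of h^ℓ_t and the rows of W^ℓ mentioned above is omitted for brevity, and in the real model W, W^ℓ, and b^ℓ are trained jointly with the rest of the network.\n\nimport numpy as np\n\ndef softmax(z):\n    e = np.exp(z - z.max())\n    return e / e.sum()\n\ndef lexical_logits(a_t, F_src, W, W_lex, b_lex):\n    # a_t: attention weights over source positions, shape (m,)\n    # F_src: source word embeddings f_s, shape (m, d)\n    f_t = np.tanh(a_t @ F_src)      # attention-weighted average of source embeddings\n    h_lex = np.tanh(W @ f_t) + f_t  # one-hidden-layer FFNN with a skip connection\n    return W_lex @ h_lex + b_lex    # lexical contribution to the output logits\n\nrng = np.random.default_rng(3)\nm, d, V = 7, 512, 1000\na_t = softmax(rng.normal(size=m))\nlogits_main = rng.normal(size=V)    # stand-in for W_o h̃_t + b_o from the main decoder\nlogits_lex = lexical_logits(a_t, rng.normal(size=(m, d)), rng.normal(size=(d, d)), rng.normal(size=(V, d)), np.zeros(V))\np = softmax(logits_main + logits_lex)  # combined predictive distribution over output words\nassert np.isclose(p.sum(), 1.0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Translation",
"sec_num": "4"
},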
{
"text": "We conducted experiments testing our normalization approach and our lexical model on eight language pairs using training data sets of various sizes. This section describes the systems tested and our results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We evaluated our approaches on various language pairs and datasets:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "\u2022 Tamil (ta), Urdu (ur), Hausa (ha), Turkish (tu), and Hungarian (hu) to English (en), using data from the LORELEI program.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "\u2022 English to Vietnamese (vi), using data from the IWSLT 2015 shared task. 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "\u2022 To compare our approach with that of Arthur et al. (2016) , we also ran on their English to Japanese (ja) KFTT and BTEC datasets. 3 We tokenized the LORELEI datasets using the default Moses tokenizer, except for Urdu-English, where the Urdu side happened to be tokenized using Morfessor FlatCat (w = 0.5). We used the preprocessed English-Vietnamese and English-Japanese datasets as distributed by Luong et al., and Arthur et al., respectively . Statistics about our data sets are shown in Table 2 .",
"cite_spans": [
{
"start": 39,
"end": 59,
"text": "Arthur et al. (2016)",
"ref_id": "BIBREF0"
},
{
"start": 132,
"end": 133,
"text": "3",
"ref_id": null
},
{
"start": 400,
"end": 445,
"text": "Luong et al., and Arthur et al., respectively",
"ref_id": null
}
],
"ref_spans": [
{
"start": 492,
"end": 499,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "We compared our approaches against two baseline NMT systems:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Systems",
"sec_num": "5.2"
},
{
"text": "untied, which does not tie the rows of W o to the target word embeddings, and tied, which does.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Systems",
"sec_num": "5.2"
},
{
"text": "In addition, we compared against two other baseline systems:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Systems",
"sec_num": "5.2"
},
{
"text": "Moses: The Moses phrase-based translation system (Koehn et al., 2007) , trained on the same data as the NMT systems, with the same maximum sentence length of 50. No additional data was used for training the language model. Unlike the NMT systems, Moses used the full vocabulary from the training data; unknown words were copied to the target sentence. Arthur: Our reimplementation of the discrete lexicon approach of Arthur et al. (2016) . We only tried their auto lexicon, using GIZA++ (Och and Ney, 2003) , integrated using their bias approach. Note that we also tied embedding as we found it also helped in this case.",
"cite_spans": [
{
"start": 49,
"end": 69,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF12"
},
{
"start": 417,
"end": 437,
"text": "Arthur et al. (2016)",
"ref_id": "BIBREF0"
},
{
"start": 487,
"end": 506,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Systems",
"sec_num": "5.2"
},
{
"text": "Against these baselines, we compared our new systems:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Systems",
"sec_num": "5.2"
},
{
"text": "fixnorm: The normalization approach described in Section 3. fixnorm+lex: The same, with the addition of the lexical translation module from Section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Systems",
"sec_num": "5.2"
},
{
"text": "Model For all NMT systems, we fed the source sentences to the encoder in reverse order during both training and testing, following Luong et al. (2015a) . Information about the number and size of hidden layers is shown in Table 2 . The word embedding size is always equal to the hidden layer size.",
"cite_spans": [
{
"start": 131,
"end": 151,
"text": "Luong et al. (2015a)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 221,
"end": 228,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Details",
"sec_num": "5.3"
},
{
"text": "Following common practice, we only trained on sentences of 50 tokens or less. We limited the vocabulary to word types that appear no less than 5 times in the training data and map the rest to UNK. For the English-Japanese and English-Vietnamese datasets, we used the vocabulary sizes reported in their respective papers (Arthur et al., 2016; Luong and Manning, 2015) .",
"cite_spans": [
{
"start": 320,
"end": 341,
"text": "(Arthur et al., 2016;",
"ref_id": "BIBREF0"
},
{
"start": 342,
"end": 366,
"text": "Luong and Manning, 2015)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Details",
"sec_num": "5.3"
},
{
"text": "For fixnorm, we tried r \u2208 {3, 5, 7} and selected the best value based on the development set performance, which was r = 5 except for English-Japanese (BTEC), where r = 7. For fixnorm+lex, because W sht +W h t takes on values in [\u22122r 2 , 2r 2 ], we reduced our candidate r values by roughly a factor of \u221a 2, to r \u2208 {2, 3.5, 5}. A radius r = 3.5 seemed to work the best for all language pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Details",
"sec_num": "5.3"
},
{
"text": "Training We trained all NMT systems with Adadelta (Zeiler, 2012). All parameters were initialized uniformly from [\u22120.01, 0.01]. When a gradient's norm exceeded 5, we normalized it to 5. We also used dropout on non-recurrent connections only (Zaremba et al., 2014) , with probability 0.2. We used minibatches of size 32. We trained for 50 epochs, validating on the development set after every epoch, except on English-Japanese, where we validated twice per epoch. We kept the best checkpoint according to its BLEU on the development set.",
"cite_spans": [
{
"start": 241,
"end": 263,
"text": "(Zaremba et al., 2014)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Details",
"sec_num": "5.3"
},
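{
"text": "The following PyTorch sketch shows how the training hyper-parameters above could be wired up (Adadelta, uniform initialization in [-0.01, 0.01], gradient-norm clipping at 5, and dropout 0.2); the tiny model here is only a placeholder, not the architecture used in this paper.\n\nimport torch\nimport torch.nn as nn\n\n# Placeholder model: embedding, dropout on a non-recurrent connection, output projection.\nmodel = nn.Sequential(nn.Embedding(1000, 512), nn.Dropout(p=0.2), nn.Linear(512, 1000))\n\n# Initialize all parameters uniformly from [-0.01, 0.01].\nfor p in model.parameters():\n    nn.init.uniform_(p, -0.01, 0.01)\n\noptimizer = torch.optim.Adadelta(model.parameters())\n\ndef train_step(batch_x, batch_y):\n    optimizer.zero_grad()\n    logits = model(batch_x)\n    loss = nn.functional.cross_entropy(logits.view(-1, logits.size(-1)), batch_y.view(-1))\n    loss.backward()\n    # Renormalize the gradient to norm 5 whenever it exceeds 5.\n    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)\n    optimizer.step()\n    return loss.item()\n\nx = torch.randint(0, 1000, (32, 20))  # a minibatch of 32 token sequences\ny = torch.randint(0, 1000, (32, 20))\nprint(train_step(x, y))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Details",
"sec_num": "5.3"
},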
{
"text": "Inference We used beam search with a beam size of 12 for translating both the development and test sets. Since NMT often favors short translations (Cho et al., 2014), we followed Wu et al. (2016) in using a modified score s(e | f ) in place of log-probability:",
"cite_spans": [
{
"start": 179,
"end": 195,
"text": "Wu et al. (2016)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Details",
"sec_num": "5.3"
},
{
"text": "s(e | f ) = log p(e | f ) lp(e)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Details",
"sec_num": "5.3"
},
{
"text": "lp(e) = (5 + |e|) \u03b1 (5 + 1) \u03b1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Details",
"sec_num": "5.3"
},
{
"text": "We set \u03b1 = 0.8 for all of our experiments. Finally, we applied a postprocessing step to replace each UNK in the target translation with the source word with the highest attention score (Luong et al., 2015b) .",
"cite_spans": [
{
"start": 185,
"end": 206,
"text": "(Luong et al., 2015b)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Details",
"sec_num": "5.3"
},
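{
"text": "A small Python sketch of this length-normalized rescoring (illustrative function names; α defaults to 0.8 as above):\n\ndef length_penalty(length, alpha=0.8):\n    # lp(e) = (5 + |e|)^alpha / (5 + 1)^alpha\n    return (5 + length) ** alpha / (5 + 1) ** alpha\n\ndef rescore(log_prob, length, alpha=0.8):\n    # s(e | f) = log p(e | f) / lp(e), used to rank finished beam hypotheses.\n    return log_prob / length_penalty(length, alpha)\n\n# The penalty grows with length, so longer hypotheses are penalized less per word,\n# counteracting NMT's preference for short translations.\nassert rescore(-10.0, 20) > rescore(-10.0, 10)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Details",
"sec_num": "5.3"
},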
{
"text": "Evaluation For translation into English, we report case-sensitive NIST BLEU against detokenized references. For English-Japanese and English-Vietnamese, we report tokenized, casesensitive BLEU following Arthur et al. (2016) and Luong and Manning (2015) . We measure statistical significance using bootstrap resampling (Koehn, 2004) .",
"cite_spans": [
{
"start": 203,
"end": 223,
"text": "Arthur et al. (2016)",
"ref_id": "BIBREF0"
},
{
"start": 228,
"end": 252,
"text": "Luong and Manning (2015)",
"ref_id": "BIBREF16"
},
{
"start": 318,
"end": 331,
"text": "(Koehn, 2004)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Details",
"sec_num": "5.3"
},
{
"text": "6 Results and Analysis",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Details",
"sec_num": "5.3"
},
{
"text": "Our results are shown in Table 3 . First, we observe, as has often been noted in the literature, that NMT tends to perform poorer than PBMT on low resource settings (note that the rows of this table are sorted by training data size).",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 32,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Overall",
"sec_num": "6.1"
},
{
"text": "Our fixnorm system alone shows large improvements (shown in parentheses) relative to tied. Integrating the lexical module (fixnorm+lex) adds in further gains. Our fixnorm+lex models surpass Moses on all tasks except Urdu-and Hausa-English, where it is 1.6 and 0.7 BLEU short respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall",
"sec_num": "6.1"
},
{
"text": "The method of Arthur et al. (2016) does improve over the baseline NMT on most language pairs, but not by as much and as consistently as our models, and often not as well as Moses. Unfortunately, we could not replicate their approach for English-Japanese (KFTT) because the lexical table was too large to fit into the computational graph.",
"cite_spans": [
{
"start": 14,
"end": 34,
"text": "Arthur et al. (2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Overall",
"sec_num": "6.1"
},
{
"text": "For English-Japanese (BTEC), we note that, due to the small size of the test set, all systems except for Moses are in fact not significantly different from tied (p > 0.01). On all other tasks, however, our systems significantly improve over tied (p < 0.01).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall",
"sec_num": "6.1"
},
{
"text": "In Table 4 , we show examples of typical translation mistakes made by the baseline NMT systems. In the Uzbek example (top), untied and tied have confused 34 with UNK and 700, while in the Turkish one (middle), they incorrectly output other proper names, Afghan and Myanmar, for the proper name Kenya. Our systems, on the other hand, translate these words correctly.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Impact on translation",
"sec_num": "6.2"
},
{
"text": "The bottom example is the one introduced in Section 1. We can see that our fixnorm approach does not completely solve the mistranslation issue, since it translates Entoni Fauchi to UNK UNK (which is arguably better than James Chan). On the other hand, fixnorm+lex gets this right. To better understand how the lexical module helps in this case, we look at the top five translations for the word Fauci in fixnorm+lex: As we can see, while cos \u03b8 W e ,h might still be confused between similar words, cos \u03b8 W l e ,h l significantly favors Fauci.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Impact on translation",
"sec_num": "6.2"
},
{
"text": "Both our baseline NMT and fixnorm models suffer from the problem of shifted alignments noted by Koehn and Knowles (2017) . As seen in Figure 2a and 2b, the alignments for those two systems seem to shift by one word to the left (on the source side). For example, n\u00f3i should be aligned to said instead of Telekom, and so on. Although this is not a problem per se, since the decoder can decide to attend to any position in the encoder states as long as the state at that position holds the information the decoder needs, this becomes a real issue when we need to make use of the alignment information, as in unknown word replacement (Luong et al., 2015b ). As we can see in Figure 2 , because of the alignment shift, both tied and fixnorm incorrectly replace the two unknown words (in bold) with But Deutsche instead of Deutsche Telekom. In contrast, under fixnorm+lex and the model of Arthur et al. (2016) , the alignment is corrected, causing the UNKs to be replaced with the correct source words.",
"cite_spans": [
{
"start": 96,
"end": 120,
"text": "Koehn and Knowles (2017)",
"ref_id": "BIBREF13"
},
{
"start": 631,
"end": 651,
"text": "(Luong et al., 2015b",
"ref_id": "BIBREF19"
},
{
"start": 884,
"end": 904,
"text": "Arthur et al. (2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 134,
"end": 144,
"text": "Figure 2a",
"ref_id": "FIGREF2"
},
{
"start": 672,
"end": 680,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Alignment and unknown words",
"sec_num": "6.3"
},
{
"text": "The single most important hyper-parameter in our models is r. Informally speaking, r controls how much surface area we have on the hypersphere to allocate to word embeddings. To better understand its impact, we look at the training perplexity and dev BLEUs during training with different values of r. Tomorrow a conference for aid will be conducted in Kenya . untied Tomorrow there will be an Afghan relief conference . tied Tomorrow there will be a relief conference in Myanmar . fixnorm Tomorrow it will be a aid conference in Kenya . fixnorm+lex Tomorrow there will be a relief conference in Kenya .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Impact of r",
"sec_num": "6.4"
},
{
"text": "Ammo muammolar hali ko'p , deydi amerikalik olim Entoni Fauchi . reference But still there are many problems , says American scientist Anthony Fauci . untied But there is still a lot of problems , says James Chan . tied However , there is still a lot of problems , says American scientists . fixnorm But there is still a lot of problems , says American scientist UNK UNK . fixnorm+lex But there are still problems , says American scientist Anthony Fauci . worse training perplexity, indicating underfitting, whereas if r is too large, the model achieves better training perplexity but decrased dev BLEU, indicating overfitting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "input",
"sec_num": null
},
{
"text": "One byproduct of lex is the lexicon, which we can extract and examine simply by feeding each source word embedding to the FFNN module and calculating p (y) = softmax(W h +b ). In Table 5 , we show the top translations for some entries in the lexicons extracted from fixnorm+lex for Hungarian, Turkish, and Hausa-English. As expected, the lexical distribution is sparse, with a few top translations accounting for the most probability mass.",
"cite_spans": [],
"ref_spans": [
{
"start": 179,
"end": 186,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Lexicon",
"sec_num": "6.5"
},
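{
"text": "A minimal numpy sketch of this lexicon extraction, reusing the lexical module of Section 4 (names are illustrative; in practice W, W^ℓ, and b^ℓ would be the trained parameters):\n\nimport numpy as np\n\ndef softmax(z):\n    e = np.exp(z - z.max())\n    return e / e.sum()\n\ndef lexicon_entry(f_src, W, W_lex, b_lex, k=5):\n    # Feed a single source word embedding through the lexical FFNN,\n    # as if attention put all of its weight on that one word.\n    f = np.tanh(f_src)\n    h_lex = np.tanh(W @ f) + f\n    p = softmax(W_lex @ h_lex + b_lex)  # p_lex(y) = softmax(W_lex h_lex + b_lex)\n    top = np.argsort(-p)[:k]\n    return top, p[top]                  # top-k target word ids and their probabilities\n\nrng = np.random.default_rng(4)\nd, V = 512, 1000\nids, probs = lexicon_entry(rng.normal(size=d), rng.normal(size=(d, d)), rng.normal(size=(V, d)), np.zeros(V))\nprint(ids, probs)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicon",
"sec_num": "6.5"
},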
{
"text": "Byte-Pair-Encoding (BPE) (Sennrich et al., 2016) is commonly used in NMT to break words into word-pieces, improving the translation of rare words. For this reason, we reran our experiments using BPE on the LORELEI and English-Vietnamese datasets. Additionally, to see if our proposed methods work in high-resource scenarios, we run on the WMT 2014 English-German (en-de) dataset, 4 using newstest2013 as the development set and reporting tokenized, case-sensitive BLEU on newstest2014 and newstest2015. We validate across different numbers of BPE operations; specifically, we try {1k, 2k, 3k} merge operations for ta-en and ur-en due to their small sizes, {10k, 12k, 15k} for the other LORELEI datasets and en-vi, and 32k for en-de. Using BPE results in much smaller vocabulary sizes, so we do not apply a vocabulary cut-off. Instead, we train on an additional copy of the training data in which all types that appear once are replaced with UNK, and halve the number of epochs accordingly. Our models, training, and evaluation processes are largely the same, except that for en-de, we use a 4-layer decoder and 4-layer bidirectional encoder (2 layers for each direction). Table 7 shows that our proposed methods also significantly improve the translation when used with BPE, for both high and low resource language pairs. With BPE, we are only behind Moses on Urdu-English.",
"cite_spans": [
{
"start": 25,
"end": 48,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 1172,
"end": 1179,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Byte Pair Encoding",
"sec_num": "6.6"
},
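{
"text": "A small Python sketch of the extra-copy trick described above, replacing every type that appears only once with UNK in a second copy of the training data (the token lists and the UNK string are illustrative assumptions):\n\nfrom collections import Counter\n\ndef singleton_unk_copy(sentences, unk='<unk>'):\n    # sentences: list of token lists, one per training sentence.\n    counts = Counter(tok for sent in sentences for tok in sent)\n    # Second copy of the data in which every singleton type becomes UNK.\n    return [[tok if counts[tok] > 1 else unk for tok in sent] for sent in sentences]\n\ndata = [['a', 'b', 'c'], ['a', 'b', 'd']]\nprint(singleton_unk_copy(data))  # [['a', 'b', '<unk>'], ['a', 'b', '<unk>']]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Byte Pair Encoding",
"sec_num": "6.6"
},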
{
"text": "The closest work to our lex model is that of Arthur et al. (2016) , which we have discussed already in Section 4. Recent work by Liu et al. (2016) has very similar motivation to that of our fixnorm model. They reformulate the output layer in terms of directions and magnitudes, as we do here. Whereas we have focused on the magnitudes, they focus on the directions, modifying the loss function to try to learn a classifier that separates the classes' directions with something like a margin. Wang et al. (2017a) also make the same observation that we do for the fixnorm model, but for the task of face verification.",
"cite_spans": [
{
"start": 45,
"end": 65,
"text": "Arthur et al. (2016)",
"ref_id": "BIBREF0"
},
{
"start": 129,
"end": 146,
"text": "Liu et al. (2016)",
"ref_id": "BIBREF15"
},
{
"start": 492,
"end": 511,
"text": "Wang et al. (2017a)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "Handling rare words is an important problem for NMT that has been approached in various ways. Some have focused on reducing the number of UNKs by enabling NMT to learn from a larger vocabulary (Jean et al., 2015; Mi et al., 2016) ; others have focused on replacing UNKs by copying source words (Gulcehre et al., 2016; Gu et al., 2016; Luong et al., 2015b) . However, these methods only help with unknown words, not rare words. An approach that addresses both unknown and rare words is to use subword-level information (Sennrich et al., 2016; Chung et al., 2016; Luong and Manning, 2016) . Our approach is different in that we try to identify and address the root of the rare word problem. We expect that our models would benefit from more advanced UNKreplacement or subword-level techniques as well.",
"cite_spans": [
{
"start": 193,
"end": 212,
"text": "(Jean et al., 2015;",
"ref_id": "BIBREF9"
},
{
"start": 213,
"end": 229,
"text": "Mi et al., 2016)",
"ref_id": "BIBREF20"
},
{
"start": 294,
"end": 317,
"text": "(Gulcehre et al., 2016;",
"ref_id": "BIBREF6"
},
{
"start": 318,
"end": 334,
"text": "Gu et al., 2016;",
"ref_id": "BIBREF5"
},
{
"start": 335,
"end": 355,
"text": "Luong et al., 2015b)",
"ref_id": "BIBREF19"
},
{
"start": 518,
"end": 541,
"text": "(Sennrich et al., 2016;",
"ref_id": "BIBREF25"
},
{
"start": 542,
"end": 561,
"text": "Chung et al., 2016;",
"ref_id": "BIBREF3"
},
{
"start": 562,
"end": 586,
"text": "Luong and Manning, 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "Recently, Liu and Kirchhoff (2018) have shown that their baseline NMT system with BPE already outperforms Moses for low-resource translation. However, in their work, they use the Transformer network (Vaswani et al., 2017) , which is quite different from our baseline model. It would be interesting to see if our methods benefit the Trans-tied fixnorm fixnorm+lex ta-en 13 15 (+2.0) 15.9 (+2.9) ur-en 10.5 12.3 (+1.8) 13.7 (+3.2) ha-en 18 21.7 (+3.7) 22.3 (+4.3) tu-en 19.3 21 (+1.7) 22.2 (+2.9) uz-en 18.9 19.8 (+0.9) 21 (+2.1) hu-en 25.8 27.2 (+1.4) 27.9 (+2.1) en-vi 26.3 27.3 (+1.0) 27.5 (+1.2) en-de (newstest2014) 19.7 22.2 (+2.5) 20.4 (+0.7) en-de (newstest2015) 22.5 25 (+2.5) 23.2 (+0.7) Table 7 : Test BLEU for all BPE-based systems. Our models significantly improve over the baseline (p < 0.01) for both high and low resource when using BPE.",
"cite_spans": [
{
"start": 10,
"end": 34,
"text": "Liu and Kirchhoff (2018)",
"ref_id": "BIBREF14"
},
{
"start": 199,
"end": 221,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 696,
"end": 703,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "In this paper, we have presented two simple yet effective changes to the output layer of a NMT model. Both of these changes improve translation quality substantially on low-resource language pairs. In many of the language pairs we tested, the baseline NMT system performs poorly relative to phrase-based translation, but our system surpasses it (when both are trained on the same data). We conclude that NMT, equipped with the methods demonstrated here, is a more viable choice for low-resource translation than before, and are optimistic that NMT's repertoire will continue to grow.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "The code for this work can be found at https://github.com/tnq177/improving_lexical_ choice_in_nmt",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://nlp.stanford.edu/projects/nmt/ 3 http://isw3.naist.jp/~philip-a/emnlp2016/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://nlp.stanford.edu/projects/nmt/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was supported in part by University of Southern California subcontract 67108176 under DARPA contract HR0011-15-C-0115. Nguyen was supported in part by a fellowship from the Vietnam Education Foundation. We would like to express our great appreciation to Sharon Hu for letting us use her group's GPU cluster (supported by NSF award 1629914), and to NVIDIA corporation for the donation of a Titan X GPU. We also thank Tomer Levinboim for insightful discussions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Incorporating discrete translation lexicons into neural machine translation",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Arthur",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Nakamura",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Arthur, Graham Neubig, and Satoshi Nakamura. 2016. Incorporating discrete translation lexicons into neural machine translation. In Proc. EMNLP.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. ICLR.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "On the properties of neural machine translation: Encoder-decoder approaches",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merrienboer",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder ap- proaches. In Proceedings of SSST-8, Eighth Work- shop on Syntax, Semantics and Structure in Statisti- cal Translation.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A character-level decoder without explicit segmentation for neural machine translation",
"authors": [
{
"first": "Junyoung",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junyoung Chung, Kyunghyun Cho, and Yoshua Ben- gio. 2016. A character-level decoder without ex- plicit segmentation for neural machine translation. In Proc. ACL.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Convolutional sequence to sequence learning",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Gehring",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Yarats",
"suffix": ""
},
{
"first": "Yann",
"middle": [
"N"
],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1705.03122"
]
},
"num": null,
"urls": [],
"raw_text": "Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. arXiv:1705.03122.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Incorporating copying mechanism in sequence-to-sequence learning",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "O",
"middle": [
"K"
],
"last": "Victor",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proc. ACL.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Pointing the unknown words",
"authors": [
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Sungjin",
"middle": [],
"last": "Ahn",
"suffix": ""
},
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Proc. ACL.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Deep residual learning for image recognition",
"authors": [
{
"first": "Kaiming",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shaoqing",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. CVPR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In Proc. CVPR.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Tying word vectors and word classifiers: A loss framework for language modeling",
"authors": [
{
"first": "Khashayar",
"middle": [],
"last": "Hakan Inan",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Khosravi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hakan Inan, Khashayar Khosravi, and Richard Socher. 2017. Tying word vectors and word classifiers: A loss framework for language modeling. In Proc. ICLR.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "On using very large target vocabulary for neural machine translation",
"authors": [
{
"first": "S\u00e9bastien",
"middle": [],
"last": "Jean",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Memisevic",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. ACL-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S\u00e9bastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large tar- get vocabulary for neural machine translation. In Proc. ACL-IJCNLP.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Is neural machine translation ready for deployment? A case study on 30 translation directions",
"authors": [
{
"first": "Marcin",
"middle": [],
"last": "Junczys-Dowmunt",
"suffix": ""
},
{
"first": "Tomasz",
"middle": [],
"last": "Dwojak",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. IWSLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcin Junczys-Dowmunt, Tomasz Dwojak, and Hieu Hoang. 2016. Is neural machine translation ready for deployment? A case study on 30 translation di- rections. In Proc. IWSLT.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Statistical significance tests for machine translation evaluation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proc. EMNLP.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Moses: Open source toolkit for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Proc. ACL.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Six challenges for neural machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Knowles",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. Workshop on Neural Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn and Rebecca Knowles. 2017. Six chal- lenges for neural machine translation. In Proc. Workshop on Neural Machine Translation.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Context models for oov word translation in low-resource languages",
"authors": [
{
"first": "Angli",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Katrin",
"middle": [],
"last": "Kirchhoff",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of AMTA 2018",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angli Liu and Katrin Kirchhoff. 2018. Context models for oov word translation in low-resource languages. In Proceedings of AMTA 2018, vol. 1: MT Research Track. AMTA.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Large-margin softmax loss for convolutional neural networks",
"authors": [
{
"first": "Weiyang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yandong",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Zhiding",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Meng",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weiyang Liu, Yandong Wen, Zhiding Yu, and Meng Yang. 2016. Large-margin softmax loss for convo- lutional neural networks. In Proc. ICML.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Stanford neural machine translation systems for spoken language domain",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. IWSLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong and Christopher D. Manning. 2015. Stanford neural machine translation systems for spoken language domain. In Proc. IWSLT.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Achieving open vocabulary neural machine translation with hybrid word-character models",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong and Christopher D. Manning. 2016. Achieving open vocabulary neural machine translation with hybrid word-character models. In Proc. ACL.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Effective approaches to attentionbased neural machine translation",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015a. Effective approaches to attention- based neural machine translation. In Proc. EMNLP.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Addressing the rare word problem in neural machine translation",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Zaremba",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. ACL-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Ilya Sutskever, Quoc Le, Oriol Vinyals, and Wojciech Zaremba. 2015b. Addressing the rare word problem in neural machine translation. In Proc. ACL-IJCNLP.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Vocabulary manipulation for neural machine translation",
"authors": [
{
"first": "Haitao",
"middle": [],
"last": "Mi",
"suffix": ""
},
{
"first": "Zhiguo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Abe",
"middle": [],
"last": "Ittycheriah",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haitao Mi, Zhiguo Wang, and Abe Ittycheriah. 2016. Vocabulary manipulation for neural machine trans- lation. In Proc. ACL.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Discriminative training and maximum entropy models for statistical machine translation",
"authors": [
{
"first": "Josef",
"middle": [],
"last": "Franz",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2002. Discrimina- tive training and maximum entropy models for sta- tistical machine translation. In Proc. ACL.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A systematic comparison of various statistical alignment models",
"authors": [
{
"first": "Josef",
"middle": [],
"last": "Franz",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics 29(1).",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Using the output embedding to improve language models",
"authors": [
{
"first": "Ofir",
"middle": [],
"last": "Press",
"suffix": ""
},
{
"first": "Lior",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ofir Press and Lior Wolf. 2017. Using the output embedding to improve language models. In Proc. EACL.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks",
"authors": [
{
"first": "T",
"middle": [],
"last": "Salimans",
"suffix": ""
},
{
"first": "D",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Salimans and D. P. Kingma. 2016. Weight Normal- ization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks. ArXiv e-prints .",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proc. ACL.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "NIPS 27",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural net- works. In NIPS 27.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "6000--6010",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems. pages 6000-6010.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Normface: L2 hypersphere embedding for face verification",
"authors": [
{
"first": "Feng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"L"
],
"last": "Yuille",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 25th ACM international conference on Multimedia",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3123266.3123359"
]
},
"num": null,
"urls": [],
"raw_text": "Feng Wang, Xiang Xiang, Jian Cheng, and Alan L. Yuille. 2017a. Normface: L2 hypersphere em- bedding for face verification. In Proceedings of the 25th ACM international conference on Multi- media. ACM. https://doi.org/https://doi. org/10.1145/3123266.3123359.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Neural machine translation advised by statistical machine translation",
"authors": [
{
"first": "Xing",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Zhaopeng",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Deyi",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xing Wang, Zhengdong Lu, Zhaopeng Tu, Hang Li, Deyi Xiong, and Min Zhang. 2017b. Neural machine translation advised by statistical machine translation. In Proc. AAAI.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Macherey",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.08144"
]
},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv:1609.08144.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Recurrent neural network regularization",
"authors": [
{
"first": "Wojciech",
"middle": [],
"last": "Zaremba",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.2329"
]
},
"num": null,
"urls": [],
"raw_text": "Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2014. Recurrent neural network regularization. arXiv:1409.2329.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "ADADELTA: An adaptive learning rate method",
"authors": [
{
"first": "Matthew",
"middle": [
"D"
],
"last": "Zeiler",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1212.5701v1"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew D. Zeiler. 2012. ADADELTA: An adaptive learning rate method. arXiv:1212.5701v1.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "The word embedding norm W e generally correlates with the frequency of e, except for the most frequent words. The bias b e has the opposite behavior. The plots show the median and range of bins of size 256."
},
"FIGREF2": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "While the tied and fixnorm systems shift attention to the left one word (on the source side), our fixnorm+lex model and that ofArthur et al. (2016) put it back to the correct position, improving unknown-word replacement for the words Deutsche Telekom. Columns are source (English) words and rows are target (Vietnamese) words. Bolded words are unknown."
},
"TABREF2": {
"html": null,
"type_str": "table",
"text": "Statistics of data and models: effective number of training source/target tokens, source/target vocabulary sizes, number of hidden layers and number of units per layer.",
"content": "<table/>",
"num": null
},
"TABREF4": {
"html": null,
"type_str": "table",
"text": "shows the train perplexity and best tokenized dev BLEU on Turkish-English for fixnorm and fixnorm+lex with different values of r. As we can see, a smaller r results in",
"content": "<table><tr><td/><td>untied tied</td><td>fixnorm</td><td>fixnorm+lex</td><td>Moses</td><td>Arthur</td></tr><tr><td>ta-en</td><td>10.3 11.1</td><td colspan=\"4\">14 (+2.9) 15.3 (+4.2) 10.5 (\u22120.6) 14.1 (+3.0)</td></tr><tr><td>ur-en</td><td>7.9 10.7</td><td>12 (+1.3)</td><td colspan=\"3\">13 (+2.3) 14.6 (+3.9) 12.5 (+1.8)</td></tr><tr><td>ha-en</td><td>16.0 16.6</td><td colspan=\"4\">20 (+3.4) 21.5 (+4.9) 22.2 (+5.6) 18.7 (+2.1)</td></tr><tr><td>tu-en</td><td colspan=\"5\">12.2 12.6 16.4 (+3.8) 19.1 (+6.5) 18.1 (+5.5) 16.3 (+3.7)</td></tr><tr><td>uz-en hu-en en-vi en-ja (BTEC) en-ja (KFTT)</td><td colspan=\"5\">14.9 15.7 18.2 (+2.5) 19.3 (+3.6) 17.2 (+1.5) 17.1 (+1.4) 21.6 23.0 24.0 (+1.0) 25.3 (+2.3) 21.3 (\u22121.7) 22.7 (-0.3) \u2020 25.1 25.3 26.8 (+1.5) 27 (+1.7) 26.7 (+1.4) 26.2 (+0.9) 51.2 53.7 52.9 (-0.8) \u2020 51.3 (\u22122.6) \u2020 46.8 (\u22126.9) 52.4 (\u22121.3) \u2020 24.1 24.5 26.1 (+1.6) 26.2 (+1.7) 21.7 (\u22122.8) -</td></tr></table>",
"num": null
},
"TABREF5": {
"html": null,
"type_str": "table",
"text": "Test BLEU of all models. Differences shown in parentheses are relative to tied, with a dagger ( \u2020) indicating an insignificant difference in BLEU (p > 0.01). While the method ofArthur et al. (2016) does not always help, fixnorm and fixnorm+lex consistently achieve significant improvements over tied (p < 0.01) except for English-Japanese (BTEC). Our models also outperform the method of Arthur et al. on all tasks and outperform Moses on all tasks but Urdu-English and Hausa-English.input Dushanba kuni Hindistonda kamida 34 kishi halok bo'lgani xabar qilindi . reference At least 34 more deaths were reported Monday in India . untied At least UNK people have died in India on Monday . tied It was reported that at least 700 people died in Monday . fixnorm At least 34 people died in India on Monday . fixnorm+lex At least 34 people have died in India on Monday .",
"content": "<table/>",
"num": null
},
"TABREF6": {
"html": null,
"type_str": "table",
"text": "Example translations, in which untied and tied generate incorrect, but often semantically related, words, but fixnorm and/or fixnorm+lex generate the correct ones.",
"content": "<table/>",
"num": null
},
"TABREF7": {
"html": null,
"type_str": "table",
"text": "Top five translations for some entries of the lexical tables extracted from fixnorm+lex. Probabilities are shown in parentheses.",
"content": "<table/>",
"num": null
},
"TABREF9": {
"html": null,
"type_str": "table",
"text": "When r is too small, high train perplexity and low dev BLEU indicate underfitting; when r is too large, low train perplexity and low dev BLEU indicate overfitting.",
"content": "<table/>",
"num": null
}
}
}
}