{
"paper_id": "K19-1031",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:05:56.199029Z"
},
"title": "On the Relation between Position Information and Sentence Length in Neural Machine Translation",
"authors": [
{
"first": "Masato",
"middle": [],
"last": "Neishi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Tokyo",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Naoki",
"middle": [],
"last": "Yoshinaga",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Tokyo",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Long sentences have been one of the major challenges in neural machine translation (NMT). Although some approaches such as the attention mechanism have partially remedied the problem, we found that the current standard NMT model, Transformer, has difficulty in translating long sentences compared to the former standard, Recurrent Neural Network (RNN)-based model. One of the key differences of these NMT models is how the model handles position information which is essential to process sequential data. In this study, we focus on the position information type of NMT models, and hypothesize that relative position is better than absolute position. To examine the hypothesis, we propose RNN-Transformer which replaces positional encoding layer of Transformer by RNN, and then compare RNN-based model and four variants of Transformer. Experiments on ASPEC English-to-Japanese and WMT2014 Englishto-German translation tasks demonstrate that relative position helps translating sentences longer than those in the training data. Further experiments on length-controlled training data reveal that absolute position actually causes overfitting to the sentence length.",
"pdf_parse": {
"paper_id": "K19-1031",
"_pdf_hash": "",
"abstract": [
{
"text": "Long sentences have been one of the major challenges in neural machine translation (NMT). Although some approaches such as the attention mechanism have partially remedied the problem, we found that the current standard NMT model, Transformer, has difficulty in translating long sentences compared to the former standard, Recurrent Neural Network (RNN)-based model. One of the key differences of these NMT models is how the model handles position information which is essential to process sequential data. In this study, we focus on the position information type of NMT models, and hypothesize that relative position is better than absolute position. To examine the hypothesis, we propose RNN-Transformer which replaces positional encoding layer of Transformer by RNN, and then compare RNN-based model and four variants of Transformer. Experiments on ASPEC English-to-Japanese and WMT2014 Englishto-German translation tasks demonstrate that relative position helps translating sentences longer than those in the training data. Further experiments on length-controlled training data reveal that absolute position actually causes overfitting to the sentence length.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Sequence to sequence models for neural machine translation (NMT) are now utilized for various text generation tasks including automatic summarization (Chopra et al., 2016; Nallapati et al., 2016; Rush et al., 2015) and dialogue systems (Vinyals and Le, 2015; Shang et al., 2015) ; the models are required to take inputs of various length. Early studies on recurrent neural network (RNN)-based model analyze the translation quality with respect to the sentence length, and show that their models improve translations for long sentences, using the long short-term memory (LSTM) (Sutskever et al., 2014) or introducing the attention mechanism (Bahdanau et al., 2015; Luong et al., 2015) . However, Koehn and Knowles (2017) report that even RNN-based model with the attention mechanism performs worse than phrase-based statistical machine translation (Koehn et al., 2007) in translating very long sentences, which challenges us to develop an NMT model that is robust to long sentences or more generally, variations in input length.",
"cite_spans": [
{
"start": 150,
"end": 171,
"text": "(Chopra et al., 2016;",
"ref_id": "BIBREF4"
},
{
"start": 172,
"end": 195,
"text": "Nallapati et al., 2016;",
"ref_id": "BIBREF16"
},
{
"start": 196,
"end": 214,
"text": "Rush et al., 2015)",
"ref_id": "BIBREF19"
},
{
"start": 236,
"end": 258,
"text": "(Vinyals and Le, 2015;",
"ref_id": "BIBREF27"
},
{
"start": 259,
"end": 278,
"text": "Shang et al., 2015)",
"ref_id": "BIBREF21"
},
{
"start": 576,
"end": 600,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF25"
},
{
"start": 640,
"end": 663,
"text": "(Bahdanau et al., 2015;",
"ref_id": "BIBREF1"
},
{
"start": 664,
"end": 683,
"text": "Luong et al., 2015)",
"ref_id": "BIBREF14"
},
{
"start": 695,
"end": 719,
"text": "Koehn and Knowles (2017)",
"ref_id": "BIBREF11"
},
{
"start": 847,
"end": 867,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Have the recent advances in NMT achieved the robustness to the variations in input length? NMT has been advancing by upgrading the model architecture: RNN-based model (Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015; Luong et al., 2015) followed by convolutional neural network (CNN)-based model (Kalchbrenner et al., 2016; Gehring et al., 2017) and attention-based model (Vaswani et al., 2017) called Transformer ( \u00a7 2). Transformer is the de facto standard NMT model today for its better performance compared to the former standard RNN-based model. We thus came up with a question whether Transformer have acquired the robustness to the variations in input length.",
"cite_spans": [
{
"start": 167,
"end": 185,
"text": "(Cho et al., 2014;",
"ref_id": "BIBREF3"
},
{
"start": 186,
"end": 209,
"text": "Sutskever et al., 2014;",
"ref_id": "BIBREF25"
},
{
"start": 210,
"end": 232,
"text": "Bahdanau et al., 2015;",
"ref_id": "BIBREF1"
},
{
"start": 233,
"end": 252,
"text": "Luong et al., 2015)",
"ref_id": "BIBREF14"
},
{
"start": 312,
"end": 339,
"text": "(Kalchbrenner et al., 2016;",
"ref_id": "BIBREF8"
},
{
"start": 340,
"end": 361,
"text": "Gehring et al., 2017)",
"ref_id": "BIBREF6"
},
{
"start": 388,
"end": 410,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "On the length of input sentence(s), the key difference between existing NMT models is how they incorporate information on word positions in the input. RNN or CNN-based NMT captures relative positions which stem from sequential operation of RNN or convolution operation of CNN. On the other hand, position embeddings or positional encodings (vector representations of positions) are used to handle absolute positions in Transformer. Gehring et al. (2017) integrate position embeddings, which are induced together with the other model parameters, into the CNN-based model, and showed that absolute position is still beneficial for their model in addition to the relative position captured by CNN. By contrast, Transformer only em-ploys positional encodings, which give fixed vectors to positions using sine and cosine functions.",
"cite_spans": [
{
"start": 432,
"end": 453,
"text": "Gehring et al. (2017)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this study, we suspect that these differences in position information types of the models have an impact on the accuracy of translating long sentences, and investigate the impact of position information on translating long sentences to realize an NMT model that is robust to variations in input length. We reveal that RNN-based model (relative position) is better than Transformer with positional encodings (absolute position) in translating longer sentences than those in the training data ( \u00a7 5.2). Motivated from this result, we propose a simple modification to Transformer, using RNN as relative positional encoder ( \u00a7 4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Whereas RNN and CNN-based models are inseparable from relative position inside of RNN or CNN, Transformer allows us to change the position information type. We therefore compare the RNN-based model and four variants of Transformer: vanilla Transformer, the modified Transformer using self-attention with relative positional encodings (Shaw et al., 2018) , our modified Transformer with RNN instead of positional encoding layer, and a mixture of the last two models ( \u00a7 5). On ASPEC English-to-Japanese and WMT2014 English-to-German translation tasks, we show that relative information improves Transformer to be more robust to variations in input length.",
"cite_spans": [
{
"start": 334,
"end": 353,
"text": "(Shaw et al., 2018)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contribution is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We identified a defect in Transformer. Use of absolute position makes it difficult to translate very long sentences. \u2022 We proposed a simple method to incorporate relative position into Transformer; it gives an additive improvement to the existing model by Shaw et al. (2018) which also incorporates relative position. \u2022 We revealed the overfitting property of Transformer to both short and long sentences.",
"cite_spans": [
{
"start": 258,
"end": 276,
"text": "Shaw et al. (2018)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Early studies on NMT, at that time RNN-based model, analyze the translation quality in terms of sentence length (Sutskever et al., 2014; Bahdanau et al., 2015; Luong et al., 2015) , and a few studies shed light on the details. Shi et al. (2016) examine why RNN-based model generates translations of the right length without special mechanism for the length, and report how LSTM regulates the output length. Koehn and Knowles (2017) reveal that RNN-based model has lower translation quality on very long sentences. Although researchers have proposed various new NMT architecture, they usually evaluate their models only in terms of the overall translation quality and rarely mention how the translation has changed (Gehring et al., 2017; Kalchbrenner et al., 2016; Vaswani et al., 2017) . Only a few studies do the analysis on the translation quality in terms of sentence length (Elbayad et al., 2018; Zhang et al., 2019) . The robustness of the recent NMT models on very long sentences remains to be assessed. What we focus on in this study is the word position information which will closely relate to the decodable sentence length. Relative information has been implicitly used in the models using RNN or CNN. Gehring et al. (2017) introduce position embeddings which represent absolute position information to their CNN-based model. Sukhbaatar et al. (2015) introduce another absolute position information, positional encodings, which need no parameter training, and Vaswani et al. (2017) adopt them in their model, Transformer, which has neither RNN nor CNN.",
"cite_spans": [
{
"start": 112,
"end": 136,
"text": "(Sutskever et al., 2014;",
"ref_id": "BIBREF25"
},
{
"start": 137,
"end": 159,
"text": "Bahdanau et al., 2015;",
"ref_id": "BIBREF1"
},
{
"start": 160,
"end": 179,
"text": "Luong et al., 2015)",
"ref_id": "BIBREF14"
},
{
"start": 227,
"end": 244,
"text": "Shi et al. (2016)",
"ref_id": "BIBREF23"
},
{
"start": 407,
"end": 431,
"text": "Koehn and Knowles (2017)",
"ref_id": "BIBREF11"
},
{
"start": 714,
"end": 736,
"text": "(Gehring et al., 2017;",
"ref_id": "BIBREF6"
},
{
"start": 737,
"end": 763,
"text": "Kalchbrenner et al., 2016;",
"ref_id": "BIBREF8"
},
{
"start": 764,
"end": 785,
"text": "Vaswani et al., 2017)",
"ref_id": "BIBREF26"
},
{
"start": 878,
"end": 900,
"text": "(Elbayad et al., 2018;",
"ref_id": "BIBREF5"
},
{
"start": 901,
"end": 920,
"text": "Zhang et al., 2019)",
"ref_id": "BIBREF28"
},
{
"start": 1212,
"end": 1233,
"text": "Gehring et al. (2017)",
"ref_id": "BIBREF6"
},
{
"start": 1336,
"end": 1360,
"text": "Sukhbaatar et al. (2015)",
"ref_id": "BIBREF24"
},
{
"start": 1470,
"end": 1491,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Recently, Shaw et al. (2018) propose to incorporate relative position into Transformer by modifying the self-attention layer while removing positional encodings. Lei et al. (2018) propose a fast RNN named Simple Recurrent Units (SRU) and replace the feed-forward layers of Transformer by SRU considering that recurrent process would better capture sequential information. Although both approaches succeeded in improving BLEU score, the researchers did not report in what respect the models improved the translation.",
"cite_spans": [
{
"start": 10,
"end": 28,
"text": "Shaw et al. (2018)",
"ref_id": "BIBREF22"
},
{
"start": 162,
"end": 179,
"text": "Lei et al. (2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Chen et al. 2018propose a RNN-based model, RNMT+, which is based on stacked LSTMs and incorporates some components from Transformer such as layer normalization and multi-head attention. On the other hand, our model is based on Transformer and incorporates RNN into Transformer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Transformer (Vaswani et al., 2017 ) is a sequence to sequence model that has an encoder to process and represent input sequence and a decoder to generate output sequence from the encoder outputs. Both the encoder and decoder have a word embedding layer, a positional encoding layer, and Figure 1 : The architectures of all the Transformer-based models we compare in this study; for simplicity, we show the encoder architectures here since the same modification is applied to their decoders.",
"cite_spans": [
{
"start": 12,
"end": 33,
"text": "(Vaswani et al., 2017",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 287,
"end": 295,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Transformer",
"sec_num": "3.1"
},
{
"text": "stacked encoder/decoder layers. The encoder architecture is shown in Figure 1a . Word embedding layers encode input words into continuous low-dimension vectors, followed by positional encoding layers that add position information to them. Encoder/decoder layers consist of a few sub-layers, self-attention layer, attention layer (decoder only) and feed-forward layer, with layer normalization (Ba et al., 2016) for each. Both self-attention layer and attention layer employ the same architecture, and we explain the details in \u00a7 3.3. Feed-forward layer consists of two linear transformations with a ReLU activation in between. As for the decoder, a linear transformation and a softmax function follow the stacked layers to calculate probabilities of words to output. Figure 1 illustrates the architectures of all the Transformer-based models we compare in this study including our porposed model which will be introduced in \u00a7 4. The model in Shaw et al. (2018) modifies the self-attention layer ( \u00a7 3.3).",
"cite_spans": [
{
"start": 942,
"end": 960,
"text": "Shaw et al. (2018)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 69,
"end": 78,
"text": "Figure 1a",
"ref_id": null
},
{
"start": 767,
"end": 775,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Transformer",
"sec_num": "3.1"
},
{
"text": "Transformer has positional encoding layers which follow the word embedding layers and capture absolute position. The process of positional encoding layer is to add positional encodings (position vectors) to input word embeddings. The positional encodings are generated using sinusoids of varying frequencies, which is designed to allow the model to attend to relative positions from the periodicity of positional encodings (sinusoids). This is in contrast to the position embeddings (Gehring et al., 2017) , a learned position vectors, which are not meant to attend to relative positions. Vaswani et al. (2017) report that both approaches produced nearly identical results in their experiments, and also mentioned that the model with positional encodings may handle longer inputs in testing than those in training, which implies that absolute position approach might have problems at this point. 1",
"cite_spans": [
{
"start": 483,
"end": 505,
"text": "(Gehring et al., 2017)",
"ref_id": "BIBREF6"
},
{
"start": 589,
"end": 610,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Position Information",
"sec_num": "3.2"
},
{
"text": "Some studies modify Transformer to consider relative position instead of absolute position. Shaw et al. (2018) propose an extension of self-attention mechanism which handles relative position inside in order to incorporate relative position into Transformer. We hereafter refer to their model as Rel-Transformer. In what follows, we explain the selfattention mechanism and their extension.",
"cite_spans": [
{
"start": 92,
"end": 110,
"text": "Shaw et al. (2018)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Self-attention with Relative Position",
"sec_num": "3.3"
},
{
"text": "Self-attention is a special case of general attention mechanism, which uses three elements called query, key and value. The basic idea is to compute weighted sum of values where the weights are computed using the query and keys. Each weight represents how much attention is paid to the corresponding value. In the case of self-attention, the input set of vectors behaves as all of the three elements (query, key and value) using three different transformations. When taking a sentence as input, it is processed as a set in the self-attention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Self-attention with Relative Position",
"sec_num": "3.3"
},
{
"text": "Self-attention operation is to compute output sequence z = (z 1 , ..., z n ) out of input sequence x = (x 1 , ..., x n ), where both sequences have the same langth n and x i \u2208 R dx , z i \u2208 R dz . The output element z i is computed as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Self-attention with Relative Position",
"sec_num": "3.3"
},
{
"text": "z i = n j=1 \u03b1 ij (x j W V )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Self-attention with Relative Position",
"sec_num": "3.3"
},
{
"text": "(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Self-attention with Relative Position",
"sec_num": "3.3"
},
{
"text": "\u03b1 ij = exp e ij n k=1 exp e ik",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Self-attention with Relative Position",
"sec_num": "3.3"
},
{
"text": "(2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Self-attention with Relative Position",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e ij = x i W Q (x j W K ) T \u221a d z ,",
"eq_num": "(3)"
}
],
"section": "Self-attention with Relative Position",
"sec_num": "3.3"
},
{
"text": "where W Q , W K , W V \u2208 R dx\u00d7dz are the matrices that transform input elements into querys, keys, and values, respectively. The extension proposed by Shaw et al. (2018) adds only two terms to the original self-attention:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Self-attention with Relative Position",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "the relative position vectors w K j\u2212i , w V j\u2212i \u2208 R dz . z i = n j=1 \u03b1 ij (x j W V + w V j\u2212i )",
"eq_num": "(4)"
}
],
"section": "Self-attention with Relative Position",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b1 ij = exp e ij n k=1 exp e ik",
"eq_num": "(5)"
}
],
"section": "Self-attention with Relative Position",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e ij = x i W Q (x j W K + w K j\u2212i ) T \u221a d z ,",
"eq_num": "(6)"
}
],
"section": "Self-attention with Relative Position",
"sec_num": "3.3"
},
{
"text": "Note that when using the relative position vectors, the input is processed as a directed graph instead of a set. Maximum distance k is employed to clip the relative distance within a certain distance so that the value of relative distance is limited as \u2212k < j \u2212 i < k.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Self-attention with Relative Position",
"sec_num": "3.3"
},
{
"text": "The approach by Shaw et al. (2018) is not the only way to incorporate relative position into Transformer. Lei et al. (2018) replace feed-forward layers by their proposed SRU which also incorporates relative position. Both approaches modify the encoder and decoder layers that are repeatedly stacked, which means their models handle position information multiple times. However, the original Transformer does only once at the positional encoding layer which locates shallow layer of the deep layered network.",
"cite_spans": [
{
"start": 16,
"end": 34,
"text": "Shaw et al. (2018)",
"ref_id": "BIBREF22"
},
{
"start": 106,
"end": 123,
"text": "Lei et al. (2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "RNN as a Relative Positional Encoding",
"sec_num": "4"
},
{
"text": "To conduct a clear comparison of the position information types, we propose another simple method that replaces the positional encoding layer of Transformer by RNN. As the RNN has the nature to handle a sequence using relative position information, it can be used not only as a main processing unit of RNN-based model, but also as a relative positional encoder. While Lei et al. (2018) also employ RNN, they use position embeddings.",
"cite_spans": [
{
"start": 368,
"end": 385,
"text": "Lei et al. (2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "RNN as a Relative Positional Encoding",
"sec_num": "4"
},
{
"text": "Our approach is a pure replacement of position information type for Transformer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RNN as a Relative Positional Encoding",
"sec_num": "4"
},
{
"text": "In the original Transformer, the positional encoding layer adds the i-th position vector pe(i) \u2208 R dwv to the i-th input word vector wv i \u2208 R dwv and outputs the position informed word vector wv i \u2208 R dwv :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RNN as a Relative Positional Encoding",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "wv i = wv i + pe(i)",
"eq_num": "(7)"
}
],
"section": "RNN as a Relative Positional Encoding",
"sec_num": "4"
},
{
"text": "In our approach, we adopt RNN, specifically GRU (Cho et al., 2014) in this study, as a relative positional encoder. GRU computes its output or its i-th time hidden state h i \u2208 R dwv given the input word vector wv i and the previous hidden state h i\u22121 \u2208 R dwv , and we take h i as the position informed word vector wv i :",
"cite_spans": [
{
"start": 48,
"end": 66,
"text": "(Cho et al., 2014)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "RNN as a Relative Positional Encoding",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h i = GRU(wv i , h i\u22121 ) (8) wv i = h i",
"eq_num": "(9)"
}
],
"section": "RNN as a Relative Positional Encoding",
"sec_num": "4"
},
{
"text": "Although LSTM (Hochreiter and Schmidhuber, 1997) is more often used as an RNN module in RNN-based models, we employed GRU which has less parameters. This is because, in our approach, RNN is just a positional encoder which we do not expect to work more, even though it can. We refer to our proposed model as RNN-Transformer. We also consider the mixture of Shaw et al. (2018) and our method to investigate whether the two methods of considering relative position have additive improvements. Although both methods are intended to incorporate relative position into Transformer, they modify different parts of Transformer. By combining both, we can see either of modification suffices to incorporate relative position. We refer to this model as RR-Transformer.",
"cite_spans": [
{
"start": 14,
"end": 48,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF7"
},
{
"start": 356,
"end": 374,
"text": "Shaw et al. (2018)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "RNN as a Relative Positional Encoding",
"sec_num": "4"
},
{
"text": "We conduct two experiments to evaluate our modification to Transformer and to investigate the impact of using relative position in NMT models. The first experiment is a basic translation experiment which uses all the training data. We carry out analysis on the translations generated by the NMT models in terms of sentence length, especially focusing on long sentences. In the second experiment, we control the training data by the sentence length so that the NMT models are trained only on sentences with lengths in a certain range. We also analyze the result in terms of sentence length, focusing on the short sentences. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Dataset and Preprocess: We perform a series of experiments on English-to-Japanese and English-to-German translation tasks. For Englishto-Japanese translation task, we exploit ASPEC (Nakazawa et al., 2016) , a parallel corpus compiled from abstract sections of scientific papers. For English-to-German translation task, we exploit a dataset in WMT2014, which is one of the most common dataset for translation task.",
"cite_spans": [
{
"start": 181,
"end": 204,
"text": "(Nakazawa et al., 2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "5.1"
},
{
"text": "For ASPEC English-to-Japanese data, we used scripts of Moses toolkit 2 (ver. 2.2.1) (Koehn et al., 2007) for English tokenization and truecasing, and KyTea 3 (ver. 0.4.2) (Neubig et al., 2011) for Japanese segmentations. Following those wordlevel preprocess, we further applied Sentence-Piece (Kudo and Richardson, 2018) to segment texts down to subword level with shared vocabulary size of 16,000. Finally we selected the first 1,500,000 sentence pairs for the poor quality of the latter part, and filtered out sentence pairs with more than 49 subwords in either of the languages.",
"cite_spans": [
{
"start": 84,
"end": 104,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF10"
},
{
"start": 171,
"end": 192,
"text": "(Neubig et al., 2011)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "5.1"
},
{
"text": "For WMT2014 English-to-German translation task, we used preprocessed data provided from the Stanford NLP Group, 4 and used newstest2013 and newstest2014 as development and test data, respectively. We also applied SentencePiece to this data to segment into subwords with shared vocabulary size of 40,000. We filtered out the sentence pairs in the same way as the ASPEC. Table 1 shows the number of sentence pairs of preprocessed data. Figure 2 shows the distributions of the sentences plotted against the length of input sentence. Althought ASPEC data has slightly larger peak at sentence length of 20-29 subwords, both datasets have no big difference in length distributions. The training and test data have almost identical curves.",
"cite_spans": [],
"ref_spans": [
{
"start": 434,
"end": 442,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Setup",
"sec_num": "5.1"
},
{
"text": "We compare the following five NMT models: 0 -9 1 0 -1 9 2 0 -2 9 3 0 -3 9 4 0 -4 9 5 0 -5 9 6 0 -6 9 7 0 -7 9 8 0 -8 9 9 0 -9 9 1 0 0 -1 0 9 1 1 0 -1 1 9 1 2 0 -1 2 9 1 3 0 -1 3 9 0 20 40 ASPEC (En-Ja) Train ASPEC (En-Ja) Test 0 -9 1 0 -1 9 2 0 -2 9 3 0 -3 9 4 0 -4 9 5 0 -5 9 6 0 -6 9 7 0 -7 9 8 0 -8 9 9 0 -9 9 1 0 0 -1 0 9 1 1 0 -1 1 9 1 2 0 -1 2 9 RNN-NMT is a RNN-based NMT model with dot-attention and input-feeding (Luong et al., 2015) . This model consists of four layered bi-directional LSTM for encoder and three layered uni-directional LSTM for decoder. Transformer is a vanilla Transformer model (the base model in Vaswani et al. (2017) consists of the same number of encoder and decoder layers as RNN-Transformer model, with the modified self-attention layer.",
"cite_spans": [
{
"start": 422,
"end": 442,
"text": "(Luong et al., 2015)",
"ref_id": "BIBREF14"
},
{
"start": 627,
"end": 648,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model:",
"sec_num": null
},
{
"text": "We implemented all the models using PyTorch 6 (ver. 0.4.1). Taking the base model of Transformer (Vaswani et al., 2017) which consists of six-layered encoder and decoder as a reference model, we built the other models to have almost the same number of model parameters for a fair comparison. For all models, we set word embedding dimension and model dimension (or hidden size for RNNs) to 512. For the Transformer-based models, we set feed-forward layer dimension to 2048, and the number of attention head to 8. Table 2 shows the total number of model parameters for all the models in our implementation. The difference of the numbers by the datasets comes from the difference in vocabulary size.",
"cite_spans": [
{
"start": 97,
"end": 119,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 512,
"end": 519,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Model:",
"sec_num": null
},
{
"text": "Training: We used Adam optimizer (Kingma and Ba, 2015) with initial learning rate of 0.0001, and set dropout rate of 0.2 and gradient clipping value of 3.0. We adopted warm-up strategy (Vaswani et al., 2017) for fast convergence with warm-up step of 4k, and trained all the model for 300k steps. The mini-batch size was set to 128.",
"cite_spans": [
{
"start": 185,
"end": 207,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model:",
"sec_num": null
},
{
"text": "Evaluation: We performed greedy search for translation with the models, and evaluated the translation quality in terms of BLEU score (Papineni et al., 2002) using multi-bleu.perl in the Moses toolkit. We checked model's BLEU score on the development data at every 10k steps during the training, and took the best performing model for evaluation on the test data. Table 3 shows the BLEU scores of the NMT models on the test data of ASPEC English-to-Japanese and WMT2014 English-to-German when using all the preprocessed training data for training. Ta In order to see the capability of translating long sentences of the models, we split the test data into different bins according to the length of input sentences, and then calculated BLEU scores on each bin. The following evaluation uses the raw subword-level outputs of the models since the sentence length is based on subwords. Figure 3a and 3b show the BLEU scores on the split test data of ASPEC English-to-Japanese and WMT2014 English-to-German, respectively. The BLEU score of Transformer, the only model that uses absolute position, more sharply drops than the BLEU scores of the other models at the input length of 50-59, which is outside of the length range of the training data. As for the input length of 60-, Transformer performs the worst among all the models. These results indicate that relative position works better than absolute position in translating sentences longer than those of the training data. Meanwhile, for the lengths with enough amount of training data, both position information types seem to work almost equally. On WMT2014 English-to-German, all the models except Transformer successfully keep as good performance in 50-59 and 60-bins as the other bins.",
"cite_spans": [
{
"start": 133,
"end": 156,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 363,
"end": 370,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 547,
"end": 549,
"text": "Ta",
"ref_id": null
},
{
"start": 880,
"end": 889,
"text": "Figure 3a",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model:",
"sec_num": null
},
{
"text": "To figure out the effect of position information on the ability of the models to generate output of proper length, we look into the difference of sentence length between the model's output and the reference translation. Figure 4a and 4b show the averaged differences plotted against the input sentence length on both language pairs. We can ob-serve that all the models tend to output shorter sentence than the reference. However, Transformer shows the largest drop at the input length of 50-59 again among all the models, which is even more than RNN-NMT. The difference between Transformer and RNN-Transformer indicates the advantage of relative position against absolute position, while the difference between the three modified Transformer-based models and RNN-NMT indicates the structural advantage of Transformer to RNN-based model in generating translations with appropriate lengths. min len. max len. # of sentences # of tokens Short 2 26 555,922 10,392,775 Middle 26 34 350,176 10,392,797 Long 34 49 260,626 10,392,729 (a) ASPEC English-to-Japanese min len. max len. # of sentences # of tokens Short 1 24 1,878,354 29,841,533 Middle 24 34 1,041,794 29,841,531 Long 34 49 740,887 29,841,519 (b) WMT2014 English-to-German Table 5 : Statistics of the split training data.",
"cite_spans": [],
"ref_spans": [
{
"start": 220,
"end": 229,
"text": "Figure 4a",
"ref_id": null
},
{
"start": 934,
"end": 1039,
"text": "Short 2 26 555,922 10,392,775 Middle 26 34 350,176 10,392,797 Long 34 49 260,626 10,392,729",
"ref_id": "TABREF1"
},
{
"start": 1115,
"end": 1224,
"text": "Short 1 24 1,878,354 29,841,533 Middle 24 34 1,041,794 29,841,531 Long 34 49 740,887 29,841,519",
"ref_id": "TABREF1"
},
{
"start": 1255,
"end": 1262,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Long Sentence Translation",
"sec_num": "5.2"
},
{
"text": "The above result that the models tend to output shorter sentences suggests that the models may have a limit in the range of output length. To confirm this possibility, we look into the distributions of the model's output length. Figure 5a and 5b show distributions of output length of Transformer and RR-Transformer for the input length of 40-49 (length within the training data) and 50-59 (length outside of the training data). For the input length of 40-49, the distributions of both models are flat and have no big difference. For the input length of 50-59, on the other hand, we can see a sharp peak in the distribution of Transformer in which most of the values distribute around 50 tokens or less. These results indicate that Transformer tends to overfit to a range of length of input sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 229,
"end": 238,
"text": "Figure 5a",
"ref_id": null
}
],
"eq_spans": [],
"section": "Long Sentence Translation",
"sec_num": "5.2"
},
{
"text": "The above experiments focus on trainslation of long sentences, or, strictly speaking, sentences longer than those in the training data. With the use of absolute position, it is no surprise that the model fails to handle longer sentences since those sentences demand the model to handle the position vectors which are never seen during training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Length-Controlled Training Data",
"sec_num": "5.3"
},
{
"text": "In this section, we focus on short sentences to investigate whether Transformer overfits to the length of input sentences in the training data. Note that position vectors of small numbers are included in long sentences. If the problem is only unseen position vectors, then the model shall be able to handle short sentences because short sentences do not include any unseen position numbers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Length-Controlled Training Data",
"sec_num": "5.3"
},
{
"text": "To figure out how the NMT models behave on sentences shorter than those in the training data, we conduct another experiment in which the length of the training data is controlled. We split the training data of both ASPEC English-to-Japanese and WMT2014 English-to-German into three portions according to the length of input sentences so that each of them has almost the same number of tokens. We then trained the five NMT models on each of the three training data. We hereafter refer to these three length-controlled training data as Short, Middle and Long. The statistics of these data is summarized in Table 5a and 5b.",
"cite_spans": [],
"ref_spans": [
{
"start": 604,
"end": 612,
"text": "Table 5a",
"ref_id": null
}
],
"eq_spans": [],
"section": "Length-Controlled Training Data",
"sec_num": "5.3"
},
{
"text": "To see how the translation quality changes between inside and outside of the length within the training data, we split the test data with respect to the lengths of split training data. Figure 6a and 6b show the BLEU scores on all the three training data of both language pairs. Transformer shows the worst performance among the four Transformer-based models on the sentences longer than those in the training data for any controlled length. However, on the shorter sentences than those in the training data, RNN-Transformer scores almost the same as Transformer on the Middle and Long training data of ASPEC English-to-Japanese and also shows a larger drop than RNN-NMT at length of -24 on the Long training data of WMT2014 English-to-German. This implies that our proposed method to replace absolute positional encoding layer by RNN does not work well in translating shorter sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 185,
"end": 201,
"text": "Figure 6a and 6b",
"ref_id": null
}
],
"eq_spans": [],
"section": "Length-Controlled Training Data",
"sec_num": "5.3"
},
{
"text": "We can also see that Rel-Transformer and RR-Transformer are quite competitive across all the situations. This suggests that one Transformer decoder layer and two GRUs contribute almost equally to the translation quality. Figure 7a and 7b show the averaged difference of length between NMT model's output and the reference translation on Long training data of both datasets. 7 These figures indicate that Transformer and RNN-Transformer tend to generate inappropriately long sentences in translating much shorter sentences than those in the training data. As mentioned above, when translating short sentences, there is no unseen positions in Transformer, while there is no concrete position representation in RNN-Transformer; the above results suggest that these two models overfit to the (longer) length of input sentences. In contrast, the result of Rel- Transformer and RR-Transformer indicates that self-attention with relative position prevents this overfitting.",
"cite_spans": [
{
"start": 374,
"end": 375,
"text": "7",
"ref_id": null
}
],
"ref_spans": [
{
"start": 221,
"end": 230,
"text": "Figure 7a",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Length-Controlled Training Data",
"sec_num": "5.3"
},
{
"text": "In this paper, we examined the relation between position information and the length of input sentences by comparing absolute position and relative position using RNN-based model and variations of Transformer models. Experiments on all the preprocessed training data revealed the crucial weakness of the original Transformer, which uses absolute position, in translating sentences longer than those of the training data. We also confirmed that incorporating relative position into Transformer helps to handle those long sentences and improves the translation quality. Another experiment on the length-controlled training data revealed that absolute position of Transformer causes overfitting to the input sentence length. To conclude, all the experiments suggest to use relative position and not to use absolute position.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Considering that the available data is not balanced in terms of the sentence length in practice, preventing the overfitting is useful for building a practical NMT system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Our preliminary experiment confirmed that positional encodings perform better for longer sentences than those in the training data, while position embeddings perform slightly better for the other length.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.statmt.org/moses/ 3 http://www.phontron.com/kytea/ 4 https://nlp.stanford.edu/projects/ nmt/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This configuration was chosen because it performed better than a model with five-layered encoder and six-layered decoder, and was comparable to five-layred encoder and decoder with bi-directional (instead of uni-directional) GRU for the relative position encoder in preliminary experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that Figure 7a and 7b use different x-axis scale from Figure 6a and 6b in order to show the difference clearly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We deeply thank Satoshi Tohda for proofreading the draft of our paper. This work was partially supported by JST CREST Grant Number JP-MJCR19A4, Japan. This research was also partially supported by NII CRIS Contract Research 2019.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the third International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the third International Conference on Learning Rep- resentations (ICLR).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The best of both worlds: Combining recent advances in neural machine translation",
"authors": [
{
"first": "Mia",
"middle": [],
"last": "Xu Chen",
"suffix": ""
},
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Bapna",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Macduff",
"middle": [],
"last": "Hughes",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "76--86",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1008"
]
},
"num": null,
"urls": [],
"raw_text": "Mia Xu Chen, Orhan Firat, Ankur Bapna, Melvin Johnson, Wolfgang Macherey, George Foster, Llion Jones, Mike Schuster, Noam Shazeer, Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Zhifeng Chen, Yonghui Wu, and Macduff Hughes. 2018. The best of both worlds: Combining recent advances in neural machine translation. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), pages 76-86.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merrienboer",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1724--1734",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1179"
]
},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merrienboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 1724- 1734.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Abstractive sentence summarization with attentive recurrent neural networks",
"authors": [
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)",
"volume": "",
"issue": "",
"pages": "93--98",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1012"
]
},
"num": null,
"urls": [],
"raw_text": "Sumit Chopra, Michael Auli, and Alexander M. Rush. 2016. Abstractive sentence summarization with attentive recurrent neural networks. In Proceed- ings of the 2016 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies (NAACL- HLT), pages 93-98.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Pervasive attention: 2D convolutional neural networks for sequence-to-sequence prediction",
"authors": [
{
"first": "Maha",
"middle": [],
"last": "Elbayad",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Besacier",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Verbeek",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 22nd Conference on Computational Natural Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "97--107",
"other_ids": {
"DOI": [
"10.18653/v1/K18-1010"
]
},
"num": null,
"urls": [],
"raw_text": "Maha Elbayad, Laurent Besacier, and Jakob Verbeek. 2018. Pervasive attention: 2D convolutional neural networks for sequence-to-sequence prediction. In Proceedings of the 22nd Conference on Computa- tional Natural Language Learning (CoNLL), pages 97-107.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Convolutional sequence to sequence learning",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Gehring",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Yarats",
"suffix": ""
},
{
"first": "Yann",
"middle": [
"N"
],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "1243--1252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas Gehring, Michael Auli, David Grangier, De- nis Yarats, and Yann N. Dauphin. 2017. Convolu- tional sequence to sequence learning. In Proceed- ings of the 34th International Conference on Ma- chine Learning (ICML), pages 1243-1252.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A\u00e4ron van den Oord, Alex Graves, and Koray Kavukcuoglu",
"authors": [
{
"first": "Nal",
"middle": [],
"last": "Kalchbrenner",
"suffix": ""
},
{
"first": "Lasse",
"middle": [],
"last": "Espeholt",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Simonyan",
"suffix": ""
}
],
"year": 2016,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, A\u00e4ron van den Oord, Alex Graves, and Koray Kavukcuoglu. 2016. Neural machine translation in linear time. CoRR, abs/1610.10099.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the third International Conference on Learning Representations (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the third International Conference on Learning Representations (ICLR).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Moses: Open source toolkit for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Constantin",
"suffix": ""
},
{
"first": "Evan",
"middle": [],
"last": "Herbst",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL): Demo and Poster Sessions",
"volume": "",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond\u0159ej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Asso- ciation for Computational Linguistics (ACL): Demo and Poster Sessions, pages 177-180.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Six challenges for neural machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Knowles",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Workshop on Neural Machine Translation",
"volume": "",
"issue": "",
"pages": "28--39",
"other_ids": {
"DOI": [
"10.18653/v1/W17-3204"
]
},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn and Rebecca Knowles. 2017. Six chal- lenges for neural machine translation. In Pro- ceedings of the First Workshop on Neural Machine Translation, pages 28-39.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing",
"authors": [
{
"first": "",
"middle": [],
"last": "Taku",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP): System Demonstrations",
"volume": "",
"issue": "",
"pages": "66--71",
"other_ids": {
"DOI": [
"10.18653/v1/D18-2012"
]
},
"num": null,
"urls": [],
"raw_text": "Taku. Kudo and John Richardson. 2018. Sentence- piece: A simple and language independent subword tokenizer and detokenizer for neural text process- ing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP): System Demonstrations, pages 66-71.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Simple recurrent units for highly parallelizable recurrence",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Lei",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Sida",
"middle": [
"I"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "4470--4481",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1477"
]
},
"num": null,
"urls": [],
"raw_text": "Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, and Yoav Artzi. 2018. Simple recurrent units for highly par- allelizable recurrence. In Proceedings of the 2018 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 4470-4481.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Effective approaches to attention-based neural machine translation",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1412--1421",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1166"
]
},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1412-1421.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "ASPEC: Asian scientific paper excerpt corpus",
"authors": [
{
"first": "Toshiaki",
"middle": [],
"last": "Nakazawa",
"suffix": ""
},
{
"first": "Manabu",
"middle": [],
"last": "Yaguchi",
"suffix": ""
},
{
"first": "Kiyotaka",
"middle": [],
"last": "Uchimoto",
"suffix": ""
},
{
"first": "Masao",
"middle": [],
"last": "Utiyama",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
},
{
"first": "Hitoshi",
"middle": [],
"last": "Isahara",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the tenth International Conference on Language Resources and Evaluation (LREC 2016)",
"volume": "",
"issue": "",
"pages": "2204--2208",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toshiaki Nakazawa, Manabu Yaguchi, Kiyotaka Uchi- moto, Masao Utiyama, Eiichiro Sumita, Sadao Kurohashi, and Hitoshi Isahara. 2016. ASPEC: Asian scientific paper excerpt corpus. In Pro- ceedings of the tenth International Conference on Language Resources and Evaluation (LREC 2016), pages 2204-2208.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Abstractive text summarization using sequence-to-sequence RNNs and beyond",
"authors": [
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Cicero",
"middle": [],
"last": "dos Santos",
"suffix": ""
},
{
"first": "\u00c7a\u011flar",
"middle": [],
"last": "G\u00fcl\u00e7ehre",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "280--290",
"other_ids": {
"DOI": [
"10.18653/v1/K16-1028"
]
},
"num": null,
"urls": [],
"raw_text": "Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, \u00c7 aglar Gu\u00e7ehre, and Bing Xiang. 2016. Abstrac- tive text summarization using sequence-to-sequence RNNs and beyond. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning (CoNLL), pages 280-290.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Pointwise prediction for robust, adaptable Japanese morphological analysis",
"authors": [
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Yosuke",
"middle": [],
"last": "Nakata",
"suffix": ""
},
{
"first": "Shinsuke",
"middle": [],
"last": "Mori",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT), Short Papers",
"volume": "",
"issue": "",
"pages": "529--533",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graham Neubig, Yosuke Nakata, and Shinsuke Mori. 2011. Pointwise prediction for robust, adaptable Japanese morphological analysis. In Proceedings of the 49th Annual Meeting of the Association for Com- putational Linguistics: Human Language Technolo- gies (ACL-HLT), Short Papers, pages 529-533.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "BLEU: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {
"DOI": [
"10.3115/1073083.1073135"
]
},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Compu- tational Linguistics (ACL), pages 311-318.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A neural attention model for abstractive sentence summarization",
"authors": [
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1044"
]
},
"num": null,
"urls": [],
"raw_text": "Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sen- tence summarization. In Proceedings of the 2015",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "379--389",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 379-389.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Neural responding machine for short-text conversation",
"authors": [
{
"first": "Lifeng",
"middle": [],
"last": "Shang",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL-IJCNLP)",
"volume": "",
"issue": "",
"pages": "1577--1586",
"other_ids": {
"DOI": [
"10.3115/v1/P15-1152"
]
},
"num": null,
"urls": [],
"raw_text": "Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversa- tion. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL-IJCNLP), pages 1577- 1586.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Self-attention with relative position representations",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Shaw",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), Short Papers",
"volume": "",
"issue": "",
"pages": "464--468",
"other_ids": {
"DOI": [
"10.18653/v1/N18-2074"
]
},
"num": null,
"urls": [],
"raw_text": "Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position represen- tations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies (NAACL-HLT), Short Papers, pages 464- 468.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Why neural translations are the right length",
"authors": [
{
"first": "Xing",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "2278--2282",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1248"
]
},
"num": null,
"urls": [],
"raw_text": "Xing Shi, Kevin Knight, and Deniz Yuret. 2016. Why neural translations are the right length. In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2278-2282.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "End-to-end memory networks",
"authors": [
{
"first": "Sainbayar",
"middle": [],
"last": "Sukhbaatar",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Szlam",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Fergus",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems (NIPS) 28",
"volume": "",
"issue": "",
"pages": "2440--2448",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sainbayar Sukhbaatar, arthur szlam, Jason Weston, and Rob Fergus. 2015. End-to-end memory networks. In Advances in Neural Information Processing Sys- tems (NIPS) 28, pages 2440-2448.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems (NIPS) 27",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural net- works. In Advances in Neural Information Process- ing Systems (NIPS) 27, pages 3104-3112.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems (NIPS)",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems (NIPS) 30, pages 5998-6008.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A neural conversational model",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of Deep Learning Workshop held at the 31st International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals and Quoc V. Le. 2015. A neural conver- sational model. In Proceedings of Deep Learning Workshop held at the 31st International Conference on Machine Learning (ICML).",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Bridging the gap between training and inference for neural machine translation",
"authors": [
{
"first": "Wen",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Fandong",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "You",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "4334--4343",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1426"
]
},
"num": null,
"urls": [],
"raw_text": "Wen Zhang, Yang Feng, Fandong Meng, Di You, and Qun Liu. 2019. Bridging the gap between train- ing and inference for neural machine translation. In Proceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics (ACL), pages 4334-4343.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"text": "Sentence length ratio of preprocessed corpus.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF2": {
"text": "BLEU scores on test data split by the sentence length (no training data in the gray-colored area). 0-9 10-19 20-29 30-39 40-49 50-59 60-Averaged difference of sentence length between NMT model's output and the reference translation (no training data in the gray-colored area). Distributions of output sentence length of Transformer and RR-Transformer.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF3": {
"text": "(b) WMT2014 English-to-GermanFigure 6: BLEU scores of models trained on three length-controlled training data on test data split in the same way as the training data (almost no training data in the gray-colored area). Averaged difference of sentence length between NMT model's output translation and reference translation (almost no training data in the gray-colored area).",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF1": {
"type_str": "table",
"html": null,
"text": "Number of sentence pairs in the preprocessed corpus.",
"content": "<table/>",
"num": null
},
"TABREF3": {
"type_str": "table",
"html": null,
"text": "Number of the model parameters.",
"content": "<table/>",
"num": null
},
"TABREF5": {
"type_str": "table",
"html": null,
"text": "BLEU scores on test data.",
"content": "<table><tr><td/><td colspan=\"5\">RNN-NMT Trans Rel RNN RR</td></tr><tr><td>RNN-NMT</td><td/><td>&lt;&lt;</td><td>&lt;&lt;</td><td>&lt;&lt;</td><td>&lt;&lt;</td></tr><tr><td>Trans</td><td>&gt;&gt;</td><td/><td>&lt;&lt;</td><td>&lt;&lt;</td><td>&lt;&lt;</td></tr><tr><td>Rel</td><td>&gt;&gt;</td><td>&gt;&gt;</td><td/><td>\u223c</td><td>&lt;</td></tr><tr><td>RNN</td><td>&gt;&gt;</td><td>&gt;&gt;</td><td>\u223c</td><td/><td>&lt;&lt;</td></tr><tr><td>RR</td><td>&gt;&gt;</td><td>&gt;&gt;</td><td>&gt;&gt;</td><td>&gt;&gt;</td></tr></table>",
"num": null
},
"TABREF6": {
"type_str": "table",
"html": null,
"text": "",
"content": "<table><tr><td>: Results of statistical significance test on AS-</td></tr><tr><td>PEC English-to-Japanese (lower-left) and WMT2014</td></tr><tr><td>English-to-German (upper-right): \"&gt;&gt;\" or \"&lt;&lt;\"</td></tr><tr><td>means p &lt; 0.01, \"&gt;\" or \"&lt;\" means p &lt; 0.05 and</td></tr><tr><td>\"\u223c\" means p \u2265 0.05.</td></tr><tr><td>test using bootstrapping of 10,000 samples. The</td></tr><tr><td>evaluation is done on word-level, which means</td></tr><tr><td>that we converted the outputs of NMT mod-</td></tr><tr><td>els from subword-level into word-level before</td></tr><tr><td>scoring. On both datasets, Transformer outper-</td></tr><tr><td>forms RNN-NMT, and all of the three modified</td></tr><tr><td>versions of Transformer outperform the Trans-</td></tr><tr><td>former. RNN-Transformer was comparable to</td></tr><tr><td>Rel-Transformer, and RR-Transformer, the mix-</td></tr><tr><td>ture of RNN-Transformer and Rel-Transformer,</td></tr><tr><td>gives the best score.</td></tr></table>",
"num": null
}
}
}
}