{
"paper_id": "R19-1028",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:03:22.684126Z"
},
"title": "Dependency-Based Self-Attention for Transformer NMT",
"authors": [
{
"first": "Hiroyuki",
"middle": [],
"last": "Deguchi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ehime University",
"location": {}
},
"email": "deguchi@ai."
},
{
"first": "Akihiro",
"middle": [],
"last": "Tamura",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ehime University",
"location": {}
},
"email": "tamura@"
},
{
"first": "Takashi",
"middle": [],
"last": "Ninomiya",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ehime University",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we propose a new Transformer neural machine translation (NMT) model that incorporates dependency relations into self-attention on both source and target sides, dependency-based selfattention. The dependency-based selfattention is trained to attend to the modifiee for each token under constraints based on the dependency relations, inspired by linguistically-informed self-attention (LISA). While LISA was originally designed for Transformer encoder for semantic role labeling, this paper extends LISA to Transformer NMT by masking future information on words in the decoderside dependency-based self-attention. Additionally, our dependency-based selfattention operates at subword units created by byte pair encoding. Experiments demonstrate that our model achieved a 1.0 point gain in BLEU over the baseline model on the WAT'18 Asian Scientific Paper Excerpt Corpus Japanese-to-English translation task.",
"pdf_parse": {
"paper_id": "R19-1028",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we propose a new Transformer neural machine translation (NMT) model that incorporates dependency relations into self-attention on both source and target sides, dependency-based selfattention. The dependency-based selfattention is trained to attend to the modifiee for each token under constraints based on the dependency relations, inspired by linguistically-informed self-attention (LISA). While LISA was originally designed for Transformer encoder for semantic role labeling, this paper extends LISA to Transformer NMT by masking future information on words in the decoderside dependency-based self-attention. Additionally, our dependency-based selfattention operates at subword units created by byte pair encoding. Experiments demonstrate that our model achieved a 1.0 point gain in BLEU over the baseline model on the WAT'18 Asian Scientific Paper Excerpt Corpus Japanese-to-English translation task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In the field of machine translation (MT), the Transformer model (Vaswani et al., 2017) has outperformed recurrent neural network (RNN)based models (Sutskever et al., 2014) and convolutional neural network (CNN)-based models (Gehring et al., 2017) on many translation tasks, and thus has garnered attention from MT researchers. The Transformer model computes the strength of a relationship between two words in a sentence by means of a self-attention mechanism, which has contributed to the performance improvement in not only MT but also various NLP tasks such as language modeling and semantic role labeling (SRL).",
"cite_spans": [
{
"start": 64,
"end": 86,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF15"
},
{
"start": 147,
"end": 171,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF13"
},
{
"start": 224,
"end": 246,
"text": "(Gehring et al., 2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The performance of MT, including statistical machine translation and RNN-based neural machine translation (NMT), has been improved by incorporating sentence structures (Lin, 2004; Chen et al., 2017; Eriguchi et al., 2017; Wu et al., 2018) . In addition, Strubell et al. (2018) have improved a Transformer-based SRL model by incorporating dependency structures of sentences into self-attention, which is called linguisticallyinformed self-attention (LISA) . In LISA, one attention head of a multi-head self-attention is trained with constraints based on dependency relations to attend to syntactic parents for each token.",
"cite_spans": [
{
"start": 168,
"end": 179,
"text": "(Lin, 2004;",
"ref_id": "BIBREF7"
},
{
"start": 180,
"end": 198,
"text": "Chen et al., 2017;",
"ref_id": "BIBREF0"
},
{
"start": 199,
"end": 221,
"text": "Eriguchi et al., 2017;",
"ref_id": "BIBREF3"
},
{
"start": 222,
"end": 238,
"text": "Wu et al., 2018)",
"ref_id": "BIBREF16"
},
{
"start": 254,
"end": 276,
"text": "Strubell et al. (2018)",
"ref_id": "BIBREF12"
},
{
"start": 448,
"end": 454,
"text": "(LISA)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the present work, we aim to improve translation performance by utilizing dependency relations in Transformer NMT. To this end, we propose a Transformer NMT model that incorporates dependency relations into self-attention on both source and target sides. Specifically, in training, a part of self-attention is learned with constraints based on dependency relations of source or target sentences to attend to a modifiee for each token, and, in decoding, the proposed model translates a sentence in consideration of dependency relations in both the source and target sides, which are captured by our self-attention mechanisms. Hereafter, the proposed self-attention is called dependencybased self-attention. Note that the dependencybased self-attention is inspired by LISA, but the straightforward adaptation of LISA, which is proposed for Transformer encoder, does not work well for NMT because a target sentence is not fully revealed in inference. Therefore, the proposed model masks future information on words in the decoder-side dependency-based self-attention to prevent from attending to unpredicted subsequent tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent NMT models treat a sentence as a sub- word sequence rather than a word sequence to address the translation of out-of-vocabulary words (Sennrich et al., 2016) . Therefore, we extend dependency-based self-attention to operate at subword units created by byte pair encoding (BPE) rather than word-units.",
"cite_spans": [
{
"start": 141,
"end": 164,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our experiments demonstrate that the proposed Transformer NMT model performs 1.0 BLEU points higher than the baseline Transformer NMT model, which does not incorporate dependency structures, on the WAT'18 Asian Scientific Paper Excerpt Corpus (ASPEC) Japanese-to-English translation task. The experiments also demonstrate the effectiveness of each of our proposals, namely, encoder-side dependency-based selfattention, decoder-side dependency-based selfattention, and extension for BPE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We provide here an overview of the Transformer NMT model (Vaswani et al., 2017) , which is the basis of our proposed model. The outline of the Transformer NMT model is shown in Fig. 1 . The Transformer NMT model is an encoderdecoder model that has a self-attention mechanism. The encoder maps an input sequence of symbol representations (i.e., a source sentence) X = (x 1 , x 2 , . . . , x nenc ) T to an intermediate vector. Then, the decoder generates an output sequence (i.e., a target sentence) Y = (y 1 , y 2 , . . . , y n dec ) T , given the intermediate vector. The encoder and the decoder are composed of a stack of J e encoder layers and of J d decoder layers, respectively. Because the Transformer model does not include recurrent or convolutional structures, it encodes word positional information as sinusoidal positional encodings:",
"cite_spans": [
{
"start": 57,
"end": 79,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 177,
"end": 183,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Transformer NMT",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (pos,2i) = sin(pos/10000 2i/d ),",
"eq_num": "(1)"
}
],
"section": "Transformer NMT",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (pos,2i+1) = cos(pos/10000 2i/d ),",
"eq_num": "(2)"
}
],
"section": "Transformer NMT",
"sec_num": "2"
},
{
"text": "where pos is the position, i is the dimension index, and d is the dimension of the intermediate representation. At the first layers of the encoder and decoder, the positional encodings calculated by Equations (1) and (2) are added to the input embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer NMT",
"sec_num": "2"
},
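The following is a minimal NumPy sketch of the sinusoidal positional encodings in Equations (1) and (2); the function name and the use of NumPy are illustrative assumptions, not part of the paper.

```python
# Sketch (not the authors' code): sinusoidal positional encodings of Eqs. (1)-(2).
import numpy as np

def sinusoidal_positional_encoding(max_len: int, d: int) -> np.ndarray:
    """Return a (max_len, d) matrix P with P[pos, 2i] = sin(pos / 10000^(2i/d))
    and P[pos, 2i+1] = cos(pos / 10000^(2i/d)); d is assumed to be even."""
    P = np.zeros((max_len, d))
    pos = np.arange(max_len)[:, None]           # (max_len, 1)
    two_i = np.arange(0, d, 2)[None, :]         # even dimension indices 0, 2, 4, ...
    angle = pos / np.power(10000.0, two_i / d)  # (max_len, d/2) by broadcasting
    P[:, 0::2] = np.sin(angle)
    P[:, 1::2] = np.cos(angle)
    return P

# At the first layer, these encodings are added to the input embeddings:
# inputs = token_embeddings + sinusoidal_positional_encoding(sentence_length, d)
```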
{
"text": "The j-th encoder layer's output S^{(j)}_{enc} is generated by a self-attention layer SelfAttn() and a position-wise fully connected feed-forward network layer FFN() as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer NMT",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "H (j) enc = LN (S (j\u22121) enc + Self Attn(S (j\u22121) enc )), (3) S (j) enc = LN (H (j) enc + F F N (H (j) enc )),",
"eq_num": "(4)"
}
],
"section": "Transformer NMT",
"sec_num": "2"
},
{
"text": "where S^{(0)}_{enc} is the input of the encoder, H^{(j)}_{enc} is the output of the j-th encoder's self-attention, and LN() is layer normalization (Lei Ba et al., 2016). The j-th decoder layer's output S^{(j)}_{dec} is generated by an encoder-decoder attention layer EncDecAttn() in addition to the two sublayers of the encoder (i.e., SelfAttn() and FFN()) as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer NMT",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "H (j) dec = LN (S (j\u22121) dec + Self Attn(S (j\u22121) dec )), (5) G (j) dec = LN (H (j) dec + EncDecAttn(H (j) dec )), (6) S (j) dec = LN (G (j) dec + F F N (H (j) dec )),",
"eq_num": "(7)"
}
],
"section": "Transformer NMT",
"sec_num": "2"
},
{
"text": "where S^{(0)}_{dec} is the input of the decoder, H^{(j)}_{dec} is the output of the j-th decoder's self-attention, and G^{(j)}_{dec} is the output of the j-th decoder's encoder-decoder attention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer NMT",
"sec_num": "2"
},
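As a reading aid, the following sketch shows how Equations (3)-(7) compose one encoder layer and one decoder layer; self_attn, enc_dec_attn, ffn, and layer_norm are assumed helper functions, not the authors' implementation.

```python
# Sketch under assumed helpers (each maps and returns matrices of shape (n, d)).
def encoder_layer(S_prev, self_attn, ffn, layer_norm):
    H = layer_norm(S_prev + self_attn(S_prev))   # Eq. (3)
    S = layer_norm(H + ffn(H))                   # Eq. (4)
    return S

def decoder_layer(S_prev, S_enc_last, self_attn, enc_dec_attn, ffn, layer_norm):
    H = layer_norm(S_prev + self_attn(S_prev))           # Eq. (5); future positions are masked
    G = layer_norm(H + enc_dec_attn(H, S_enc_last))      # Eq. (6); attends to the encoder output
    S = layer_norm(G + ffn(H))                           # Eq. (7), with FFN applied to H as written above
    return S
```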
{
"text": "The last decoder layer's output S^{(J_d)}_{dec} is linearly mapped to a V-dimensional matrix, where V is the output vocabulary size. Then, the output sequence Y is generated based on P(Y | X), which is calculated by applying the softmax function to the V-dimensional matrix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer NMT",
"sec_num": "2"
},
{
"text": "Self-attention computes the strength of the relationship between two words in the same sentence (i.e., between two source words or between two target words), and encoder-decoder attention computes the strength of the relationship between a source word and a target word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer NMT",
"sec_num": "2"
},
{
"text": "Both the self-attention and encoder-decoder attention are implemented with multi-head attention, which projects the embedding space into n_head subspaces of dimension d_head = d / n_head and calculates attention in each subspace. In the j-th layer's self-attention, the previous layer's output S^{(j-1)} ∈ R^{n×d} (S^{(j)} indicates S^{(j)}_{enc} for the encoder and S^{(j)}_{dec} for the decoder) is linearly mapped to three d_head-dimensional subspaces, Q^{(j)}_h, K^{(j)}_h, and V^{(j)}_h, using parameter matrices W_{Q^{(j)}_h} ∈ R^{d×d_head}, W_{K^{(j)}_h} ∈ R^{d×d_head}, and W_{V^{(j)}_h} ∈ R^{d×d_head}, where n is the length of the input sequence and 1 ≤ h ≤ n_head. In the j-th decoder layer's encoder-decoder attention, the previous layer's output S^{(j-1)}_{dec} is mapped to Q^{(j)}_h, and the last encoder layer's output S^{(J_e)}_{enc} is mapped to K^{(j)}_h and V^{(j)}_h.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer NMT",
"sec_num": "2"
},
{
"text": "Then, an attention weight matrix, where each value represents the strength of the relationship between two words, is calculated on each subspace as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer NMT",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "A (j) h = sof tmax(d \u22120.5 head Q (j) h K (j) T h ).",
"eq_num": "(8)"
}
],
"section": "Transformer NMT",
"sec_num": "2"
},
{
"text": "By multiplying A^{(j)}_h and V^{(j)}_h, a weighted representation matrix M^{(j)}_h is obtained: M^{(j)}_h = A^{(j)}_h V^{(j)}_h. (9) M^{(j)}_h in self-attention includes the strengths of the relationships with all words in the same sentence for each source or target word, and M^{(j)}_h in encoder-decoder attention includes the strengths of the relationships with all source words for each target word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer NMT",
"sec_num": "2"
},
{
"text": "Finally, the concatenation of all M^{(j)}_h (i.e., M^{(j)}_{1,2,...,n_head}) is mapped to a d-dimensional matrix M^{(j)} as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer NMT",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "M (j) = W M (j) [M (j) 1 ; . . . ; M (j) n head ],",
"eq_num": "(10)"
}
],
"section": "Transformer NMT",
"sec_num": "2"
},
{
"text": "where W M (j) \u2208 R d\u00d7d is a parameter matrix. Note that, in training, the decoder's selfattention masks future words so as to ensure that the attentions of a target word do not rely on unpredicted words in inference. lations into self-attention on both source and target sides, dependency-based self-attention. In particular, it parses the dependency structures of source sentences and target sentences by one attention head of the p e -th encoder layer's multihead self-attention and one of the p d -th decoder layer's multi-head self-attention, respectively, and translates a sentence based on the source-side and target-side dependency structures. We use the deep bi-affine parser (Dozat and Manning, 2016) as a model for dependency parsing in the dependency-based self-attention according to LISA. There are two inherent differences between LISA and our dependency-based self-attention: (i) our decoder-side dependency-based self-attention masks future information on words, and (ii) our dependency-based self-attention operates at subword units created by byte pair encoding rather than word-units.",
"cite_spans": [
{
"start": 683,
"end": 708,
"text": "(Dozat and Manning, 2016)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer NMT",
"sec_num": "2"
},
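A NumPy sketch of one attention head following Equations (8)-(10), including the future-information mask used in the decoder's self-attention; the variable and function names are illustrative assumptions rather than the paper's code.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_head(S, W_Q, W_K, W_V, mask_future=False):
    """One head: A = softmax(d_head^{-0.5} Q K^T) and M = A V (Eqs. 8-9)."""
    Q, K, V = S @ W_Q, S @ W_K, S @ W_V           # each (n, d_head)
    d_head = Q.shape[-1]
    scores = (Q @ K.T) / np.sqrt(d_head)
    if mask_future:                               # decoder self-attention only
        n = scores.shape[0]
        scores = np.where(np.tril(np.ones((n, n), dtype=bool)), scores, -1e9)
    A = softmax(scores, axis=-1)                  # attention weight matrix
    return A @ V                                  # weighted representation M_h

# Eq. (10): the n_head outputs are concatenated and projected back to d dimensions:
# M = np.concatenate([M_1, ..., M_nhead], axis=-1) @ W_M
```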
{
"text": "The dependency-based self-attention parses dependency structures by extending the multi-head self-attention of the p-th layer of the encoder or decoder 2 . First, the p-th self-attention layer maps the previous layer's output S (p\u22121) of d-dimension to d head -dimensional subspaces of multi-head at-tention as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency-Based Self-Attention",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Q parse = S (p\u22121) W Qparse , (11) K parse = S (p\u22121) W Kparse , (12) V parse = S (p\u22121) W Vparse ,",
"eq_num": "(13)"
}
],
"section": "Dependency-Based Self-Attention",
"sec_num": "3.1"
},
{
"text": "where W Qparse , W Kparse , and W Vparse are d \u00d7 d head weight matrices. Next, an attention weight matrix A parse , where each value indicates the dependency relationship between two words, is calculated by using the bi-affine operation as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency-Based Self-Attention",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "A parse = sof tmax(Q parse U (1) K T parse +Q parse U (2) ),",
"eq_num": "(14)"
}
],
"section": "Dependency-Based Self-Attention",
"sec_num": "3.1"
},
{
"text": "where U^{(1)} ∈ R^{d_head × d_head}, U^{(2)} = (u ... u) (i.e., u repeated n times), and u ∈ R^{d_head} are the parameters. In A_parse, the probability of token q being the head of token t (i.e., t modifying q) is modeled as A_parse[t, q]:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency-Based Self-Attention",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (q = head(t) | X) = A parse [t, q],",
"eq_num": "(15)"
}
],
"section": "Dependency-Based Self-Attention",
"sec_num": "3.1"
},
{
"text": "where X is a source sentence or a target sentence, and the root token is defined as having a self-loop (i.e., q = head(t) = ROOT ). Then, a weighted representation matrix M parse , which includes dependency relationships in the source sentence or target sentence, is obtained by multiplying A parse and V parse :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency-Based Self-Attention",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "M parse = A parse V parse .",
"eq_num": "(16)"
}
],
"section": "Dependency-Based Self-Attention",
"sec_num": "3.1"
},
{
"text": "Finally, after one attention head (e.g., M^{(p)}_{n_head}) is replaced with M_parse, the concatenation of all heads (i.e., M_parse and M^{(p)}_{1,2,...,n_head-1}) is mapped to a d-dimensional matrix M^{(p)} like the conventional multi-head attention:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency-Based Self-Attention",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "M (p) = W M (p) [M parse ; M (p) 1 ; . . . ; M (p) n head \u22121 ],",
"eq_num": "(17)"
}
],
"section": "Dependency-Based Self-Attention",
"sec_num": "3.1"
},
{
"text": "where W M (p) \u2208 R d\u00d7d is a parameter matrix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency-Based Self-Attention",
"sec_num": "3.1"
},
{
"text": "As can be seen in Equation 17, in the dependency-based self-attention, dependency relations are identified by one attention head M parse of the p-th layer's multi-head attention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency-Based Self-Attention",
"sec_num": "3.1"
},
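The dependency-based self-attention head of Equations (11)-(17) can be sketched as follows in NumPy. The bilinear term matches Equation (14); the linear prior term is written here in the deep bi-affine parser's form, with u applied to the key representations, whereas the paper expresses it through U^{(2)} built from u, so treat that detail as an assumption. The future mask applies only to the decoder side.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dependency_based_head(S_prev, W_Qp, W_Kp, W_Vp, U1, u, mask_future=False):
    """Bi-affine parsing head: A_parse[t, q] models P(q = head(t) | X) (Eq. 15)."""
    Q = S_prev @ W_Qp                             # Eq. (11): (n, d_head)
    K = S_prev @ W_Kp                             # Eq. (12)
    V = S_prev @ W_Vp                             # Eq. (13)
    # Bilinear term Q U1 K^T plus a head-prior term (assumed Dozat-Manning style).
    scores = Q @ U1 @ K.T + (K @ u)[None, :]      # (n, n)
    if mask_future:                               # decoder side: hide unpredicted tokens
        n = scores.shape[0]
        scores = np.where(np.tril(np.ones((n, n), dtype=bool)), scores, -1e9)
    A_parse = softmax(scores, axis=-1)            # row t is a distribution over heads q
    M_parse = A_parse @ V                         # Eq. (16)
    return A_parse, M_parse

# Eq. (17): M_parse replaces one ordinary head before the output projection:
# M_p = np.concatenate([M_parse] + other_heads, axis=-1) @ W_M_p
```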
{
"text": "Our model learns translation and dependency parsing at the same time by minimizing the following objective function: e tokens + \u03bb enc e parse enc + \u03bb dec e parse dec ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective Function",
"sec_num": "3.2"
},
{
"text": "Objects and methods of surveillance are explained . where e tokens is the error of translation, and e parse enc and e parse dec are the errors of dependency parsing in the encoder and the decoder, respectively. \u03bb enc > 0 and \u03bb dec > 0 are hyper-parameters to control the influence of dependency parsing errors in the encoder and the decoder, respectively. e tokens is calculated by label smoothed cross entropy (Szegedy et al., 2016) , and e parse enc and e parse dec are calculated by cross entropy. Note that, in the training of the decoder-side dependency-based self-attention, future information is masked to prevent attending to unpredicted tokens in inference. An example of training data for the decoder-side dependency-based selfattention is provided in Figure 3 element. As shown, future information on each word is masked. For example, the dependency relation from \"are\" to \"explained\" is masked.",
"cite_spans": [
{
"start": 411,
"end": 433,
"text": "(Szegedy et al., 2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 762,
"end": 770,
"text": "Figure 3",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Objective Function",
"sec_num": "3.2"
},
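A small sketch of the combined objective of Section 3.2; the loss helpers and argument names are placeholders, and e_tokens is assumed to be computed elsewhere with label-smoothed cross entropy.

```python
import numpy as np

def parse_cross_entropy(A_parse, gold_heads):
    """Cross entropy of a parsing head: -mean log A_parse[t, head(t)].
    gold_heads[t] is the gold head index of token t (the ROOT token points to itself);
    on the decoder side, supervision for heads that lie in the future is masked out upstream."""
    n = len(gold_heads)
    return -np.mean(np.log(A_parse[np.arange(n), gold_heads] + 1e-12))

def total_loss(e_tokens, A_parse_enc, src_heads, A_parse_dec, tgt_heads,
               lambda_enc=1.0, lambda_dec=1.0):
    """e_tokens + lambda_enc * e_parse_enc + lambda_dec * e_parse_dec."""
    return (e_tokens
            + lambda_enc * parse_cross_entropy(A_parse_enc, src_heads)
            + lambda_dec * parse_cross_entropy(A_parse_dec, tgt_heads))
```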
{
"text": "Self-Attention",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subword Dependency-Based",
"sec_num": "3.3"
},
{
"text": "Recent NMT models have improved the translation performance by treating a sentence as a subword sequence rather than a word sequence. Therefore, we extend dependency-based selfattention to work for subword sequences. In our subword dependency-based self-attention, a sentence is divided into a subword sequence by BPE (Sennrich et al., 2016) . When a word is divided into multiple subwords, the modifiee (i.e., the head) of the rightmost subword is set to the modifiee of the original word and the modifiee of each subword other than the rightmost one is set to the right adjacent subword. Figure 4 shows an example of subword-level dependency relations, where \"@@\" is a subword segmentation symbol. \"Fingerprint\" is divided into the three subwords: \"Fing@@\", \"er@@\", and \"print\". When the head of the word \"Fingerprint\" is \"input\" in the original word-level sentence, the heads of the three subwords are determined as follows: \"er@@\" = head(\"Fing@@\"), \"print\" = head(\"er@@\"), and \"input\" = head(\"print\").",
"cite_spans": [
{
"start": 318,
"end": 341,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 590,
"end": 598,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Subword Dependency-Based",
"sec_num": "3.3"
},
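A sketch of the word-to-subword head conversion described above: the rightmost subword of a word inherits the word's head, and every other subword attaches to its right-adjacent subword. The data layout is an assumption for illustration.

```python
def to_subword_dependencies(subwords_per_word, word_heads):
    """subwords_per_word[w]: subwords of word w (e.g., ["Fing@@", "er@@", "print"]).
    word_heads[w]: index of the head word of word w (ROOT points to itself).
    Returns (subword_tokens, subword_heads) with heads given as subword indices."""
    tokens, rightmost = [], []
    for subs in subwords_per_word:
        tokens.extend(subs)
        rightmost.append(len(tokens) - 1)              # rightmost subword represents the word
    heads, pos = [0] * len(tokens), 0
    for w, subs in enumerate(subwords_per_word):
        for k in range(len(subs)):
            if k < len(subs) - 1:
                heads[pos] = pos + 1                   # attach to the right-adjacent subword
            else:
                heads[pos] = rightmost[word_heads[w]]  # inherit the original word's head
            pos += 1
    return tokens, heads

# Figure 4 example: with head("Fingerprint") = "input",
# head("Fing@@") = "er@@", head("er@@") = "print", head("print") = "input".
```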
{
"text": "In our experiments, we compared the proposed model with a conventional Transformer NMT model, which does not incorporate dependency structures, to confirm the effectiveness of the proposed model. We stacked six layers for each encoder and decoder and set n head = 8 and d = 512. For the proposed model, we incorporated dependency-based self-attention into the fourth layer in both the encoder and the decoder (i.e., p e = p d = 4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Settings",
"sec_num": "4.1"
},
{
"text": "We evaluated translation performance on the WAT'18 ASPEC (Nakazawa et al., 2016) Japanese-to-English translation task.",
"cite_spans": [
{
"start": 57,
"end": 80,
"text": "(Nakazawa et al., 2016)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Settings",
"sec_num": "4.1"
},
{
"text": "We tokenized each Japanese sentence with KyT ea (Neubig et al., 2011) and preprocessed according to the recommendations from WAT'18 4 . We used the vocabulary of 100K subword tokens based on BPE for both languages and used the first 1.5 million translation pairs as the training data. In the training, long sentences with over 250 subword-tokens were filtered out. Table 1 shows the statistics of our experiment data.",
"cite_spans": [
{
"start": 48,
"end": 69,
"text": "(Neubig et al., 2011)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 365,
"end": 372,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiment Settings",
"sec_num": "4.1"
},
{
"text": "We used Japanese dependency structures generated by EDA 5 and English dependency structures generated by Stanford Dependencies 6 in the training of the source-side dependency-based selfattention and the target-side dependency-based self-attention, respectively. Note that Stanford Dependencies and EDA are not used in the testing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Settings",
"sec_num": "4.1"
},
{
"text": "We trained each model using Adam (Kingma and Ba, 2014) , where the learning rate and hyperparameter settings are set following Vaswani et al. (2017) . For the objective function, we set \u03f5 ls (Szegedy et al., 2016) in label smoothing to 0.1 and both the hyperparameters \u03bb enc and \u03bb dec to 1.0. We set the mini-batch size to 224 and the number of epochs to 20. We chose the model that achieved the best BLEU score on the development set and evaluated the sentences generated from the test set using beam search with a beam size of 4 and length penalty \u03b1 = 0.6 (Wu et al., 2016) . Table 2 lists the experiment results. Translation performance is measured by BLEU (Papineni et al., 2002) .",
"cite_spans": [
{
"start": 45,
"end": 54,
"text": "Ba, 2014)",
"ref_id": "BIBREF5"
},
{
"start": 127,
"end": 148,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF15"
},
{
"start": 191,
"end": 213,
"text": "(Szegedy et al., 2016)",
"ref_id": "BIBREF14"
},
{
"start": 558,
"end": 575,
"text": "(Wu et al., 2016)",
"ref_id": "BIBREF17"
},
{
"start": 660,
"end": 683,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 578,
"end": 585,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Training Details",
"sec_num": "4.2"
},
{
"text": "In Table 2 , DBSA denotes our dependency-based selfattention.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "4.3"
},
{
"text": "As shown, our proposed model \"Trans.+DBSA(Enc)+DBSA(Dec)\" performed significantly better than the baseline model \"Trans.\", which demonstrates the effectiveness of our dependency-based self-attention. Table 2 also shows that using either the encoder-side dependency-based self-attention or the decoder-side dependency-based self-attention improves translation performance, and using them in combination achieves further improvements.",
"cite_spans": [],
"ref_spans": [
{
"start": 212,
"end": 220,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "4.3"
},
{
"text": "To determine the effectiveness of our extension to utilize subwords, we evaluated the models without BPE, where each sentence is treated as a word sequence. In the models without BPE, words that appeared fewer than five times in the training data were replaced with the special token \"<UNK>\". Table 3 lists the results. As shown, BPE improves the performance of both the baseline and the proposed model, which demonstrates the effectiveness of the subword dependency-based selfattention. Table 3 also shows that the proposed model outperforms the baseline model when BPE is not used. This strengthens the usefulness of our dependency-based self-attention.",
"cite_spans": [],
"ref_spans": [
{
"start": 293,
"end": 300,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 488,
"end": 495,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "NMT models have been improved by incorporating source-side dependency relations (Chen et al., 2017) , or target-side dependency relations (Eriguchi et al., 2017) , or both (Wu et al., 2018) . Chen et al. (2017) have proposed SDRNMT, which computes dependency-based context vectors from source-side dependency trees by CNN and then uses the representations in the encoder of an RNN-based NMT model. Eriguchi et al. (2017) have proposed NMT+RNNG, which combines the RNNbased dependency parser, RNNG (Dyer et al., 2016) , and the decoder of an RNN-based NMT model. Wu et al. (2018) have proposed a syntax-aware encoder, which encodes two extra sequences linearized from source-side dependency trees in addition to word sequences, and have incorporated Action RNN, which implements a shift-reduce transition-based dependency parsing by predicting action sequences, into the decoder. Their method has been applied to an RNN-based NMT model and a Transformer NMT model.",
"cite_spans": [
{
"start": 80,
"end": 99,
"text": "(Chen et al., 2017)",
"ref_id": "BIBREF0"
},
{
"start": 138,
"end": 161,
"text": "(Eriguchi et al., 2017)",
"ref_id": "BIBREF3"
},
{
"start": 172,
"end": 189,
"text": "(Wu et al., 2018)",
"ref_id": "BIBREF16"
},
{
"start": 192,
"end": 210,
"text": "Chen et al. (2017)",
"ref_id": "BIBREF0"
},
{
"start": 398,
"end": 420,
"text": "Eriguchi et al. (2017)",
"ref_id": "BIBREF3"
},
{
"start": 497,
"end": 516,
"text": "(Dyer et al., 2016)",
"ref_id": "BIBREF2"
},
{
"start": 562,
"end": 578,
"text": "Wu et al. (2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "As far as we know, except for Wu et al. (2018) , existing dependency-based NMT models have been based on RNN-based NMT. Although Wu et al. (2018) used dependency relations in Transformer NMT, they did not modify the Transformer model itself. In contrast, we have improved a Transformer NMT model to explicitly incorporate dependency relations (i.e., dependencybased self-attention). In addition, while Wu et al. (2018) need a parser for constructing source-side dependency structures in inference, our proposed method does not require an external parser in inference because the learned dependency-based self-attention of the encoder finds dependency relations.",
"cite_spans": [
{
"start": 30,
"end": 46,
"text": "Wu et al. (2018)",
"ref_id": "BIBREF16"
},
{
"start": 129,
"end": 145,
"text": "Wu et al. (2018)",
"ref_id": "BIBREF16"
},
{
"start": 402,
"end": 418,
"text": "Wu et al. (2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "In this paper, we have proposed a method to incorporate dependency relations on both source and target sides into Transformer NMT through dependency-based self-attention. Our decoderside dependency-based self-attention masks future information to avoid conflicts between training and inference. In addition, our dependency-based self-attention is extended to work well for subword sequences. Experimental results showed that the proposed model achieved a 1.0 point gain in BLEU over the baseline Transformer model on the WAT'18 ASPEC Japanese-English translation task. In future work, we will explore the effectiveness of our proposed method for language pairs other than Japanese-to-English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "p indicates pe for the encoder and pj for the decoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In this paper, an arrow is drawn from a modifier to its modifiee. For example, the arrow drawn from \"Objects\" to \"explained\" indicates that \"Objects\" modifies \"explained\" (i.e., \"explained\" = head(\"Objects\")).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://lotus.kuee.kyoto-u.ac.jp/WAT/ WAT2018/baseline/dataPreparationJE.html 5 http://www.ar.media.kyoto-u.ac.jp/ tool/EDA 6 https://nlp.stanford.edu/software/ stanford-dependencies.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The research results have been achieved by \"Research and Development of Deep Learning Technology for Advanced Multilingual Speech Translation\", the Commissioned Research of National Institute of Information and Communications Technology (NICT) , JAPAN. This work was partially supported by JSPS KAKENHI Grant Number JP18K18110.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation with source dependency representation",
"authors": [
{
"first": "Kehai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Masao",
"middle": [],
"last": "Utiyama",
"suffix": ""
},
{
"first": "Lemao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Akihiro",
"middle": [],
"last": "Tamura",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
},
{
"first": "Tiejun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2846--2852",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kehai Chen, Rui Wang, Masao Utiyama, Lemao Liu, Akihiro Tamura, Eiichiro Sumita, and Tiejun Zhao. 2017. Neural machine translation with source de- pendency representation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. pages 2846-2852.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Deep biaffine attention for neural dependency parsing",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.01734"
]
},
"num": null,
"urls": [],
"raw_text": "Timothy Dozat and Christopher D Manning. 2016. Deep biaffine attention for neural dependency pars- ing. arXiv preprint arXiv:1611.01734 .",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Recurrent neural network grammars",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Adhiguna",
"middle": [],
"last": "Kuncoro",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "199--209",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural net- work grammars. In Proceedings of the 2016 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies. Association for Computational Linguistics, San Diego, California, pages 199-209.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Learning to parse and translate improves neural machine translation",
"authors": [
{
"first": "Akiko",
"middle": [],
"last": "Eriguchi",
"suffix": ""
},
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "72--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akiko Eriguchi, Yoshimasa Tsuruoka, and Kyunghyun Cho. 2017. Learning to parse and translate improves neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Papers). pages 72-78.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Convolutional sequence to sequence learning",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Gehring",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Yarats",
"suffix": ""
},
{
"first": "Yann N",
"middle": [],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "1243--1252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas Gehring, Michael Auli, David Grangier, De- nis Yarats, and Yann N Dauphin. 2017. Convolu- tional sequence to sequence learning. In Proceed- ings of the 34th International Conference on Ma- chine Learning-Volume 70. JMLR. org, pages 1243- 1252.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 .",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A path-based transfer model for machine translation",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "625--630",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin. 2004. A path-based transfer model for machine translation. In Proceedings of the 20th In- ternational Conference on Computational Linguis- tics. pages 625-630.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Aspec: Asian scientific paper excerpt corpus",
"authors": [
{
"first": "Toshiaki",
"middle": [],
"last": "Nakazawa",
"suffix": ""
},
{
"first": "Manabu",
"middle": [],
"last": "Yaguchi",
"suffix": ""
},
{
"first": "Kiyotaka",
"middle": [],
"last": "Uchimoto",
"suffix": ""
},
{
"first": "Masao",
"middle": [],
"last": "Utiyama",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
},
{
"first": "Hitoshi",
"middle": [],
"last": "Isahara",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toshiaki Nakazawa, Manabu Yaguchi, Kiyotaka Uchi- moto, Masao Utiyama, Eiichiro Sumita, Sadao Kurohashi, and Hitoshi Isahara. 2016. Aspec: Asian scientific paper excerpt corpus. In Proc. of LREC 2016.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Pointwise prediction for robust, adaptable Japanese morphological analysis",
"authors": [
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Yosuke",
"middle": [],
"last": "Nakata",
"suffix": ""
},
{
"first": "Shinsuke",
"middle": [],
"last": "Mori",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "529--533",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graham Neubig, Yosuke Nakata, and Shinsuke Mori. 2011. Pointwise prediction for robust, adaptable Japanese morphological analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Tech- nologies. Association for Computational Linguis- tics, pages 529-533.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Bleu: A method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics. ACL '02",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: A method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computa- tional Linguistics. ACL '02, pages 311-318.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 1715-1725.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Linguistically-informed self-attention for semantic role labeling",
"authors": [
{
"first": "Emma",
"middle": [],
"last": "Strubell",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Verga",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Andor",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, and Andrew McCallum. 2018. Linguistically-informed self-attention for semantic role labeling. In Proc. of EMNLP 2018.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Sequence to Sequence Learning with Neural Networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V Le ; Z",
"middle": [],
"last": "Ghahramani",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Welling",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cortes",
"suffix": ""
},
{
"first": "N D",
"middle": [],
"last": "Lawrence",
"suffix": ""
},
{
"first": "K Q",
"middle": [],
"last": "Weinberger",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems",
"volume": "27",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to Sequence Learning with Neural Net- works. In Z Ghahramani, M Welling, C Cortes, N D Lawrence, and K Q Weinberger, editors, Advances in Neural Information Processing Systems 27, Cur- ran Associates, Inc., pages 3104-3112.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Rethinking the inception architecture for computer vision",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Szegedy",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Vanhoucke",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Ioffe",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Shlens",
"suffix": ""
},
{
"first": "Zbigniew",
"middle": [],
"last": "Wojna",
"suffix": ""
}
],
"year": 2016,
"venue": "The IEEE Conference on CVPR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In The IEEE Conference on CVPR.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin ; I Guyon",
"suffix": ""
},
{
"first": "U V",
"middle": [],
"last": "Luxburg",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wallach",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fergus",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Vishwanathan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Garnett",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I Guyon, U V Luxburg, S Bengio, H Wallach, R Fergus, S Vishwanathan, and R Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, Curran Associates, Inc., pages 5998-6008.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Dependency-to-dependency neural machine translation",
"authors": [
{
"first": "S",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2018,
"venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing",
"volume": "26",
"issue": "11",
"pages": "2132--2141",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Wu, D. Zhang, Z. Zhang, N. Yang, M. Li, and M. Zhou. 2018. Dependency-to-dependency neural machine translation. IEEE/ACM Transac- tions on Audio, Speech, and Language Processing 26(11):2132-2141.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Macherey",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.08144"
]
},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural ma- chine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144 .",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Transformer model.",
"uris": null,
"num": null
},
"FIGREF1": {
"type_str": "figure",
"text": "of the j-th encoder's self-attention, and LN () is layer normalization (Lei Ba et al., 2016). The j-th decoder layer's output S (j)",
"uris": null,
"num": null
},
"FIGREF2": {
"type_str": "figure",
"text": "shows the outline of the proposed model. The proposed model incorporates dependency re-1 S (j) indicates S (j) enc for the encoder and S (j) dec for the decoder.",
"uris": null,
"num": null
},
"FIGREF3": {
"type_str": "figure",
"text": "Proposed model.",
"uris": null,
"num": null
},
"FIGREF4": {
"type_str": "figure",
"text": ".e., M parse and M(p)",
"uris": null,
"num": null
},
"FIGREF6": {
"type_str": "figure",
"text": "Decoder side masked dependency-based self-attention.",
"uris": null,
"num": null
},
"FIGREF7": {
"type_str": "figure",
"text": ", where (a) is an example of dependency structures 3 and (b) shows the attention matrix representing the supervisions from (a). In (b), a dark cell indicates a dependency relation and a dotted cell means a",
"uris": null,
"num": null
},
"TABREF2": {
"html": null,
"num": null,
"type_str": "table",
"text": "Statistics of the ASPEC data.",
"content": "<table><tr><td>Model</td><td>BLEU</td></tr><tr><td>Trans.</td><td>27.29</td></tr><tr><td>Trans. + DBSA(Enc)</td><td>28.05</td></tr><tr><td>Trans. + DBSA(Dec)</td><td>27.86</td></tr><tr><td colspan=\"2\">Trans. + DBSA(Enc) + DBSA(Dec) 28.29</td></tr></table>"
},
"TABREF3": {
"html": null,
"num": null,
"type_str": "table",
"text": "Translation performance.",
"content": "<table/>"
},
"TABREF5": {
"html": null,
"num": null,
"type_str": "table",
"text": "Effectiveness of subword.",
"content": "<table/>"
}
}
}
}