{
"paper_id": "D17-1012",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:14:08.270647Z"
},
"title": "Neural Machine Translation with Source-Side Latent Graph Parsing",
"authors": [
{
"first": "Kazuma",
"middle": [],
"last": "Hashimoto",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Tokyo",
"location": {
"addrLine": "7-3-1 Hongo, Bunkyo-ku",
"settlement": "Tokyo",
"country": "Japan"
}
},
"email": ""
},
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Tokyo",
"location": {
"addrLine": "7-3-1 Hongo, Bunkyo-ku",
"settlement": "Tokyo",
"country": "Japan"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents a novel neural machine translation model which jointly learns translation and source-side latent graph representations of sentences. Unlike existing pipelined approaches using syntactic parsers, our end-to-end model learns a latent graph parser as part of the encoder of an attention-based neural machine translation model, and thus the parser is optimized according to the translation objective. In experiments, we first show that our model compares favorably with state-of-the-art sequential and pipelined syntax-based NMT models. We also show that the performance of our model can be further improved by pretraining it with a small amount of treebank annotations. Our final ensemble model significantly outperforms the previous best models on the standard Englishto-Japanese translation dataset.",
"pdf_parse": {
"paper_id": "D17-1012",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents a novel neural machine translation model which jointly learns translation and source-side latent graph representations of sentences. Unlike existing pipelined approaches using syntactic parsers, our end-to-end model learns a latent graph parser as part of the encoder of an attention-based neural machine translation model, and thus the parser is optimized according to the translation objective. In experiments, we first show that our model compares favorably with state-of-the-art sequential and pipelined syntax-based NMT models. We also show that the performance of our model can be further improved by pretraining it with a small amount of treebank annotations. Our final ensemble model significantly outperforms the previous best models on the standard Englishto-Japanese translation dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Neural Machine Translation (NMT) is an active area of research due to its outstanding empirical results (Bahdanau et al., 2015; Sutskever et al., 2014) . Most of the existing NMT models treat each sentence as a sequence of tokens, but recent studies suggest that syntactic information can help improve translation accuracy (Eriguchi et al., 2016b (Eriguchi et al., , 2017 Stahlberg et al., 2016) . The existing syntax-based NMT models employ a syntactic parser trained by supervised learning in advance, and hence the parser is not adapted to the translation tasks. An alternative approach for leveraging syntactic structure in a language processing task is to jointly learn syntactic trees of the sentences along with the target task (Socher et al., 2011; Yogatama et al., 2017) .",
"cite_spans": [
{
"start": 104,
"end": 127,
"text": "(Bahdanau et al., 2015;",
"ref_id": "BIBREF0"
},
{
"start": 128,
"end": 151,
"text": "Sutskever et al., 2014)",
"ref_id": "BIBREF40"
},
{
"start": 323,
"end": 346,
"text": "(Eriguchi et al., 2016b",
"ref_id": "BIBREF10"
},
{
"start": 347,
"end": 371,
"text": "(Eriguchi et al., , 2017",
"ref_id": "BIBREF11"
},
{
"start": 372,
"end": 395,
"text": "Stahlberg et al., 2016)",
"ref_id": "BIBREF39"
},
{
"start": 735,
"end": 756,
"text": "(Socher et al., 2011;",
"ref_id": "BIBREF38"
},
{
"start": 757,
"end": 779,
"text": "Yogatama et al., 2017)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Motivated by the promising results of recent joint learning approaches, we present a novel NMT model that can learn a task-specific latent graph structure for each source-side sentence. The graph structure is similar to the dependency structure of the sentence, but it can have cycles and is learned specifically for the translation task. Unlike the aforementioned approach of learning single syntactic trees, our latent graphs are composed of \"soft\" connections, i.e., the edges have realvalued weights (Figure 1 ). Our model consists of two parts: one is a task-independent parsing component, which we call a latent graph parser, and the other is an attention-based NMT model. The latent parser can be independently pre-trained with human-annotated treebanks and is then adapted to the translation task.",
"cite_spans": [],
"ref_spans": [
{
"start": 504,
"end": 513,
"text": "(Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In experiments, we demonstrate that our model can be effectively pre-trained by the treebank annotations, outperforming a state-of-the-art sequential counterpart and a pipelined syntax-based model. Our final ensemble model outperforms the previous best results by a large margin on the WAT English-to-Japanese dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We model the latent graph parser based on dependency parsing. In dependency parsing, a sentence is represented as a tree structure where each node corresponds to a word in the sentence and a unique root node (ROOT) is added. Given a sentence of length N , the parent node H w i \u2208 {w 1 , . . . , w N , ROOT} (H w i = w i ) of each word w i (1 \u2264 i \u2264 N ) is called its head. The sentence is thus represented as a set of tuples (w i , H w i , w i ), where w i is a dependency label.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Graph Parser",
"sec_num": "2"
},
{
"text": "In this paper, we remove the constraint of using the tree structure and represent a sentence as a set of tuples (w i , p(H w i |w i ), p( w i |w i )), where p(H w i |w i ) is the probability distribution of w i 's parent nodes, and p( w i |w i ) is the probability distribution of the dependency labels. For example, p(H w i = w j |w i ) is the probability that w j is the parent node of w i . Here, we assume that a special token EOS is appended to the end of the sentence, and we treat the EOS token as ROOT. This approach is similar to that of graph-based dependency parsing (McDonald et al., 2005) in that a sentence is represented with a set of weighted arcs between the words. To obtain the latent graph representation of the sentence, we use a dependency parsing model based on multi-task learning proposed by Hashimoto et al. (2017) .",
"cite_spans": [
{
"start": 578,
"end": 601,
"text": "(McDonald et al., 2005)",
"ref_id": "BIBREF27"
},
{
"start": 817,
"end": 840,
"text": "Hashimoto et al. (2017)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Graph Parser",
"sec_num": "2"
},
{
"text": "The i-th input word w_i is represented with the concatenation of its d_1-dimensional word embedding v_dp(w_i) ∈ R^{d_1} and its character n-gram embedding c(w_i) ∈ R^{d_1}: x(w_i) = [v_dp(w_i); c(w_i)]. c(w_i) is computed as the average of the embeddings of the character n-grams in w_i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Representation",
"sec_num": "2.1"
},
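To make the word representation concrete, here is a minimal NumPy sketch of x(w_i) = [v_dp(w_i); c(w_i)] with c(w_i) the averaged character n-gram embedding. The tiny dimension, the random embedding tables, and the helper char_ngrams are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

d1 = 4  # embedding size (illustrative; the paper uses d1 = 100)

def char_ngrams(word, ns=(2, 3, 4)):
    # Collect character n-grams of the word for n = 2, 3, 4.
    return [word[i:i + n] for n in ns for i in range(len(word) - n + 1)]

rng = np.random.default_rng(0)
word_emb = {"crosses": rng.normal(size=d1)}                  # v_dp(w_i)
ngram_emb = {g: rng.normal(size=d1) for g in char_ngrams("crosses")}

def x(word):
    # x(w_i) = [v_dp(w_i); c(w_i)], with c(w_i) the average n-gram embedding.
    grams = [ngram_emb[g] for g in char_ngrams(word) if g in ngram_emb]
    c = np.mean(grams, axis=0) if grams else np.zeros(d1)
    return np.concatenate([word_emb[word], c])

print(x("crosses").shape)  # (2 * d1,)
```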
{
"text": "Our latent graph parser builds upon multi-layer bi-directional Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units (Graves and Schmidhuber, 2005). In the first layer, POS tagging is handled by computing a hidden state h^{(1)}_i = [→h^{(1)}_i; ←h^{(1)}_i] ∈ R^{2d_1} for w_i, where →h^{(1)}_i = LSTM(→h^{(1)}_{i-1}, x(w_i)) ∈ R^{d_1} and ←h^{(1)}_i = LSTM(←h^{(1)}_{i+1}, x(w_i)) ∈ R^{d_1} are the hidden states of the forward and backward LSTMs, respectively. h^{(1)}_i is then fed into a softmax classifier to predict a probability distribution p^{(1)}_i ∈ R^{C^{(1)}} for word-level tags, where C^{(1)} is the number of POS classes. The model parameters of this layer can be learned not only from human-annotated data, but also by backpropagation from higher layers, which are described in the next section.",
"cite_spans": [
{
"start": 137,
"end": 167,
"text": "(Graves and Schmidhuber, 2005)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "POS Tagging Layer",
"sec_num": "2.2"
},
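A minimal PyTorch sketch of the first (POS tagging) layer described above: a bi-directional LSTM over x(w_i) followed by a softmax classifier. The class name, the tagset size, and the toy input are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn

d1, n_pos = 100, 45  # d1 and the POS tagset size C^(1) are illustrative

class PosLayer(nn.Module):
    def __init__(self, d_in, d_hid, n_tags):
        super().__init__()
        # Bi-directional LSTM over the word representations x(w_i).
        self.bilstm = nn.LSTM(d_in, d_hid, bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2 * d_hid, n_tags)

    def forward(self, x):
        h1, _ = self.bilstm(x)                            # h^(1)_i in R^{2*d1}
        p1 = torch.softmax(self.classifier(h1), dim=-1)   # p^(1)_i over POS tags
        return h1, p1

layer = PosLayer(d_in=2 * d1, d_hid=d1, n_tags=n_pos)
x = torch.randn(1, 7, 2 * d1)  # a batch with one 7-word sentence
h1, p1 = layer(x)
print(h1.shape, p1.shape)      # torch.Size([1, 7, 200]) torch.Size([1, 7, 45])
```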
{
"text": "Dependency parsing is performed in the second layer. A hidden state h^{(2)}_i ∈ R^{2d_1} is computed by →h^{(2)}_i = LSTM(→h^{(2)}_{i-1}, [x(w_i); y(w_i); →h^{(1)}_i]) and ←h^{(2)}_i = LSTM(←h^{(2)}_{i+1}, [x(w_i); y(w_i); ←h^{(1)}_i]), where y(w_i) = W^{(1)} p^{(1)}_i ∈ R^{d_2} is the POS information output from the first layer, and W^{(1)} ∈ R^{d_2 × C^{(1)}} is a weight matrix. Then, the (soft) edges of our latent graph representation are obtained by computing the probabilities p(H_{w_i} = w_j|w_i) = exp(m(i, j)) / Σ_{k≠i} exp(m(i, k)), (1) where m(i, k) = h^{(2)T}_k W_dp h^{(2)}_i (1 ≤ k ≤ N+1, k ≠ i) is a scoring function with a weight matrix W_dp ∈ R^{2d_1 × 2d_1}. While the models of Hashimoto et al. (2017), , and Dozat and Manning (2017) learn the model parameters of their parsing models only from human-annotated data, we allow the model parameters to be learned by the translation task. Next, [h^{(2)}_i; z(H_{w_i})] is fed into a softmax classifier to predict the probability distribution p(ℓ_{w_i}|w_i), where z(H_{w_i}) = Σ_{j≠i} p(H_{w_i} = w_j|w_i) h^{(2)}_j ∈ R^{2d_1} is the weighted average of the hidden states of the parent nodes. This results in the latent graph representation (w_i, p(H_{w_i}|w_i), p(ℓ_{w_i}|w_i)) of the input sentence.",
"cite_spans": [
{
"start": 704,
"end": 727,
"text": "Hashimoto et al. (2017)",
"ref_id": "BIBREF15"
},
{
"start": 735,
"end": 759,
"text": "Dozat and Manning (2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency Parsing Layer",
"sec_num": "2.3"
},
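The head-selection probabilities of Equation (1) and the weighted parent average z(H_{w_i}) can be sketched as follows; the toy sizes, random matrices, and the helper head_probs are illustrative stand-ins for the trained bilinear scorer, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 5, 6                      # 5 words + EOS/ROOT; d stands in for 2*d_1
H2 = rng.normal(size=(N + 1, d)) # h^(2)_i for w_1..w_N and EOS (index N)
W_dp = rng.normal(size=(d, d))

def head_probs(i):
    # p(H_{w_i} = w_k | w_i) proportional to exp(m(i, k)), m(i, k) = h_k^T W_dp h_i, k != i.
    scores = H2 @ W_dp @ H2[i]
    scores[i] = -np.inf          # a word cannot be its own head
    e = np.exp(scores - scores[scores > -np.inf].max())
    return e / e.sum()

p = head_probs(2)
z = p @ H2                       # z(H_{w_i}): weighted average of parent states
print(p.round(3), z.shape)
```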
{
"text": "The latent graph representation described in Section 2 can be used for any sentence-level tasks, and here we apply it to an Attention-based NMT (ANMT) model . We modify the encoder and the decoder in the ANMT model to learn the latent graph representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NMT with Latent Graph Parser",
"sec_num": "3"
},
{
"text": "The ANMT model first encodes the information about the input sentence and then generates a sentence in another language. The encoder represents the word w_i with a word embedding v_enc(w_i) ∈ R^{d_3}. It should be noted that v_enc(w_i) is different from v_dp(w_i) because each component is separately modeled. The encoder then takes the word embedding v_enc(w_i) and the hidden state h^{(2)}_i as the input to a uni-directional LSTM: h^{(enc)}_i = LSTM(h^{(enc)}_{i-1}, [v_enc(w_i); h^{(2)}_i]), (2) where h^{(enc)}_i ∈ R^{d_3} is the hidden state corresponding to w_i. That is, the encoder of our model is a three-layer LSTM network, where the first two layers are bi-directional.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder with Dependency Composition",
"sec_num": "3.1"
},
{
"text": "In the sequential LSTMs, relationships between words in distant positions are not explicitly considered. In our model, we explicitly incorporate such relationships into the encoder by defining a dependency composition function: dep(w_i) = tanh(W_dep [h^{(enc)}_i; h(H_{w_i}); p(ℓ_{w_i}|w_i)]), (3) where h(H_{w_i}) = Σ_{j≠i} p(H_{w_i} = w_j|w_i) h^{(enc)}_j is the weighted average of the hidden states of the parent nodes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder with Dependency Composition",
"sec_num": "3.1"
},
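A small sketch of the dependency composition function of Equation (3), assuming toy encoder states and toy head and label distributions; W_dep and the sizes are illustrative, not the paper's trained parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
N, d3, n_labels = 5, 8, 10        # toy sizes; d3 and the label set are illustrative
H_enc = rng.normal(size=(N + 1, d3))          # h^(enc)_i including EOS
W_dep = rng.normal(size=(d3, 2 * d3 + n_labels))

def dep(i, head_p, label_p):
    # dep(w_i) = tanh(W_dep [h^(enc)_i; h(H_{w_i}); p(l_{w_i}|w_i)])
    h_head = head_p @ H_enc       # weighted average of candidate parent states
    feat = np.concatenate([H_enc[i], h_head, label_p])
    return np.tanh(W_dep @ feat)

head_p = np.full(N + 1, 1.0 / N); head_p[2] = 0.0   # toy head distribution (excludes w_i itself)
label_p = np.full(n_labels, 1.0 / n_labels)          # toy label distribution
print(dep(2, head_p, label_p).shape)                 # (d3,)
```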
{
"text": "Note on character n-gram embeddings In NMT models, sub-word units are widely used to address rare or unknown word problems . In our model, the character n-gram embeddings are fed through the latent graph parsing component. To the best of our knowledge, the character n-gram embeddings have never been used in NMT models. Wieting et al. (2016) , Bojanowski et al. 2017, and Hashimoto et al. (2017) have reported that the character n-gram embeddings are useful in improving several NLP tasks by better handling unknown words.",
"cite_spans": [
{
"start": 321,
"end": 342,
"text": "Wieting et al. (2016)",
"ref_id": "BIBREF41"
},
{
"start": 373,
"end": 396,
"text": "Hashimoto et al. (2017)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder with Dependency Composition",
"sec_num": "3.1"
},
{
"text": "The decoder of our model is a single-layer LSTM network, and the initial state is set with h^{(enc)}_{N+1} and its corresponding memory cell. Given the t-th hidden state h^{(dec)}_t ∈ R^{d_3}, the decoder predicts the t-th word in the target language using an attention mechanism. The attention mechanism computes the weighted average of the hidden states h^{(enc)}_i of the encoder: s(i, t) = exp(h^{(dec)}_t · h^{(enc)}_i) / Σ_{j=1}^{N+1} exp(h^{(dec)}_t · h^{(enc)}_j), (4) a_t = Σ_{i=1}^{N+1} s(i, t) h^{(enc)}_i, (5) where s(i, t) is a scoring function which specifies how much each source-side hidden state contributes to the word prediction. In addition, like the attention mechanism over constituency tree nodes (Eriguchi et al., 2016b), our model uses attention to the dependency composition vectors: s'(i, t) = exp(h^{(dec)}_t · dep(w_i)) / Σ_{j=1}^{N} exp(h^{(dec)}_t · dep(w_j)), (6) a'_t = Σ_{i=1}^{N} s'(i, t) dep(w_i). (7)",
"cite_spans": [
{
"start": 723,
"end": 747,
"text": "(Eriguchi et al., 2016b)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder with Attention Mechanism",
"sec_num": "3.2"
},
{
"text": "To predict the target word, a hidden state h̃^{(dec)}_t ∈ R^{d_3} is then computed as follows: h̃^{(dec)}_t = tanh(W̃ [h^{(dec)}_t; a_t; a'_t]), (8) where W̃ ∈ R^{d_3 × 3d_3} is a weight matrix. h̃^{(dec)}_t is fed into a softmax classifier to predict a target word distribution. h̃^{(dec)}_t is also used in the transition of the decoder LSTMs along with a word embedding v_dec(w_t) ∈ R^{d_3} of the target word w_t: h^{(dec)}_{t+1} = LSTM(h^{(dec)}_t, [v_dec(w_t); h̃^{(dec)}_t]), (9) where the use of h̃^{(dec)}_t is called input feeding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder with Attention Mechanism",
"sec_num": "3.2"
},
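The attention and target-word hidden-state computation (Equations (4)-(8)) amounts to two softmax-weighted averages and a tanh projection. The NumPy sketch below uses toy vectors and random weights and is not the authors' code.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(2)
N, d3 = 5, 8                              # toy sizes
H_enc = rng.normal(size=(N + 1, d3))      # encoder states h^(enc)_i (incl. EOS)
Dep = rng.normal(size=(N, d3))            # dependency composition vectors dep(w_i)
W_tilde = rng.normal(size=(d3, 3 * d3))
h_dec = rng.normal(size=d3)               # decoder state h^(dec)_t

s = softmax(H_enc @ h_dec)                # Eq. (4): attention over encoder states
a_t = s @ H_enc                           # Eq. (5)
s2 = softmax(Dep @ h_dec)                 # Eq. (6): attention over dep(w_i)
a2_t = s2 @ Dep                           # Eq. (7)
h_tilde = np.tanh(W_tilde @ np.concatenate([h_dec, a_t, a2_t]))  # Eq. (8)
print(h_tilde.shape)                      # (d3,)
```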
{
"text": "The overall model parameters, including those of the latent graph parser, are jointly learned by minimizing the negative log-likelihood of the prediction probabilities of the target words in the training data. To speed up the training, we use BlackOut sampling (Ji et al., 2016) . By this joint learning using Equation 3and 7, the latent graph representations are automatically learned according to the target task.",
"cite_spans": [
{
"start": 261,
"end": 278,
"text": "(Ji et al., 2016)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder with Attention Mechanism",
"sec_num": "3.2"
},
{
"text": "Implementation Tips Inspired by Zoph et al. (2016) , we further speed up BlackOut sampling by sharing noise samples across words in the same sentences. This technique has proven to be effective in RNN language modeling, and we have found that it is also effective in the NMT model. We have also found it effective to share the model parameters of the target word embeddings and the softmax weight matrix for word prediction (Inan et al., 2016; Press and Wolf, 2017) . Also, we have found that a parameter averaging technique (Hashimoto et al., 2013) is helpful in improving translation accuracy.",
"cite_spans": [
{
"start": 32,
"end": 50,
"text": "Zoph et al. (2016)",
"ref_id": "BIBREF48"
},
{
"start": 424,
"end": 443,
"text": "(Inan et al., 2016;",
"ref_id": "BIBREF17"
},
{
"start": 444,
"end": 465,
"text": "Press and Wolf, 2017)",
"ref_id": "BIBREF35"
},
{
"start": 525,
"end": 549,
"text": "(Hashimoto et al., 2013)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder with Attention Mechanism",
"sec_num": "3.2"
},
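A minimal sketch of the parameter averaging technique mentioned above, assuming checkpoints are available as plain dicts of NumPy arrays; the storage format and names are illustrative assumptions.

```python
import numpy as np

def average_checkpoints(checkpoints):
    # Element-wise average of parameters saved at different (half) epochs.
    keys = checkpoints[0].keys()
    return {k: np.mean([ckpt[k] for ckpt in checkpoints], axis=0) for k in keys}

# toy "checkpoints" saved at every half epoch
ckpts = [{"W_dep": np.full((2, 2), float(i)), "b": np.full(2, float(i))} for i in range(4)]
avg = average_checkpoints(ckpts)
print(avg["W_dep"])  # each entry is the mean over the saved checkpoints (1.5 here)
```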
{
"text": "Translation At test time, we use a novel beam search algorithm which combines statistics of sentence lengths (Eriguchi et al., 2016b) and length normalization (Cho et al., 2014). During the beam search step, we use the following scoring function for a generated word sequence y = (y_1, y_2, . . . , y_{L_y}) given a source word sequence x = (x_1, x_2, . . . , x_{L_x}): (1/L_y) (Σ_{i=1}^{L_y} log p(y_i|x, y_{1:i-1}) + log p(L_y|L_x)), (10)",
"cite_spans": [
{
"start": 110,
"end": 134,
"text": "(Eriguchi et al., 2016b)",
"ref_id": "BIBREF10"
},
{
"start": 160,
"end": 178,
"text": "(Cho et al., 2014)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder with Attention Mechanism",
"sec_num": "3.2"
},
{
"text": "where p(L y |L x ) is the probability that sentences of length L y are generated given source-side sentences of length L x . The statistics are taken by using the training data in advance. In our experiments, we have empirically found that this beam search algorithm helps the NMT models to avoid generating translation sentences that are too short.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder with Attention Mechanism",
"sec_num": "3.2"
},
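A small sketch of the length-aware beam scoring of Equation (10); the table length_prob is a toy stand-in for the length statistics p(L_y|L_x) collected from the training data.

```python
import math

# Toy stand-in for p(L_y | L_x) estimated from the training data.
length_prob = {(5, 6): 0.3, (5, 7): 0.4, (5, 8): 0.3}

def beam_score(log_probs, src_len, eps=1e-12):
    # Eq. (10): (sum_i log p(y_i | x, y_{1:i-1}) + log p(L_y | L_x)) / L_y
    L_y = len(log_probs)
    p_len = length_prob.get((src_len, L_y), eps)
    return (sum(log_probs) + math.log(p_len)) / L_y

# Two hypotheses for a 5-word source: a short one and a longer one.
short = [-0.2, -0.3, -0.25, -0.3, -0.2, -0.1]
longer = short + [-0.35, -0.3]
print(beam_score(short, 5), beam_score(longer, 5))
```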
{
"text": "We used an English-to-Japanese translation task of the Asian Scientific Paper Excerpt Corpus (AS-PEC) (Nakazawa et al., 2016b) used in the Workshop on Asian Translation (WAT), since it has been shown that syntactic information is useful in English-to-Japanese translation (Eriguchi et al., 2016b; Neubig et al., 2015) . We followed the data preprocessing instruction for the English-to-Japanese task in Eriguchi et al. (2016b) . The English sentences were tokenized by the tokenizer in the Enju parser (Miyao and Tsujii, 2008) , and the Japanese sentences were segmented by the KyTea tool 1 . Among the first 1,500,000 translation pairs in the training data, we selected 1,346,946 pairs where the maximum sentence length is 50. In what follows, we call this dataset the large training dataset. We further selected the first 20,000 and 100,000 pairs to construct the small and medium training datasets, respectively. The development data include 1,790 pairs, and the test data 1,812 pairs.",
"cite_spans": [
{
"start": 102,
"end": 126,
"text": "(Nakazawa et al., 2016b)",
"ref_id": "BIBREF30"
},
{
"start": 272,
"end": 296,
"text": "(Eriguchi et al., 2016b;",
"ref_id": "BIBREF10"
},
{
"start": 297,
"end": 317,
"text": "Neubig et al., 2015)",
"ref_id": "BIBREF32"
},
{
"start": 403,
"end": 426,
"text": "Eriguchi et al. (2016b)",
"ref_id": "BIBREF10"
},
{
"start": 502,
"end": 526,
"text": "(Miyao and Tsujii, 2008)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "For the small and medium datasets, we built the vocabulary with words whose minimum frequency is two, and for the large dataset, we used words whose minimum frequency is three for English and five for Japanese. As a result, the vocabulary of the target language was 8,593 for the small dataset, 23,532 for the medium dataset, and 65,680 for the large dataset. A special token UNK was used to replace words which were not included in the vocabularies. The character ngrams (n = 2, 3, 4) were also constructed from each training dataset with the same frequency settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "We turned hyper-parameters of the model using development data. We set (d 1 , d 2 ) = (100, 50) for the latent graph parser. The word and character n-gram embeddings of the latent graph parser 1 http://www.phontron.com/kytea/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Optimization and Translation",
"sec_num": "4.2"
},
{
"text": "were initialized with the pre-trained embeddings in Hashimoto et al. (2017). 2 The weight matrices in the latent graph parser were initialized with uniform random values in [−√6/√(row+col), +√6/√(row+col)], where row and col are the number of rows and columns of the matrices, respectively. All the bias vectors and the weight matrices in the softmax layers were initialized with zeros, and the bias vectors of the forget gates in the LSTMs were initialized by ones (Jozefowicz et al., 2015).",
"cite_spans": [
{
"start": 52,
"end": 75,
"text": "Hashimoto et al. (2017)",
"ref_id": "BIBREF15"
},
{
"start": 77,
"end": 78,
"text": "2",
"ref_id": null
},
{
"start": 469,
"end": 494,
"text": "(Jozefowicz et al., 2015)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Optimization and Translation",
"sec_num": "4.2"
},
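The uniform initialization range above is a Glorot/Xavier-style rule; a one-function sketch (illustrative, not the authors' code):

```python
import numpy as np

def init_matrix(rows, cols, rng=np.random.default_rng(0)):
    # Uniform values in [-sqrt(6)/sqrt(rows+cols), +sqrt(6)/sqrt(rows+cols)].
    limit = np.sqrt(6.0) / np.sqrt(rows + cols)
    return rng.uniform(-limit, limit, size=(rows, cols))

W = init_matrix(200, 100)
print(W.min() >= -np.sqrt(6.0) / np.sqrt(300), W.max() <= np.sqrt(6.0) / np.sqrt(300))
```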
{
"text": "We set d 3 = 128 for the small training dataset, d 3 = 256 for the medium training dataset, and d 3 = 512 for the large training dataset. The word embeddings and the weight matrices of the NMT model were initialized with uniform random values in [\u22120.1, +0.1]. The training was performed by mini-batch stochastic gradient descent with momentum. For the BlackOut objective (Ji et al., 2016) , the number of the negative samples was set to 2,000 for the small and medium training datasets, and 2,500 for the large training dataset. The mini-batch size was set to 128, and the momentum rate was set to 0.75 for the small and medium training datasets and 0.70 for the large training dataset. A gradient clipping technique was used with a clipping value of 1.0. The initial learning rate was set to 1.0, and the learning rate was halved when translation accuracy decreased. We used the BLEU scores obtained by greedy translation as the translation accuracy and checked it at every half epoch of the model training. We saved the model parameters at every half epoch and used the saved model parameters for the parameter averaging technique. For regularization, we used L2-norm regularization with a coefficient of 10 \u22126 and applied dropout (Hinton et al., 2012) to Equation (8) with a dropout rate of 0.2.",
"cite_spans": [
{
"start": 371,
"end": 388,
"text": "(Ji et al., 2016)",
"ref_id": "BIBREF20"
},
{
"start": 1233,
"end": 1254,
"text": "(Hinton et al., 2012)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Optimization and Translation",
"sec_num": "4.2"
},
{
"text": "The beam size for the beam search algorithm was 12 for the small and medium training datasets, and 50 for the large training dataset. We used BLEU (Papineni et al., 2002) , RIBES (Isozaki et al., 2010) , and perplexity scores as our evaluation metrics. Note that lower perplexity scores indicate better accuracy.",
"cite_spans": [
{
"start": 147,
"end": 170,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF34"
},
{
"start": 179,
"end": 201,
"text": "(Isozaki et al., 2010)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Optimization and Translation",
"sec_num": "4.2"
},
{
"text": "The latent graph parser in our model can be optionally pre-trained by using human annotations for dependency parsing. In this paper we used the widely-used Wall Street Journal (WSJ) training data to jointly train the POS tagging and dependency parsing components. We used the standard training split (Section 0-18) for POS tagging. We followed Chen and Manning (2014) to generate the training data (Section 2-21) for dependency parsing. From each training dataset, we selected the first K sentences to pre-train our model. The training dataset for POS tagging includes 38,219 sentences, and that for dependency parsing includes 39,832 sentences.",
"cite_spans": [
{
"start": 344,
"end": 367,
"text": "Chen and Manning (2014)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-Training of Latent Graph Parser",
"sec_num": "4.3"
},
{
"text": "The parser including the POS tagger was first trained for 10 epochs in advance according to the multi-task learning procedure of Hashimoto et al. (2017) , and then the overall NMT model was trained. When pre-training the POS tagging and dependency parsing components, we did not apply dropout to the model and did not fine-tune the word and character n-gram embeddings to avoid strong overfitting.",
"cite_spans": [
{
"start": 129,
"end": 152,
"text": "Hashimoto et al. (2017)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-Training of Latent Graph Parser",
"sec_num": "4.3"
},
{
"text": "LGP-NMT is our proposed model that learns the Latent Graph Parsing for NMT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Configurations",
"sec_num": "4.4"
},
{
"text": "LGP-NMT+ is constructed by pre-training the latent parser in LGP-NMT as described in Section 4.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Configurations",
"sec_num": "4.4"
},
{
"text": "SEQ is constructed by removing the dependency composition in Equation (3), forming a sequential NMT model with the multi-layer encoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Configurations",
"sec_num": "4.4"
},
{
"text": "DEP is constructed by using pre-trained dependency relations rather than learning them. That is, p(H w i = w j |w i ) is fixed to 1.0 such that w j is the head of w i . The dependency labels are also given by the parser which was trained by using all the training samples for parsing and tagging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Configurations",
"sec_num": "4.4"
},
{
"text": "UNI is constructed by fixing p(H_{w_i} = w_j|w_i) to 1/N for all the words in the same sentence. That is, uniform probability distributions are used for equally connecting all the words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Configurations",
"sec_num": "4.4"
},
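To make the DEP and UNI baselines concrete, the sketch below builds the fixed head distributions they use (one-hot from an external parse versus uniform 1/N); the toy tree and sizes are illustrative assumptions, and LGP-NMT instead learns these weights end-to-end.

```python
import numpy as np

N = 4  # toy sentence length (excluding EOS)

# DEP: a one-hot head distribution taken from an external parser (toy tree here).
dep_heads = {0: 1, 1: 4, 2: 1, 3: 1}          # word index -> head index (4 = EOS/ROOT)
def dep_dist(i):
    p = np.zeros(N + 1)
    p[dep_heads[i]] = 1.0
    return p

# UNI: equal weight 1/N on every other word (and EOS), none on the word itself.
def uni_dist(i):
    p = np.full(N + 1, 1.0 / N)
    p[i] = 0.0
    return p

print(dep_dist(2), uni_dist(2))  # LGP-NMT learns these weights from the translation objective
```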
{
"text": "We first show our translation results using the small and medium training datasets. We report averaged scores with standard deviations across five different runs of the model training. and UNI, which shows that the small training dataset is not enough to learn useful latent graph structures from scratch. However, LGP-NMT+ (K = 10,000) outperforms SEQ and UNI, and the standard deviations are the smallest. Therefore, the results suggest that pre-training the parsing and tagging components can improve the translation accuracy of our proposed model. We can also see that DEP performs the worst. This is not surprising because previous studies, e.g., Li et al. (2015) , have reported that using syntactic structures do not always outperform competitive sequential models in several NLP tasks. Now that we have observed the effectiveness of pre-training our model, one question arises naturally: how many training samples for parsing and tagging are necessary for improving the translation accuracy? Table 2 shows the results of using different numbers of training samples for parsing and tagging. The results of K= 0 and K= 10,000 correspond to those of LGP-NMT and LGP-NMT+ in Table 1, respectively. We can see that using the small amount of the training samples performs better than using all the training samples. 3 One possible reason is that the domains of the translation dataset and the parsing (tagging) dataset are considerably different. The parsing and tagging datasets come from WSJ, whereas the translation dataset comes from abstract text of scientific papers in a wide range of domains, such as Table 3 : Evaluation on the development data using the medium training dataset (100,000 pairs).",
"cite_spans": [
{
"start": 652,
"end": 668,
"text": "Li et al. (2015)",
"ref_id": "BIBREF24"
},
{
"start": 1318,
"end": 1319,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1000,
"end": 1007,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 1611,
"end": 1618,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results on Small and Medium Datasets",
"sec_num": "5"
},
{
"text": "biomedicine and computer science. These results suggest that our model can be improved by a small amount of parsing and tagging datasets in different domains. Considering the recent universal dependency project 4 which covers more than 50 languages, our model has the potential of being applied to a variety of language pairs. Table 3 shows the results of using the medium training dataset. In contrast with using the small training dataset, LGP-NMT is slightly better than SEQ.",
"cite_spans": [],
"ref_spans": [
{
"start": 327,
"end": 334,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Small Training Dataset",
"sec_num": "5.1"
},
{
"text": "LGP-NMT significantly outperforms UNI, which shows that our adaptive learning is more effective than using the uniform graph weights. By pre-training our model, LGP-NMT+ significantly outperforms SEQ in terms of the BLEU score. Again, DEP performs the worst among all the models. By using our beam search strategy, the Brevity Penalty (BP) values of our translation results are equal to or close to 1.0, which is important when evaluating the translation results using the BLEU scores. A BP value ranges from 0.0 to 1.0, and larger values mean that the translated sentences have relevant lengths compared with the reference translations. As a result, our BLEU evaluation results are affected only by the word n-gram precision scores. BLEU scores are sensitive to the BP values, and thus our beam search strategy leads to more solid evaluation for NMT models. Table 4 shows the BLEU and RIBES scores on the development data achieved with the large training dataset. Here we focus on our models and SEQ because UNI and DEP consistently perform worse than the other models as shown in Table 1 and 3. The averaging technique and attentionbased unknown word replacement (Jean et al., 2015; Hashimoto et al., 2016) improve the scores. 4 http://universaldependencies.org/.",
"cite_spans": [
{
"start": 1165,
"end": 1184,
"text": "(Jean et al., 2015;",
"ref_id": "BIBREF19"
},
{
"start": 1185,
"end": 1208,
"text": "Hashimoto et al., 2016)",
"ref_id": "BIBREF13"
},
{
"start": 1229,
"end": 1230,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 859,
"end": 866,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 1082,
"end": 1089,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Medium Training Dataset",
"sec_num": "5.2"
},
{
"text": "Single +Averaging +UnkRep Cromieres et al. (2016) 38.20 82.39 Neubig et al. (2015) 38.17 81.38 Eriguchi et al. (2016a) 36.95 82.45 Neubig and Duh (2014) 36.58 79.65 Zhu (2015) 36.21 80.91 Lee et al. (2015) 35.75 81.15 Again, we see that the translation scores of our model can be further improved by pre-training the model. Table 5 shows our results on the test data, and the previous best results summarized in Nakazawa et al. (2016a) and the WAT website 5 are also shown. Our proposed models, LGP-NMT and LGP-NMT+, outperform not only SEQ but also all of the previous best results. Notice also that our implementation of the sequential model (SEQ) provides a very strong baseline, the performance of which is already comparable to the previous state of the art, even without using ensemble techniques. The confidence interval (p \u2264 0.05) of the RIBES score of LGP-NMT+ estimated by bootstrap resampling (Noreen, 1989) is (82.27, 83.37) , and thus the RIBES score of LGP-NMT+ is significantly better than that of SEQ, which shows that our latent parser can be effectively pre-trained with the human-annotated treebank.",
"cite_spans": [
{
"start": 26,
"end": 49,
"text": "Cromieres et al. (2016)",
"ref_id": "BIBREF6"
},
{
"start": 62,
"end": 82,
"text": "Neubig et al. (2015)",
"ref_id": "BIBREF32"
},
{
"start": 95,
"end": 118,
"text": "Eriguchi et al. (2016a)",
"ref_id": "BIBREF9"
},
{
"start": 131,
"end": 152,
"text": "Neubig and Duh (2014)",
"ref_id": "BIBREF31"
},
{
"start": 165,
"end": 175,
"text": "Zhu (2015)",
"ref_id": "BIBREF47"
},
{
"start": 188,
"end": 205,
"text": "Lee et al. (2015)",
"ref_id": "BIBREF23"
},
{
"start": 409,
"end": 435,
"text": "in Nakazawa et al. (2016a)",
"ref_id": "BIBREF29"
},
{
"start": 904,
"end": 918,
"text": "(Noreen, 1989)",
"ref_id": "BIBREF33"
},
{
"start": 922,
"end": 936,
"text": "(82.27, 83.37)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 324,
"end": 331,
"text": "Table 5",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "B./R.",
"sec_num": null
},
{
"text": "The sequential NMT model in Cromieres et al. (2016) and the tree-to-sequence NMT model in Eriguchi et al. (2016b) rely on ensemble techniques while our results mentioned above are obtained using single models. Moreover, our model is more compact 6 than the previous best NMT model in Cromieres et al. (2016) . By applying the ensemble technique to LGP-NMT, LGP-NMT+, 5 http://lotus.kuee.kyoto-u.ac.jp/WAT/ evaluation/list.php?t=1&o=1.",
"cite_spans": [
{
"start": 28,
"end": 51,
"text": "Cromieres et al. (2016)",
"ref_id": "BIBREF6"
},
{
"start": 284,
"end": 307,
"text": "Cromieres et al. (2016)",
"ref_id": "BIBREF6"
},
{
"start": 367,
"end": 368,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "B./R.",
"sec_num": null
},
{
"text": "6 Our training time is within five days on a c4.8xlarge machine of Amazon Web Service by our CPU-based C++ code, while it is reported that the training time is more than two weeks in Cromieres et al. (2016) by their GPU code.",
"cite_spans": [
{
"start": 183,
"end": 206,
"text": "Cromieres et al. (2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "B./R.",
"sec_num": null
},
{
"text": "[Figure 2: two translation examples. Example (1) source: \"As a result , it was found that a path which crosses a sphere obliquely existed .\" Example (2) source: \"The androgen controls negatively ImRNA .\" For each example, the figure lists the reference translation and the outputs of Google Translate, SEQ, LGP-NMT, and LGP-NMT+ in Japanese, with English glosses.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The androgen controls negatively ImRNA . and SEQ, the BLEU and RIBES scores are further improved, and both of the scores are significantly better than the previous best scores. Figure 2 shows two translation examples 7 to see how the proposed model works and what is missing in the state-of-the-art sequential NMT model, SEQ. Besides the reference translation, the outputs of our models with and without pre-training, SEQ, and Google Translation 8 are shown.",
"cite_spans": [],
"ref_spans": [
{
"start": 177,
"end": 185,
"text": "Figure 2",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Reference: \u305d\u306e\u7d50\u679c\u3001\u7403\u5185\u90e8\u3092\u659c\u3081\u306b\u6a2a\u5207\u308b\u884c\u8def\u306e\u5b58\u5728\u3059\u308b\u3053\u3068\u304c\u5206\u304b\u3063\u305f\u3002",
"sec_num": null
},
{
"text": "In the translation example (1) in Figure 2 , we see that the adverb \"obliquely\" is interpreted differently across the systems. As in the reference translation, \"obliquely\" is a modifier of the verb \"crosses\". Our models correctly capture the relationship between the two words, whereas Google Translation and SEQ treat \"obliquely\" as a modifier of the verb \"existed\". This error is not a surprise since the verb \"existed\" is located closer to \"obliquely\" than the verb \"crosses\". A possible reason for the correct interpretation by our models is that they can better capture long-distance dependencies and are less susceptible to surface word distances. This is an indication of our models' ability of capturing domain-specific selectional preference that cannot be captured by purely sequential Adverb or Adjective The translation example (2) in Figure 2 shows another example where the adverb \"negatively\" is interpreted as an adverb or an adjective. As in the reference translation, \"negatively\" is a modifier of the verb \"controls\". Only LGP-NMT+ correctly captures the adverb-verb relationship, whereas \"negatively\" is interpreted as the adjective \"negative\" to modify the noun \"ImRNA\" in the translation results from Google Translation and LGP-NMT. SEQ interprets \"negatively\" as both an adverb and an adjective, which leads to the repeated translations. This error suggests that the state-of-the-art NMT models are strongly affected by the word order. By contrast, the pre-training strategy effectively embeds the information about the POS tags and the dependency relations into our model.",
"cite_spans": [],
"ref_spans": [
{
"start": 34,
"end": 42,
"text": "Figure 2",
"ref_id": "FIGREF3"
},
{
"start": 847,
"end": 855,
"text": "Figure 2",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Selectional Preference",
"sec_num": null
},
{
"text": "Without Pre-Training We inspected the latent graphs learned by LGP-NMT. Figure 1 shows an example of the learned latent graph obtained for a sentence taken from the development data of the translation task. It has long-range dependencies and cycles as well as ordinary left-to-right dependencies. We have observed that the punctuation mark \".\" is often pointed to by other words with large weights. This is primarily because the hidden state corresponding to the mark in each sentence has rich information about the sentence.",
"cite_spans": [],
"ref_spans": [
{
"start": 72,
"end": 80,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Analysis on Learned Latent Graphs",
"sec_num": "6.2"
},
{
"text": "To measure the correlation between the latent graphs and human-defined dependencies, we parsed the sentences on the development data of the WSJ corpus and converted the graphs into dependency trees by Eisner's algorithm (Eisner, 1996) . For evaluation, we followed Chen and Manning (2014) and measured Unlabeled Attachment Score (UAS). The UAS is 24.52%, which shows that the implicitly-learned latent graphs are partially consistent with the human-defined syntactic structures. Similar trends have been reported by Yogatama et al. (2017) in the case of binary constituency parsing. We checked the most dominant gold dependency labels which were assigned for the dependencies detected by LGP-NMT. The labels whose ratio is more than 3% are nn, amod, prep, pobj, dobj, nsubj, num, det, advmod, and poss. We see that dependencies between words in distant positions, such as subject-verb-object relations, can be captured.",
"cite_spans": [
{
"start": 220,
"end": 234,
"text": "(Eisner, 1996)",
"ref_id": "BIBREF8"
},
{
"start": 265,
"end": 288,
"text": "Chen and Manning (2014)",
"ref_id": "BIBREF3"
},
{
"start": 516,
"end": 538,
"text": "Yogatama et al. (2017)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis on Learned Latent Graphs",
"sec_num": "6.2"
},
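A minimal sketch of the UAS measurement described above, with greedy head selection standing in for Eisner's algorithm (which the paper actually uses to enforce tree constraints); the toy weight matrix and gold heads are illustrative.

```python
import numpy as np

def uas(pred_heads, gold_heads):
    # Unlabeled Attachment Score: fraction of words whose predicted head matches gold.
    pred, gold = np.asarray(pred_heads), np.asarray(gold_heads)
    return float((pred == gold).mean())

# Soft head weights p(H_{w_i} = w_j | w_i) for a 4-word toy sentence (column 4 = EOS/ROOT).
P = np.array([[0.0, 0.6, 0.2, 0.1, 0.1],
              [0.1, 0.0, 0.2, 0.2, 0.5],
              [0.1, 0.7, 0.0, 0.1, 0.1],
              [0.2, 0.5, 0.2, 0.0, 0.1]])
pred = P.argmax(axis=1)          # greedy head choice (Eisner's algorithm would enforce a tree)
gold = np.array([1, 4, 1, 1])
print(uas(pred, gold))           # 1.0 for this toy example
```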
{
"text": "With Pre-Training We also inspected the pretrained latent graphs. Figure 3-(a) shows the dependency structure output by the pre-trained latent parser for the same sentence in Figure 1 . This is an ordinary dependency tree, and the head selection is almost deterministic; that is, for each word, the largest weight of the head selection is close to 1.0. By contrast, the weight values are more evenly distributed in the case of LGP-NMT as shown in Figure 1 . After the overall NMT model training, the latent parser is adapted to the translation task, and Figure 3-(b) shows the adapted latent graph. Again, we can see that the adapted weight values are also distributed and different from the original pre-trained weight values, which suggests that human-defined syntax is not always optimal for the target task.",
"cite_spans": [],
"ref_spans": [
{
"start": 66,
"end": 78,
"text": "Figure 3-(a)",
"ref_id": "FIGREF4"
},
{
"start": 175,
"end": 183,
"text": "Figure 1",
"ref_id": "FIGREF1"
},
{
"start": 447,
"end": 455,
"text": "Figure 1",
"ref_id": "FIGREF1"
},
{
"start": 554,
"end": 566,
"text": "Figure 3-(b)",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Analysis on Learned Latent Graphs",
"sec_num": "6.2"
},
{
"text": "The UAS of the pre-trained dependency trees is 92.52% 9 , and that of the adapted latent graphs is 18.94%. Surprisingly, the resulting UAS (18.94%) is lower than the UAS of our model without pretraining (24.52%). However, in terms of the translation accuracy, our model with pre-training is better than that without pre-training. These results suggest that human-annotated treebanks can provide useful prior knowledge to guide the overall model training by pre-training, but the resulting sentence structures adapted to the target task do not need to highly correlate with the treebanks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis on Learned Latent Graphs",
"sec_num": "6.2"
},
{
"text": "While initial studies on NMT treat each sentence as a sequence of words (Bahdanau et al., 2015; Sutskever et al., 2014) , researchers have recently started investigating into the use of syntactic structures in NMT models (Bastings et al., 2017; Chen et al., 2017; Eriguchi et al., 2016a Eriguchi et al., ,b, 2017 Li et al., 2017; Stahlberg et al., 2016; Yang et al., 2017) . In particular, Eriguchi et al. (2016b) introduced a tree-to-sequence NMT model by building a tree-structured encoder on top of a standard sequential encoder, which motivated the use of the dependency composition vectors in our proposed model. Prior to the advent of NMT, the syntactic structures had been successfully used in statistical machine translation systems (Neubig and Duh, 2014; Yamada and Knight, 2001 ). These syntax-based approaches are pipelined; a syntactic parser is first trained by supervised learning using a treebank such as the WSJ dataset, and then the parser is used to automatically extract syntactic information for machine translation. They rely on the output from the parser, and therefore parsing errors are propagated through the whole systems. By contrast, our model allows the parser to be adapted to the translation task, thereby providing a first step towards addressing ambiguous syntactic and semantic problems, such as domain-specific selectional preference and PP attachments, in a task-oriented fashion.",
"cite_spans": [
{
"start": 72,
"end": 95,
"text": "(Bahdanau et al., 2015;",
"ref_id": "BIBREF0"
},
{
"start": 96,
"end": 119,
"text": "Sutskever et al., 2014)",
"ref_id": "BIBREF40"
},
{
"start": 221,
"end": 244,
"text": "(Bastings et al., 2017;",
"ref_id": null
},
{
"start": 245,
"end": 263,
"text": "Chen et al., 2017;",
"ref_id": "BIBREF4"
},
{
"start": 264,
"end": 286,
"text": "Eriguchi et al., 2016a",
"ref_id": "BIBREF9"
},
{
"start": 287,
"end": 312,
"text": "Eriguchi et al., ,b, 2017",
"ref_id": null
},
{
"start": 313,
"end": 329,
"text": "Li et al., 2017;",
"ref_id": "BIBREF25"
},
{
"start": 330,
"end": 353,
"text": "Stahlberg et al., 2016;",
"ref_id": "BIBREF39"
},
{
"start": 354,
"end": 372,
"text": "Yang et al., 2017)",
"ref_id": "BIBREF44"
},
{
"start": 390,
"end": 413,
"text": "Eriguchi et al. (2016b)",
"ref_id": "BIBREF10"
},
{
"start": 741,
"end": 763,
"text": "(Neubig and Duh, 2014;",
"ref_id": "BIBREF31"
},
{
"start": 764,
"end": 787,
"text": "Yamada and Knight, 2001",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "Our model learns latent graph structures in a source-side language. Eriguchi et al. (2017) have proposed a model which learns to parse and translate by using automatically-parsed data. Thus, it is also an interesting direction to learn latent structures in a target-side language.",
"cite_spans": [
{
"start": 68,
"end": 90,
"text": "Eriguchi et al. (2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "As for the learning of latent syntactic structure, there are several studies on learning task-oriented syntactic structures. Yogatama et al. (2017) used a reinforcement learning method on shift-reduce action sequences to learn task-oriented binary constituency trees. They have shown that the learned trees do not necessarily highly correlate with the human-annotated treebanks, which is consistent with our experimental results. Socher et al. (2011) used a recursive autoencoder model to greedily construct a binary constituency tree for each sentence. The autoencoder objective works as a regularization term for sentiment classification tasks. Prior to these deep learning approaches, Wu (1997) presented a method for bilingual parsing. One of the characteristics of our model is directly using the soft connections of the graph edges with the real-valued weights, whereas all of the above-mentioned methods use one best structure for each sentence. Our model is based on dependency structures, and it is a promising future direction to jointly learn dependency and constituency structures in a task-oriented fashion.",
"cite_spans": [
{
"start": 125,
"end": 147,
"text": "Yogatama et al. (2017)",
"ref_id": "BIBREF45"
},
{
"start": 430,
"end": 450,
"text": "Socher et al. (2011)",
"ref_id": "BIBREF38"
},
{
"start": 688,
"end": 697,
"text": "Wu (1997)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "Finally, more related to our model, Kim et al. (2017) applied their structured attention networks to a Natural Language Inference (NLI) task for learning dependency-like structures. They showed that pre-training their model by a parsing dataset did not improve accuracy on the NLI task. By contrast, our experiments show that such a parsing dataset can be effectively used to improve translation accuracy by varying the size of the dataset and by avoiding strong overfitting. Moreover, our translation examples show the concrete benefit of learning task-oriented latent graph structures.",
"cite_spans": [
{
"start": 36,
"end": 53,
"text": "Kim et al. (2017)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "We have presented an end-to-end NMT model by jointly learning translation and source-side latent graph representations. By pre-training our model using treebank annotations, our model significantly outperforms both a pipelined syntax-based model and a state-of-the-art sequential model. On English-to-Japanese translation, our model outperforms the previous best models by a large margin. In future work, we investigate the effectiveness of our approach in different types of target tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "8"
},
{
"text": "The pre-trained embeddings can be found at https: //github.com/hassyGo/charNgram2vec.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We did not observe such significant difference when using the larger datasets, and we used all the training samples in the remaining part of this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "These English sentences were created by manual simplification of sentences in the development data.8 The translations were obtained at https: //translate.google.com in Feb. and Mar. 2017. models. It should be noted that simply using standard treebank-based parsers does not necessarily address this error, because our pre-trained dependency parser interprets that \"obliquely\" is a modifier of the verb \"existed\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The UAS is significantly lower than the reported score inHashimoto et al. (2017). The reason is described in Section 4.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the anonymous reviewers and Akiko Eriguchi for their helpful comments and suggestions. We also thank Yuchen Qiao and Kenjiro Taura for their help in speeding up our training code. This work was supported by CREST, JST, and JSPS KAKENHI Grant Number 17J09620.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural Machine Translation by Jointly Learning to Align and Translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In Proceedings of the 3rd International Conference on Learning Rep- resentations.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Graph Convolutional Encoders for Syntax-aware Neural Machine Translation. arXiv",
"authors": [
{
"first": "Joost",
"middle": [],
"last": "Bastings",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "Wilker",
"middle": [],
"last": "Aziz",
"suffix": ""
},
{
"first": "Diego",
"middle": [],
"last": "Marcheggiani",
"suffix": ""
},
{
"first": "Khalil",
"middle": [],
"last": "Sima",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Sima'an. 2017. Graph Convolutional Encoders for Syntax-aware Neural Machine Translation. arXiv, cs.CL 1704.04675.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Enriching Word Vectors with Subword Information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching Word Vectors with Subword Information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A Fast and Accurate Dependency Parser using Neural Networks",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "740--750",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danqi Chen and Christopher Manning. 2014. A Fast and Accurate Dependency Parser using Neural Net- works. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Process- ing, pages 740-750.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Improved Neural Machine Translation with a Syntax-Aware Encoder and Decoder",
"authors": [
{
"first": "Huadong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Shujian",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huadong Chen, Shujian Huang, David Chiang, and Jia- jun Chen. 2017. Improved Neural Machine Transla- tion with a Syntax-Aware Encoder and Decoder. In Proceedings of the 55th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers). To appear.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "On the Properties of Neural Machine Translation: Encoder-Decoder Approaches",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merrienboer",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation",
"volume": "",
"issue": "",
"pages": "103--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014. On the Prop- erties of Neural Machine Translation: Encoder- Decoder Approaches. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Struc- ture in Statistical Translation, pages 103-111.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Kyoto University Participation to WAT 2016",
"authors": [
{
"first": "Fabien",
"middle": [],
"last": "Cromieres",
"suffix": ""
},
{
"first": "Chenhui",
"middle": [],
"last": "Chu",
"suffix": ""
},
{
"first": "Toshiaki",
"middle": [],
"last": "Nakazawa",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 3rd Workshop on Asian Translation",
"volume": "",
"issue": "",
"pages": "166--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabien Cromieres, Chenhui Chu, Toshiaki Nakazawa, and Sadao Kurohashi. 2016. Kyoto University Par- ticipation to WAT 2016. In Proceedings of the 3rd Workshop on Asian Translation, pages 166-174.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Deep Biaffine Attention for Neural Dependency Parsing",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Dozat and Christopher D. Manning. 2017. Deep Biaffine Attention for Neural Dependency Parsing. In Proceedings of the 5th International Conference on Learning Representations.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Efficient Normal-Form Parsing for Combinatory Categorial Grammar",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "79--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Eisner. 1996. Efficient Normal-Form Parsing for Combinatory Categorial Grammar. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, pages 79-86.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Character-based Decoding in Tree-to-Sequence Attention-based Neural Machine Translation",
"authors": [
{
"first": "Akiko",
"middle": [],
"last": "Eriguchi",
"suffix": ""
},
{
"first": "Kazuma",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 3rd Workshop on Asian Translation",
"volume": "",
"issue": "",
"pages": "175--183",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka. 2016a. Character-based Decoding in Tree-to-Sequence Attention-based Neural Machine Translation. In Proceedings of the 3rd Workshop on Asian Translation, pages 175-183.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Tree-to-Sequence Attentional Neural Machine Translation",
"authors": [
{
"first": "Akiko",
"middle": [],
"last": "Eriguchi",
"suffix": ""
},
{
"first": "Kazuma",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "823--833",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka. 2016b. Tree-to-Sequence Attentional Neural Machine Translation. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 823-833.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Learning to Parse and Translate Improves Neural Machine Translation",
"authors": [
{
"first": "Akiko",
"middle": [],
"last": "Eriguchi",
"suffix": ""
},
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akiko Eriguchi, Yoshimasa Tsuruoka, and Kyunghyun Cho. 2017. Learning to Parse and Translate Im- proves Neural Machine Translation. In Proceed- ings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers). To appear.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Framewise Phoneme Classification with Bidirectional LSTM and Other Neural Network Architectures",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Jurgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 2005,
"venue": "Neural Networks",
"volume": "18",
"issue": "5",
"pages": "602--610",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves and Jurgen Schmidhuber. 2005. Frame- wise Phoneme Classification with Bidirectional LSTM and Other Neural Network Architectures. Neural Networks, 18(5):602-610.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Domain Adaptation and Attention-Based Unknown Word Replacement in Chinese-to-Japanese Neural Machine Translation",
"authors": [
{
"first": "Kazuma",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Akiko",
"middle": [],
"last": "Eriguchi",
"suffix": ""
},
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 3rd Workshop on Asian Translation",
"volume": "",
"issue": "",
"pages": "75--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kazuma Hashimoto, Akiko Eriguchi, and Yoshimasa Tsuruoka. 2016. Domain Adaptation and Attention- Based Unknown Word Replacement in Chinese-to- Japanese Neural Machine Translation. In Proceed- ings of the 3rd Workshop on Asian Translation, pages 75-83.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Simple Customization of Recursive Neural Networks for Semantic Relation Classification",
"authors": [
{
"first": "Kazuma",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Miwa",
"suffix": ""
},
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
},
{
"first": "Takashi",
"middle": [],
"last": "Chikayama",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1372--1376",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kazuma Hashimoto, Makoto Miwa, Yoshimasa Tsu- ruoka, and Takashi Chikayama. 2013. Simple Cus- tomization of Recursive Neural Networks for Se- mantic Relation Classification. In Proceedings of the 2013 Conference on Empirical Methods in Nat- ural Language Processing, pages 1372-1376.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A Joint Many-Task Model: Growing a Neural Network for Multiple NLP Tasks",
"authors": [
{
"first": "Kazuma",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsu- ruoka, and Richard Socher. 2017. A Joint Many- Task Model: Growing a Neural Network for Mul- tiple NLP Tasks. In Proceedings of the 2017 Con- ference on Empirical Methods in Natural Language Processing. To appear.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Improving neural networks by preventing co-adaptation of feature detectors",
"authors": [
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
},
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhut- dinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. CoRR, abs/1207.0580.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Tying Word Vectors and Word Classifiers: A Loss Framework for Language Modeling. arXiv",
"authors": [
{
"first": "Khashayar",
"middle": [],
"last": "Hakan Inan",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Khosravi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hakan Inan, Khashayar Khosravi, and Richard Socher. 2016. Tying Word Vectors and Word Classifiers: A Loss Framework for Language Modeling. arXiv, cs.CL 1611.01462.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Automatic Evaluation of Translation Quality for Distant Language Pairs",
"authors": [
{
"first": "Hideki",
"middle": [],
"last": "Isozaki",
"suffix": ""
},
{
"first": "Tsutomu",
"middle": [],
"last": "Hirao",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Katsuhito",
"middle": [],
"last": "Sudoh",
"suffix": ""
},
{
"first": "Hajime",
"middle": [],
"last": "Tsukada",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "944--952",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hideki Isozaki, Tsutomu Hirao, Kevin Duh, Katsuhito Sudoh, and Hajime Tsukada. 2010. Automatic Eval- uation of Translation Quality for Distant Language Pairs. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Process- ing, pages 944-952.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Montreal Neural Machine Translation Systems for WMTf15",
"authors": [
{
"first": "S\u00e9bastien",
"middle": [],
"last": "Jean",
"suffix": ""
},
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Memisevic",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Tenth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "134--140",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S\u00e9bastien Jean, Orhan Firat, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. Montreal Neural Machine Translation Systems for WMTf15. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 134-140.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "BlackOut: Speeding up Recurrent Neural Network Language Models With Very Large Vocabularies",
"authors": [
{
"first": "S",
"middle": [
"V N"
],
"last": "Shihao Ji",
"suffix": ""
},
{
"first": "Nadathur",
"middle": [],
"last": "Vishwanathan",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"J"
],
"last": "Satish",
"suffix": ""
},
{
"first": "Pradeep",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dubey",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 4th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shihao Ji, S. V. N. Vishwanathan, Nadathur Satish, Michael J. Anderson, and Pradeep Dubey. 2016. BlackOut: Speeding up Recurrent Neural Network Language Models With Very Large Vocabularies. In Proceedings of the 4th International Conference on Learning Representations.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "An Empirical Exploration of Recurrent Network Architectures",
"authors": [
{
"first": "Rafal",
"middle": [],
"last": "Jozefowicz",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Zaremba",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 32nd International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "2342--2350",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. 2015. An Empirical Exploration of Re- current Network Architectures. In Proceedings of the 32nd International Conference on Machine Learning, pages 2342-2350.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Deep Biaffine Attention for Neural Dependency Parsing",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Carl",
"middle": [],
"last": "Denton",
"suffix": ""
},
{
"first": "Luong",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim, Carl Denton, Luong Hoang, and Alexan- der M. Rush. 2017. Deep Biaffine Attention for Neural Dependency Parsing. In Proceedings of the 5th International Conference on Learning Represen- tations.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "NAVER Machine Translation System for WAT",
"authors": [
{
"first": "Hyoung-Gyu",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Jaesong",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Jun-Seok",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Chang-Ki",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2nd Workshop on Asian Translation",
"volume": "",
"issue": "",
"pages": "69--73",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hyoung-Gyu Lee, JaeSong Lee, Jun-Seok Kim, and Chang-Ki Lee. 2015. NAVER Machine Translation System for WAT 2015. In Proceedings of the 2nd Workshop on Asian Translation, pages 69-73.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "When Are Tree Structures Necessary for Deep Learning of Representations?",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2304--2314",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Thang Luong, Dan Jurafsky, and Eduard Hovy. 2015. When Are Tree Structures Necessary for Deep Learning of Representations? In Proceed- ings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2304-2314.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Modeling Source Syntax for Neural Machine Translation",
"authors": [
{
"first": "Junhui",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Deyi",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Zhaopeng",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Muhua",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junhui Li, Deyi Xiong, Zhaopeng Tu, Muhua Zhu, Min Zhang, and Guodong Zhou. 2017. Modeling Source Syntax for Neural Machine Translation. In Proceed- ings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers). To appear.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Effective Approaches to Attentionbased Neural Machine Translation",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1412--1421",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015. Effective Approaches to Attention- based Neural Machine Translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Online Large-Margin Training of Dependency Parsers",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Koby",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "91--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online Large-Margin Training of De- pendency Parsers. In Proceedings of the 43rd An- nual Meeting of the Association for Computational Linguistics, pages 91-98.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Feature Forest Models for Probabilistic HPSG Parsing",
"authors": [
{
"first": "Yusuke",
"middle": [],
"last": "Miyao",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "1",
"pages": "35--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yusuke Miyao and Jun'ichi Tsujii. 2008. Feature For- est Models for Probabilistic HPSG Parsing. Compu- tational Linguistics, 34(1):35-80.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Overview of the 3rd Workshop on Asian Translation",
"authors": [
{
"first": "Toshiaki",
"middle": [],
"last": "Nakazawa",
"suffix": ""
},
{
"first": "Hideya",
"middle": [],
"last": "Mino",
"suffix": ""
},
{
"first": "Chenchen",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Isao",
"middle": [],
"last": "Goto",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 3rd Workshop on Asian Translation (WAT2016)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toshiaki Nakazawa, Hideya Mino, Chenchen Ding, Isao Goto, Graham Neubig, Sadao Kurohashi, and Eiichiro Sumita. 2016a. Overview of the 3rd Work- shop on Asian Translation. In Proceedings of the 3rd Workshop on Asian Translation (WAT2016).",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "ASPEC: Asian Scientific Paper Excerpt Corpus",
"authors": [
{
"first": "Toshiaki",
"middle": [],
"last": "Nakazawa",
"suffix": ""
},
{
"first": "Manabu",
"middle": [],
"last": "Yaguchi",
"suffix": ""
},
{
"first": "Kiyotaka",
"middle": [],
"last": "Uchimoto",
"suffix": ""
},
{
"first": "Masao",
"middle": [],
"last": "Utiyama",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
},
{
"first": "Hitoshi",
"middle": [],
"last": "Isahara",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th Conference on International Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toshiaki Nakazawa, Manabu Yaguchi, Kiyotaka Uchi- moto, Masao Utiyama, Eiichiro Sumita, Sadao Kurohashi, and Hitoshi Isahara. 2016b. ASPEC: Asian Scientific Paper Excerpt Corpus. In Proceed- ings of the 10th Conference on International Lan- guage Resources and Evaluation.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "On the Elements of an Accurate Tree-to-String Machine Translation System",
"authors": [
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "143--149",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graham Neubig and Kevin Duh. 2014. On the Ele- ments of an Accurate Tree-to-String Machine Trans- lation System. In Proceedings of the 52nd Annual Meeting of the Association for Computational Lin- guistics (Volume 2: Short Papers), pages 143-149.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Neural Reranking Improves Subjective Quality of Machine Translation: NAIST at WAT2015",
"authors": [
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Morishita",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Nakamura",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2nd Workshop on Asian Translation (WAT2015)",
"volume": "",
"issue": "",
"pages": "35--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graham Neubig, Makoto Morishita, and Satoshi Naka- mura. 2015. Neural Reranking Improves Subjec- tive Quality of Machine Translation: NAIST at WAT2015. In Proceedings of the 2nd Workshop on Asian Translation (WAT2015), pages 35-41.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Computer-Intensive Methods for Testing Hypotheses: An Introduction",
"authors": [
{
"first": "Eric",
"middle": [
"W"
],
"last": "Noreen",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric W. Noreen. 1989. Computer-Intensive Methods for Testing Hypotheses: An Introduction. Wiley- Interscience.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "BLEU: A Method for Automatic Evaluation of Machine Translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: A Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting on Association for Com- putational Linguistics, pages 311-318.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Using the Output Embedding to Improve Language Models",
"authors": [
{
"first": "Ofir",
"middle": [],
"last": "Press",
"suffix": ""
},
{
"first": "Lior",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "157--163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ofir Press and Lior Wolf. 2017. Using the Output Em- bedding to Improve Language Models. In Proceed- ings of the 15th Conference of the European Chap- ter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 157-163.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Linguistic Input Features Improve Neural Machine Translation",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the First Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "83--91",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich and Barry Haddow. 2016. Linguistic In- put Features Improve Neural Machine Translation. In Proceedings of the First Conference on Machine Translation, pages 83-91.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Neural Machine Translation of Rare Words with Subword Units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural Machine Translation of Rare Words with Subword Units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Semi-Supervised Recursive Autoencoders for Predicting Sentiment Distributions",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"H"
],
"last": "Huang",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "151--161",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Jeffrey Pennington, Eric H. Huang, Andrew Y. Ng, and Christopher D. Manning. 2011. Semi-Supervised Recursive Autoencoders for Pre- dicting Sentiment Distributions. In Proceedings of the 2011 Conference on Empirical Methods in Nat- ural Language Processing, pages 151-161.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Syntactically Guided Neural Machine Translation",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Stahlberg",
"suffix": ""
},
{
"first": "Eva",
"middle": [],
"last": "Hasler",
"suffix": ""
},
{
"first": "Aurelien",
"middle": [],
"last": "Waite",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Byrne",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "299--305",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Stahlberg, Eva Hasler, Aurelien Waite, and Bill Byrne. 2016. Syntactically Guided Neural Machine Translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics (Volume 2: Short Papers), pages 299-305.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Sequence to Sequence Learning with Neural Networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems",
"volume": "27",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to Sequence Learning with Neural Net- works. In Advances in Neural Information Process- ing Systems 27, pages 3104-3112.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Charagram: Embedding Words and Sentences via Character n-grams",
"authors": [
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Livescu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1504--1515",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Charagram: Embedding Words and Sentences via Character n-grams. In Proceedings of the 2016 Conference on Empirical Methods in Nat- ural Language Processing, pages 1504-1515.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Stochastic Inversion Transduction Grammars and Bilingual Parsing of Parallel Corpora",
"authors": [
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Linguistics",
"volume": "23",
"issue": "3",
"pages": "377--404",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekai Wu. 1997. Stochastic Inversion Transduction Grammars and Bilingual Parsing of Parallel Cor- pora. Computational Linguistics, 23(3):377-404.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "A Syntaxbased Statistical Translation Model",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of 39th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "523--530",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenji Yamada and Kevin Knight. 2001. A Syntax- based Statistical Translation Model. In Proceedings of 39th Annual Meeting of the Association for Com- putational Linguistics, pages 523-530.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Towards Bidirectional Hierarchical Representations for Attention-Based Neural Machine Translation",
"authors": [
{
"first": "Baosong",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Derek",
"middle": [
"F"
],
"last": "Wong",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Lidia",
"middle": [
"S"
],
"last": "Chao",
"suffix": ""
},
{
"first": "Jingbo",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baosong Yang, Derek F. Wong, Tong Xiao, Lidia S. Chao, and Jingbo Zhu. 2017. Towards Bidirec- tional Hierarchical Representations for Attention- Based Neural Machine Translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. To appear.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Learning to Compose Words into Sentences with Reinforcement Learning",
"authors": [
{
"first": "Dani",
"middle": [],
"last": "Yogatama",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dani Yogatama, Phil Blunsom, Chris Dyer, Edward Grefenstette, and Wang Ling. 2017. Learning to Compose Words into Sentences with Reinforcement Learning. In Proceedings of the 5th International Conference on Learning Representations.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Dependency Parsing as Head Selection",
"authors": [
{
"first": "Xingxing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jianpeng",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "665--676",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xingxing Zhang, Jianpeng Cheng, and Mirella Lapata. 2017. Dependency Parsing as Head Selection. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Lin- guistics, pages 665-676.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Evaluating Neural Machine Translation in English-Japanese Task",
"authors": [
{
"first": "Zhongyuan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2nd Workshop on Asian Translation",
"volume": "",
"issue": "",
"pages": "61--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhongyuan Zhu. 2015. Evaluating Neural Machine Translation in English-Japanese Task. In Proceed- ings of the 2nd Workshop on Asian Translation, pages 61-68.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Simple, Fast Noise-Contrastive Estimation for Large RNN Vocabularies",
"authors": [
{
"first": "Barret",
"middle": [],
"last": "Zoph",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1217--1222",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barret Zoph, Ashish Vaswani, Jonathan May, and Kevin Knight. 2016. Simple, Fast Noise-Contrastive Estimation for Large RNN Vocabularies. In Pro- ceedings of the 2016 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1217-1222.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"text": "An example of the learned latent graphs. Edges with a small weight are omitted.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF2": {
"text": "Reference: ImRNA \u306f\u30a2\u30f3\u30c9\u30ed\u30b2\u30f3\u306b\u3088\u308a\u8ca0\u306b\u8abf\u7bc0\u3055\u308c\u308b\u3002 LGP-NMT+: \u30a2\u30f3\u30c9\u30ed\u30b2\u30f3\u306f ImRNA \u3092\u8ca0\u306b\u5236\u5fa1\u3057\u3066\u3044\u308b\u3002 (The androgen negatively controls ImRNA .) Google trans: \u30a2\u30f3\u30c9\u30ed\u30b2\u30f3\u306f\u8ca0\u306e ImRNA \u3092\u5236\u5fa1\u3059\u308b\u3002 LGP-NMT: \u30a2\u30f3\u30c9\u30ed\u30b2\u30f3\u306f\u8ca0\u306e ImRNA \u3092\u5236\u5fa1\u3059\u308b\u3002 (The androgen controls negative ImRNA .) SEQ: \u30a2\u30f3\u30c9\u30ed\u30b2\u30f3\u306f\u8ca0\u306e ImRNA \u3092\u8ca0\u306b\u5236\u5fa1\u3059\u308b\u3002 (The androgen negatively controls negative ImRNA .)",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF3": {
"text": "English-to-Japanese translation examples for focusing on the usage of adverbs.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF4": {
"text": "An example of the pre-trained dependency structures (a) and its corresponding latent graph adapted by our model (b).",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF0": {
"html": null,
"num": null,
"text": "shows the results of using the small training dataset. LGP-NMT performs worse than SEQ 31\u00b11.49 65.96\u00b11.86 41.13\u00b12.66LGP-NMT+ 16.81\u00b10.31 69.03\u00b10.28 38.33\u00b11.18 SEQ 15.37\u00b11.18 67.01\u00b11.55 38.12\u00b12.52 UNI 15.13\u00b11.67 66.95\u00b11.94 39.25\u00b12.98 DEP 13.34\u00b10.67 64.95\u00b10.75 43.89\u00b11.52Table 1: Evaluation on the development data using the small training dataset (20,000 pairs).",
"content": "<table><tr><td>BLEU</td><td>RIBES</td><td>Perplexity</td></tr><tr><td>LGP-NMT 14.K BLEU</td><td>RIBES</td><td>Perplexity</td></tr><tr><td colspan=\"3\">0 14.31\u00b11.49 65.96\u00b11.86 41.13\u00b12.66</td></tr><tr><td colspan=\"3\">5,000 16.99\u00b11.00 69.03\u00b10.93 37.14\u00b11.96</td></tr><tr><td colspan=\"3\">10,000 16.81\u00b10.31 69.03\u00b10.28 38.33\u00b11.18</td></tr><tr><td colspan=\"3\">All 16.09\u00b10.56 68.19\u00b10.59 39.24\u00b11.88</td></tr></table>",
"type_str": "table"
},
"TABREF1": {
"html": null,
"num": null,
"text": "Effects of the size K of the training datasets for POS tagging and dependency parsing.",
"content": "<table/>",
"type_str": "table"
},
"TABREF2": {
"html": null,
"num": null,
"text": "70\u00b10.27 77.51\u00b10.13 12.10\u00b10.16 LGP-NMT+ 29.06\u00b10.25 77.57\u00b10.24 12.09\u00b10.27 SEQ 28.60\u00b10.24 77.39\u00b10.15 12.15\u00b10.12 UNI 28.25\u00b10.35 77.13\u00b10.20 12.37\u00b10.08 DEP 26.83\u00b10.38 76.05\u00b10.22 13.33\u00b10.23",
"content": "<table><tr><td/><td>BLEU</td><td>RIBES</td><td>Perplexity</td></tr><tr><td>LGP-NMT</td><td>28.</td><td/></tr></table>",
"type_str": "table"
},
"TABREF3": {
"html": null,
"num": null,
"text": "BLEU (B.) and RIBES (R.) scores on the development data using the large training dataset.",
"content": "<table><tr><td>BLEU RIBES</td></tr></table>",
"type_str": "table"
},
"TABREF4": {
"html": null,
"num": null,
"text": "BLEU and RIBES scores on the test data.",
"content": "<table/>",
"type_str": "table"
}
}
}
}