{
"paper_id": "D18-1048",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:50:45.986871Z"
},
"title": "Adaptive Multi-pass Decoder for Neural Machine Translation",
"authors": [
{
"first": "Xinwei",
"middle": [],
"last": "Geng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harbin Institute of Technology",
"location": {
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Xiaocheng",
"middle": [],
"last": "Feng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harbin Institute of Technology",
"location": {
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harbin Institute of Technology",
"location": {
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harbin Institute of Technology",
"location": {
"country": "China"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Although end-to-end neural machine translation (NMT) has achieved remarkable progress in the recent years, the idea of adopting multipass decoding mechanism into conventional NMT is not well explored. In this paper, we propose a novel architecture called adaptive multi-pass decoder, which introduces a flexible multi-pass polishing mechanism to extend the capacity of NMT via reinforcement learning. More specifically, we adopt an extra policy network to automatically choose a suitable and effective number of decoding passes, according to the complexity of source sentences and the quality of the generated translations. Extensive experiments on Chinese-English translation demonstrate the effectiveness of our proposed adaptive multi-pass decoder upon the conventional NMT with a significant improvement about 1.55 BLEU.",
"pdf_parse": {
"paper_id": "D18-1048",
"_pdf_hash": "",
"abstract": [
{
"text": "Although end-to-end neural machine translation (NMT) has achieved remarkable progress in the recent years, the idea of adopting multipass decoding mechanism into conventional NMT is not well explored. In this paper, we propose a novel architecture called adaptive multi-pass decoder, which introduces a flexible multi-pass polishing mechanism to extend the capacity of NMT via reinforcement learning. More specifically, we adopt an extra policy network to automatically choose a suitable and effective number of decoding passes, according to the complexity of source sentences and the quality of the generated translations. Extensive experiments on Chinese-English translation demonstrate the effectiveness of our proposed adaptive multi-pass decoder upon the conventional NMT with a significant improvement about 1.55 BLEU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In the past several years, end-to-end neural machine translation (NMT) (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; has attracted increasing attention from both academic and industry communities. Compared with conventional statistical machine translation (SMT) (Brown et al., 1993; Koehn et al., 2003) , which needs to explicitly model latent structures, NMT adopts a unified encoder-decoder framework to directly transform a source sentence into a target sentence. Furthermore, the introduction of attention mechanism enhances the capability of NMT in capturing long-distance dependencies.",
"cite_spans": [
{
"start": 71,
"end": 103,
"text": "(Kalchbrenner and Blunsom, 2013;",
"ref_id": "BIBREF10"
},
{
"start": 104,
"end": 127,
"text": "Sutskever et al., 2014;",
"ref_id": "BIBREF17"
},
{
"start": 273,
"end": 293,
"text": "(Brown et al., 1993;",
"ref_id": "BIBREF1"
},
{
"start": 294,
"end": 313,
"text": "Koehn et al., 2003)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently, a number of authors have endeavored to adopt the polishing mechanism into NMT. Similar to human cognitive process for writing a good paper, their models first create a complete * Corresponding author.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "h\u00e9sh\u00eczh\u00f9 xi\u0101nsh\u0113ng d\u0113 w\u011bir\u00e8nq\u012b w\u00e9i y\u012b ni\u00e1n , y\u01d0 p\u00e8ih\u00e9 q\u00ed w\u00e9i f\u00e1ngw\u011bihu\u00ec w\u011biyu\u00e1n d\u0113 r\u00e8ngq\u012b ji\u00e8m\u01cen r\u00ecq\u012b , q\u00edt\u0101 x\u012bn w\u011bir\u00e8ngw\u011biyu\u0101n d\u0113 r\u00e8ngq\u012b z\u00e9 w\u00e9i li\u01cengn\u00edan .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source",
"sec_num": null
},
{
"text": "Reference all appointments are for two years , except that of mr ho sai -chu 's which is for one year in order to tie in with the expiry date of his appointment as an ha member .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source",
"sec_num": null
},
{
"text": "1st-pass mr ho sai -chu 's UNK is a year -long term of two years with a term of two years as the term of his term of office of the ha .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source",
"sec_num": null
},
{
"text": "2nd-pass mr ho sai -chu 's UNK is a year -long term of two years with a term of two years to serve as the term of office of the ha .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source",
"sec_num": null
},
{
"text": "mr ho sai -chu 's UNK is a year -long term of two years with a term of two years to tie in with the expiry date of his term of office .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3rd-pass",
"sec_num": null
},
{
"text": "mr ho sai -chu has been serving as a member of authority for a term of two years with a term of two years . draft and then polish it based on global understanding of the whole draft (Niehues et al., 2016; Chatterjee et al., 2016; Zhou et al., 2017; Xia et al., 2017; Junczys Dowmunt and Grundkiewicz, 2017) . Moreover, Zhang et al. (2018) introduces a backward decoder to better exploit the right-toleft target-side contexts. Generally these methods employ two separate decoders to accomplish the polishing task.",
"cite_spans": [
{
"start": 182,
"end": 204,
"text": "(Niehues et al., 2016;",
"ref_id": "BIBREF14"
},
{
"start": 205,
"end": 229,
"text": "Chatterjee et al., 2016;",
"ref_id": "BIBREF2"
},
{
"start": 230,
"end": 248,
"text": "Zhou et al., 2017;",
"ref_id": "BIBREF22"
},
{
"start": 249,
"end": 266,
"text": "Xia et al., 2017;",
"ref_id": "BIBREF20"
},
{
"start": 267,
"end": 306,
"text": "Junczys Dowmunt and Grundkiewicz, 2017)",
"ref_id": "BIBREF9"
},
{
"start": 319,
"end": 338,
"text": "Zhang et al. (2018)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "4th-pass",
"sec_num": null
},
{
"text": "Although these polishing mechanism-based approaches demonstrate their effectiveness with twopass decoding, the idea of multi-pass decoding is not well explored for NMT. Motivated by it, we first propose a novel multi-pass decoder to perform the translation procedure with a fixed number of decoding passes, referred to as decoding depth. According to the preliminary results, just as expected, multi-pass decoding really benefit to most translations. However, in some cases, the more decoding passes perhaps lead to the poor translation. For example in Table 1 , the 3rd-pass de-coding achieves a better result compared to 1stand 2nd-pass decoding. Nevertheless, a drastic decrease arises, when we perform the 4th-pass decoding. Therefore, it's necessary to introduce a flexible multi-pass decoding, which has the ability to adaptively choose the suitable decoding passes.",
"cite_spans": [],
"ref_spans": [
{
"start": 553,
"end": 560,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "4th-pass",
"sec_num": null
},
{
"text": "Towards above goal, we further propose a novel framework called adaptive multi-pass decoder to automatically choose a proper decoding depth using reinforcement learning. Our model considers multi-pass decoding as a sequential decision making process, where continuing decoding or halt is chosen at each step. An extra policy network is employed to learn to automatically choose to continue next pass decoding or halt via reinforcement learning. For the purpose of making accurate and effective choices, the policy network employs recurrent neural network to capture the complexity of source sentence as well as the difference between the consecutive generated translations. Extensive experiments on Chinese-English translation show the proposed adaptive multi-pass decoder is capable of choosing a suitable decoding depth and significantly improves translation performance over conventional NMT model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4th-pass",
"sec_num": null
},
{
"text": "Given a source sentence x = x 1 , . . . , x m , . . . , x M and a target sentence y = y 1 , . . . , y n , . . . , y N , end-to-end neural machine translation directly models translation probability word by word as a single, large neural network:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (y|x; \u03b8) = N n=1 P (y n |x, y <n ; \u03b8)",
"eq_num": "(1)"
}
],
"section": "Background",
"sec_num": "2"
},
{
"text": "where \u03b8 is a set of model parameters and y <n denotes a partial translation. Prediction of n-th word is generally made in an encoder-decoder framework:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (y n |x, y <n ; \u03b8) = g(y n\u22121 , s n , c n )",
"eq_num": "(2)"
}
],
"section": "Background",
"sec_num": "2"
},
{
"text": "where g(\u2022) is a non-linear function, y n\u22121 denotes the previously generated word, s n is n-th decoding hidden state, and c n is a context vector for generating n-th target word. The decoder state s n is computed by RNNs as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s n = f (s n\u22121 , y n\u22121 , c n )",
"eq_num": "(3)"
}
],
"section": "Background",
"sec_num": "2"
},
{
"text": "where f (\u2022) is an activation function. Actually it's found gated RNN alternatives such as LSTM (Hochreiter and Schmidhuber, 1997) or GRU often achieve better performance than vanilla ones. c n is a dynamic vector that selectively summarizes certain parts of source sentence at each decoding step:",
"cite_spans": [
{
"start": 95,
"end": 129,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c n = M m=1 \u03b1 n,m h m",
"eq_num": "(4)"
}
],
"section": "Background",
"sec_num": "2"
},
{
"text": "where \u03b1 m,n measures how well x m and y n are aligned, calculated by attention model Luong et al., 2015) , and h m is the encoder hidden state of the m-th source word. For the purpose of capturing both forward and backward contexts, bidirectional RNN (Schuster and Paliwal, 1997) is often employed as the encoder which converts the source sentence into an annotation sequence",
"cite_spans": [
{
"start": 85,
"end": 104,
"text": "Luong et al., 2015)",
"ref_id": "BIBREF13"
},
{
"start": 251,
"end": 279,
"text": "(Schuster and Paliwal, 1997)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "h = {h 1 , . . . , h m , . . . , h M }, where h m = [ \u2212 \u2192 h m , \u2190 \u2212 h m ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "captures information about mth word with respect to the preceding and following words in the source sentence respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
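To make the attention-based decoding step concrete, the following minimal NumPy sketch (our own illustration, not the authors' code; all weights are random placeholders and the toy sizes are assumptions) walks through Eqs. (2)-(4): compute additive alignment weights over the encoder annotations, form the context vector c_n, update the decoder state, and produce a normalized distribution over the next target word.

```python
import numpy as np

rng = np.random.default_rng(0)
M, enc_dim, dec_dim, vocab = 6, 8, 8, 20      # toy sizes, not from the paper
h = rng.normal(size=(M, enc_dim))             # encoder annotations h_1..h_M
s_prev = rng.normal(size=dec_dim)             # previous decoder state s_{n-1}
y_prev = rng.normal(size=dec_dim)             # embedding of previous word y_{n-1}

# Additive attention: e_m = v^T tanh(W s_{n-1} + U h_m), alpha = softmax(e)
W_a = rng.normal(size=(dec_dim, dec_dim))
U_a = rng.normal(size=(dec_dim, enc_dim))
v_a = rng.normal(size=dec_dim)
e = np.tanh(s_prev @ W_a.T + h @ U_a.T) @ v_a
alpha = np.exp(e - e.max())
alpha /= alpha.sum()                          # alignment weights alpha_{n,m}
c = alpha @ h                                 # context vector c_n (Eq. 4)

# f(.) and g(.) are abstracted here as simple affine maps with nonlinearities.
W_s = rng.normal(size=(dec_dim, 2 * dec_dim + enc_dim))
s_n = np.tanh(W_s @ np.concatenate([s_prev, y_prev, c]))      # Eq. 3
W_o = rng.normal(size=(vocab, 2 * dec_dim + enc_dim))
logits = W_o @ np.concatenate([y_prev, s_n, c])               # Eq. 2
p_yn = np.exp(logits - logits.max())
p_yn /= p_yn.sum()
print("alignment weights:", np.round(alpha, 3))
print("p(y_n | x, y_<n) sums to", float(p_yn.sum()))
```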
{
"text": "Although the introduction of RNNs as a decoder has resulted in substantial improvements in terms of translation quality, simultaneously it imposes a serious restriction on the capability of encoder-decoder framework caused by the structure of RNNs. That is, when the RNN decoder generates the t-th word y t in decoding phase, only y <t can be utilized, while the possible words y >t are directly neglected. Thus, it's difficult to capture global information especially the ungenerated words for the current dominant RNN decoder without new significant innovation. Under the premise of preserving the original structure, a promising alternative to address the aforementioned issue is to incorporate with auxiliary neural networks to extend the RNN decoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Towards above goal, polishing mechanismbased methods first capture the global information through a complete draft created by SMT or NMT, and then take it as input to finally generate a translation. Compared with conventional NMT, polishing mechanism-based methods make a more accurate prediction at each time-step due to the extra global understanding, resulting in more fluent and grammatically correct translation. While these approaches have demonstrated the effectiveness, previous approaches follow pre-defined routes to perform the decoding procedure, not considering choosing a suitable decoding depth for the complexity of source sentences completely.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Therefore, it's important to develop a novel framework for making an accurate and effective choice about which decoding depth is appropriate for the source sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "In this section, we present an adaptive multi-pass decoder for neural machine translation, as illustrated in Figure 1 . It could choose a proper decoding depth, depending on the complexity of the source sentence. As shown in Figure 1 , our model includes three major components: an encoder to summarize source sentences with parameter set \u03b8 e , a multi-pass decoder for multi-pass decoding with parameter set \u03b8 d , and a policy network to choose a suitable depth with parameter set \u03b8 p . The encoder of our model is identical to that of the dominant NMT which is modeled using a bidirectional RNN. Please refer to for more details. We will elaborate the multi-pass decoder and policy network for adaptive multi-pass decoding in the following subsections.",
"cite_spans": [],
"ref_spans": [
{
"start": 109,
"end": 117,
"text": "Figure 1",
"ref_id": null
},
{
"start": 225,
"end": 233,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Adaptive Multi-pass Decoder",
"sec_num": "3"
},
{
"text": "The multi-pass decoder is extended from the one of the dominant NMT model to leverage the target-side context. Similar to the dominant NMT model, our multi-pass decoder also performs the decoding under the semantic guide of source-side context captured by the encoder, whereas more importantly and differently, the global understanding through the target-side context provided by last pass decoding, is able to strongly assist our model to produce a better translation. Given the source-side and target-side contexts separately captured by the encoder and last pass decoding, the multi-pass decoder learns to generate next target word, based on previous generated words. Using the multi-pass decoder with parameter set \u03b8 d , we calculate the conditional probability of the translation\u0177 l at the l-th decoding pass as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-pass Decoder",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (\u0177 l |x,\u0177 l\u22121 ; \u03b8 e , \u03b8 d ) = N l n=1 P (\u0177 l n |x,\u0177 l <n ,\u0177 l\u22121 ; \u03b8 e , \u03b8 d ) = N l n=1 g dec (\u0177 l n\u22121 , s l,dec n , c l,enc n , c l,dec n )",
"eq_num": "(5)"
}
],
"section": "Multi-pass Decoder",
"sec_num": "3.1"
},
{
"text": "where g dec (\u2022) is a non-linear function, and s l,dec n denotes the n-th decoding state within the l-th decoding pass. N l indicates the length of generated translation at the l-th decoding pass. The decoding state s l,dec n is obtained by RNNs as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-pass Decoder",
"sec_num": "3.1"
},
{
"text": "s l,dec n = f dec (s l,dec n\u22121 ,\u0177 l n\u22121 , c l,enc n , c l,dec n ) (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-pass Decoder",
"sec_num": "3.1"
},
{
"text": "where f dec (\u2022) is the GRU activation function. c l,enc n and c l,dec n denote source-side and target-side contexts at the n-th time step within the l-th decoding pass, respectively. It should be noted that when the multi-pass decoder performs the first decoding, there doesn't exist any generated translation. To address this case, the first-pass target-side context c 1,dec is set to zero.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-pass Decoder",
"sec_num": "3.1"
},
{
"text": "Among the aforementioned contexts, c l,enc n is obtained as the weighted sum of the source-side hidden states {h m }, while we take the target-side hidden states {s l\u22121,dec n } produced by last pass decoding as input to compute c l,dec n . Similar to the dominant NMT model, we adopt the attention model Luong et al., 2015) to calculate the weights, which indicate the alignment probability. We assume that attn enc denotes the encoder-decoder attention model, which takes the source annotations {h m } as input, while attn dec are introduced to calculate the weight which measures how well the decoding state s l,dec attends the last-pass hidden states {s l\u22121,dec n }. Assuming s a indicates the decoding state, which attends the annotations {s b k } with a length K, our attention model calculates the context vector s c as follows:",
"cite_spans": [
{
"start": 304,
"end": 323,
"text": "Luong et al., 2015)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-pass Decoder",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s c = K k=1 \u03b1 k s b k (7) \u03b1 k = exp(e k ) K k =1 exp(e k )",
"eq_num": "(8)"
}
],
"section": "Multi-pass Decoder",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e k = (v a ) T tanh(W a s a + U a s b k )",
"eq_num": "(9)"
}
],
"section": "Multi-pass Decoder",
"sec_num": "3.1"
},
{
"text": "where v a , W a and U a are the parameters of attention model. Given a training (x, y), the translation route can be demonstrated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-pass Decoder",
"sec_num": "3.1"
},
{
"text": "x \u2192\u0177 1 \u2192 . . . \u2192\u0177 l \u2192 . . . \u2192\u0177 L (x,y) \u22121 \u2192 y.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-pass Decoder",
"sec_num": "3.1"
},
{
"text": "The intermediate translations {\u0177 l } are generated by decoding. Given a training corpus D = {x, y}, we define the object function using cross-entropy at last pass decoding as follows: Figure 1 : The architecture of our adaptive multi-pass decoder. Given the annotation sequence produced by the encoder, a policy network is adopted to choose a suitable action from the set {Continue, Stop}, which indicates continuing next pass decoding, or halt respectively. Different from the conventional decoder which only obtains the source-side context with the source attention model attn enc , our multi-pass decoder also captures the targetside context of last-pass decoding with decoder attention model attn dec . The policy network also use attn policy to collect useful information from the multi-pass decoding to choose an accurate and effective action to generate a good translation. Note that in this work the same parameters set of decoder and the corresponding attention is shared among different decoding passes. For this figure, we demonstrate a translation procedure with 3-pass decoding controlled by adaptive multi-pass decoder.",
"cite_spans": [],
"ref_spans": [
{
"start": 184,
"end": 192,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multi-pass Decoder",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "J dec (\u03b8 e , \u03b8 d ) = \u2212 1 |D| arg min \u03b8e,\u03b8 d (x,y)\u2208D {log P (y|x,\u0177 L (x,y) \u22121 ; \u03b8 e , \u03b8 d )}",
"eq_num": "(10)"
}
],
"section": "Multi-pass Decoder",
"sec_num": "3.1"
},
{
"text": "where P (y|x,\u0177 L (x,y) \u22121 ; \u03b8 e , \u03b8 d ) is conditional probability computed by multi-pass decoder. L (x,y) indicates the decoding depth for the instance (x, y). For effectiveness, note that all the intermediate translations {\u0177 l } are generated by greedy search in training and testing phase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-pass Decoder",
"sec_num": "3.1"
},
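The following toy NumPy sketch (our own construction under assumed shapes; dot-product attention stands in for attn_enc/attn_dec and a single tanh stands in for the GRU of Eq. (6)) illustrates how the pass loop of Eqs. (5), (6) and (10) fits together: each pass decodes greedily, its decoder states feed the next pass, the target-side context of the first pass is zeroed, and cross-entropy is computed only for the last pass.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, D, V, L = 5, 7, 16, 30, 3          # src len, tgt len, dim, vocab, fixed depth
h_enc = rng.normal(size=(M, D))          # encoder annotations {h_m}
W_out = rng.normal(size=(V, 3 * D)) * 0.1

def softmax(z):
    z = np.exp(z - z.max())
    return z / z.sum()

def attend(query, keys):
    """Dot-product stand-in for attn_enc / attn_dec: weighted sum of the keys."""
    return softmax(keys @ query) @ keys

def decode_pass(prev_states):
    """One decoding pass attending to the encoder and to the previous pass."""
    s = np.zeros(D)                                   # s^{l,dec}_0
    tokens, states, probs = [], [], []
    for _ in range(N):
        c_enc = attend(s, h_enc)                      # source-side context
        c_dec = attend(s, prev_states)                # target-side context
        s = np.tanh(c_enc + c_dec + s)                # crude stand-in for f_dec (Eq. 6)
        p = softmax(W_out @ np.concatenate([s, c_enc, c_dec]))
        y = int(p.argmax())                           # greedy search
        tokens.append(y); states.append(s); probs.append(p)
    return tokens, np.stack(states), probs

prev = np.zeros((1, D))                  # first-pass target-side context source is zero
for l in range(1, L + 1):
    tokens, prev, probs = decode_pass(prev)
    print(f"pass {l} tokens: {tokens}")

reference = rng.integers(0, V, size=N)   # toy reference y
loss = -np.mean([np.log(p[y]) for p, y in zip(probs, reference)])   # Eq. 10, last pass only
print("last-pass cross-entropy:", round(float(loss), 3))
```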
{
"text": "The multi-pass decoding can be converted into sequential decision making process, in which a policy is adopted to choose next pass decoding or halt. It's expected to automatically choose an accurate and effective decoding depth to generate a good translation. For example, if the source sentence is exhausted to obtain the corresponding translation such as the long sentences, we assume more decoding passes are needed to improve the translation, while only one pass decoding is enough to tackle the simple case. Our main idea is to use reinforcement learning to control the decoding depth. We parameterize the available action a l \u2208 {Continue, Stop}, where Continue and Stop indicate continuing next decoding pass and halt respectively, by a policy network \u03c0(a l |s policy l ; \u03b8 p ), where s policy l represents the policy state at the l-th decoding pass. For the purpose of making a better choice about the decoding depth and direction, it's necessary to consider whether or not the source sentence is easy to obtain a good translation and compared with the last pass decoding, whether the quality of translation can be improved. Thus, supervised by this guideline, the policy state s policy l is calculated by GRU to model the difference between the consecutive two decoding passes as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Policy Network",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s policy l = f policy (s policy l\u22121 , m l )",
"eq_num": "(11)"
}
],
"section": "Policy Network",
"sec_num": "3.2"
},
{
"text": "where f policy is the activation function, and m l captures the useful information with respect to the policy network at the l-th decoding pass. In this work, we use the attention models attn policy to collect the decoding progress, denoted as m l of the l-th decoding pass. In order to take account of the complexity of source sentence itself, the initial policy state s policy 0 is computed by s policy",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Policy Network",
"sec_num": "3.2"
},
{
"text": "0 = tanh(W init h M ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Policy Network",
"sec_num": "3.2"
},
{
"text": "where h M is last state source annotations, and W init is the parameters of initializing the policy state. Finally, we take the policy state s policy l as input to calculate the policy as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Policy Network",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c0(a l |s policy l ; \u03b8 p ) = sof tmax(W p s policy l + b p )",
"eq_num": "(12)"
}
],
"section": "Policy Network",
"sec_num": "3.2"
},
{
"text": "where W p and b p are the parameters of the policy network. In this work we use REINFORCE algorithm (Williams, 1992) , which is an instance of a broader class of algorithms called policy gradient methods (Sutton and Barto, 1998) , to learn the parameter set \u03b8 p such that the sequence of actions a = {a 1 , . . . , a l , . . . , a L (x,y) } maximizes the total expected reward. The expected reward for an instance is defined as:",
"cite_spans": [
{
"start": 100,
"end": 116,
"text": "(Williams, 1992)",
"ref_id": "BIBREF19"
},
{
"start": 204,
"end": 228,
"text": "(Sutton and Barto, 1998)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Policy Network",
"sec_num": "3.2"
},
{
"text": "J policy (\u03b8 p ) = E \u03c0(a|s policy ;\u03b8p) r(\u0177 L (x,y) ) (13)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Policy Network",
"sec_num": "3.2"
},
{
"text": "where r(\u0177 L (x,y) ) is the reward at the L (x,y) -th decoding pass. In this work, we use BLEU (Papineni et al., 2002) of the final translation\u0177 L (x,y) generated by greedy search as input to compute our reward as follows:",
"cite_spans": [
{
"start": 94,
"end": 117,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 12,
"end": 17,
"text": "(x,y)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Policy Network",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "r(\u0177 L (x,y) ) = BLEU(\u0177 L (x,y) , y)",
"eq_num": "(14)"
}
],
"section": "Policy Network",
"sec_num": "3.2"
},
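As a rough sketch of the depth-control policy of Eqs. (11)-(14), the snippet below (our own illustration; the attn_policy summary m_l and the BLEU reward are replaced by random stand-ins, and the GRU is reduced to a tanh recurrence) samples Continue/Stop actions up to a maximum depth of 5 and forms the sampled REINFORCE objective, i.e. the reward times the summed log-probabilities of the chosen actions.

```python
import numpy as np

rng = np.random.default_rng(2)
D, MAX_DEPTH = 8, 5
W_p = rng.normal(size=(2, D)) * 0.1        # policy output weights for (Continue, Stop)
b_p = np.zeros(2)
W_rec = rng.normal(size=(D, 2 * D)) * 0.1  # recurrence over [s_{l-1}; m_l]
W_init = rng.normal(size=(D, D)) * 0.1
h_M = rng.normal(size=D)                   # last source annotation

def policy(state):
    logits = W_p @ state + b_p             # Eq. 12
    p = np.exp(logits - logits.max())
    return p / p.sum()

s_pol = np.tanh(W_init @ h_M)              # s^{policy}_0
log_probs, depth = [], 0
for l in range(1, MAX_DEPTH + 1):
    m_l = rng.normal(size=D)               # stand-in for the attn_policy summary
    s_pol = np.tanh(W_rec @ np.concatenate([s_pol, m_l]))   # Eq. 11
    p = policy(s_pol)
    a = rng.choice(2, p=p)                 # 0 = Continue, 1 = Stop
    log_probs.append(np.log(p[a]))
    depth = l
    if a == 1:
        break

reward = rng.uniform(0, 1)                 # stand-in for BLEU(y_hat^{L}, y), Eq. 14
# REINFORCE sample estimate of the objective whose gradient would be taken:
# grad J ~ reward * sum_l grad log pi(a_l | s^{policy}_l).
objective_sample = reward * np.sum(log_probs)
print(f"chosen depth = {depth}, sampled objective = {objective_sample:.4f}")
```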
{
"text": "4 Experiments",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Policy Network",
"sec_num": "3.2"
},
{
"text": "In this section, we describe experimental settings and report empirical results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Policy Network",
"sec_num": "3.2"
},
{
"text": "We evaluated the proposed adaptive multipass decoder on Chinese-English translation task. The evaluation metric was case-insensitive BLEU (Papineni et al., 2002) To effectively train the NMT model, we trained each model with sentences of length up to 50 words. Besides, we limited vocabulary size to 30K for both languages and map all the out-ofvocabulary words in the Chinese-English corpus to a special token UNK. We applied Rmsprop (Graves, 2013) to train models and selected the best model parameters according to the model performance on the development set. During this procedure, we set the following hyper-parameters: word embedding dimension as 620, hidden layer size as 1000, learning rate as 5 \u00d7 10 \u22124 , batch size as 80, gradient norm as 1.0, and dropout rate as 0.3.",
"cite_spans": [
{
"start": 138,
"end": 161,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF15"
},
{
"start": 435,
"end": 449,
"text": "(Graves, 2013)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},
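For reference, the training settings listed above can be collected into a single configuration; the key names below are our own and do not correspond to any released codebase.

```python
# The training settings described in the Setup section, gathered in one place.
# Key names are hypothetical; values come from the paper's reported settings.
TRAIN_CONFIG = {
    "max_sentence_length": 50,     # training sentences of up to 50 words
    "vocab_size": 30_000,          # per language; OOV words mapped to UNK
    "word_embedding_dim": 620,
    "hidden_size": 1000,
    "optimizer": "rmsprop",
    "learning_rate": 5e-4,
    "batch_size": 80,
    "gradient_norm_clip": 1.0,
    "dropout": 0.3,
    "beam_size": 10,               # used for all compared models at test time
    "max_decoding_depth": 5,       # cap for the adaptive multi-pass decoder
}
print(TRAIN_CONFIG)
```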
{
"text": "In the experiments, we compared our approach against the following state-of-the-art SMT and NMT systems:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},
{
"text": "1 https://github.com/mosessmt/mosesdecoder/blob/master/scripts/generic/multibleu.perl 2 The training corpus includes LDC2002E18, LDC2003E07, LDC2003E14, part of LDC2004T07, LDC2004T08 and LDC2005T06 1. Moses 3 : an open source phrase-based translation system with default configuration and a 4-gram language model trained on the target portion of training data. Note that we used all data to train MOSES (Koehn et al., 2007) .",
"cite_spans": [
{
"start": 86,
"end": 87,
"text": "2",
"ref_id": null
},
{
"start": 404,
"end": 424,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},
{
"text": "2. RNNSearch: a variant of the attention-based NMT system with slight changes from dl4mt tutorial 4 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},
{
"text": "3. Deliberation Network 5 : a re-implementation of attention-based NMT system with two independent left-to-right decoders (Xia et al., 2017) . The first-pass decoder is identical to one of RNNSearch to generate a draft translation, while the second-pass decoder polishes it with an extra attention over the first pass decoder. The second-pass decoder is integrated with the first-pass decoder via reinforcement learning.",
"cite_spans": [
{
"start": 122,
"end": 140,
"text": "(Xia et al., 2017)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},
{
"text": "As a comparison with the Deliberation Network, ABDNMT utilizes firstpass backward decoder to generate a translation with greedy search, and the secondpass forward decoder refines it with attention model (Zhang et al., 2018) . For fairness, we replace the first-pass backward decoder with a forward decoder.",
"cite_spans": [
{
"start": 203,
"end": 223,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ABDNMT:",
"sec_num": "4."
},
{
"text": "We set the beam size of all above-mentioned models as 10 in our work. Deliberation Network and ABDNMT were initialized with the pretrained RNNSearch as Xia et al. (2017) and Zhang et al. (2018) described. Our multi-pass decoder was also initialized with RNNSearch and other parameters were randomly initialized from a uniform distribution on [\u22120.1, 0.1]. Besides, for effectiveness, we set the maximum decoding depth of our adaptive multi-pass decoder as 5.",
"cite_spans": [
{
"start": 152,
"end": 169,
"text": "Xia et al. (2017)",
"ref_id": "BIBREF20"
},
{
"start": 174,
"end": 193,
"text": "Zhang et al. (2018)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ABDNMT:",
"sec_num": "4."
},
{
"text": "The experimental results of our model and baseline models on Chinese-English machine translation datasets are depicted in Table 2 . Table 2 : Evaluation of the NIST Chinese-English translation task. The BLEU scores are case-insensitive. \"Params\" denotes the number the parameters in each model. The \"Speed\" denotes the generation speed in seconds on the development set. RNNSearch is an attention-based neural machine translation model with one-pass left-to-right decoding. RNNSearch(R2L) is a variant of RNNSearch with one-pass right-to-left decoding. As a comparison, Deliberation Network (Xia et al., 2017) and ABDNMT (Zhang et al., 2018) involve two independent decoders to adopt polishing mechanism to extend the ability of conventional NMT. Deliberation Network utilizes two left-to-right decoders coupled with reinforcement learning. However, ABDNMT exploits a backward decoder to perform first-pass right-to-left decoding. {2,3,4,5}-pass decoder utilizes our multi-pass decoder with a fixed number of decoding passes. Furthermore, adaptive multi-pass decoder involves a policy network to enhance our multi-pass decoder to choose a proper decoding depth.",
"cite_spans": [
{
"start": 591,
"end": 609,
"text": "(Xia et al., 2017)",
"ref_id": "BIBREF20"
},
{
"start": 621,
"end": 641,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 122,
"end": 129,
"text": "Table 2",
"ref_id": null
},
{
"start": 132,
"end": 139,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results on Chinese-English Translation",
"sec_num": "4.2"
},
{
"text": "Parameters RNNSearch, Deliberation Network and ABDNMT have 83.99M, 125.16M and 122.86M parameters, respectively. And the parameter size of our {2,3,4,5}-pass decoder and adaptive multi-pass decoder are about 87.81M and 96.01M, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results on Chinese-English Translation",
"sec_num": "4.2"
},
{
"text": "Fixed Decoding Depth {2,3,4,5}-pass decoders perform the left-to-right decoding by the multipass decoder with a fixed number of decoding passes. In contrast to the related machine translation systems, our fixed number-pass decoder significantly outperforms Moses and RNNSearch by 7.53 and 1.05 BLEU points at least, as Table 2 presents. More importantly, our proposed multipass decoder obtains much better performance with an increase of only 3.82M parameters over RNNSearch. As a comparison with Deliberation Network involves two-pass decoding, the multipass decoder has a minimum increase of 0.24 BLEU score. Nevertheless, our multi-pass decoder proves its effectiveness due to the less parameters consumption of 37.35M in contrast to Deliberation Network. These results verify our hypothesis that the more decoding passes can polish the generated output to improve the translation quality. The underlying reason is that the attention component attn dec within our multi-pass decoder can capture the extra target-side contexts to obtain a global understanding to assist the translation procedure. Towards the effect of the decoding depth set {2,3,4,5}, our multi-pass decoders obtain the approximate results, but the whole curve of BLEU is on an upward trend. Specifically, the multi-pass decoder with decoding depth 5 achieves the best performance with 38.64 BLEU, while the one with decoding depth 3 performs the worst among the decoding depth set with 38.55 BLEU. Although the average results of {2,3,4,5}-pass decoder are approximate, the distinction of {2,3,4,5}-pass decoder on NIST03, NIST04, NIST05, NIST06 and NIST08 is not negligible. These results indirectly prove the necessity of flexibility mechanism.",
"cite_spans": [],
"ref_spans": [
{
"start": 319,
"end": 326,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results on Chinese-English Translation",
"sec_num": "4.2"
},
{
"text": "Adaptive Decoding Depth our proposed adaptive multi-pass decoder involves an extra policy network which controls the decoding depth according to the complexity of the source sentence and the differences between the consecutive generated translations. As shown in Table 2 , the proposed adaptive multi-pass decoder obtains an improvement about 0.41 to 0.5 BLEU on average over the {2,3,4,5}-pass decoder, which demonstrates the effectiveness of the policy network. Specifically, the adaptive multi-pass decoder outperforms the multi-pass decoder with a fixed decoding depth by 0.69, 0.71, 0.68 and 0.45 BLEU scores on NIST03, NIST04, NIST05 and NIST06 datasets at most. In contrast to the Moses, RNNSearch, Deliberation Network and ABDNMT, the adaptive multi-pass decoder has the corresponding improvement about 8.03, 1.55, 0.74 and 0.34 BLEU points, respectively. More importantly, our adaptive multi-pass decoder outperforms ABDNMT, Deliberation Network model with a decrease of 26.85M, 29.15M parameters.",
"cite_spans": [],
"ref_spans": [
{
"start": 263,
"end": 271,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results on Chinese-English Translation",
"sec_num": "4.2"
},
{
"text": "In order to further demonstrate the effective- ness of adaptively choosing the decoding depth, we investigate the ratio of decoding passes consumed by our multi-pass decoder on the development dataset, as shown in Table 3 . Our adaptive multi-pass decoder chooses one-pass decoding in a high ratio of 46.57%, while in most about 53.43% cases our model leverages more than one pass decoding to produce a translation. The average decoding depth of our model is calculated as: (1 \u00d7 46.36% + 2 \u00d7 20.84% + 3 \u00d7 13.10% + 4 \u00d7 13.55% + 5 \u00d7 6.15%) = 2.12. Moreover, our ratio of the samples tends to decrease as the decoding depth rises on a whole. Since time consumption correlates with decoding depth, our adaptive multi-pass decoder proves its superior performance due to fewer parameters and less decoding passes.",
"cite_spans": [],
"ref_spans": [
{
"start": 214,
"end": 221,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results on Chinese-English Translation",
"sec_num": "4.2"
},
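The average-depth arithmetic above can be reproduced directly from the per-depth percentages quoted in that calculation:

```python
# Reproducing the average-decoding-depth arithmetic from the paragraph above,
# using the per-depth percentages quoted there (they round to 2.12).
ratios = {1: 46.36, 2: 20.84, 3: 13.10, 4: 13.55, 5: 6.15}   # percent of dev sentences
avg_depth = sum(depth * pct / 100.0 for depth, pct in ratios.items())
print(f"average decoding depth = {avg_depth:.2f}")   # -> 2.12
```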
{
"text": "Depth 1 2 3 4 5 Ratio(%) 46.57 20.45 13.00 13.60 6.38 Table 3 : The ratio of decoding depth chosen by adaptive multi-pass decoder on the development dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 54,
"end": 61,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results on Chinese-English Translation",
"sec_num": "4.2"
},
{
"text": "Time Consumption Due to the multi-pass decoding mechanism, the major limitation of our proposed multi-pass decoder is time cost. In training phrase, we spend more time training the multipass decoder than RNNSearch, Deliberation Network and ABDNMT. However, in testing phrase, as illustrated in Table 2 , our adaptive multi-pass decoder spends about 180s completing the entire testing procedure, in comparison with the corresponding 87s, 162s, 132s of RNNSearch, Deliberation Network and ABDNMT, due to the auxiliary policy network. These results are consistent with above conclusion drew according to the decoding depth. Therefore, it's proven the necessity of our proposed auxiliary policy network to choose the decoding depth.",
"cite_spans": [],
"ref_spans": [
{
"start": 294,
"end": 301,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results on Chinese-English Translation",
"sec_num": "4.2"
},
{
"text": "Following Bahdanau et al. 2014, we group sentences of similar lengths together and compute the BLEU score for each group, as shown in Figure 2 . Obviously, our proposed adaptive multi-pass decoder outperforms RNNSearch in all length segments. Compared with {2,3,4,5}-pass decoders, our adaptive multi-pass decoder outperforms most even all the multi-pass decoders with fixed decoding depth in the length segments.",
"cite_spans": [],
"ref_spans": [
{
"start": 134,
"end": 142,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Effect of Source Sentence Length",
"sec_num": "4.3"
},
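A sketch of the length-bucketed evaluation described above is given below; it is our own illustration and uses the third-party sacrebleu package as a stand-in for the multi-bleu.perl script mentioned earlier, with file loading omitted.

```python
from collections import defaultdict
import sacrebleu

def bleu_by_length(sources, hypotheses, references, bucket_size=10):
    """Return {(lo, hi): BLEU} for source-length buckets [lo, hi)."""
    buckets = defaultdict(lambda: ([], []))
    for src, hyp, ref in zip(sources, hypotheses, references):
        lo = (len(src.split()) // bucket_size) * bucket_size
        hyps, refs = buckets[(lo, lo + bucket_size)]
        hyps.append(hyp)
        refs.append(ref)
    return {seg: sacrebleu.corpus_bleu(hyps, [refs]).score
            for seg, (hyps, refs) in sorted(buckets.items())}

# Toy usage with made-up sentences (for illustration only).
src = ["zhe shi yi ge ju zi", "ta men zuo tian qu le bei jing kan peng you"]
hyp = ["this is a sentence", "they went to beijing yesterday to see friends"]
ref = ["this is a sentence", "they went to beijing yesterday to visit friends"]
print(bleu_by_length(src, hyp, ref))
```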
{
"text": "For the purpose of investigating the flexibility of policy network, we calculate the ratios of decoding depth set {1,2,3,4,5} on each sentence group with similar length, as illustrated in Figure 3 . The ratio of one-pass decoding remains high level on each length segment, but explicitly is dominant on the length segment [0, 10). In contrast, the ratios of remaining decoding depths show upwards trends on a whole. These results indirectly proves that our policy network has the capability of choosing the proper decoding depth. That is, when the source sentence is difficult to be translated such as some long sentences, more decoding passes are consumed to improve the translation quality, while in simple cases such as short source sentences, one-pass decoding is adequate.",
"cite_spans": [],
"ref_spans": [
{
"start": 188,
"end": 196,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Effect of Source Sentence Length",
"sec_num": "4.3"
},
{
"text": "To better understand the effectiveness of flexible polishing mechanism adopted by policy network, Table 4 provides a Chinese-English transla-",
"cite_spans": [],
"ref_spans": [
{
"start": 98,
"end": 105,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Case Study",
"sec_num": "4.4"
},
{
"text": "x\u012bnhu\u00e1sh\u00e8 b\u011bij\u012bng 4 yu\u00e8 5 r\u00ec di\u00e0n d\u00e0m\u00edngd\u01d0ngd\u01d0ng d\u0113 w\u0113iru\u01cen g\u014dngs\u012b z\u01d2ngc\u00e1i b\u01d0\u011br \u2022 g\u00e0ic\u00ed r\u00ecq\u0131\u00e1n b\u00e8i ji\u0101n\u00e1d\u00e0 y\u00ec ji\u0101 gu\u01cengb\u014d di\u00e0n t\u00e1i \" shu\u00e0n le \" y\u00ec b\u01ce , z\u00e0i y\u00far\u00e9nji\u00e9 n\u00e0ti\u0101n b\u00e8i k\u0101i le y\u012bg\u00e8 d\u00e0w\u00e1ngxi\u00e0o .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source",
"sec_num": null
},
{
"text": "xinhua news agency, beijing, april 5 , bill gates , the all -famous microsoft chairman, was duped by a canadian radio station the the other day and fell a victim to a big prank on april fools ' day.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference",
"sec_num": null
},
{
"text": "xihua news agency report of april 5 th from beijing (by staff reporter UNK UNK ) -the president of microsoft 's microsoft corporation , gates , was recently \" UNK \" by a radio station in canada and was hit by a UNK day on the day of the day .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "1st-pass",
"sec_num": null
},
{
"text": "xinhua news agency, beijing, april 5, a fews days ago, microsoft 's president , microsoft corporation , was \" UNK \" by a radio station in canada .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2nd-pass",
"sec_num": null
},
{
"text": "xinhua news agency, beijing, april 5 , microsoft 's president bill gates, the president of microsoft , was \" UNK \" by a radio station in canada in few days ago .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3rd-pass",
"sec_num": null
},
{
"text": "xinhua news agency, beijing, april 5 , microsoft 's president bill gates, the president of microsoft , was \" UNK \" by a radio station in canada in few days ago .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4th-pass",
"sec_num": null
},
{
"text": "xinhua news agency, beijing, april 5 , microsoft 's president bill gates, the president of microsoft , was \" UNK \" by a radio station in canada in few days ago . tion example. Our proposed adaptive multi-pass decoder has the ability to polish the generated hypothesis again and again. As shown in Table 4 , we force our adaptive multi-pass decoder to perform the multi-pass decoding with fixed depth sets {1,2,3,4,5}. The translation quality has an upwards trend with decoding depth 1 to 3, and the decoding with depth set {4,5} generates the identical translation as the decoding depth 3. Moreover, given the same source sentence, we use the proposed adaptive multi-pass decoder to choose the decoding depth. As expected, our adaptive multi-pass chooses 3-pass decoding which generates best translation and consumes least time, rather than {4,5}-pass decoding. Therefore, these results proves the effectiveness of our adaptive multi-pass decoder.",
"cite_spans": [],
"ref_spans": [
{
"start": 297,
"end": 304,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "5th-pass",
"sec_num": null
},
{
"text": "In this work, we mainly focus on how to adopt adaptive polishing mechanism into NMT model, which has attracted intensive attention in recent years. We will elaborate polishing mechanismbased methods in the following pages. The polishing mechanism-based approaches first generate a complete draft, and then improve the quality of it based on the global understanding of the whole draft. A related work is post-editing (Niehues et al., 2016; Chatterjee et al., 2016; Zhou et al., 2017; Junczys Dowmunt and Grundkiewicz, 2017 ): a source sentence e is first translated to f , and then f is refined by another model. Niehues et al. (2016) used phrase-based statistical machine translation (PBMT) to pre-translate the source sentence into target language, which was taken as input of NMT to generate the final translation. Zhou et al. (2017) combined phrasebased statistical machine translation (PBMT), hi-erarchical phrase-based statistical machine translation (HPMT) and NMT with a unified architecture, similar to the dominant NMT model. Compared with the dominant NMT model, two attention models were involved to compute the context vectors. Specifically, an attention model is utilized to calculate the context vector for each machine system, while the other attention model obtains the context vector over the all context vectors of machine systems.",
"cite_spans": [
{
"start": 417,
"end": 439,
"text": "(Niehues et al., 2016;",
"ref_id": "BIBREF14"
},
{
"start": 440,
"end": 464,
"text": "Chatterjee et al., 2016;",
"ref_id": "BIBREF2"
},
{
"start": 465,
"end": 483,
"text": "Zhou et al., 2017;",
"ref_id": "BIBREF22"
},
{
"start": 484,
"end": 522,
"text": "Junczys Dowmunt and Grundkiewicz, 2017",
"ref_id": "BIBREF9"
},
{
"start": 613,
"end": 634,
"text": "Niehues et al. (2016)",
"ref_id": "BIBREF14"
},
{
"start": 818,
"end": 836,
"text": "Zhou et al. (2017)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "In above works, the generating and refining are two separate processes. As a comparison, Xia et al. (2017) proposed deliberation network, which consists of two decoders: a first-pass decoder generates a draft, which is taken as input of secondpass decoder to obtain a better translation. All the components of deliberation network are coupled together and jointly optimized in an end-toend way via reinforcement learning. Instead of first-pass forward decoder, Zhang et al. (2018) adopted a backward decoder to capture the rightto-left target-side contexts, which is taken as input to assist the second-pass forward decoder to obtain a better translation. Besides, the another difference with deliberation network is the secondpass decoder is integrated with the first-pass decoder without reinforcement learning.",
"cite_spans": [
{
"start": 89,
"end": 106,
"text": "Xia et al. (2017)",
"ref_id": "BIBREF20"
},
{
"start": 461,
"end": 480,
"text": "Zhang et al. (2018)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "For the purpose of exploring polishing mechanism, our model adopts adaptive multi-pass decoding strategy. Compared with the previous works which consumes no more than two decoding passes, our multi-pass decoder makes an attempt to perform the multi-pass decoding. More importantly, we adopt adaptive decoding depth controlled by policy network to extend the capacity of our multi-pass decoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "In this paper, we propose a novel architecture called adaptive multi-pass decoder to adopt polishing mechanism into the NMT model via reinforcement learning. Towards this goal, a novel multi-pass decoder is introduced to generate the translation, conditioned on the source-and targetside contexts. Simultaneously, the multi-pass decoding is supervised by a policy network which learns to choose a suitable action from continuing next pass decoding or halt at each time step to maximize the BLEU of the final translation. As a result, our model has the capability of controlling the decoding depth to generate a better translation. Extensive experiments on Chinese-English translation demonstrate the effectiveness of the proposed adaptive multi-pass decoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "In this paper, we focus on utilizing multi-pass decoder to polish the translation. Our proposed multi-pass decoder performs the multi-pass decoding mechanism with only forward decoding. One promising direction is to incorporate the backward decoding into our architecture. More specifically, we can extend the policy network to choose the backward decoding except for forward decoding and halting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "http://www.statmt.org/moses 4 https://github.com/nyu-dl/dl4mt-tutorial5 We reproduce the deliberation network based on REIN-FORCE and gumbel-softmax(Jang et al., 2016), separately, but there still exists a gap with its best performance. We attribute this to that our reimplementation may be different from the original model in some unknown details.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers for their insightful comments. We also thank Heng Gong and Shuang Chen for helpful discussion. This work was supported by the National Natural Science Foundation of China (NSFC) via grant 61632011, 61772156 and 61502120.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv e-prints, abs/1409.0473.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The mathematics of statistical machine translation: Parameter estimation",
"authors": [
{
"first": "Peter",
"middle": [
"E"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter E. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Pa- rameter estimation. Computational Linguistics, 19(2).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The fbk participation in the wmt 2016 automatic post-editing shared task",
"authors": [
{
"first": "Rajen",
"middle": [],
"last": "Chatterjee",
"suffix": ""
},
{
"first": "G",
"middle": [
"C"
],
"last": "Jos\u00e9",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "De Souza",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Turchi",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the First Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "745--750",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rajen Chatterjee, Jos\u00e9 G. C. de Souza, Matteo Ne- gri, and Marco Turchi. 2016. The fbk participa- tion in the wmt 2016 automatic post-editing shared task. In Proceedings of the First Conference on Ma- chine Translation: Volume 2, Shared Task Papers, pages 745-750. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merrienboer",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1724--1734",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merrienboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 1724- 1734. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Effective deep memory networks for distant supervised relation extraction",
"authors": [
{
"first": "Xiaocheng",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Jiang",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yongjie",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI",
"volume": "",
"issue": "",
"pages": "19--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaocheng Feng, Jiang Guo, Bing Qin, Ting Liu, and Yongjie Liu. 2017. Effective deep memory net- works for distant supervised relation extraction. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI, pages 19-25.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A language-independent neural network for event detection",
"authors": [
{
"first": "Xiaocheng",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "Science China Information Sciences",
"volume": "61",
"issue": "9",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaocheng Feng, Bing Qin, and Ting Liu. 2018. A language-independent neural network for event detection. Science China Information Sciences, 61(9):092106.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Generating sequences with recurrent neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1308.0850"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Graves. 2013. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Categorical reparameterization with gumbel-softmax",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Jang",
"suffix": ""
},
{
"first": "Shixiang",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Poole",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.01144"
]
},
"num": null,
"urls": [],
"raw_text": "Eric Jang, Shixiang Gu, and Ben Poole. 2016. Categor- ical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "An exploration of neural sequence-tosequence architectures for automatic post-editing",
"authors": [
{
"first": "Junczys",
"middle": [],
"last": "Marcin",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Dowmunt",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Grundkiewicz",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "120--129",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcin Junczys Dowmunt and Roman Grundkiewicz. 2017. An exploration of neural sequence-to- sequence architectures for automatic post-editing. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Vol- ume 1: Long Papers), pages 120-129. Asian Feder- ation of Natural Language Processing.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Recurrent continuous translation models",
"authors": [
{
"first": "Nal",
"middle": [],
"last": "Kalchbrenner",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1700--1709",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Nat- ural Language Processing, pages 1700-1709. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Moses: Open source toolkit for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Ondrej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Constantin",
"suffix": ""
},
{
"first": "Evan",
"middle": [],
"last": "Herbst",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions",
"volume": "",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexan- dra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine transla- tion. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Com- panion Volume Proceedings of the Demo and Poster Sessions, pages 177-180. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Statistical phrase-based translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Human Language Technology Confer- ence of the North American Chapter of the Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Effective approaches to attention-based neural machine translation",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1412--1421",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Pre-translation for neural machine translation",
"authors": [
{
"first": "Jan",
"middle": [],
"last": "Niehues",
"suffix": ""
},
{
"first": "Eunah",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Thanh-Le",
"middle": [],
"last": "Ha",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "1828--1836",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan Niehues, Eunah Cho, Thanh-Le Ha, and Alex Waibel. 2016. Pre-translation for neural machine translation. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1828-1836. The COLING 2016 Organizing Committee.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Bidirectional recurrent neural networks",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Kuldip",
"middle": [
"K"
],
"last": "Paliwal",
"suffix": ""
}
],
"year": 1997,
"venue": "IEEE Transactions on Signal Processing",
"volume": "45",
"issue": "11",
"pages": "2673--2681",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Schuster and Kuldip K Paliwal. 1997. Bidirec- tional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673-2681.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems",
"volume": "27",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural net- works. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Ad- vances in Neural Information Processing Systems 27, pages 3104-3112. Curran Associates, Inc.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Introduction to reinforcement learning",
"authors": [
{
"first": "Richard",
"middle": [
"S"
],
"last": "Sutton",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"G"
],
"last": "Barto",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "135",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard S Sutton and Andrew G Barto. 1998. Introduc- tion to reinforcement learning, volume 135. MIT press Cambridge.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Simple statistical gradientfollowing algorithms for connectionist reinforcement learning",
"authors": [
{
"first": "Ronald",
"middle": [
"J"
],
"last": "Williams",
"suffix": ""
}
],
"year": 1992,
"venue": "Reinforcement Learning",
"volume": "",
"issue": "",
"pages": "5--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronald J Williams. 1992. Simple statistical gradient- following algorithms for connectionist reinforce- ment learning. In Reinforcement Learning, pages 5-32. Springer.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Deliberation networks: Sequence generation beyond one-pass decoding",
"authors": [
{
"first": "Yingce",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Lijun",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jianxin",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Nenghai",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "1784--1794",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yingce Xia, Fei Tian, Lijun Wu, Jianxin Lin, Tao Qin, Nenghai Yu, and Tie-Yan Liu. 2017. Deliberation networks: Sequence generation beyond one-pass de- coding. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 1784-1794. Curran As- sociates, Inc.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Asynchronous bidirectional decoding for neural machine translation",
"authors": [
{
"first": "Xiangwen",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jinsong",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Rongrong",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Hongji",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1801.05122"
]
},
"num": null,
"urls": [],
"raw_text": "Xiangwen Zhang, Jinsong Su, Yue Qin, Yang Liu, Ron- grong Ji, and Hongji Wang. 2018. Asynchronous bidirectional decoding for neural machine transla- tion. arXiv preprint arXiv:1801.05122.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Neural system combination for machine translation",
"authors": [
{
"first": "Long",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wenpeng",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "378--384",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Long Zhou, Wenpeng Hu, Jiajun Zhang, and Chengqing Zong. 2017. Neural system combina- tion for machine translation. In Proceedings of the 55th Annual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Papers), pages 378-384. Association for Computational Linguis- tics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Performance of the generated translations with respect to the lengths of the source sentences on the development dataset.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Ratio of decoding depth set {1,2,3,4,5} controlled by our adaptive multi-pass decoder with respect to each length segment of the source sentences on the development dataset.",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"content": "<table/>",
"type_str": "table",
"text": "Translation examples of more decoding passes with the proposed multi-pass decoder.",
"html": null,
"num": null
},
"TABREF4": {
"content": "<table/>",
"type_str": "table",
"text": "Translation examples at each decoding depth of adaptive multi-pass decoder.",
"html": null,
"num": null
}
}
}
}