{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:47:29.557143Z"
},
"title": "Enriching Non-Autoregressive Transformer with Syntactic and Semantic Structures for Neural Machine Translation",
"authors": [
{
"first": "Ye",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Chicago",
"location": {
"settlement": "Chicago",
"region": "IL",
"country": "USA"
}
},
"email": ""
},
{
"first": "Yao",
"middle": [],
"last": "Wan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Huazhong University of Science and Technology",
"location": {
"settlement": "Wuhan",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Jian-Guo",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Chicago",
"location": {
"settlement": "Chicago",
"region": "IL",
"country": "USA"
}
},
"email": ""
},
{
"first": "Wenting",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Chicago",
"location": {
"settlement": "Chicago",
"region": "IL",
"country": "USA"
}
},
"email": ""
},
{
"first": "Philip",
"middle": [
"S"
],
"last": "Yu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Chicago",
"location": {
"settlement": "Chicago",
"region": "IL",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The non-autoregressive models have boosted the efficiency of neural machine translation through parallelized decoding at the cost of effectiveness, when comparing with the autoregressive counterparts. In this paper, we claim that the syntactic and semantic structures among natural language are critical for non-autoregressive machine translation and can further improve the performance. However, these structures are rarely considered in existing non-autoregressive models. Inspired by this intuition, we propose to incorporate the explicit syntactic and semantic structures of languages into a non-autoregressive Transformer, for the task of neural machine translation. Moreover, we also consider the intermediate latent alignment within target sentences to better learn the long-term token dependencies. Experimental results on two real-world datasets (i.e., WMT14 En-De and WMT16 En-Ro) show that our model achieves a significantly faster speed, as well as keeps the translation quality when compared with several stateof-the-art non-autoregressive models.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "The non-autoregressive models have boosted the efficiency of neural machine translation through parallelized decoding at the cost of effectiveness, when comparing with the autoregressive counterparts. In this paper, we claim that the syntactic and semantic structures among natural language are critical for non-autoregressive machine translation and can further improve the performance. However, these structures are rarely considered in existing non-autoregressive models. Inspired by this intuition, we propose to incorporate the explicit syntactic and semantic structures of languages into a non-autoregressive Transformer, for the task of neural machine translation. Moreover, we also consider the intermediate latent alignment within target sentences to better learn the long-term token dependencies. Experimental results on two real-world datasets (i.e., WMT14 En-De and WMT16 En-Ro) show that our model achieves a significantly faster speed, as well as keeps the translation quality when compared with several stateof-the-art non-autoregressive models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recently, non-autoregressive models (Gu et al., 2018) , which aim to enable the parallel generation of output tokens without sacrificing translation quality, have attracted much attention. Although the non-autoregressive models have considerably sped up the inference process for real-time neural machine translation (NMT) (Gu et al., 2018) , their performance is considerably worse than that of autoregressive counterparts. Most previous works attribute the poor performance to the inevitable conditional independence issue when predicting target tokens, and many variants have been proposed to solve it. For example, several techniques in nonautoregressive models are investigated to mitigate the trade-off between speedup and performance, including iterative refinement (Lee et al., 2018) , insertion-based models , latent variable based models (Kaiser et al., 2018; Shu et al., 2020) , CTC models (Libovick\u00fd and Helcl, 2018; Saharia et al., 2020) , alternative loss function based models (Wei et al., 2019; Shao et al., 2020) , and masked language models (Ghazvininejad et al., 2019 (Ghazvininejad et al., , 2020 .",
"cite_spans": [
{
"start": 36,
"end": 53,
"text": "(Gu et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 323,
"end": 340,
"text": "(Gu et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 773,
"end": 791,
"text": "(Lee et al., 2018)",
"ref_id": "BIBREF12"
},
{
"start": 848,
"end": 869,
"text": "(Kaiser et al., 2018;",
"ref_id": "BIBREF10"
},
{
"start": 870,
"end": 887,
"text": "Shu et al., 2020)",
"ref_id": "BIBREF24"
},
{
"start": 901,
"end": 928,
"text": "(Libovick\u00fd and Helcl, 2018;",
"ref_id": "BIBREF14"
},
{
"start": 929,
"end": 950,
"text": "Saharia et al., 2020)",
"ref_id": "BIBREF21"
},
{
"start": 992,
"end": 1010,
"text": "(Wei et al., 2019;",
"ref_id": "BIBREF29"
},
{
"start": 1011,
"end": 1029,
"text": "Shao et al., 2020)",
"ref_id": "BIBREF23"
},
{
"start": 1059,
"end": 1086,
"text": "(Ghazvininejad et al., 2019",
"ref_id": "BIBREF5"
},
{
"start": 1087,
"end": 1116,
"text": "(Ghazvininejad et al., , 2020",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although these works have tried to narrow the performance gap between autoregressive and nonautoregressive models, and have achieved some improvements on machine translation, the nonautoregressive models still suffer from syntactic and semantic limitations. That is, the translations of non-autoregressive models tend to contain incoherent phrases (e.g., repetitive words), and some informative tokens on the source side are absent. It is because in non-autoregressive models, each token in the target sentence is generated independently. Consequently, it will cause the multimodality issue, i.e., the non-autoregressive models cannot model the multimodal distribution of target sequences properly (Gu et al., 2018) .",
"cite_spans": [
{
"start": 698,
"end": 715,
"text": "(Gu et al., 2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One key observation to mitigate the syntactic and semantic error is that source and target translated sentences follow the same structure, which can be reflected from Part-Of-Speech (POS) tags and Named Entity Recognition (NER) labels. Briefly, POS, which aims to assign labels to words to indicate their categories by considering the longdistance structure of sentences, can help the model learn the syntactic structure to avoid generating the repetitive words. Likewise, NER, which discovers the proper nouns and verbs of sentences, naturally helps the model recognize some meaningful semantic tokens that may improve translation quality. This observation motivates us to leverage the syntactic as well as semantic structures of natural language to improve the performance of non- Table 1 : A motivating example on WMT14 En\u2192De dataset. English with POS|NER and its corresponding German translation with POS|NER. The Blue labels show the same tags, while the Red labels show the different tags in two languages.",
"cite_spans": [],
"ref_spans": [
{
"start": 783,
"end": 790,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A republican strategy to counter the rel-election of Obama . | | | | | | | | | | EN POS: DET ADJ NOUN PART VERB DET NOUN ADP PROPN PUNCT EN NER: O B NORP O O O O O O B PERSON O DE: Eine republikanische strategie gegen die wiederwahl Obama .",
"cite_spans": [],
"ref_spans": [
{
"start": 59,
"end": 202,
"text": ". | | | | | | | | | | EN POS: DET ADJ NOUN PART VERB DET NOUN ADP PROPN PUNCT EN NER: O B NORP O O O O O O B PERSON O DE:",
"ref_id": null
}
],
"eq_spans": [],
"section": "EN:",
"sec_num": null
},
{
"text": "| | | | | | | | DE POS: DET ADJ NOUN ADP DET NOUN PROPN PUNCT EN NER: O B NORP O O O O B PERSON O",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EN:",
"sec_num": null
},
{
"text": "autoregressive NMT. We present a motivating example in Table 1 to better illustrate our idea. From this table, we can find that although the words are altered dramatically from the English sentence to its German translation, the corresponding POS and NER tags still remain similar. For example, most POS tags are identical and follow the same pattern, except that PART, VERB, and ADP in the English do not match the German ADP, while the NER tags are exactly the same in both sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 55,
"end": 62,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "EN:",
"sec_num": null
},
{
"text": "In this paper, we propose an end-to-end Syntactic and semantic structure-aware Non-Autoregressive Transformer model (SNAT) for NMT. We take the structure labels and words as inputs of the model. With the guidance of extra sentence structural information, the model greatly mitigates the multimodality issue's negative impact. The core contributions of this paper can be summarized as that we propose 1) a syntax and semantic structure-aware Transformer which takes sequential texts and the structural labels as input and generates words conditioned on the predicted structural labels, and 2) an intermediate alignment regularization which aligns the intermediate decoder layer with the target to capture coarse target information. We conduct experiments on four benchmark tasks over two datasets, including WMT14 En\u2192De and WMT16 En\u2192Ro. Experimental results indicate that our proposed method achieves competitive results compared with existing state-of-the-art nonautoregressive and autoregressive neural machine translation models, as well as significantly reduces the decoding time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EN:",
"sec_num": null
},
{
"text": "Regardless of its convenience and effectiveness, the autoregressive decoding methods suffer two major drawbacks. One is that they cannot generate multiple tokens simultaneously, leading to ineffi-cient use of parallel hardware such as GPUs. The other is that beam search has been found to output low-quality translation with large beam size and deteriorates when applied to larger search spaces. However, non-autoregressive transformer (NAT) could potentially address these issues. Particularly, they aim at speeding up decoding through removing the sequential dependencies within the target sentence and generating multiple target tokens in one pass, as indicated by the following equation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P NAT (y|x; \u03c6) = m t=1 p (y t |x, x; \u03c6) ,",
"eq_num": "(1)"
}
],
"section": "Background",
"sec_num": "2"
},
{
"text": "wherex = {x 1 , . . . ,x m } is the copied source sentence. Since the conditional dependencies within the target sentence (y t depends on y <t ) are removed from the decoder input, the decoder is unable to leverage the inherent sentence structure for prediction. Hence the decoder is supposed to figure out such target-side information by itself given the source-side information during training. This is a much more challenging task compared to the autoregressive counterparts. From our investigation, we find the NAT models fail to handle the target sentence generation well. It usually generates repetitive and semantically incoherent sentences with missing words. Therefore, strong conditional signals should be introduced as the decoder input to help the model better learn internal dependencies within a sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
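To make the parallel factorization in Eq. (1) concrete, the following minimal Python sketch (illustrative only, not from the paper) contrasts an autoregressive decoding loop, in which each step conditions on the previously generated prefix, with a non-autoregressive pass, in which every target position is predicted independently given only the copied source; `ar_step` and `nat_step` are hypothetical stand-ins for a trained decoder.

```python
# Sketch: autoregressive vs. non-autoregressive decoding (dummy "models" for illustration).
from typing import List

def ar_step(src: List[int], prefix: List[int]) -> int:
    # Autoregressive: the next token depends on the generated prefix y_{<t}.
    return (sum(src) + len(prefix)) % 100  # dummy prediction

def nat_step(src: List[int], position: int) -> int:
    # Non-autoregressive: each y_t depends only on the copied source, not on other targets.
    return (sum(src) + position) % 100  # dummy prediction

def decode_autoregressive(src: List[int], tgt_len: int) -> List[int]:
    out: List[int] = []
    for _ in range(tgt_len):                 # m sequential steps
        out.append(ar_step(src, out))
    return out

def decode_non_autoregressive(src: List[int], tgt_len: int) -> List[int]:
    # All m positions are computed independently, hence in parallel on a GPU.
    return [nat_step(src, t) for t in range(tgt_len)]

if __name__ == "__main__":
    src = [5, 17, 42]
    print(decode_autoregressive(src, 6))
    print(decode_non_autoregressive(src, 6))
```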
{
"text": "In this section, we present our model SNAT to incorporate the syntactic and semantic structure information into a NAT model as well as an intermediate latent space alignment within the target. Figure 1 gives an overview of the network structure of our proposed SNAT. In SNAT, the input sequence is segmented into sub-words by byte-pair Figure 1 : An overview of the proposed SNAT for neural machine translation.",
"cite_spans": [],
"ref_spans": [
{
"start": 193,
"end": 201,
"text": "Figure 1",
"ref_id": null
},
{
"start": 336,
"end": 344,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "tokenizer (Sennrich et al., 2016) . In parallel, words in the input sequence are passed to POS and NER annotators to extract explicit syntactic and semantic structures, and the corresponding embeddings are aggregated by a linear layer to form the final syntax and semantic structure-aware embedding. The SNAT model copies the structured encoder input as the decoder input and generates the translated sentences and labels. One of the most important properties of SNAT is that it naturally introduces syntactic and semantic information when taking the structure-aware information as inputs and generating both words and labels. More precisely, given a source sentence x, as well as its label sequence L x , the conditional probability of a target translation y and its label sequence L y is:",
"cite_spans": [
{
"start": 10,
"end": 33,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P SNAT (y, L y |x, L x ; \u03d5) = m t=1 p y t , L yt |x,L x , x, L x ; \u03d5 ,",
"eq_num": "(2)"
}
],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "where x and L x are first fed into the encoder of SNAT model.x andL x with length m are syntactic and semantic structure-aware copying of word and label from encoder inputs, respectively. We show the details in the following sections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "We use POS and NER to introduce the syntactic and semantic information existing in natural language, respectively. During the data pre-processing, each sentence is annotated into a semantic sequence using an open-source pre-trained semantic annotator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic and Semantic Labeling",
"sec_num": "3.1"
},
{
"text": "In particular, we take the Treebank style (Marcus et al., 1999) for POS and PropBank style (Palmer et al., 2005) for NER to annotate every token of input sequence with semantic labels. Given a specific sentence, there would be predicate-argument structures. Since the input sequence is segmented into subword units using byte-pair encoding (Sennrich et al., 2016), we assign the same label to all subwords tokenized from the same word. As shown in Figure 1 , the word \"Ancelotti\" is tokenized as \"An@@\" and \"Celotti\". The corresponding POS tags are PRON and PRON while the corresponding NER tags are B PERSON and I PERSON. For the text \"Is An@@ Celotti the man for the job ?\", the ",
"cite_spans": [
{
"start": 42,
"end": 63,
"text": "(Marcus et al., 1999)",
"ref_id": "BIBREF16"
},
{
"start": 91,
"end": 112,
"text": "(Palmer et al., 2005)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 448,
"end": 456,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Syntactic and Semantic Labeling",
"sec_num": "3.1"
},
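As a small illustration of the subword labeling rule described in Sec. 3.1 (a hedged sketch, not the authors' preprocessing code), the snippet below propagates one word-level label to every BPE subword of that word, assuming the "@@" continuation marker used in the example; note that the paper's example additionally uses B_PERSON/I_PERSON for the entity subwords, a BIO refinement this simple same-label rule does not apply.

```python
# Sketch: spread word-level POS/NER labels onto BPE subwords (same label per word).
def spread_labels(subwords, word_labels):
    """subwords use '@@' as a continuation marker; word_labels has one label per whole word."""
    out, w = [], 0
    for piece in subwords:
        out.append(word_labels[w])
        if not piece.endswith("@@"):   # last piece of the current word -> advance to next word
            w += 1
    return out

if __name__ == "__main__":
    subwords = ["Is", "An@@", "Celotti", "the", "man", "for", "the", "job", "?"]
    ner = ["O", "B_PERSON", "O", "O", "O", "O", "O", "O"]   # one tag per original word
    print(list(zip(subwords, spread_labels(subwords, ner))))
```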
{
"text": "Following Transformer (Vaswani et al., 2017) , we use a stack of 6 identical Transformer blocks as encoder. In addition to the word embedding and position embedding in the traditional Transformer, we add structure-aware label embedding. The input to the encoder block is the addition of the normalized word, labels (NER and POS) and position embedding, which is represented as",
"cite_spans": [
{
"start": 22,
"end": 44,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder",
"sec_num": "3.2"
},
{
"text": "H 0 = [h 0 1 , . . . , h 0 n ]. The input representation H 0 = [h 0 1 , . . . , h 0 n ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder",
"sec_num": "3.2"
},
{
"text": "is encoded into contextual layer representations through the Transformer blocks. For each layer, the layer representation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder",
"sec_num": "3.2"
},
{
"text": "H l = [h l 1 , . . . , h l n ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder",
"sec_num": "3.2"
},
{
"text": "is computed by the l-th layer Transformer block H l = Transformer l (H l\u22121 ), l \u2208 {1, 2, . . . , 6}. In each Transformer block, multiple self-attention heads are used to aggregate the output vectors of the previous layer. A general attention mechanism can be formulated as the weighted sum of the value vector V using the query vector Q and the key vector K:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Att(Q, K, V) = softmax QK T \u221a d model \u2022 V,",
"eq_num": "(3)"
}
],
"section": "Encoder",
"sec_num": "3.2"
},
{
"text": "where d model represents the dimension of hidden representations. For self-attention, Q, K, and V are mappings of previous hidden representation by different linear functions, i.e., Q =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder",
"sec_num": "3.2"
},
{
"text": "H l\u22121 W l Q , K = H l\u22121 W l K , and V = H l\u22121 W l V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder",
"sec_num": "3.2"
},
{
"text": ", respectively. At last, the encoder produces a final contextual representation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder",
"sec_num": "3.2"
},
{
"text": "H 6 = [h 6 1 , . . . , h 6 n ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder",
"sec_num": "3.2"
},
{
"text": ", which is obtained from the last Transformer block.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder",
"sec_num": "3.2"
},
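The NumPy sketch below (illustrative only, with toy sizes rather than the paper's d_model = 512) implements the scaled dot-product self-attention of Eq. (3), with Q, K, and V obtained from the previous layer's representation H^{l-1} through separate linear maps.

```python
# Sketch: one self-attention head as in Eq. (3).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(H_prev, W_q, W_k, W_v):
    """H_prev: (n, d_model); W_*: (d_model, d_model). Returns an (n, d_model) update."""
    Q, K, V = H_prev @ W_q, H_prev @ W_k, H_prev @ W_v
    d_model = H_prev.shape[-1]
    scores = Q @ K.T / np.sqrt(d_model)   # (n, n) token-to-token affinities
    return softmax(scores, axis=-1) @ V   # weighted sum of the value vectors

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 7, 16                          # toy sizes
    H = rng.normal(size=(n, d))
    Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
    print(self_attention(H, Wq, Wk, Wv).shape)   # (7, 16)
```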
{
"text": "The decoder also consists of 6 identical Transformer blocks, but with several key differences from the encoder. More concretely, we denote the contextual representations in the i-th decoder layer is Z i (1 \u2264 i \u2264 6). The input to the decoder block as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.3"
},
{
"text": "Z 0 = [z 0 1 , . . . , z 0 m ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.3"
},
{
"text": ", which is produced by the addition of the word, labels (NER and POS) copying from encoder input and positional embedding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.3"
},
{
"text": "For the target side input [x,L x ], most previous works simply copied partial source sentence with the length ratio n m where n refers to the source length and m is the target length as the decoder input. More concretely, the decoder input y i at the i-th position is simply a copy of the n m \u00d7 i th contextual representation, i.e., x n m \u00d7i from the encoder. From our investigation, in most cases, the gap between source length and target length is relatively small (e.g. 2). Therefore, it deletes or duplicates the copy of the last a few tokens of the source. If the last token is meaningful, the deletion will neglect important information. Otherwise, if the last token is trivial, multiple copies will add noise to the model. Instead, we propose a syntactic and semantic structure-aware mapping method considering the POS and NER labels to construct the decoder inputs. Our model first picks out the informative words with NOUN and VERB POS tags, and the ones recognized as entities by the NER module. If the source length is longer than the target length, we retain all informative words, and randomly delete the rest of the words. On the other hand, if the source length is shorter than the target, we retain all words and randomly duplicate the informative words. The corresponding label of a word is also deleted or preserved. Moreover, by copying the similar structural words from the source, it can provide more information to the target input than just copying the source token, which is greatly different from the target token. The POS and NER labels of those structure-aware copied words from the source sentence are also copied as the decoder input. So by using the structure-aware mapping, we can assign [x,L x ] as decoder input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.3"
},
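Below is a hedged Python sketch of one reading of the structure-aware copying rule in Sec. 3.3: informative tokens (NOUN/VERB POS tags or named entities) are always kept, and the remaining tokens are randomly deleted, or the informative ones randomly duplicated, until the copy matches the target length m. The deletion/duplication policy and the inputs are simplifications, not the released implementation.

```python
# Sketch: build the decoder input [x_hat, L_hat_x] by structure-aware copying of the source.
import random

def structure_aware_copy(tokens, pos_tags, ner_tags, target_len, seed=0):
    rng = random.Random(seed)
    items = list(zip(tokens, pos_tags, ner_tags))
    informative = [i for i, (_, p, n) in enumerate(items) if p in {"NOUN", "VERB"} or n != "O"]
    if len(items) > target_len:
        # Source longer than target: keep informative tokens, drop random others.
        removable = [i for i in range(len(items)) if i not in informative]
        drop = set(rng.sample(removable, min(len(items) - target_len, len(removable))))
        kept = [it for i, it in enumerate(items) if i not in drop][:target_len]
    else:
        # Source shorter than target: duplicate random informative tokens until lengths match.
        kept = list(items)
        pool = informative or list(range(len(items)))
        while len(kept) < target_len:
            i = rng.choice(pool)
            kept.insert(i, items[i])
    return [t for t, _, _ in kept], [(p, n) for _, p, n in kept]

if __name__ == "__main__":
    toks = ["Is", "An@@", "Celotti", "the", "man", "for", "the", "job", "?"]
    pos  = ["AUX", "PRON", "PRON", "DET", "NOUN", "ADP", "DET", "NOUN", "PUNCT"]
    ner  = ["O", "B_PERSON", "I_PERSON", "O", "O", "O", "O", "O", "O"]
    print(structure_aware_copy(toks, pos, ner, target_len=7))
```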
{
"text": "For positional attention which aims to learn the local word orders within the sentence (Gu et al., 2018) , we set positional embedding (Vaswani et al., 2017) as both Q and K, and hidden representations of the previous layer as V.",
"cite_spans": [
{
"start": 87,
"end": 104,
"text": "(Gu et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 135,
"end": 157,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.3"
},
{
"text": "For inter-attention, Q refers to hidden representations of the previous layer, whereas K and V are contextual vectors H 6 from the encoder. We modify the attention mask so that it does not mask out the future tokens, and every token is dependent on both its preceding and succeeding tokens in every layer. Therefore, the generation of each token can use bi-directional attention. The positionwise Feed-Forward Network (FFN) is applied after multi-head attention in both encoder and decoder. It consists of two fully-connected layers and a layer normalization (Ba et al., 2016) . The FFN takes Z 6 as input and calculates the final representation Z f , which is used to predict the whole target sentence and label:",
"cite_spans": [
{
"start": 559,
"end": 576,
"text": "(Ba et al., 2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.3"
},
{
"text": "p y |x,L x , x, L x = f Z f W w + b w , (4) q L y |x,L x , x, L x = f Z f W l + b l , (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.3"
},
{
"text": "where f is a GeLU activation function (Hendrycks and Gimpel, 2016) . W w and W l are the token embedding and structural label embedding in the input representation, respectively. We use different FFNs for POS and NER labels. To avoid redundancy, we just use q L y |x,L x , x, L x to represent the two predicted label likelihood in general.",
"cite_spans": [
{
"start": 38,
"end": 66,
"text": "(Hendrycks and Gimpel, 2016)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "3.3"
},
{
"text": "We use (x, L x , y * , L * y ) to denote a training instance. To introduce the label information, our proposed SNAT contains a discrete sequential latent variable L y 1:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.4"
},
{
"text": "m with conditional posterior dis- tribution p(L y 1:m |x,L x , x, L x ; \u03d5). It can be ap- proximated using a proposal distribution q(L y | x,L x , x, L x ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.4"
},
{
"text": "The approximation also provides a variational bound for the maximum log-likelihood:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.4"
},
{
"text": "log P SNAT = log m t=1 q L yt |x,L x , x, L x ; \u03d5 \u00d7 p y t |L yt ,x,L x , x, L x ; \u03d5 \u2265 E Ly 1:m \u223cq m t=1 log q L yt |x,L x , x, L x ; \u03d5 Label likelihood + m t=1 log p y t | L yt ,x,L x , x, L x ; \u03d5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.4"
},
{
"text": "Structure-aware word likelihood + H(q).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.4"
},
{
"text": "Note that, the resulting likelihood function, consisting of the two bracketed terms in Eq. (6), allows us to train the entire model in a supervised fashion. The inferred label can be utilized to train the label predicting model q and simultaneously supervise the structure-aware word model p. The label loss can be calculated by the cross-entropy H of L * yt and Eq. 5:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(6)",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L label = m t=1 H L * yt , q(L yt |x,L x , x, L x ) ,",
"eq_num": "(7)"
}
],
"section": "(6)",
"sec_num": null
},
{
"text": "The structure-aware word likelihood is conditioned on the generation result of the label. Since the Eq. (4) does not depend on the predicted label, we propose to bring the structure-aware word mask M wl \u2208 R |V word |\u00d7|V label | , where |V word | and |V label | are vocabulary sizes of word and label, respectively. The mask M w l is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(6)",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "M w l (i, j) = 1, A(y i ) = label j , , A(y i ) = label j ,",
"eq_num": "(8)"
}
],
"section": "(6)",
"sec_num": null
},
{
"text": "which can be obtained at the preprocessing stage, and A denotes the open-source pre-trained POS or NER annotator mentioned above. It aims to penalize the case when the word y i does not belong to the label label j with , which is a small number defined within the range of (0, 1) and will be tuned in our experiments. For example, the word \"great\" does not belong to VERB. The structure-aware word likelihood can be reformulated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(6)",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(y t | L yt ,x,L x , x, L x ; \u03d5) = p(y t |x,L x , x, L x ) \u00d7 M w l \u00d7 q(L yt |x,L x , x, L x ).",
"eq_num": "(9)"
}
],
"section": "(6)",
"sec_num": null
},
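To illustrate Eqs. (8) and (9), the NumPy sketch below builds a toy word-label compatibility mask M_wl from a hypothetical annotator A and combines it with a word distribution p and a label distribution q into a joint (word, label) score matrix; the tiny vocabularies and the value of epsilon are illustrative assumptions, not the paper's actual settings.

```python
# Sketch: structure-aware word mask (Eq. 8) and masked joint likelihood (Eq. 9).
import numpy as np

words  = ["great", "runs", "Obama", "."]
labels = ["ADJ", "VERB", "PROPN", "PUNCT"]
annotate = {"great": "ADJ", "runs": "VERB", "Obama": "PROPN", ".": "PUNCT"}  # stand-in for A(.)

eps = 0.1                                      # small mismatch penalty in (0, 1)
M = np.full((len(words), len(labels)), eps)    # Eq. (8): epsilon everywhere ...
for i, w in enumerate(words):
    M[i, labels.index(annotate[w])] = 1.0      # ... and 1 where A(y_i) matches label_j

p_word  = np.array([0.5, 0.2, 0.2, 0.1])       # p(y_t | ...): distribution over words
q_label = np.array([0.6, 0.2, 0.1, 0.1])       # q(L_{y_t} | ...): distribution over labels

joint = p_word[:, None] * M * q_label[None, :] # Eq. (9): one entry per (word, label) pair
print(joint.round(4))
print("best (word, label) indices:", np.unravel_index(joint.argmax(), joint.shape))
```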
{
"text": "Consequently, the structure-aware word loss L word is defined as the cross-entropy between true p (y * t |L * yt ) and predicted p(y",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(6)",
"sec_num": null
},
{
"text": "t | L yt ,x,L x , x, L x ; \u03d5), where p (y * t |L * yt ) \u2208 R |V word |\u00d7|V label |",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(6)",
"sec_num": null
},
{
"text": "is a matrix where only item at the index of (y * t , L * yt ) equals to 1, otherwise equals to 0. We reshape p (y * t |L * yt ) and p(y t |L yt ) to vectors when calculating the loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(6)",
"sec_num": null
},
{
"text": "Intermediate Alignment Regularization One main problem of NAT is that the decoder generation process does not depend on the previously generated tokens. Based on the bidirectional nature of SNAT decoder, the token can depend on every token of the decoder input. However, since the input of decoder [x,L x ] is the duplicate of encoder input [x, L x ], the generation depends on the encoder tokens rather than the target y * .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(6)",
"sec_num": null
},
{
"text": "To solve this problem, we align the output of the intermediate layer of the decoder with the target. The alignment makes the generation of following layers dependent on the coarse target-side information instead of the mere encoder input. This alignment idea is inspired by (Guo et al., 2019) , which directly feeds target-side tokens as inputs of the decoder by linearly mapping the source token embeddings to target embeddings. However, using one FFN layer to map different languages to the same space can hardly provide promising results. Thus, instead of aligning the input of decoder with the target, we use the intermediate layer of decoder to align with the target. In this case, our model avoids adding additional training parameters and manages to train the alignment together with SNAT model in an end-to-end fashion. Formally, we define the intermediate alignment regularization as cross-entropy loss between the predicted word and the true word:",
"cite_spans": [
{
"start": 274,
"end": 292,
"text": "(Guo et al., 2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "(6)",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L reg = m t=1 H y * t , FFN(Z md t ) ,",
"eq_num": "(10)"
}
],
"section": "(6)",
"sec_num": null
},
{
"text": "where Z md (1 < md < 6) represents the output of each intermediate layer. Consequently, the final loss of SNAT can be represented with the coefficient \u03bb as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(6)",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L SNAT = L word + L label + \u03bbL reg .",
"eq_num": "(11)"
}
],
"section": "(6)",
"sec_num": null
},
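A minimal PyTorch sketch of how the three terms of Eq. (11) could be combined is given below; the tensor shapes are assumed, the structure-aware masking of Eq. (9) is omitted from the word term for brevity, and this is not the authors' training code.

```python
# Sketch: L_SNAT = L_word + L_label + lambda * L_reg (Eqs. 7, 10, 11), with plain CE losses.
import torch
import torch.nn.functional as F

def snat_loss(word_logits, label_logits, mid_logits, y_true, label_true, lam=0.75):
    """All logits: (batch, m, vocab); y_true / label_true: (batch, m) integer ids."""
    l_word  = F.cross_entropy(word_logits.flatten(0, 1),  y_true.flatten())
    l_label = F.cross_entropy(label_logits.flatten(0, 1), label_true.flatten())
    l_reg   = F.cross_entropy(mid_logits.flatten(0, 1),   y_true.flatten())   # Eq. (10)
    return l_word + l_label + lam * l_reg                                     # Eq. (11)

if __name__ == "__main__":
    B, m, Vw, Vl = 2, 5, 100, 17
    loss = snat_loss(torch.randn(B, m, Vw), torch.randn(B, m, Vl),
                     torch.randn(B, m, Vw), torch.randint(Vw, (B, m)),
                     torch.randint(Vl, (B, m)))
    print(loss.item())
```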
{
"text": "In this section, we conduct experiments to evaluate the effectiveness and efficiency of our proposed model, with comprehensive analysis. (Wu et al., 2016) ; CNN-based results are from (Gehring et al., 2017) . \u2020 The Transformer (Vaswani et al., 2017) results are based on our own reproduction.",
"cite_spans": [
{
"start": 137,
"end": 154,
"text": "(Wu et al., 2016)",
"ref_id": "BIBREF30"
},
{
"start": 184,
"end": 206,
"text": "(Gehring et al., 2017)",
"ref_id": "BIBREF3"
},
{
"start": 227,
"end": 249,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
{
"text": "Latency Speedup LSTM Seq2Seq (Bahdanau et al., 2017) 24.60 -----Conv S2S (Gehring et al., 2017) 26.43 -30.02 ---Transformer \u2020 (Vaswani et al., 2017) 27.48 31.29 34.36 33.82 642ms 1.00X Non-autoregressive Models Latency Speedup NAT (Gu et al., 2018) 17.69 20.62 29.79 -39ms 15.6X NAT, rescoring 10 ( Gu et al., 2018) 18.66 22.41 --79ms 7.68X NAT, rescoring 100 (Gu et al., 2018) 19.17 23.20 --257ms 2.36X iNAT (Lee et al., 2018) 21.54 25.43 29.32 --5.78X Hint-NAT (Li et al., 2020) 21.11 25.24 --26ms 23.36X FlowSeq-base (Ma et al., 2019) 21 ",
"cite_spans": [
{
"start": 29,
"end": 52,
"text": "(Bahdanau et al., 2017)",
"ref_id": "BIBREF1"
},
{
"start": 73,
"end": 95,
"text": "(Gehring et al., 2017)",
"ref_id": "BIBREF3"
},
{
"start": 126,
"end": 148,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF27"
},
{
"start": 231,
"end": 248,
"text": "(Gu et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 299,
"end": 315,
"text": "Gu et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 360,
"end": 377,
"text": "(Gu et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 409,
"end": 427,
"text": "(Lee et al., 2018)",
"ref_id": "BIBREF12"
},
{
"start": 463,
"end": 480,
"text": "(Li et al., 2020)",
"ref_id": "BIBREF13"
},
{
"start": 520,
"end": 537,
"text": "(Ma et al., 2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "En\u2192De De\u2192En En\u2192Ro Ro\u2192En Autoregressive Models",
"sec_num": null
},
{
"text": "Data We evaluate SNAT performance on both the WMT14 En-De (around 4.5M sentence pairs) and the WMT16 En-Ro (around 610k sentence pairs) parallel corpora. For the parallel data, we use the processed data from (Ghazvininejad et al., 2019) to be consistent with previous publications. The dataset is processed with Moses script (Hoang and Koehn, 2008) , and the words are segmented into subword units using byte-pair encoding (Sennrich et al., 2016) . The WMT14 En-De task uses newstest-2013 and newstest-2014 as development and test sets, and WMT16 En-Ro task uses newsdev-2016 and newstest-2016 as development and test sets. We report all results on test sets. The vocabulary is shared between source and target languages and has \u223c36k units and \u223c34k units in WMT14 En-De and WMT16 En-Ro, respectively.",
"cite_spans": [
{
"start": 208,
"end": 236,
"text": "(Ghazvininejad et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 325,
"end": 348,
"text": "(Hoang and Koehn, 2008)",
"ref_id": "BIBREF9"
},
{
"start": 423,
"end": 446,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "Model Configuration Our implementation is based on the PyTorch sequence modeling toolkit Fairseq. 1 We follow the weights initialization scheme from BERT and follow the settings of the base Transformer configuration in (Vaswani et al., 1 https://github.com/pytorch/fairseq 2017) for all the models: 6 layers per stack, 8 attention heads per layer, 512 model dimensions and 2,048 hidden dimensions. The dimension of POS and NER embedding is set to 512 which is the same as the word embedding dimension. The autoregressive and non-autoregressive model have the same encoder-decoder structure, except for the decoder attention mask and the decoding input for the nonautoregressive model as described in Sec. 3. We try different values for the label mismatch penalty from {0.01, 0.1, 0.5} and find that 0.1 gives the best performance. The coefficient \u03bb is tested with different values from {0.25, 0.5, 0.75, 1}, and \u03bb = 0.75 outperforms other settings. We set the initial learning rate as values from {8e-6, 1e-5, 2e-5, 3e-5}, with a warm-up rate of 0.1 and L2 weight decay of 0.01. Sentences are tokenized and the maximum number of tokens in each step is set to 8,000. The maximum iteration step is set to 30,000, and we train the model with early stopping.",
"cite_spans": [
{
"start": 98,
"end": 99,
"text": "1",
"ref_id": null
},
{
"start": 219,
"end": 235,
"text": "(Vaswani et al.,",
"ref_id": null
},
{
"start": 236,
"end": 237,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "Baselines We choose the following models as baselines: NAT is a vanilla non-autoregressive Transformer model for NMT which is first introduced in (Gu et al., 2018) . iNAT (Lee et al., 2018) extends the vanilla NAT model by iteratively read-ing and refining the translation. The number of iterations is set to 10 for decoding. Hint-NAT (Li et al., 2020) utilizes the intermediate hidden states from an autoregressive teacher to improve the NAT model. FlowSeq (Ma et al., 2019) adopts normalizing flows (Kingma and Dhariwal, 2018) as latent variables for generation. ENAT (Guo et al., 2019) proposes two ways to enhance the decoder inputs to improve NAT models. The first one leverages a phrase table to translate source tokens to target tokens ENAT-P. The second one transforms sourceside word embedding into target-side word embeddings ENAT-E. DCRF-NAT designs an approximation of CRF for NAT models and further uses a dynamic transition technique to model positional context in the CRF. NAR-MT (Zhou and Keung, 2020) uses a large number of source texts from monolingual corpora to generate additional teacher outputs for NAR-MT training. AXE CMLM (Ghazvininejad et al., 2020) trains the conditional masked language models using a differentiable dynamic program to assign loss based on the best possible monotonic alignment between target tokens and model predictions.",
"cite_spans": [
{
"start": 146,
"end": 163,
"text": "(Gu et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 171,
"end": 189,
"text": "(Lee et al., 2018)",
"ref_id": "BIBREF12"
},
{
"start": 335,
"end": 352,
"text": "(Li et al., 2020)",
"ref_id": "BIBREF13"
},
{
"start": 458,
"end": 475,
"text": "(Ma et al., 2019)",
"ref_id": "BIBREF15"
},
{
"start": 570,
"end": 588,
"text": "(Guo et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 1148,
"end": 1176,
"text": "(Ghazvininejad et al., 2020)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "To obtain the part-of-speech and named entity labels, we use industrial-strength spaCy 2 to acquire the label for English, German, and Romanian input. In our implementation, there are 17 labels for POS in total, i.e., ADJ (adjective), ADV (adverb), ADP (ad-position), AUX (auxiliary), CCONJ (coordinating conjunction), DET (determiner), INTJ (interjection), NOUN (noun), NUM (numeral), PART (particle), PRON (pronoun), PROPN (proper noun), PUNCT (punctuation), SCONJ (subordinating conjunction), SYM (symbol), VERB (verb), and X (other). The NER task is trained on OntoNotes v5.0 benchmark dataset (Pradhan et al., 2013) using formatted BIO labels and defines 18 entity types: CARDINAL, DATE, EVENT, FAC, GPE, LAN-GUAGE, LAW, LOC, MONEY, NORP, ORDINAL, ORG, PERCENT, PERSON, PRODUCT, QUAN-TITY, TIME, and WORK OF ART.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Inference Details",
"sec_num": "4.2"
},
{
"text": "Knowledge Distillation Similar to previous works on non-autoregressive translation (Gu et al., 2018; Shu et al., 2020; Ghazvininejad et al., 2019) , we use sequence-level knowledge distillation by training SNAT on translations generated by a standard left-to-right Transformer model (i.e., Transformer-large for WMT14 EN\u2192DE, and Transformer-base for WMT16 EN\u2192RO). Specifically, we use scaling NMT (Ott et al., 2018) as the teacher model. We report the performance of standard autoregressive Transformer trained on distilled data for WMT14 EN\u2192DE and WMT16 EN\u2192RO. We average the last 5 checkpoints to obtain the final model. We train the model with cross-entropy loss and label smoothing ( = 0.1).",
"cite_spans": [
{
"start": 83,
"end": 100,
"text": "(Gu et al., 2018;",
"ref_id": "BIBREF6"
},
{
"start": 101,
"end": 118,
"text": "Shu et al., 2020;",
"ref_id": "BIBREF24"
},
{
"start": 119,
"end": 146,
"text": "Ghazvininejad et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 397,
"end": 415,
"text": "(Ott et al., 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Inference Details",
"sec_num": "4.2"
},
{
"text": "Inference During training, we do not need to predict the target length m since the target sentence is given. During inference, we use a simple method to select the target length for SNAT Li et al., 2020) . First, we set the target length to m = n + C, where n is the length of the source sentence and C is a constant bias term estimated from the overall length statistics of the training data. Then, we create a list of candidate target lengths with a range of [m \u2212 B, m + B] where B is the half-width of the interval. Finally, the model picks the best one from the generated 2B + 1 candidate sentences. In our experiments, we set the constant bias term C to 2 for WMT 14 EN\u2192DE, -2 for WMT 14 DE\u2192EN, 3 for WMT 16 EN\u2192RO, and -3 for WMT 14 RO\u2192EN according to the average lengths of different languages in the training sets. We set B to 4 or 9, and obtain corresponding 9 or 19 candidate translations for each sentence. Then we employ an autoregressive teacher model to rescore these candidates.",
"cite_spans": [
{
"start": 187,
"end": 203,
"text": "Li et al., 2020)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Inference Details",
"sec_num": "4.2"
},
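The short Python sketch below mirrors the length-selection procedure just described (target length m = n + C, candidates in [m - B, m + B], best candidate chosen by rescoring); the `rescore` function is a hypothetical stand-in for the autoregressive teacher model.

```python
# Sketch: candidate target lengths and teacher rescoring for SNAT inference.
def candidate_lengths(src_len: int, C: int, B: int):
    m = src_len + C
    return [L for L in range(m - B, m + B + 1) if L > 0]      # 2B + 1 candidates

def pick_translation(candidates, rescore):
    # `candidates` maps length -> decoded sentence; keep the highest-scoring one.
    return max(candidates.items(), key=lambda kv: rescore(kv[1]))[1]

if __name__ == "__main__":
    lens = candidate_lengths(src_len=20, C=2, B=4)            # e.g., the WMT14 En->De setting
    print(lens)                                                # 9 candidate lengths
    fake = {L: ["tok"] * L for L in lens}                      # dummy decodes, one per length
    print(len(pick_translation(fake, rescore=lambda s: -abs(len(s) - 22))))
```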
{
"text": "Experimental results are shown in Table 2 . We first compare the proposed method against autoregressive counterparts in terms of translation quality, which is measured by BLEU (Papineni et al., 2002) . For all our tasks, we obtain results comparable with the Transformer, the state-of-the-art autoregressive model. Our best model achieves 27.50 (+0.02 gain over Transformer), 30.82 (-0.46 gap with Transformer), 35.19 (+0.82 gain), and 33.98 (+0.16 gain) BLEU score on WMT14 En\u2194De and WMT16 EN\u2194Ro, respectively. More importantly, our SNAT decodes much faster than the Transformer, which is a big improvement regarding the speed-accuracy trade-off in AT and NAT models.",
"cite_spans": [
{
"start": 176,
"end": 199,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 34,
"end": 41,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4.3"
},
{
"text": "Comparing our models with other NAT models, we observe that the best SNAT model achieves a significant performance boost over NAT, iNAT, FlowSeq, ENAT, +5.96, +6.39, +6.05, +3.22, 3.93 and +3.97 in BLEU on WMT14 En\u2192De, respectively. This indicates that the incorporation of the syntactic and semantic structure greatly helps reduce the impact of the multimodality problem and thus narrows the performance gap between Autoregressive Transformer (AT) and Non-Autoregressive Transformer (NAT) models. In addition, we see a +0.69, +0.78, +0.68, and 0.52 gain of BLEU score over the best baselines on WMT14 En\u2192De, WMT14 De\u2192En, WMT16 En\u2192Ro and WMT16 Ro\u2192En, respectively.",
"cite_spans": [
{
"start": 126,
"end": 130,
"text": "NAT,",
"ref_id": null
},
{
"start": 131,
"end": 136,
"text": "iNAT,",
"ref_id": null
},
{
"start": 137,
"end": 145,
"text": "FlowSeq,",
"ref_id": null
},
{
"start": 146,
"end": 151,
"text": "ENAT,",
"ref_id": null
},
{
"start": 152,
"end": 158,
"text": "+5.96,",
"ref_id": null
},
{
"start": 159,
"end": 165,
"text": "+6.39,",
"ref_id": null
},
{
"start": 166,
"end": 172,
"text": "+6.05,",
"ref_id": null
},
{
"start": 173,
"end": 179,
"text": "+3.22,",
"ref_id": null
},
{
"start": 180,
"end": 184,
"text": "3.93",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4.3"
},
{
"text": "From the result of our methods at the last group in Table 2 , we find that the rescoring technique substantially assists in improving the performance, and dynamic decoding significantly reduces the time spent on rescoring while further accelerating the decoding process. On En\u2192De, rescoring 9 candidates leads to a gain of +2.23 BLEU, and rescoring 19 candidates gives a +2.86 BLEU score increment.",
"cite_spans": [],
"ref_spans": [
{
"start": 52,
"end": 59,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4.3"
},
{
"text": "Decoding Speed Following previous works (Gu et al., 2018; Lee et al., 2018; Guo et al., 2019) , we evaluate the average per-sentence decoding latency on WMT14 En\u2192De test sets with batch size being 1, under an environment of NVIDIA Titan RTX GPU for the Transformer model and the NAT models to measure the speedup. The latencies are obtained by taking an average of five runs. More clearly, We reproduce the Transformer on our machine. We copy the runtime of other models but the speedup ratio is between the runtime of their implemented Transformer and their proposed model. We think it's reasonable to compare the speedup ratio because it is independent of the influence caused by different implementation software or machines. And to clarify, the latency does not include preprocessing of tagging, because it's a very fast process as executing around 7000 sentences in one second.",
"cite_spans": [
{
"start": 40,
"end": 57,
"text": "(Gu et al., 2018;",
"ref_id": "BIBREF6"
},
{
"start": 58,
"end": 75,
"text": "Lee et al., 2018;",
"ref_id": "BIBREF12"
},
{
"start": 76,
"end": 93,
"text": "Guo et al., 2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4.3"
},
{
"text": "We can see from Table 2 that the best SNAT gets a 9.3 times decoding speedup than the Transformer, while achieving comparable or even better performance. Compared to other NAT models, we observe that the SNAT model is almost the fastest (only a little bit behind of ENAT and Hint-NAT) in terms of latency, and is surprisingly faster than DCRF-NAT with better performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 16,
"end": 23,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4.3"
},
{
"text": "Effect of Syntactic and Semantic Structure Information We investigate the effect of using the syntactic and semantic tag on the model performance. Experimental results are shown in Table 3 . It demonstrates that incorporating POS information boosts the translating performance (+1.37 on WMT14 En\u2192De) and NER information can also enhance the translating performance (+1.25 on WMT14 En\u2192De). The POS label enriches the model with the syntactic structure, while the NER label supplements the semantic information to the model which are critical elements for SNAT model to exhibit better translation performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 181,
"end": 188,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Ablation Analysis",
"sec_num": "4.4"
},
{
"text": "We conduct experiments for our SNAT model on WMT14 En\u2192De with various alignments between decoder layers and target. As shown in Table 4 , using the second layer Z 2 in the decoder as intermediate alignment can gain +1.21 improvement, while using the third layer Z 3 in the decoder as intermediate alignment can gain +1.46 improvement. This is in line with our expectation that aggregating layer-wise token information in intermediate layers can help improve the decoder's ability to capture token-token dependencies.",
"cite_spans": [],
"ref_spans": [
{
"start": 128,
"end": 135,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Effect of Intermediate Representation Alignment",
"sec_num": null
},
{
"text": "Effect of Sentence Length To evaluate different models on different sentence lengths, we conduct experiments on the WMT14 En\u2192De development set and divide the sentence pairs into different length buckets according to the length of the reference sentences. As shown in Table 5 , the column of 100 calculates the BLEU score of sentences that the length of the reference sentence is larger than 50 but smaller or equal to 100. We can see that the performance of vanilla NAT drops quickly as the sentence length increases from 10 to 50, while AT model and the proposed SNAT model have relatively stable performance over different sen- tence lengths. This result confirms the power of the proposed model in modeling long-term token dependencies.",
"cite_spans": [],
"ref_spans": [
{
"start": 268,
"end": 275,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Effect of Intermediate Representation Alignment",
"sec_num": null
},
{
"text": "In this paper, we have proposed a novel syntactic and semantic structure-aware non-autoregressive Transformer model SNAT for NMT. The proposed model aims at reducing the computational cost in inference as well as keeping the quality of translation by incorporating both syntactic and semantic structures existing among natural languages into a non-autoregressive Transformer. In addition, we have also designed an intermediate latent alignment regularization within target sentences to better learn the long-term token dependencies. Comprehensive experiments and analysis on two realworld datasets (i.e., WMT14 En\u2192De and WMT16 En\u2192Ro) verify the efficiency and effectiveness of our proposed approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "https://spacy.io/usage/models",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work is supported in part by NSF under grants III-1763325, III-1909323, and SaTC-1930941. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "An actor-critic algorithm for sequence prediction",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Philemon",
"middle": [],
"last": "Brakel",
"suffix": ""
},
{
"first": "Kelvin",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Anirudh",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2017,
"venue": "5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Dzmitry Bengio, Yoshua Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. An actor-critic algorithm for se- quence prediction. In 5th International Conference on Learning Representations, ICLR 2017.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Kermit: Generative insertion-based modeling for sequences",
"authors": [
{
"first": "William",
"middle": [],
"last": "Chan",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Kitaev",
"suffix": ""
},
{
"first": "Kelvin",
"middle": [],
"last": "Guu",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Stern",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.01604"
]
},
"num": null,
"urls": [],
"raw_text": "William Chan, Nikita Kitaev, Kelvin Guu, Mitchell Stern, and Jakob Uszkoreit. 2019. Kermit: Gener- ative insertion-based modeling for sequences. arXiv preprint arXiv:1906.01604.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Convolutional sequence to sequence learning",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Gehring",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Yarats",
"suffix": ""
},
{
"first": "Yann",
"middle": [
"N"
],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "1243--1252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning, volume 70, pages 1243-1252. PMLR.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Aligned cross entropy for non-autoregressive machine translation",
"authors": [
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Karpukhin",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "9330--9338",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marjan Ghazvininejad, Vladimir Karpukhin, Luke Zettlemoyer, and Omer Levy. 2020. Aligned cross entropy for non-autoregressive machine translation. In Proceedings of the International Conference on Machine Learning, pages 9330-9338.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Mask-predict: Parallel decoding of conditional masked language models",
"authors": [
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "6114--6123",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. 2019. Mask-predict: Parallel de- coding of conditional masked language models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 6114- 6123.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Nonautoregressive neural machine translation",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "O",
"middle": [
"K"
],
"last": "Victor",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, James Bradbury, Caiming Xiong, Vic- tor O.K. Li, and Richard Socher. 2018. Non- autoregressive neural machine translation. In Inter- national Conference on Learning Representations.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Non-autoregressive neural machine translation with enhanced decoder input",
"authors": [
{
"first": "Junliang",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Linli",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "3723--3730",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junliang Guo, Xu Tan, Di He, Tao Qin, Linli Xu, and Tie-Yan Liu. 2019. Non-autoregressive neural ma- chine translation with enhanced decoder input. In Proceedings of the AAAI Conference on Artificial In- telligence, volume 33, pages 3723-3730.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Gaussian error linear units (gelus)",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Hendrycks",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.08415"
]
},
"num": null,
"urls": [],
"raw_text": "Dan Hendrycks and Kevin Gimpel. 2016. Gaus- sian error linear units (gelus). arXiv preprint arXiv:1606.08415.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Design of the moses decoder for statistical machine translation",
"authors": [
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2008,
"venue": "Software Engineering, Testing, and Quality Assurance for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "58--65",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hieu Hoang and Philipp Koehn. 2008. Design of the moses decoder for statistical machine translation. In Software Engineering, Testing, and Quality Assur- ance for Natural Language Processing, pages 58- 65.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Fast decoding in sequence models using discrete latent variables",
"authors": [
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Aurko",
"middle": [],
"last": "Roy",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 35th International Conference on Machine Learning",
"volume": "80",
"issue": "",
"pages": "2390--2399",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lukasz Kaiser, Samy Bengio, Aurko Roy, Ashish Vaswani, Niki Parmar, Jakob Uszkoreit, and Noam Shazeer. 2018. Fast decoding in sequence models using discrete latent variables. In Proceedings of the 35th International Conference on Machine Learn- ing, volume 80 of Proceedings of Machine Learn- ing Research, pages 2390-2399, Stockholmsm\u00e4ssan, Stockholm Sweden. PMLR.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Glow: Generative flow with invertible 1x1 convolutions",
"authors": [
{
"first": "Durk",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Prafulla",
"middle": [],
"last": "Dhariwal",
"suffix": ""
}
],
"year": 2018,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "10215--10224",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Durk P Kingma and Prafulla Dhariwal. 2018. Glow: Generative flow with invertible 1x1 convolutions. In Advances in neural information processing systems, pages 10215-10224.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Deterministic non-autoregressive neural sequence modeling by iterative refinement",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Elman",
"middle": [],
"last": "Mansimov",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1173--1182",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1149"
]
},
"num": null,
"urls": [],
"raw_text": "Jason Lee, Elman Mansimov, and Kyunghyun Cho. 2018. Deterministic non-autoregressive neural se- quence modeling by iterative refinement. In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, pages 1173- 1182, Brussels, Belgium. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Hint-based training for non-autoregressive machine translation",
"authors": [
{
"first": "Zhuohan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Zi",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Liwei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhuohan Li, Zi Lin, Di He, Fei Tian, Tao Qin, Liwei Wang, and Tie-Yan Liu. 2020. Hint-based training for non-autoregressive machine translation. In Pro- ceedings of the International Conference on Learn- ing Representations.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "End-toend non-autoregressive neural machine translation with connectionist temporal classification",
"authors": [
{
"first": "Jind\u0159ich",
"middle": [],
"last": "Libovick\u00fd",
"suffix": ""
},
{
"first": "Jind\u0159ich",
"middle": [],
"last": "Helcl",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3016--3021",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1336"
]
},
"num": null,
"urls": [],
"raw_text": "Jind\u0159ich Libovick\u00fd and Jind\u0159ich Helcl. 2018. End-to- end non-autoregressive neural machine translation with connectionist temporal classification. In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, pages 3016- 3021, Brussels, Belgium. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "FlowSeq: Nonautoregressive conditional sequence generation with generative flow",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Chunting",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Xian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "4282--4292",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1437"
]
},
"num": null,
"urls": [],
"raw_text": "Xuezhe Ma, Chunting Zhou, Xian Li, Graham Neu- big, and Eduard Hovy. 2019. FlowSeq: Non- autoregressive conditional sequence generation with generative flow. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 4282-4292, Hong Kong, China. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Treebank-3. Linguistic Data Consortium",
"authors": [
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Taylor",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell P Marcus, Beatrice Santorini, Mary Ann Marcinkiewicz, and Ann Taylor. 1999. Treebank-3. Linguistic Data Consortium, Philadelphia, 14.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Scaling neural machine translation",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6301"
]
},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 1-9, Brussels, Belgium. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The proposition bank: An annotated corpus of semantic roles",
"authors": [
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Kingsbury",
"suffix": ""
}
],
"year": 2005,
"venue": "Computational linguistics",
"volume": "31",
"issue": "1",
"pages": "71--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated cor- pus of semantic roles. Computational linguistics, 31(1):71-106.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting of the Association for Compu- tational Linguistics, pages 311-318.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Towards robust linguistic analysis using ontonotes",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "Bj\u00f6rkelund",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Uryupina",
"suffix": ""
},
{
"first": "Yuchen",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhi",
"middle": [],
"last": "Zhong",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "143--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Bj\u00f6rkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards ro- bust linguistic analysis using ontonotes. In Pro- ceedings of the Seventeenth Conference on Computa- tional Natural Language Learning, pages 143-152.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Non-autoregressive machine translation with latent alignments",
"authors": [
{
"first": "Chitwan",
"middle": [],
"last": "Saharia",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Chan",
"suffix": ""
},
{
"first": "Saurabh",
"middle": [],
"last": "Saxena",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1098--1108",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.83"
]
},
"num": null,
"urls": [],
"raw_text": "Chitwan Saharia, William Chan, Saurabh Saxena, and Mohammad Norouzi. 2020. Non-autoregressive ma- chine translation with latent alignments. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1098-1108, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1715--1725",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics, pages 1715-1725, Berlin, Germany. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Minimizing the bag-ofngrams difference for non-autoregressive neural machine translation",
"authors": [
{
"first": "Chenze",
"middle": [],
"last": "Shao",
"suffix": ""
},
{
"first": "Jinchao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Fandong",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "34",
"issue": "",
"pages": "198--205",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chenze Shao, Jinchao Zhang, Yang Feng, Fandong Meng, and Jie Zhou. 2020. Minimizing the bag-of- ngrams difference for non-autoregressive neural ma- chine translation. In Proceedings of the AAAI Con- ference on Artificial Intelligence, volume 34, pages 198-205.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Latent-variable nonautoregressive neural machine translation with deterministic inference using a delta posterior",
"authors": [
{
"first": "Raphael",
"middle": [],
"last": "Shu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Hideki",
"middle": [],
"last": "Nakayama",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2020,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "8846--8853",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raphael Shu, Jason Lee, Hideki Nakayama, and Kyunghyun Cho. 2020. Latent-variable non- autoregressive neural machine translation with deter- ministic inference using a delta posterior. In AAAI, pages 8846-8853.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Insertion transformer: Flexible sequence generation via insertion operations",
"authors": [
{
"first": "Mitchell",
"middle": [],
"last": "Stern",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Chan",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 36th International Conference on Machine Learning",
"volume": "97",
"issue": "",
"pages": "5976--5985",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell Stern, William Chan, Jamie Kiros, and Jakob Uszkoreit. 2019. Insertion transformer: Flexible se- quence generation via insertion operations. In Pro- ceedings of the 36th International Conference on Machine Learning, volume 97, pages 5976-5985.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Fast structured decoding for sequence models",
"authors": [
{
"first": "Zhiqing",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Zhuohan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Haoqing",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Zi",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Zhihong",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3011--3020",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiqing Sun, Zhuohan Li, Haoqing Wang, Di He, Zi Lin, and Zhihong Deng. 2019. Fast structured de- coding for sequence models. In Advances in Neural Information Processing Systems, pages 3011-3020.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Non-autoregressive machine translation with auxiliary regularization",
"authors": [
{
"first": "Yiren",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "5377--5384",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yiren Wang, Fei Tian, Di He, Tao Qin, ChengXiang Zhai, and Tie-Yan Liu. 2019. Non-autoregressive machine translation with auxiliary regularization. In Proceedings of the AAAI Conference on Artificial In- telligence, volume 33, pages 5377-5384.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Imitation learning for nonautoregressive neural machine translation",
"authors": [
{
"first": "Bingzhen",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Mingxuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Junyang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1304--1312",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1125"
]
},
"num": null,
"urls": [],
"raw_text": "Bingzhen Wei, Mingxuan Wang, Hao Zhou, Junyang Lin, and Xu Sun. 2019. Imitation learning for non- autoregressive neural machine translation. In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 1304- 1312, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Macherey",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.08144"
]
},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine trans- lation. arXiv preprint arXiv:1609.08144.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Improving non-autoregressive neural machine translation with monolingual data",
"authors": [
{
"first": "Jiawei",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Phillip",
"middle": [],
"last": "Keung",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1893--1898",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiawei Zhou and Phillip Keung. 2020. Improving non-autoregressive neural machine translation with monolingual data. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 1893-1898, Online. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "corresponding POS tag set is {AUX, PRON, PRON, DET, NOUN, ADP, DET, NOUN, PUNCT} and the NER tag set is {O, B PERSON, I PERSON, O, O, O, O, O, O}. The data flow of the proposed model is also shown in Figure 1.",
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"num": null,
"html": null,
"text": "Performance of BLEU score on WMT14 En\u2194De and WMT16 En\u2194Ro tasks. \"-\" denotes that the results are not reported. LSTM-based results are from",
"content": "<table/>",
"type_str": "table"
},
"TABREF3": {
"num": null,
"html": null,
"text": "The performance of different vision of SNAT models on WMT14 En\u2192De development set. means selecting the label tag.",
"content": "<table><tr><td>Model</td><td>POS tag NER tag</td><td>BLEU</td></tr><tr><td>SNAT-V1</td><td/><td>24.21</td></tr><tr><td>SNAT-V2</td><td/><td>24.09</td></tr><tr><td>SNAT-V3</td><td/><td>22.84</td></tr></table>",
"type_str": "table"
},
"TABREF4": {
"num": null,
"html": null,
"text": "The performance with respect to using different layer of intermediate interaction. Evaluated by the BLEU score on WMT14 En\u2192De|WMT14 De\u2192En.",
"content": "<table><tr><td colspan=\"3\">Method WMT14 En\u2192 De WMT14 De\u2192 En</td></tr><tr><td>w/o</td><td>23.11</td><td>27.03</td></tr><tr><td>w/ Z 2</td><td>24.32</td><td>28.21</td></tr><tr><td>w/ Z 3</td><td>24.57</td><td>28.42</td></tr></table>",
"type_str": "table"
},
"TABREF5": {
"num": null,
"html": null,
"text": "The performance with respect to different sentence lengths. Evaluated by the BLEU score on WMT14 En\u2192De.",
"content": "<table><tr><td>Model</td><td>10</td><td>20</td><td>30</td><td>50</td><td>100</td></tr><tr><td>AT</td><td>28.35</td><td>28.32</td><td>28.30</td><td>24.26</td><td>20.73</td></tr><tr><td>NAT</td><td>21.31</td><td>19.55</td><td>17.19</td><td>16.31</td><td>11.35</td></tr><tr><td>SNAT</td><td>28.67</td><td>28.50</td><td>27.33</td><td>25.41</td><td>17.69</td></tr></table>",
"type_str": "table"
}
}
}
}