|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:27:28.909240Z" |
|
}, |
|
"title": "Capturing document context inside sentence-level neural machine translation models with self-training", |
|
"authors": [ |
|
{ |
|
"first": "Elman", |
|
"middle": [], |
|
"last": "Mansimov", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "New York University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "G\u00e1bor", |
|
"middle": [], |
|
"last": "Melis", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "New York University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Lei", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "New York University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Neural machine translation (NMT) has arguably achieved human level parity when trained and evaluated at the sentence-level. Document-level neural machine translation has received less attention and lags behind its sentence-level counterpart. The majority of the proposed document-level approaches investigate ways of conditioning the model on several source or target sentences to capture document context. These approaches require training a specialized NMT model from scratch on parallel document-level corpora. We propose an approach that doesn't require training a specialized model on parallel document-level corpora and is applied to a trained sentence-level NMT model at decoding time. We process the document from left to right multiple times and self-train the sentence-level model on pairs of source sentences and generated translations. Our approach reinforces the choices made by the model, thus making it more likely that the same choices will be made in other sentences in the document. We evaluate our approach on three document-level datasets: NIST Chinese-English, WMT19 Chinese-English and Open-Subtitles English-Russian. We demonstrate that our approach has higher BLEU score and higher human preference than the baseline. Qualitative analysis of our approach shows that choices made by model are consistent across the document.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Neural machine translation (NMT) has arguably achieved human level parity when trained and evaluated at the sentence-level. Document-level neural machine translation has received less attention and lags behind its sentence-level counterpart. The majority of the proposed document-level approaches investigate ways of conditioning the model on several source or target sentences to capture document context. These approaches require training a specialized NMT model from scratch on parallel document-level corpora. We propose an approach that doesn't require training a specialized model on parallel document-level corpora and is applied to a trained sentence-level NMT model at decoding time. We process the document from left to right multiple times and self-train the sentence-level model on pairs of source sentences and generated translations. Our approach reinforces the choices made by the model, thus making it more likely that the same choices will be made in other sentences in the document. We evaluate our approach on three document-level datasets: NIST Chinese-English, WMT19 Chinese-English and Open-Subtitles English-Russian. We demonstrate that our approach has higher BLEU score and higher human preference than the baseline. Qualitative analysis of our approach shows that choices made by model are consistent across the document.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Neural machine translation (NMT) Kalchbrenner and Blunsom, 2013; Bahdanau et al., 2014) has achieved great success, arguably reaching the levels of human parity (Hassan et al., 2018) on Chinese to English news translation that led to its popularity and adoption in academia and industry. These models are predominantly trained and evaluated on sentence-level parallel corpora. Document-level machine translation that requires capturing the context to accurately translate sentences has been recently gaining more popularity and was selected as one of the main tasks in the premier machine translation conference WMT19 (Barrault et al., 2019) and WMT20 (Barrault et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 33, |
|
"end": 64, |
|
"text": "Kalchbrenner and Blunsom, 2013;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 65, |
|
"end": 87, |
|
"text": "Bahdanau et al., 2014)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 161, |
|
"end": 182, |
|
"text": "(Hassan et al., 2018)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 618, |
|
"end": 641, |
|
"text": "(Barrault et al., 2019)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 652, |
|
"end": 675, |
|
"text": "(Barrault et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A straightforward solution to translate documents by translating sentences in isolation leads to inconsistent but syntactically valid text. The inconsistency is the result of the model not being able to resolve ambiguity with consistent choices across the document. For example, the recent NMT system that achieved human parity (Hassan et al., 2018) inconsistently used three different names \"Twitter Move Car\", \"WeChat mobile\", \"WeChat move\" when referring to the same entity (Sennrich, 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 328, |
|
"end": 349, |
|
"text": "(Hassan et al., 2018)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 477, |
|
"end": 493, |
|
"text": "(Sennrich, 2018)", |
|
"ref_id": "BIBREF41" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To tackle this issue, the majority of the previous approaches (Jean et al., 2017; Wang et al., 2017; Kuang et al., 2017; Tiedemann and Scherrer, 2017; Maruf and Haffari, 2018; Agrawal et al., 2018; Xiong et al., 2018; Miculicich et al., 2018; Voita et al., 2019a,b; Jean et al., 2019; Junczys-Dowmunt, 2019) proposed contextconditional NMT models trained on documentlevel data. However, none of the previous approaches are able to exploit trained NMT models on sentence-level parallel corpora and require training specialized context-conditional NMT models for document-level machine translation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 62, |
|
"end": 81, |
|
"text": "(Jean et al., 2017;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 82, |
|
"end": 100, |
|
"text": "Wang et al., 2017;", |
|
"ref_id": "BIBREF53" |
|
}, |
|
{ |
|
"start": 101, |
|
"end": 120, |
|
"text": "Kuang et al., 2017;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 121, |
|
"end": 150, |
|
"text": "Tiedemann and Scherrer, 2017;", |
|
"ref_id": "BIBREF47" |
|
}, |
|
{ |
|
"start": 151, |
|
"end": 175, |
|
"text": "Maruf and Haffari, 2018;", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 176, |
|
"end": 197, |
|
"text": "Agrawal et al., 2018;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 198, |
|
"end": 217, |
|
"text": "Xiong et al., 2018;", |
|
"ref_id": "BIBREF57" |
|
}, |
|
{ |
|
"start": 218, |
|
"end": 242, |
|
"text": "Miculicich et al., 2018;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 243, |
|
"end": 265, |
|
"text": "Voita et al., 2019a,b;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 266, |
|
"end": 284, |
|
"text": "Jean et al., 2019;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 285, |
|
"end": 307, |
|
"text": "Junczys-Dowmunt, 2019)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We propose a way of incorporating context into a trained sentence-level neural machine translation model at decoding time. We process each document monotonically from left to right one sentence at a time and self-train the sentence-level NMT model on its own generated translation. This procedure reinforces choices made by the model and hence increases the chance of making the same choices in the remaining sentences in the document. Our approach does not require training a separate context-conditional model on parallel document-Algorithm 1: Document-level NMT with self-training at decoding time Input: Document D = (X 1 , ..., X n ), pretrained sentence-level NMT model f (\u03b8), learning rate \u03b1, decay prior \u03bb and number of passes over document P Output:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{

"text": "Algorithm 1: Document-level NMT with self-training at decoding time. Input: document D = (X_1, ..., X_n), pretrained sentence-level NMT model f(\u03b8), learning rate \u03b1, decay prior \u03bb, number of update steps m and number of passes over the document P. Output: translated sentences (Y_1, ..., Y_n). Back up the original parameter values: \u03b8\u0302 \u2190 \u03b8. For p = 1 to P (multi-pass over the document): for i = 1 to n: translate sentence X_i with the sentence-level model f(\u03b8) into the target sentence Y_i; compute the cross-entropy loss L(X_i, Y_i) using Y_i as the target; for j = 1 to m: \u03b8 \u2190 \u03b8 \u2212 \u03b1\u2207_\u03b8 L(X_i, Y_i) + \u03bb(\u03b8\u0302 \u2212 \u03b8).",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Introduction",

"sec_num": "1"

},
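
{

"text": "Algorithm 1 can be sketched in a few lines of PyTorch-style Python. The sketch below is illustrative rather than the Tensor2Tensor implementation used in the experiments: the model object with translate(src) and loss(src, tgt) methods is an assumption, and the default hyperparameter values merely mirror the tuned ranges reported in Section 4.2.\n\nimport torch\n\n# Illustrative sketch of Algorithm 1 (not the authors' Tensor2Tensor code).\n# Assumes model.translate(src) returns a hypothesis and model.loss(src, tgt) returns a scalar loss.\ndef translate_document(model, document, lr=0.01, decay=0.3, steps=2, passes=2):\n    theta_hat = {name: p.detach().clone() for name, p in model.named_parameters()}  # backup of the original weights\n    translations = [None] * len(document)\n    for _ in range(passes):  # multi-pass over the document\n        for i, src in enumerate(document):  # left to right, one sentence at a time\n            translations[i] = model.translate(src)  # decode with the current parameters\n            for _ in range(steps):  # self-train on (source, generated translation)\n                model.zero_grad()\n                loss = model.loss(src, translations[i])  # cross-entropy with the generated target\n                loss.backward()\n                with torch.no_grad():\n                    for name, p in model.named_parameters():\n                        # theta <- theta - lr * grad + decay * (theta_hat - theta)\n                        p.add_(-lr * p.grad + decay * (theta_hat[name] - p))\n    with torch.no_grad():  # restore the original sentence-level model for the next document\n        for name, p in model.named_parameters():\n            p.copy_(theta_hat[name])\n    return translations",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Introduction",

"sec_num": "1"

},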
|
{ |
|
"text": "We make the key contribution in the paper by introducing the document-level neural machine translation approach that does not require training a context-conditional model on document data and does not require separate document-level language model to rank the outputs of the NMT model according to consistency of translated document. We show how to adapt a trained sentence-level neural machine translation model to capture context in the document during decoding. We evaluate and demonstrate improvements of our proposed approach measured by BLEU score and preferences of human annotators on several document-level machine translation tasks including NIST Chinese-English, WMT19 Chinese-English and OpenSubtitles English-Russian datasets. We qualitatively analyze the decoded sentences produced using our approach and show that they indeed capture the context.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We translate a document D consisting of n source sentences X 1 , X 2 , ..., X n into the target language, given a well-trained sentence-level neural machine translation model f \u03b8 . The sentencelevel model parametrizes a conditional distribution p(Y |X) = T i=1 p(y t |Y <t , X) of each target word y t given the preceding words Y <t and the source sentence X. Decoding is done by approximately finding arg max Y p(Y |X) using greedy decoding or beam-search. f is typically a recurrent neural network with attention (Bahdanau et al., 2014) or a Transformer model (Vaswani et al., 2017) with parameters \u03b8.", |
|
"cite_spans": [ |
|
{ |
|
"start": 515, |
|
"end": 538, |
|
"text": "(Bahdanau et al., 2014)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 562, |
|
"end": 584, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF50" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proposed Approach", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We start by translating a first source sentence X 1 in the document D into the target sentence Y 1 . We then self-train the model on the sentence pair (X 1 , Y 1 ), which maximizes the log probabilities of each word in the generated sentence Y 1 given source sentence X 1 . The self-training procedure runs gradient descent steps for a fixed number of steps with a weight decay. Weight decay keeps the updated values of weights closer to original values. We repeat the same update process for the remaining sentences in the document. The detailed implementation of self-training procedure during decoding is shown in Algorithm 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Self-training", |
|
"sec_num": "2.1" |
|
}, |
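
{

"text": "Each inner update in Algorithm 1, \u03b8 \u2190 \u03b8 \u2212 \u03b1\u2207_\u03b8 L(X_i, Y_i) + \u03bb(\u03b8\u0302 \u2212 \u03b8), can equivalently be read as one gradient descent step with learning rate \u03b1 on the proximal objective L(X_i, Y_i) + (\u03bb / (2\u03b1)) ||\u03b8 \u2212 \u03b8\u0302||\u00b2, where \u03b8\u0302 denotes the backed-up original parameters. This reading, which is our interpretation of the update rule rather than a formulation stated in the paper, makes explicit that the decay prior \u03bb penalizes drift of the adapted weights away from the original sentence-level parameters.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Self-training",

"sec_num": "2.1"

},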
|
{ |
|
"text": "Since the document is processed in the left-to-right, monotonic order, our self-training procedure does not incorporate the choices of the model yet to be made on unprocessed sentences. In order to leverage global information from the full document and to further reinforce the choices made by the model across all generated sentences, we propose multipass document decoding with self-training. Specifically, we process the document multiple times monotonically from left to right while continuing self-training of the model. Multi-pass self-training only requires adding additional parameter P to selftraining Algorithm 1. This parameter specifies the number of passes over the entire document.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-pass self-training", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Since generated sentences are likely to contain some errors, our self-training procedure can reinforce those errors and thus potentially hurt the performance of the model on unprocessed sentences in the document. In order to isolate the effect of imperfect translations and estimate the upper bound of performance, we evaluate our self-training procedure with ground-truth translations as targets, which we call oracle self-training. Running oracle self-training makes it similar to the dynamic evaluation approach introduced in language modeling (Mikolov, 2012; Graves, 2013; Krause et al., 2018) , where input text to the language model is the target used to train the neural language model during evaluation. Oracle self-training is also related to domain adaptation in machine translation (Axelrod et al., 2011; Freitag and Al-Onaizan, 2016; Chu and Wang, 2018) . Unlike domain adaptation in MT, oracle self-training only runs adaptation within a single document and does not rely on the entire in-domain document-level test data. We do not use the oracle in multi-pass self-training since this would make it equivalent to memorizing the correct translation for each sentence in the document and regenerating it again.", |
|
"cite_spans": [ |
|
{ |
|
"start": 547, |
|
"end": 562, |
|
"text": "(Mikolov, 2012;", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 563, |
|
"end": 576, |
|
"text": "Graves, 2013;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 577, |
|
"end": 597, |
|
"text": "Krause et al., 2018)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 793, |
|
"end": 815, |
|
"text": "(Axelrod et al., 2011;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 816, |
|
"end": 845, |
|
"text": "Freitag and Al-Onaizan, 2016;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 846, |
|
"end": 865, |
|
"text": "Chu and Wang, 2018)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Oracle self-training to upper bound performance", |
|
"sec_num": "2.3" |
|
}, |
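
{

"text": "In terms of the sketch of Algorithm 1 given earlier, oracle self-training amounts to a one-line change, assuming a list references of ground-truth target sentences is available for the document:\n\nloss = model.loss(src, references[i])   # oracle self-training: reference translation as target\n# instead of\nloss = model.loss(src, translations[i])  # standard self-training: generated translation as target\n\nConsistent with the description above, the oracle variant is only run with a single pass over the document (passes=1).",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Oracle self-training to upper bound performance",

"sec_num": "2.3"

},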
|
{ |
|
"text": "Although there have been some attempts at tackling document-level neural machine translation (for example see proceedings of discourse in machine translation workshop (Popescu-Belis et al., 2019)), it has largely received less attention compared to sentence-level neural machine translation. Prior document-level NMT approaches (Jean et al., 2017; Wang et al., 2017; Kuang et al., 2017; Tiedemann and Scherrer, 2017; Maruf and Haffari, 2018; Agrawal et al., 2018; Miculicich et al., 2018) proposed different ways of conditioning NMT models on several source sentences in the document. Perhaps closest of those document NMT approaches to our work is the approach by Kuang et al. (2017) , where they train a NMT model with a separate non-parametric cache (Kuhn and Mori, 1990 ) that incorporates topic information about the document. Recent approaches (Jean et al., 2019; Junczys-Dowmunt, 2019; Voita et al., 2019a) use only partially available parallel document data or monolingual document data. These approaches proposed to fill in missing context in the documents with random or generated sentences. Another line of document-level NMT work (Xiong et al., 2018; Voita et al., 2019b) proposed a twopass document decoding model inspired by the deliberation network (Xia et al., 2017) in order to incorporate target side document context. A parallel line of work (Garcia et al., 2017 (Garcia et al., , 2019 Yu et al., 2019) introduced document-level approaches that do not require training the context-conditional NMT model by introducing a separate language model to enforce the consistency in the outputs of sentence-level NMT model. Garcia et al. (2019) used a simple n-gram based semantic space language model (Hardmeier et al., 2012) to re-rank the outputs of the sentence-level NMT model inside the beam-search algorithm to enforce documentlevel consistency. Yu et al. (2019) proposed a novel beam search method that incorporates document context inside noisy channel model (Shannon, 1948; Yee et al., 2019) that uses a powerful GPT2 language model (Radford et al., 2019) . Similar to our work, their approach doesn't require training context-conditional models on parallel document corpora, but relies on separate target-to-source NMT model and unconditional language model to re-rank hypotheses of the sourceto-target NMT model.", |
|
"cite_spans": [ |
|
{ |
|
"start": 328, |
|
"end": 347, |
|
"text": "(Jean et al., 2017;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 348, |
|
"end": 366, |
|
"text": "Wang et al., 2017;", |
|
"ref_id": "BIBREF53" |
|
}, |
|
{ |
|
"start": 367, |
|
"end": 386, |
|
"text": "Kuang et al., 2017;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 387, |
|
"end": 416, |
|
"text": "Tiedemann and Scherrer, 2017;", |
|
"ref_id": "BIBREF47" |
|
}, |
|
{ |
|
"start": 417, |
|
"end": 441, |
|
"text": "Maruf and Haffari, 2018;", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 442, |
|
"end": 463, |
|
"text": "Agrawal et al., 2018;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 464, |
|
"end": 488, |
|
"text": "Miculicich et al., 2018)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 665, |
|
"end": 684, |
|
"text": "Kuang et al. (2017)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 753, |
|
"end": 773, |
|
"text": "(Kuhn and Mori, 1990", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 850, |
|
"end": 869, |
|
"text": "(Jean et al., 2019;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 870, |
|
"end": 892, |
|
"text": "Junczys-Dowmunt, 2019;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 893, |
|
"end": 913, |
|
"text": "Voita et al., 2019a)", |
|
"ref_id": "BIBREF51" |
|
}, |
|
{ |
|
"start": 1142, |
|
"end": 1162, |
|
"text": "(Xiong et al., 2018;", |
|
"ref_id": "BIBREF57" |
|
}, |
|
{ |
|
"start": 1163, |
|
"end": 1183, |
|
"text": "Voita et al., 2019b)", |
|
"ref_id": "BIBREF52" |
|
}, |
|
{ |
|
"start": 1264, |
|
"end": 1282, |
|
"text": "(Xia et al., 2017)", |
|
"ref_id": "BIBREF56" |
|
}, |
|
{ |
|
"start": 1361, |
|
"end": 1381, |
|
"text": "(Garcia et al., 2017", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 1382, |
|
"end": 1404, |
|
"text": "(Garcia et al., , 2019", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 1405, |
|
"end": 1421, |
|
"text": "Yu et al., 2019)", |
|
"ref_id": "BIBREF60" |
|
}, |
|
{ |
|
"start": 1634, |
|
"end": 1654, |
|
"text": "Garcia et al. (2019)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 1712, |
|
"end": 1736, |
|
"text": "(Hardmeier et al., 2012)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1863, |
|
"end": 1879, |
|
"text": "Yu et al. (2019)", |
|
"ref_id": "BIBREF60" |
|
}, |
|
{ |
|
"start": 1978, |
|
"end": 1993, |
|
"text": "(Shannon, 1948;", |
|
"ref_id": "BIBREF44" |
|
}, |
|
{ |
|
"start": 1994, |
|
"end": 2011, |
|
"text": "Yee et al., 2019)", |
|
"ref_id": "BIBREF59" |
|
}, |
|
{ |
|
"start": 2053, |
|
"end": 2075, |
|
"text": "(Radford et al., 2019)", |
|
"ref_id": "BIBREF38" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Closest to our work is the dynamic evaluation approach proposed by Mikolov (2012) and further extended by Graves (2013) ; Krause et al. (2018) , where a neural language model is trained at evaluation time. However unlike language modeling where inputs are ground-truth targets used both during training and evaluation, in machine translation ground-truth translation are not available at decoding time in practical settings. The general idea of storing memories in the weights of the neural network rather than storing memories as copies of neural network activations, that is behind our approach and dynamic evaluation, goes back to 1970s and 1980s work on associative memory models (Willshaw et al., 1969; Kohonen, 1972; Anderson and Hinton, 1981; Hopfield, 1982) and to more recent work on fast weights (Ba et al., 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 67, |
|
"end": 81, |
|
"text": "Mikolov (2012)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 106, |
|
"end": 119, |
|
"text": "Graves (2013)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 122, |
|
"end": 142, |
|
"text": "Krause et al. (2018)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 684, |
|
"end": 707, |
|
"text": "(Willshaw et al., 1969;", |
|
"ref_id": "BIBREF54" |
|
}, |
|
{ |
|
"start": 708, |
|
"end": 722, |
|
"text": "Kohonen, 1972;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 723, |
|
"end": 749, |
|
"text": "Anderson and Hinton, 1981;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 750, |
|
"end": 765, |
|
"text": "Hopfield, 1982)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 806, |
|
"end": 823, |
|
"text": "(Ba et al., 2016)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Our work belongs to the broad category of selftraining or pseudo-labelling approaches (Scudder, 1965; Lee, 2013) proposed to annotate the unlabeled data to train supervised classifiers. Selftraining has been successfully applied to NLP tasks such as word-sense disambiguation (Yarowsky, 1995) and parsing (McClosky et al., 2006; Reichart and Rappoport, 2007; Huang and Harper, 2009) . Self-training has also been used to label monolingual data to improve the performance of sentencelevel statistical and neural machine translation models (Ueffing, 2006; Zhang and Zong, 2016) . Recently, proposed noisy version of self-training and showed improvement over classical self-training on machine translation and text summarization tasks. Backtranslation (Sennrich et al., 2016a) is another popular pseudo-labelling technique that utilizes target-side monolingual data to improve performance of NMT models. Table 1 : Results on NIST evaluation sets. The first four rows show the performance of the previous document-level NMT models from (Wang et al., 2017; Kuang et al., 2017; . The last four rows show performance of our baseline sentence-level Transformer models with and without self-training. BT: backtranslation. Table 2 : Ablation study on NIST evaluation sets measuring the effect on multiple passes of decoding and the oracle on self-training procedure. BT: backtranslation. ST: self-training.", |
|
"cite_spans": [ |
|
{ |
|
"start": 86, |
|
"end": 101, |
|
"text": "(Scudder, 1965;", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 102, |
|
"end": 112, |
|
"text": "Lee, 2013)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 276, |
|
"end": 292, |
|
"text": "(Yarowsky, 1995)", |
|
"ref_id": "BIBREF58" |
|
}, |
|
{ |
|
"start": 305, |
|
"end": 328, |
|
"text": "(McClosky et al., 2006;", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 329, |
|
"end": 358, |
|
"text": "Reichart and Rappoport, 2007;", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 359, |
|
"end": 382, |
|
"text": "Huang and Harper, 2009)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 538, |
|
"end": 553, |
|
"text": "(Ueffing, 2006;", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 554, |
|
"end": 575, |
|
"text": "Zhang and Zong, 2016)", |
|
"ref_id": "BIBREF63" |
|
}, |
|
{ |
|
"start": 749, |
|
"end": 773, |
|
"text": "(Sennrich et al., 2016a)", |
|
"ref_id": "BIBREF42" |
|
}, |
|
{ |
|
"start": 1032, |
|
"end": 1051, |
|
"text": "(Wang et al., 2017;", |
|
"ref_id": "BIBREF53" |
|
}, |
|
{ |
|
"start": 1052, |
|
"end": 1071, |
|
"text": "Kuang et al., 2017;", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 901, |
|
"end": 908, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1213, |
|
"end": 1220, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We use the NIST Chinese-English (Zh-En), the WMT19 Chinese-English (Zh-En) and the Open-Subtitles English-Russian (En-Ru) datasets in our experiments. The NIST training set consists of 1.5M sentence pairs from LDC-distributed news. We use MT06 set as validation set. We use MT03, MT04, MT05 and MT08 sets as held out test sets. The MT06 validation set consists of 1649 sentences with 21 sentences per document. MT03, MT04, MT05 and MT08 consist of 919, 1788, 1082 and 1357 sentences with 9, 9, 11 and 13 sentences on average per document respectively. We follow previous work when preprocessing NIST dataset. We preprocess the NIST dataset with punctuation normalization, tokenization, and lowercasing. Sentences are encoded using byte-pair encoding (Sennrich et al., 2016b) with source and target vocabularies of roughly 32K tokens. We use the case-insensitive multi-bleu.perl script with 4 reference files to evaluate the model. The WMT19 dataset includes the UN corpus, CWMT, and news commentary. We filter the training data by removing duplicate sentences and sen-tences longer than 250 words. The training dataset consits of 18M sentence pairs. We use news-dev2017 as a validation set and use newstest2017, newstest2018 and newstest2019 as held out test sets. newsdev2017, newstest2017, newstest2018 and newstest2019 consist of total of 2002, 2001, 3981 and 2000 sentences with average of 14, 12, 15 and 12 sentences per document respectively. We similarly follow previous work (Xia et al., 2019) when preprocessing the dataset. Chinese sentences are preprocessed by segmenting and normalizing punctuation. English sentences are preprocessed by tokenizing and true casing. We learn a byte-pair encoding (Sennrich et al., 2016b) with source and target vocabularies of roughly 32K tokens. We use sacreBLEU (Post, 2018) for evaluation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 750, |
|
"end": 774, |
|
"text": "(Sennrich et al., 2016b)", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 1483, |
|
"end": 1501, |
|
"text": "(Xia et al., 2019)", |
|
"ref_id": "BIBREF55" |
|
}, |
|
{ |
|
"start": 1708, |
|
"end": 1732, |
|
"text": "(Sennrich et al., 2016b)", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 1809, |
|
"end": 1821, |
|
"text": "(Post, 2018)", |
|
"ref_id": "BIBREF37" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The OpenSubtitles English-Russian dataset, consisting of movie and TV subtitles, was prepared by (Voita et al., 2019b) . 1 The training dataset consists of 6M parallel sentence pairs. We use the context aware sets provided by the authors consisting of 10000 documents both in validation and test sets. Due to the way the dataset is processed, each document only contains 4 sentences. The dataset is preprocessed by tokenizing and lower casing. We use byte-pair encoding (Sennrich et al., 2016b) to prepare source and target vocabularies of roughly 32K tokens. We use multi-bleu.perl script for evaluation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 97, |
|
"end": 118, |
|
"text": "(Voita et al., 2019b)", |
|
"ref_id": "BIBREF52" |
|
}, |
|
{ |
|
"start": 121, |
|
"end": 122, |
|
"text": "1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 470, |
|
"end": 494, |
|
"text": "(Sennrich et al., 2016b)", |
|
"ref_id": "BIBREF43" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We train a Transformer (Vaswani et al., 2017) on all datasets. Following previous Voita et al., 2019b; Xia et al., 2019) work we use the Transformer base configuration (transformer_base) on the NIST Zh-En and the OpenSubtitles En-Ru datasets and use the Transformer big configuration (transformer_big) on the WMT19 Zh-En dataset. Transformer base consists of 6 layers, 512 hidden units and 8 attention heads. Transformer big consists of 6 layers, 1024 hidden units and 16 attention heads. We use a dropout rate (Srivastava et al., 2014) of 0.1 and label smoothing to regularize our models. We train our models with the Adam optimizer (Kingma and Ba, 2014) using the same warm-up learning rate schedule as in (Vaswani et al., 2017) . During decoding we use beam search with beam size 4 and length penalty 0.6. We additionally train backtranslated models (Sennrich et al., 2016a) on the NIST Zh-En and the OpenSubtitles En-Ru datasets. We use the publicly available English gigaword dataset (Graff et al., 2003) to create synthetic parallel data for the NIST Zh-En dataset and use synthetic parallel data provided by (Voita et al., 2019a) for the OpenSubtitles En-Ru dataset. When training backtranslated models, we oversample the original parallel data to make the ratio of synthetic data to original data equal to 1 (Edunov et al., 2018) . We tune the number of update steps, learning rate, decay rate, and number of passes over the document of our selftraining approach with a random search on a validation set. We use the range of (5 \u00d7 10 \u22125 , 5 \u00d7 10 \u22121 ) for learning rate, range of (0.001, 0.999) for decay rate, number of update steps (2, 4, 8) and number of passes over the document (2, 4) for random search. We found that best performing models required a small number of update steps (either 2 or 4) with a relatively large learning rate (\u223c 0.005 \u2212 0.01) and small decay rate (\u223c 0.2 \u2212 0.5). We use 3 random seeds to train each model in our experiments and report the average results. We use the Ten-sor2Tensor library (Vaswani et al., 2018) to train baseline models and to implement our method.", |
|
"cite_spans": [ |
|
{ |
|
"start": 23, |
|
"end": 45, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF50" |
|
}, |
|
{ |
|
"start": 82, |
|
"end": 102, |
|
"text": "Voita et al., 2019b;", |
|
"ref_id": "BIBREF52" |
|
}, |
|
{ |
|
"start": 103, |
|
"end": 120, |
|
"text": "Xia et al., 2019)", |
|
"ref_id": "BIBREF55" |
|
}, |
|
{ |
|
"start": 708, |
|
"end": 730, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF50" |
|
}, |
|
{ |
|
"start": 853, |
|
"end": 877, |
|
"text": "(Sennrich et al., 2016a)", |
|
"ref_id": "BIBREF42" |
|
}, |
|
{ |
|
"start": 989, |
|
"end": 1009, |
|
"text": "(Graff et al., 2003)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 1115, |
|
"end": 1136, |
|
"text": "(Voita et al., 2019a)", |
|
"ref_id": "BIBREF51" |
|
}, |
|
{ |
|
"start": 1316, |
|
"end": 1337, |
|
"text": "(Edunov et al., 2018)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 2026, |
|
"end": 2048, |
|
"text": "(Vaswani et al., 2018)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hyperparameters", |
|
"sec_num": "4.2" |
|
}, |
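
{

"text": "The random search described above can be sketched as follows; the ranges are those reported in the text, while the log-uniform sampling of the learning rate and the number of trials are assumptions made for illustration.\n\nimport random\n\ndef sample_config():\n    return {\n        'learning_rate': 10 ** random.uniform(-4.3, -0.3),  # roughly the interval (5e-5, 5e-1)\n        'decay_rate': random.uniform(0.001, 0.999),\n        'update_steps': random.choice([2, 4, 8]),\n        'passes': random.choice([2, 4]),\n    }\n\ntrials = [sample_config() for _ in range(20)]  # the number of trials is not reported; 20 is a placeholder\n# each sampled configuration is scored by decoding the validation documents with\n# the self-training procedure and keeping the configuration with the highest BLEU",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Hyperparameters",

"sec_num": "4.2"

},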
|
{ |
|
"text": "We present translation quality results measured by BLEU on NIST dataset on Table 1 . The selftraining procedure improves the results of our sentence-level baseline by the average of 0.53 BLEU for non-backtranslated model and by 0.93 BLEU for backtranslated model for all evaluation sets. Our baseline sentence-level Transformer model trained without backtranslation outperforms previous document-level models by Wang et al. (2017) and Kuang et al. (2017) and is comparable to the document-level model proposed by . Backtranslation further improves the results of our sentence-level model leading to higher BLEU score compared to the Document Transformer .", |
|
"cite_spans": [ |
|
{ |
|
"start": 412, |
|
"end": 430, |
|
"text": "Wang et al. (2017)", |
|
"ref_id": "BIBREF53" |
|
}, |
|
{ |
|
"start": 435, |
|
"end": 454, |
|
"text": "Kuang et al. (2017)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 75, |
|
"end": 82, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In Table 2 , we show a detailed study of effects of multi-pass self-training and oracle self-training on BLEU scores on NIST evaluation sets. First, multiple decoding passes over the document give an additional average improvement of 0.25\u22120.45 BLEU points compared to the single decoding pass over the document. Using oracle self-training procedure gives an average of 0.86 and 1.63 BLEU improvement over our non-backtranslated and backtranslated sentence-level baseline models respectively. Compared to using generated translations by the model, oracle self-training gives an improvement of 0.3 and 0.7 BLEU points for non-backtranslated and backtranslated models respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The results on the WMT19 evaluation sets are presented on Table 3 . Compared to the NIST dataset our self-training procedure shows an improvement of 0.1 BLEU over a sentence-level baseline model. Oracle self-training outperforms sentence-level baselines by a significant margin of 2.5 BLEU. We hypothesize that such a large gap between performance of oracle and non-oracle selftraining is due to the more challenging nature of the WMT dataset which is reflected in the worse performance of sentence-level baseline on WMT compared to NIST. We investigate this claim by measuring the relationship between BLEU achieved by self-training and the relative quality of the sentencelevel model on the NIST dataset. Figure 1 shows that the BLEU difference between self-training and sentence-level models monotonically increases as the quality of the sentence-level model gets better on the NIST dataset. This implies that we can expect a larger improvement from applying selftraining as we improve the sentence-level model (Xia et al., 2019) . All models were trained without additional monolingual data and without pretraining. ST: self-training. on the WMT dataset. Preliminary experiments on training back-translated models didn't improve results on the WMT dataset. We leave further investigation of ways to improve the sentence-level model on the WMT dataset for future work. The results on OpenSubtitles evaluation sets are in Table 4 . Our self-training and oracle self-training approaches give the performance improvement of 0.1 and 0.3 BLEU respectively. We hypothesize that the small improvement of self-training is due to relatively small number of sentences in the documents in the OpenSubtitles dataset. We validate this claim by varying the number of sentences in the document used for self-training on NIST dataset. Figure 2 shows that the self-training approach achieves higher BLEU improvement as we increase the number of sentences in documents used for self-training.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1014, |
|
"end": 1032, |
|
"text": "(Xia et al., 2019)", |
|
"ref_id": "BIBREF55" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 58, |
|
"end": 65, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 707, |
|
"end": 715, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 1424, |
|
"end": 1431, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 1822, |
|
"end": 1830, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We conduct a human evaluation study on the NIST Zh-En and the OpenSubtitles En-Ru datasets. For both datasets we sample 50 documents from the test set where translated documents generated by the self-training approach are not exact copies of the translated documents generated by the sentencelevel baseline model. For the NIST Zh-En dataset we present reference documents, translated documents generated by the sentence-level baseline, and translated documents generated by self-training approach to 4 native English speakers. For the Open-Subtitles En-Ru dataset we follow a similar setup, where we present reference documents, translated documents generated by sentence-level baseline, and translated documents generated by self-training approach to 4 native Russian speakers. All translated documents are presented in random order with no indication of which approach was used to generate them. We highlight the differences between translated documents when presenting them to human evaluators. The human evaluators are asked to pick one of two translations as their preferred option for each document. We ask the human evaluators to consider fluency, idiomaticity and correctness of the translation relative to the reference when entering their preferred choices.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human Evaluation", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We follow the setup of Yu et al. (2019) when performing human evaluation. We collect a total of 200 annotations for 50 documents from all 4 human evaluators and show results in Table 5 . For both datasets, human evaluators prefer translated documents generated by the self-training approach to translated documents generated by the sentence-level model. For NIST Zh-En, 122 out of 200 annotations indicate a preference towards translations generated by self-training approach. For OpenSubtitles En-Ru, 118 out of 200 annotations similarly show a preference towards translations generated by our self-training approach. This is a statistically significant preference p < 0.05 according to two-sided Binomial test. When aggregated for each document by majority vote, for NIST Zh-En, translations generated by the selftraining approach are considered better in 25 documents, worse in 12 documents, and the same in 13 documents. For OpenSubtitles En-Ru, translations generated by self-training approach are considered better in 23 documents, worse in 15 documents, and the same in 12 documents. The agreement between annotators for NIST Zh-En and OpenSub- Table 5 : Human evaluation results on the NIST Zh-En and the OpenSubtitles En-Ru datasets. \"Total\" denotes total number of annotations collected from humans. \"Self-train\" denotes number of times evaluators preferred documents by the self-training approach. \"Baseline\" denotes number of times evaluators preferred documents by sentence-level baseline.", |
|
"cite_spans": [ |
|
{ |
|
"start": 23, |
|
"end": 39, |
|
"text": "Yu et al. (2019)", |
|
"ref_id": "BIBREF60" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 177, |
|
"end": 184, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1152, |
|
"end": 1159, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Human Evaluation", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "titles En-Ru is \u03ba = 0.293 and \u03ba = 0.320 according to Fleiss' kappa (Fleiss, 1971) . For both datasets, the inter-annotator agreement rate is considered fair. The agreement rate is similar to the inter-annotator agreement found in the previous WMT human evaluation studies (Bojar et al., 2014) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 67, |
|
"end": 81, |
|
"text": "(Fleiss, 1971)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 272, |
|
"end": 292, |
|
"text": "(Bojar et al., 2014)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human Evaluation", |
|
"sec_num": "6" |
|
}, |
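
{

"text": "The significance of the preference counts reported above can be checked directly; the following is a small SciPy sketch of the two-sided binomial test against a 50/50 null (the authors' exact test implementation is not specified).\n\nfrom scipy.stats import binomtest\n\nnist = binomtest(122, n=200, p=0.5, alternative='two-sided')           # NIST Zh-En\nopensubtitles = binomtest(118, n=200, p=0.5, alternative='two-sided')  # OpenSubtitles En-Ru\nprint(nist.pvalue, opensubtitles.pvalue)  # both p-values fall below 0.05",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Human Evaluation",

"sec_num": "6"

},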
|
{ |
|
"text": "In Table 6 , we show four reference document pairs together with translated documents generated by the baseline sentence-level model and by our selftraining approach. We emphasize the underlined words in all documents.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Qualitative Results", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "In the first two examples we emphasize the gender of the person marked on verbs and adjectives in translated Russian sentences. In the first example, the baseline sentence-level model inconsistenly produces different gender markings on the underlined verb \u0441\u043a\u0430\u0437\u0430\u043b (masculine told) and underlined adjective \u0441\u0438\u043b\u044c\u043d\u043e\u0439 (feminine strong). The selftraining approach correctly generates a translation with consistent male gender markings on both the underlined verb \u0441\u043a\u0430\u0437\u0430\u043b and the underlined adjective \u0441\u0438\u043b\u044c\u043d\u044b\u043c. Similarly, in the second example, the baseline model inconsistenly produces different gender markings on the underlined verbs \u043f\u0440\u0438\u0433\u043b\u0430\u0448\u0435\u043d\u0430 (feminine invited) and \u043f\u043e\u0440\u0443\u0433\u0430\u043b\u0441\u044f (masculine fought). Self-training consistently generates female gender markings on both the underlined verbs \u043f\u0440\u0438\u0433\u043b\u0430\u0448\u0435\u043d\u0430 (feminine invited) and \u043f\u043e\u0441\u0441\u043e\u0440\u0438\u043b\u0430\u0441\u044c (feminine fought).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Qualitative Results", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "In the third example, we emphasize the underlined named entity in reference and generated translations. The baseline sentence-level model inconsistently generates the names \"doyle\" and \"du\" when referring to the same entity across two sentences in the same document. The self-training approach consistently uses the name \"doyle\" across two sentences when referring to the same entity. In the fourth example, we emphasize the plurality of the underlined words. The baseline model inconsistenly generates both singular and plural forms when referring to same noun in consecutive sentences. Self-training generates the noun \"pilots\" in correct plural form in both sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Qualitative Results", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "In this paper, we propose a way of incorporating the document context inside a trained sentencelevel neural machine translation model using selftraining. We process documents from left to right multiple times and self-train the sentence-level NMT model on the pair of source sentence and generated target sentence. This reinforces the choices made by the NMT model thus making it more likely that the choices will be repeated in the rest of the document.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{

"text": "Table 6: Four reference documents together with translations generated by the baseline sentence-level model and by our self-training approach. The first two documents are taken from the OpenSubtitles English-Russian dataset and the second two from the NIST Chinese-English dataset. Ref: we are actively seeking a local partner to set up a joint fund company , \" duchateau said . duchateau said that the chinese market still has ample potentials . Baseline: we are actively looking for a local partner to establish a joint venture fund company , \" doyle said . du said that there is still a lot of room for the chinese market . Ours: we are actively looking for a local partner to establish a joint venture fund company , \" doyle said . doyle said that there is still great room for the chinese market . Ref: in may this year , 13 pilots with china eastern airlines wuhan company in succession handed in their resignations , which were rejected by the company . soon afterwards , the pilots applied one after another at the beginning of june to the labor dispute arbitration commission of hubei province for labor arbitration , requesting for a ruling that their labor relationship with china eastern airlines wuhan company be terminated . Baseline: in may this year , 13 pilots of china eastern 's wuhan company submitted their resignations one after another , but the company refused . the pilot then applied for labor arbitration with the hubei province labor dispute arbitration committee in early june , requesting the ruling to terminate the labor relationship with the wuhan company of china eastern airlines . Ours: in may this year , 13 pilots of china eastern 's wuhan company submitted their resignations one after another , but the company refused . subsequently , in early june , the pilots successively applied for labor arbitration with the hubei province labor dispute arbitration committee , requesting that the labor relationship with china eastern airlines be terminated .",

"cite_spans": [],

"ref_spans": [

{

"start": 0,

"end": 7,

"text": "Table 6",

"ref_id": null

}

],

"eq_spans": [],

"section": "Conclusion",

"sec_num": "8"

},

{

"text": "We demonstrate the feasibility of our approach on three machine translation datasets: NIST Zh-En, WMT19 Zh-En and OpenSubtitles En-Ru. We show that self-training improves sentence-level baselines by up to 0.93 BLEU. We also conduct a human evaluation study and show a strong preference of the annotators for the translated documents generated by our self-training approach. Our analysis demonstrates that self-training achieves a larger improvement on longer documents and with better sentence-level models.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Conclusion",

"sec_num": "8"

},
|
{ |
|
"text": "In this work, we only use self-training on sourceto-target NMT models in order to capture the target side document context. One extension could investigate the application of self-training on both target-to-source and source-to-target sentence-level models to incorporate both source and target document context into generated translations. Overall, we hope that our work would motivate novel approaches of making trained sentence-level models better suited for document translation at decoding time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "https://github.com/lena-voita/ good-translation-wrong-in-context", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We would like to thank Phil Blunsom, Kris Cao, Kyunghyun Cho, Chris Dyer, Wojciech Stokowiec and members of the Language team for helpful suggestions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": "9" |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Contextual handling in neural machine translation: Look behind, ahead and on both sides", |
|
"authors": [ |
|
{ |
|
"first": "Ruchit", |
|
"middle": [], |
|
"last": "Agrawal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Turchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matteo", |
|
"middle": [], |
|
"last": "Negri", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ruchit Agrawal, Marco Turchi, and Matteo Negri. 2018. Contextual handling in neural machine trans- lation: Look behind, ahead and on both sides.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Models of information processing in the brain. Parallel models of associative memory", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "James", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Anderson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Hinton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1981, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James A Anderson and Geoffrey E Hinton. 1981. Mod- els of information processing in the brain. Parallel models of associative memory.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Domain adaptation via pseudo in-domain data selection", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Axelrod", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Axelrod, X. He, and Jianfeng Gao. 2011. Domain adaptation via pseudo in-domain data selection. In EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Using fast weights to attend to the recent past", |
|
"authors": [ |
|
{ |
|
"first": "Joel", |
|
"middle": [ |
|
"Z" |
|
], |
|
"last": "Leibo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Catalin", |
|
"middle": [], |
|
"last": "Ionescu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "NIPS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joel Z. Leibo, and Catalin Ionescu. 2016. Using fast weights to attend to the recent past. In NIPS.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Neural machine translation by jointly learning to align and translate", |
|
"authors": [ |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1409.0473" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Proceedings of the Fifth Conference on Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Lo\u00efc", |
|
"middle": [], |
|
"last": "Barrault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Magdalena", |
|
"middle": [], |
|
"last": "Biesialska", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ond\u0159ej", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marta", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Costa-Juss\u00e0", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Federmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yvette", |
|
"middle": [], |
|
"last": "Graham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roman", |
|
"middle": [], |
|
"last": "Grundkiewicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthias", |
|
"middle": [], |
|
"last": "Huck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Joanis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Kocmi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chi-Kiu", |
|
"middle": [], |
|
"last": "Lo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nikola", |
|
"middle": [], |
|
"last": "Ljube\u0161i\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christof", |
|
"middle": [], |
|
"last": "Monz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Makoto", |
|
"middle": [], |
|
"last": "Morishita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Masaaki", |
|
"middle": [], |
|
"last": "Nagata", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Toshiaki", |
|
"middle": [], |
|
"last": "Nakazawa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--55", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lo\u00efc Barrault, Magdalena Biesialska, Ond\u0159ej Bojar, Marta R. Costa-juss\u00e0, Christian Federmann, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Matthias Huck, Eric Joanis, Tom Kocmi, Philipp Koehn, Chi-kiu Lo, Nikola Ljube\u0161i\u0107, Christof Monz, Makoto Morishita, Masaaki Nagata, Toshi- aki Nakazawa, Santanu Pal, Matt Post, and Marcos Zampieri. 2020. Findings of the 2020 conference on machine translation (WMT20). In Proceedings of the Fifth Conference on Machine Translation, pages 1-55, Online. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 conference on machine translation (WMT19). In ACL", |
|
"authors": [ |
|
{ |
|
"first": "Lo\u00efc", |
|
"middle": [], |
|
"last": "Barrault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ond\u0159ej", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marta", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Costa-Juss\u00e0", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Federmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Fishel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yvette", |
|
"middle": [], |
|
"last": "Graham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthias", |
|
"middle": [], |
|
"last": "Huck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shervin", |
|
"middle": [], |
|
"last": "Malmasi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christof", |
|
"middle": [], |
|
"last": "Monz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mathias", |
|
"middle": [], |
|
"last": "M\u00fcller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lo\u00efc Barrault, Ond\u0159ej Bojar, Marta R. Costa-juss\u00e0, Christian Federmann, Mark Fishel, Yvette Gra- ham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias M\u00fcller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 conference on machine transla- tion (WMT19). In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Findings of the 2014 workshop on statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Ondrej", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Buck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Federmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johannes", |
|
"middle": [], |
|
"last": "Leveling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christof", |
|
"middle": [], |
|
"last": "Monz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pavel", |
|
"middle": [], |
|
"last": "Pecina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Post", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Herve", |
|
"middle": [], |
|
"last": "Saint-Amand", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Radu", |
|
"middle": [], |
|
"last": "Soricut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucia", |
|
"middle": [], |
|
"last": "Specia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Tamchyna", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "WMT@ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ondrej Bojar, C. Buck, C. Federmann, B. Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, Radu Soricut, Lucia Specia, and A. Tamchyna. 2014. Findings of the 2014 workshop on statistical ma- chine translation. In WMT@ACL.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "A survey of domain adaptation for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Chenhui", |
|
"middle": [], |
|
"last": "Chu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rui", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 27th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1304--1319", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chenhui Chu and Rui Wang. 2018. A survey of do- main adaptation for neural machine translation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1304-1319, Santa Fe, New Mexico, USA. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Understanding back-translation at scale", |
|
"authors": [ |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Edunov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Grangier", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Measuring nominal scale agreement among many raters", |
|
"authors": [ |
|
{ |
|
"first": "Joseph", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Fleiss", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1971, |
|
"venue": "Psychological Bulletin", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joseph L. Fleiss. 1971. Measuring nominal scale agree- ment among many raters. Psychological Bulletin.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Fast domain adaptation for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Markus", |
|
"middle": [], |
|
"last": "Freitag", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Al-Onaizan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "ArXiv", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Markus Freitag and Y. Al-Onaizan. 2016. Fast domain adaptation for neural machine translation. ArXiv, abs/1612.06897.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Context-aware neural machine translation decoding", |
|
"authors": [ |
|
{ |
|
"first": "Eva Mart\u00ednez", |
|
"middle": [], |
|
"last": "Garcia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Creus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Espa\u00f1a-Bonet", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eva Mart\u00ednez Garcia, C. Creus, and C. Espa\u00f1a-Bonet. 2019. Context-aware neural machine translation de- coding. In DiscoMT@EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Using word embeddings to enforce document-level lexical consistency in machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Eva Mart\u00ednez", |
|
"middle": [], |
|
"last": "Garcia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Creus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Espa\u00f1a-Bonet", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llu\u00eds", |
|
"middle": [], |
|
"last": "M\u00e0rquez I Villodre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "The Prague Bulletin of Mathematical Linguistics", |
|
"volume": "108", |
|
"issue": "", |
|
"pages": "85--96", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eva Mart\u00ednez Garcia, C. Creus, C. Espa\u00f1a-Bonet, and Llu\u00eds M\u00e0rquez i Villodre. 2017. Using word embed- dings to enforce document-level lexical consistency in machine translation. The Prague Bulletin of Math- ematical Linguistics, 108:85 -96.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "English gigaword. Linguistic Data Consortium", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Graff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junbo", |
|
"middle": [], |
|
"last": "Kong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ke", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kazuaki", |
|
"middle": [], |
|
"last": "Maeda", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2003. English gigaword. Linguistic Data Consortium, Philadelphia, 4(1):34.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Generating sequences with recurrent neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Graves", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1308.0850" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alex Graves. 2013. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Document-wide decoding for phrase-based statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Hardmeier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "EMNLP-CoNLL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christian Hardmeier, Joakim Nivre, and J. Tiedemann. 2012. Document-wide decoding for phrase-based statistical machine translation. In EMNLP-CoNLL.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Achieving human parity on automatic chinese to english news translation", |
|
"authors": [ |
|
{ |
|
"first": "Hany", |
|
"middle": [], |
|
"last": "Hassan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Aue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chang", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vishal", |
|
"middle": [], |
|
"last": "Chowdhary", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Federmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xuedong", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcin", |
|
"middle": [], |
|
"last": "Junczys-Dowmunt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mu", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shujie", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tie-Yan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Renqian", |
|
"middle": [], |
|
"last": "Luo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arul", |
|
"middle": [], |
|
"last": "Menezes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frank", |
|
"middle": [], |
|
"last": "Seide", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xu", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fei", |
|
"middle": [], |
|
"last": "Tian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lijun", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shuangzhi", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yingce", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dongdong", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhirui", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1803.05567" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hany Hassan, Anthony Aue, Chang Chen, Vishal Chowdhary, Jonathan Clark, Christian Feder- mann, Xuedong Huang, Marcin Junczys-Dowmunt, William Lewis, Mu Li, Shujie Liu, Tie-Yan Liu, Renqian Luo, Arul Menezes, Tao Qin, Frank Seide, Xu Tan, Fei Tian, Lijun Wu, Shuangzhi Wu, Yingce Xia, Dongdong Zhang, Zhirui Zhang, and Ming Zhou. 2018. Achieving human parity on automatic chinese to english news translation. arXiv preprint arXiv:1803.05567.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Revisiting self-training for neural sequence generation", |
|
"authors": [ |
|
{ |
|
"first": "Junxian", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiatao", |
|
"middle": [], |
|
"last": "Gu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiajun", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc'aurelio", |
|
"middle": [], |
|
"last": "Ranzato", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1909.13788" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Junxian He, Jiatao Gu, Jiajun Shen, and Marc'Aurelio Ranzato. 2019. Revisiting self-training for neural sequence generation. arXiv preprint arXiv:1909.13788.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Neural networks and physical systems with emergent collective computational abilities", |
|
"authors": [ |
|
{ |
|
"first": "J J", |
|
"middle": [], |
|
"last": "Hopfield", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1982, |
|
"venue": "Proceedings of the National Academy of Sciences", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J J Hopfield. 1982. Neural networks and physical sys- tems with emergent collective computational abili- ties. Proceedings of the National Academy of Sci- ences.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Selftraining pcfg grammars with latent annotations across languages", |
|
"authors": [ |
|
{ |
|
"first": "Zhongqiang", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mary", |
|
"middle": [], |
|
"last": "Harper", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhongqiang Huang and Mary Harper. 2009. Self- training pcfg grammars with latent annotations across languages. In EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Fill in the blanks: Imputing missing sentences for larger-context neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Sebastien", |
|
"middle": [], |
|
"last": "Jean", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ankur", |
|
"middle": [], |
|
"last": "Bapna", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Orhan", |
|
"middle": [], |
|
"last": "Firat", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1910.14075" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastien Jean, Ankur Bapna, and Orhan Firat. 2019. Fill in the blanks: Imputing missing sentences for larger-context neural machine translation. arXiv preprint arXiv:1910.14075.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Does neural machine translation benefit from larger context? arXiv preprint", |
|
"authors": [ |
|
{ |
|
"first": "Sebastien", |
|
"middle": [], |
|
"last": "Jean", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stanislas", |
|
"middle": [], |
|
"last": "Lauly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Orhan", |
|
"middle": [], |
|
"last": "Firat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1704.05135" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastien Jean, Stanislas Lauly, Orhan Firat, and Kyunghyun Cho. 2017. Does neural machine trans- lation benefit from larger context? arXiv preprint arXiv:1704.05135.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Microsoft translator at wmt 2019: Towards large-scale document-level neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Marcin", |
|
"middle": [], |
|
"last": "Junczys-Dowmunt", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "WMT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marcin Junczys-Dowmunt. 2019. Microsoft translator at wmt 2019: Towards large-scale document-level neural machine translation. In WMT.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Recurrent continuous translation models", |
|
"authors": [ |
|
{ |
|
"first": "Nal", |
|
"middle": [], |
|
"last": "Kalchbrenner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "Diederik", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1412.6980" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Correlation matrix memories", |
|
"authors": [ |
|
{ |
|
"first": "Teuvo", |
|
"middle": [], |
|
"last": "Kohonen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1972, |
|
"venue": "IEEE Transactions on Computers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Teuvo Kohonen. 1972. Correlation matrix memories. IEEE Transactions on Computers.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Dynamic evaluation of neural sequence models", |
|
"authors": [ |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Krause", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emmanuel", |
|
"middle": [], |
|
"last": "Kahembwe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iain", |
|
"middle": [], |
|
"last": "Murray", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steve", |
|
"middle": [], |
|
"last": "Renals", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "ICML", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ben Krause, Emmanuel Kahembwe, Iain Murray, and Steve Renals. 2018. Dynamic evaluation of neural sequence models. In ICML.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Modeling coherence for neural machine translation with dynamic and topic caches", |
|
"authors": [ |
|
{ |
|
"first": "Shaohui", |
|
"middle": [], |
|
"last": "Kuang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deyi", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Weihua", |
|
"middle": [], |
|
"last": "Luo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guodong", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1711.11221" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shaohui Kuang, Deyi Xiong, Weihua Luo, and Guodong Zhou. 2017. Modeling coherence for neural machine translation with dynamic and topic caches. arXiv preprint arXiv:1711.11221.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "A cachebased natural language model for speech recognition", |
|
"authors": [ |
|
{ |
|
"first": "Roland", |
|
"middle": [], |
|
"last": "Kuhn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Renato", |
|
"middle": [ |
|
"De" |
|
], |
|
"last": "Mori", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "PAMI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roland Kuhn and Renato De Mori. 1990. A cache- based natural language model for speech recogni- tion. In PAMI.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Pseudo-label : The simple and efficient semi-supervised learning method for deep neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Dong-Hyun", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "ICML 2013 Workshop : Challenges in Representation Learning (WREPL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dong-Hyun Lee. 2013. Pseudo-label : The simple and efficient semi-supervised learning method for deep neural networks. ICML 2013 Workshop : Chal- lenges in Representation Learning (WREPL).", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Document context neural machine translation with memory networks", |
|
"authors": [ |
|
{ |
|
"first": "Sameen", |
|
"middle": [], |
|
"last": "Maruf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gholamreza", |
|
"middle": [], |
|
"last": "Haffari", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sameen Maruf and Gholamreza Haffari. 2018. Docu- ment context neural machine translation with mem- ory networks. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Effective self-training for parsing", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Mcclosky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eugene", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective self-training for parsing. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Document-level neural machine translation with hierarchical attention networks", |
|
"authors": [ |
|
{ |
|
"first": "Lesly", |
|
"middle": [], |
|
"last": "Miculicich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dhananjay", |
|
"middle": [], |
|
"last": "Ram", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nikolaos", |
|
"middle": [], |
|
"last": "Pappas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Henderson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lesly Miculicich, Dhananjay Ram, Nikolaos Pappas, and James Henderson. 2018. Document-level neural machine translation with hierarchical attention net- works. In EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Statistical language models based on neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov. 2012. Statistical language models based on neural networks. Ph.D. thesis, Brno Uni- versity of Technology.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Proceedings of the Fourth Workshop on Discourse in Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Andrei", |
|
"middle": [], |
|
"last": "Popescu-Belis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sharid", |
|
"middle": [], |
|
"last": "Lo\u00e1iciga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Hardmeier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deyi", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrei Popescu-Belis, Sharid Lo\u00e1iciga, Christian Hardmeier, and Deyi Xiong. 2019. Proceedings of the Fourth Workshop on Discourse in Machine Translation (DiscoMT 2019). https://www. aclweb.org/anthology/D19-65.pdf.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "A call for clarity in reporting BLEU scores", |
|
"authors": [ |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Post", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "WMT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In WMT.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Language models are unsupervised multitask learners", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Radford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Child", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Luan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dario", |
|
"middle": [], |
|
"last": "Amodei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Radford, Jeffrey Wu, R. Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language mod- els are unsupervised multitask learners.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Self-training for enhancement and domain adaptation of statistical parsers trained on small datasets", |
|
"authors": [ |
|
{ |
|
"first": "Roi", |
|
"middle": [], |
|
"last": "Reichart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rai", |
|
"middle": [], |
|
"last": "Rappoport", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roi Reichart and Rai Rappoport. 2007. Self-training for enhancement and domain adaptation of statistical parsers trained on small datasets. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Probability of error of some adaptive pattern-recognition machines", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Scudder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1965, |
|
"venue": "IEEE Trans. Inf. Theor", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "H. Scudder. 1965. Probability of error of some adap- tive pattern-recognition machines. IEEE Trans. Inf. Theor.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Why the Time Is Ripe for Discourse in Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rico Sennrich. 2018. Why the Time Is Ripe for Discourse in Machine Translation. http://homepages.inf.ed.ac.uk/ rsennric/wnmt2018.pdf.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Improving neural machine translation models with monolingual data", |
|
"authors": [ |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation mod- els with monolingual data. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Neural machine translation of rare words with subword units", |
|
"authors": [ |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "A mathematical theory of communication", |
|
"authors": [ |
|
{ |
|
"first": "Claude", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Shannon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1948, |
|
"venue": "Bell Syst. Tech. J", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Claude E. Shannon. 1948. A mathematical theory of communication. Bell Syst. Tech. J.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Dropout: A simple way to prevent neural networks from overfitting", |
|
"authors": [ |
|
{ |
|
"first": "Nitish", |
|
"middle": [], |
|
"last": "Srivastava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Hinton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Krizhevsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "Sequence to sequence learning with neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc V", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In NIPS.", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "Neural machine translation with extended context", |
|
"authors": [ |
|
{ |
|
"first": "J\u00f6rg", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yves", |
|
"middle": [], |
|
"last": "Scherrer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the Third Workshop on Discourse in Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J\u00f6rg Tiedemann and Yves Scherrer. 2017. Neural ma- chine translation with extended context. In Proceed- ings of the Third Workshop on Discourse in Machine Translation.", |
|
"links": null |
|
}, |
|
"BIBREF48": { |
|
"ref_id": "b48", |
|
"title": "Using monolingual sourcelanguage data to improve mt performance", |
|
"authors": [ |
|
{ |
|
"first": "Nicola", |
|
"middle": [], |
|
"last": "Ueffing", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "IWSLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nicola Ueffing. 2006. Using monolingual source- language data to improve mt performance. In IWSLT.", |
|
"links": null |
|
}, |
|
"BIBREF50": { |
|
"ref_id": "b50", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u0141ukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "NIPS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS.", |
|
"links": null |
|
}, |
|
"BIBREF51": { |
|
"ref_id": "b51", |
|
"title": "Context-aware monolingual repair for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Elena", |
|
"middle": [], |
|
"last": "Voita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Titov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elena Voita, Rico Sennrich, and Ivan Titov. 2019a. Context-aware monolingual repair for neural ma- chine translation. In EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF52": { |
|
"ref_id": "b52", |
|
"title": "When a good translation is wrong in context: Context-aware machine translation improves on deixis, ellipsis, and lexical cohesion", |
|
"authors": [ |
|
{ |
|
"first": "Elena", |
|
"middle": [], |
|
"last": "Voita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Titov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elena Voita, Rico Sennrich, and Ivan Titov. 2019b. When a good translation is wrong in context: Context-aware machine translation improves on deixis, ellipsis, and lexical cohesion. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF53": { |
|
"ref_id": "b53", |
|
"title": "Exploiting cross-sentence context for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Longyue", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhaopeng", |
|
"middle": [], |
|
"last": "Tu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andy", |
|
"middle": [], |
|
"last": "Way", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qun", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Longyue Wang, Zhaopeng Tu, Andy Way, and Qun Liu. 2017. Exploiting cross-sentence context for neural machine translation. In EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF54": { |
|
"ref_id": "b54", |
|
"title": "Nonholographic associative memory", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Willshaw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [ |
|
"Peter" |
|
], |
|
"last": "Buneman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hugh", |
|
"middle": [ |
|
"Christopher" |
|
], |
|
"last": "Longuet-Higgins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1969, |
|
"venue": "Nature", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David J Willshaw, O Peter Buneman, and Hugh Christopher Longuet-Higgins. 1969. Non- holographic associative memory. Nature.", |
|
"links": null |
|
}, |
|
"BIBREF55": { |
|
"ref_id": "b55", |
|
"title": "Microsoft research asia's systems for WMT19", |
|
"authors": [ |
|
{ |
|
"first": "Yingce", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xu", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fei", |
|
"middle": [], |
|
"last": "Tian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fei", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Di", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Weicong", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Linyuan", |
|
"middle": [], |
|
"last": "Gong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yichong", |
|
"middle": [], |
|
"last": "Leng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Renqian", |
|
"middle": [], |
|
"last": "Luo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yiren", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lijun", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jinhua", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tie-Yan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "WMT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yingce Xia, Xu Tan, Fei Tian, Fei Gao, Di He, Weicong Chen, Yang Fan, Linyuan Gong, Yichong Leng, Ren- qian Luo, Yiren Wang, Lijun Wu, Jinhua Zhu, Tao Qin, and Tie-Yan Liu. 2019. Microsoft research asia's systems for WMT19. In WMT.", |
|
"links": null |
|
}, |
|
"BIBREF56": { |
|
"ref_id": "b56", |
|
"title": "Deliberation networks: Sequence generation beyond one-pass decoding", |
|
"authors": [ |
|
{ |
|
"first": "Yingce", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fei", |
|
"middle": [], |
|
"last": "Tian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lijun", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianxin", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nenghai", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tie-Yan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "NIPS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yingce Xia, Fei Tian, Lijun Wu, Jianxin Lin, Tao Qin, Nenghai Yu, and Tie-Yan Liu. 2017. Deliberation networks: Sequence generation beyond one-pass de- coding. In NIPS.", |
|
"links": null |
|
}, |
|
"BIBREF57": { |
|
"ref_id": "b57", |
|
"title": "Modeling coherence for discourse neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhongjun", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hua", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haifeng", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1811.05683" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hao Xiong, Zhongjun He, Hua Wu, and Haifeng Wang. 2018. Modeling coherence for discourse neural ma- chine translation. arXiv preprint arXiv:1811.05683.", |
|
"links": null |
|
}, |
|
"BIBREF58": { |
|
"ref_id": "b58", |
|
"title": "Unsupervised word sense disambiguation rivaling supervised methods", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Yarowsky. 1995. Unsupervised word sense dis- ambiguation rivaling supervised methods. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF59": { |
|
"ref_id": "b59", |
|
"title": "Simple and effective noisy channel modeling for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kyra", |
|
"middle": [], |
|
"last": "Yee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nathan", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yann", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Dauphin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kyra Yee, Nathan Ng, Yann N. Dauphin, and Michael Auli. 2019. Simple and effective noisy channel mod- eling for neural machine translation. In EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF60": { |
|
"ref_id": "b60", |
|
"title": "Better document-level machine translation with bayes' rule", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laurent", |
|
"middle": [], |
|
"last": "Sartran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wojciech", |
|
"middle": [], |
|
"last": "Stokowiec", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wang", |
|
"middle": [], |
|
"last": "Ling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lingpeng", |
|
"middle": [], |
|
"last": "Kong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "8", |
|
"issue": "", |
|
"pages": "346--360", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "L. Yu, Laurent Sartran, Wojciech Stokowiec, Wang Ling, Lingpeng Kong, P. Blunsom, and Chris Dyer. 2019. Better document-level machine translation with bayes' rule. Transactions of the Association for Computational Linguistics, 8:346-360.", |
|
"links": null |
|
}, |
|
"BIBREF61": { |
|
"ref_id": "b61", |
|
"title": "The neural noisy channel", |
|
"authors": [ |
|
{ |
|
"first": "Lei", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Grefenstette", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Kocisky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lei Yu, Phil Blunsom, Chris Dyer, Edward Grefen- stette, and Tomas Kocisky. 2017. The neural noisy channel. In ICLR.", |
|
"links": null |
|
}, |
|
"BIBREF62": { |
|
"ref_id": "b62", |
|
"title": "Improving the transformer translation model with document-level context", |
|
"authors": [ |
|
{ |
|
"first": "Jiacheng", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Huanbo", |
|
"middle": [], |
|
"last": "Luan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maosong", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Feifei", |
|
"middle": [], |
|
"last": "Zhai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingfang", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Min", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiacheng Zhang, Huanbo Luan, Maosong Sun, Feifei Zhai, Jingfang Xu, Min Zhang, and Yang Liu. 2018. Improving the transformer translation model with document-level context. In EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF63": { |
|
"ref_id": "b63", |
|
"title": "Exploiting source-side monolingual data in neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Jiajun", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chengqing", |
|
"middle": [], |
|
"last": "Zong", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiajun Zhang and Chengqing Zong. 2016. Exploit- ing source-side monolingual data in neural machine translation. In EMNLP.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF1": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Relationship between relative performance of the sentence-level model and BLEU difference of self-training on the NIST dataset." |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Relationship between number of sentences and BLEU improvement of self-training on the NIST dataset." |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"html": null, |
|
"text": "Results on WMT'19 Chinese-English evaluation sets. The first row shows the performance of the Transformer Big model by", |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"num": null, |
|
"html": null, |
|
"text": "", |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"html": null, |
|
"text": "Ref \u043c\u044b \u0441 \u044d\u0439\u043f\u0440\u0438\u043b \u0440\u0430\u0437\u0432\u0435\u043b\u0438\u0441\u044c . \u043a\u0430\u043a \u044f \u0438 \u0441\u043a\u0430\u0437\u0430\u043b ... \u0438\u0433\u0440\u0430 \u0432 \u043e\u0436\u0438\u0434\u0430\u043d\u0438\u0435 . \u0431\u0443\u0434\u044c \u0441\u0438\u043b\u044c\u043d\u044b\u043c . \u0438 \u0432\u0441\u0451 \u043f\u043e\u043b\u0443\u0447\u0438\u0442\u0441\u044f . Baseline \u043c\u044b \u0441 \u044d\u0439\u043f\u0440\u0438\u043b \u0440\u0430\u0437\u0432\u0435\u043b\u0438\u0441\u044c . \u043d\u0443 , \u043a\u0430\u043a \u044f \u0443\u0436\u0435 \u0441\u043a\u0430\u0437\u0430\u043b ... \u0438\u0433\u0440\u0430 \u043e\u0436\u0438\u0434\u0430\u043d\u0438\u044f . \u0431\u0443\u0434\u044c \u0441\u0438\u043b\u044c\u043d\u043e\u0439 . \u0442\u044b \u0441\u043f\u0440\u0430\u0432\u0438\u0448\u044c\u0441\u044f . Ours \u043c\u044b \u0441 \u044d\u0439\u043f\u0440\u0438\u043b \u0440\u0430\u0437\u0432\u0435\u043b\u0438\u0441\u044c . \u043d\u0443 , \u043a\u0430\u043a \u044f \u0443\u0436\u0435 \u0441\u043a\u0430\u0437\u0430\u043b ... \u0438\u0433\u0440\u0430 \u043e\u0436\u0438\u0434\u0430\u043d\u0438\u044f . \u0431\u0443\u0434\u044c \u0441\u0438\u043b\u044c\u043d\u044b\u043c . \u0442\u044b \u0441\u043f\u0440\u0430\u0432\u0438\u0448\u044c\u0441\u044f . Ref \u0441\u0451\u0440\u0435\u043d \u0443\u0441\u0442\u0440\u0430\u0438\u0432\u0430\u0435\u0442 \u0432\u0435\u0447\u0435\u0440\u0438\u043d\u043a\u0443 \u043f\u043e \u043f\u043e\u0432\u043e\u0434\u0443 \u0441\u0432\u043e\u0435\u0433\u043e \u0434\u043d\u044f \u0440\u043e\u0436\u0434\u0435\u043d\u0438\u044f \u0432 \u0441\u0443\u0431\u0431\u043e\u0442\u0443 , \u0430 \u044f \u043d\u0435 \u0437\u043d\u0430\u044e , \u043f\u043e\u0439\u0434\u0443 \u043b\u0438 \u044f . \u043f\u043e\u0447\u0435\u043c\u0443 \u0431\u044b \u0442\u0435\u0431\u0435 \u043d\u0435 \u043f\u043e\u0439\u0442\u0438 ? \u043f\u0440\u043e\u0441\u0442\u043e \u0432\u0441\u0451 \u043f\u043e\u0448\u043b\u043e \u043d\u0435 \u0442\u0430\u043a . -\u0438 \u044f \u043f\u043e\u0441\u0441\u043e\u0440\u0438\u043b\u0441\u044f \u0441 \u043a\u043d\u0443\u0434\u043e\u043c . Baseline \u0432 \u0441\u0443\u0431\u0431\u043e\u0442\u0443 \u0434\u0435\u043d\u044c \u0440\u043e\u0436\u0434\u0435\u043d\u0438\u044f \u0441\u0451\u0440\u0435\u043d\u0430 \u0438 \u044f \u043d\u0435 \u0437\u043d\u0430\u044e , \u043f\u0440\u0438\u0433\u043b\u0430\u0448\u0435\u043d\u0430 \u043b\u0438 \u044f . \u043f\u043e\u0447\u0435\u043c\u0443 \u0442\u0435\u0431\u044f \u043d\u0435 \u043f\u0440\u0438\u0433\u043b\u0430\u0441\u0438\u043b\u0438 ? \u0432\u0441\u0435 \u043f\u0440\u043e\u0441\u0442\u043e \u043f\u043e\u0448\u043b\u043e \u043d\u0435 \u0442\u0430\u043a . -\u0438 \u044f \u043f\u043e\u0440\u0443\u0433\u0430\u043b\u0441\u044f \u0441 \u043a\u043d\u0443\u0434\u043e\u043c .", |
|
"content": "<table><tr><td>Ours</td><td>\u0432 \u0441\u0443\u0431\u0431\u043e\u0442\u0443 \u0434\u0435\u043d\u044c \u0440\u043e\u0436\u0434\u0435\u043d\u0438\u044f \u0441\u0451\u0440\u0435\u043d\u0430 \u0438 \u044f \u043d\u0435 \u0437\u043d\u0430\u044e , \u043f\u0440\u0438\u0433\u043b\u0430\u0448\u0435\u043d\u0430 \u043b\u0438 \u044f .</td></tr><tr><td/><td>\u043f\u043e\u0447\u0435\u043c\u0443 \u0442\u0435\u0431\u044f \u043d\u0435 \u043f\u0440\u0438\u0433\u043b\u0430\u0441\u0438\u043b\u0438 ? \u0432\u0441\u0435 \u043f\u0440\u043e\u0441\u0442\u043e \u043f\u043e\u0448\u043b\u043e \u043d\u0435 \u0442\u0430\u043a . -\u0438 \u044f \u043f\u043e\u0441\u0441\u043e\u0440\u0438\u043b\u0430\u0441\u044c \u0441 \u043a\u043d\u0443\u0434\u043e\u043c .</td></tr></table>", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |