{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:14:36.330723Z"
},
"title": "Efficient Unsupervised NMT for Related Languages with Cross-Lingual Language Models and Fidelity Objectives",
"authors": [
{
"first": "Rami",
"middle": [],
"last": "Aly",
"suffix": "",
"affiliation": {
"laboratory": "Computer Laboratory",
"institution": "University of Cambridge",
"location": {
"country": "U.K"
}
},
"email": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Caines",
"suffix": "",
"affiliation": {
"laboratory": "Computer Laboratory & ALTA Institute",
"institution": "University of Cambridge",
"location": {
"country": "U.K"
}
},
"email": ""
},
{
"first": "Paula",
"middle": [],
"last": "Buttery",
"suffix": "",
"affiliation": {
"laboratory": "Computer Laboratory & ALTA Institute",
"institution": "University of Cambridge",
"location": {
"country": "U.K"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The most successful approach to Neural Machine Translation (NMT) when only monolingual training data is available, called unsupervised machine translation, is based on backtranslation where noisy translations are generated to turn the task into a supervised one. However, back-translation is computationally very expensive and inefficient. This work explores a novel, efficient approach to unsupervised NMT. A transformer, initialized with cross-lingual language model weights, is finetuned exclusively on monolingual data of the target language by jointly learning on a paraphrasing and denoising autoencoder objective. Experiments are conducted on WMT datasets for German\u2192English, French\u2192English, and Romanian\u2192English. Results are competitive to strong baseline unsupervised NMT models, especially for closely related source languages (German) compared to more distant ones (Romanian, French), while requiring about a magnitude less training time.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "The most successful approach to Neural Machine Translation (NMT) when only monolingual training data is available, called unsupervised machine translation, is based on backtranslation where noisy translations are generated to turn the task into a supervised one. However, back-translation is computationally very expensive and inefficient. This work explores a novel, efficient approach to unsupervised NMT. A transformer, initialized with cross-lingual language model weights, is finetuned exclusively on monolingual data of the target language by jointly learning on a paraphrasing and denoising autoencoder objective. Experiments are conducted on WMT datasets for German\u2192English, French\u2192English, and Romanian\u2192English. Results are competitive to strong baseline unsupervised NMT models, especially for closely related source languages (German) compared to more distant ones (Romanian, French), while requiring about a magnitude less training time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "While traditional end-to-end neural machine translation (NMT) approaches have shown highly promising results when abundant parallel data is available (Barrault et al., 2019) , the task remains a considerable challenge when only monolingual training data is available, also called unsupervised MT (Artetxe et al., 2018a; Lample et al., 2018a) . Unsupervised NMT systems tend to combine backtranslation (Sennrich et al., 2016a) with crosslingual embeddings (Artetxe et al., 2018b; Lample et al., 2018a,b) or, more recently, with weights of a pre-trained cross-lingual language model (XLM) (Conneau and Lample, 2019) . Back-translation uses noisy translations, generated by a source-totarget model, as input for a target-to-source model (and vice versa) . Although shown to perform well if plenty of monolingual data is available, backtranslation is computationally very expensive. It is also highly inefficient as the inference performed to generate the noisy translations is of sequential nature, slowing down the training substantially.",
"cite_spans": [
{
"start": 150,
"end": 173,
"text": "(Barrault et al., 2019)",
"ref_id": null
},
{
"start": 296,
"end": 319,
"text": "(Artetxe et al., 2018a;",
"ref_id": "BIBREF0"
},
{
"start": 320,
"end": 341,
"text": "Lample et al., 2018a)",
"ref_id": "BIBREF20"
},
{
"start": 401,
"end": 425,
"text": "(Sennrich et al., 2016a)",
"ref_id": "BIBREF30"
},
{
"start": 455,
"end": 478,
"text": "(Artetxe et al., 2018b;",
"ref_id": "BIBREF2"
},
{
"start": 479,
"end": 502,
"text": "Lample et al., 2018a,b)",
"ref_id": null
},
{
"start": 587,
"end": 613,
"text": "(Conneau and Lample, 2019)",
"ref_id": "BIBREF13"
},
{
"start": 734,
"end": 750,
"text": "(and vice versa)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper presents a novel unsupervised NMT method that does not require back-translation. Instead, it jointly fine-tunes a transformer (Vaswani et al., 2017) , initialized with weights of crosslingual language models (Conneau and Lample, 2019) , on a denoising autoencoder (Vincent et al., 2008; Artetxe et al., 2018b) and paraphrasing objective exclusively on data in the target language. The alignment of the languages in the transformer's encoder means we can learn similar hidden representations for sentences of similar meaning but from different languages. The decoder, fine-tuned to generate a sentence in the target language, can thus generate a translation based on the encoder's representation of a source-language input sentence. Naturally, this method is more suitable for related languages, as the alignment of languages and hidden representations in the cross-lingual encoder is of particular importance to this approach.",
"cite_spans": [
{
"start": 137,
"end": 159,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF32"
},
{
"start": 219,
"end": 245,
"text": "(Conneau and Lample, 2019)",
"ref_id": "BIBREF13"
},
{
"start": 275,
"end": 297,
"text": "(Vincent et al., 2008;",
"ref_id": "BIBREF33"
},
{
"start": 298,
"end": 320,
"text": "Artetxe et al., 2018b)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our experiments with the WMT datasets for German\u2192English, French\u2192English, and Romanian\u2192English (Bojar et al., 2016) show that the proposed approach outperforms competitive models -namely Artetxe et al. (2018b) , Lample et al. (2018a) and Lample et al. (2018b) -highlighting that the alignment quality achieved by the highquality cross-lingual language model as a translation signal is superior to aligned embeddings and back-translation. Results for German are substantially higher than for French and Romanian, highlighting that our approach works particularly well for more closely related languages. While achieving competitive results, the proposed approach is substantially more efficient. It converges much quicker while requiring less than 50% time per epoch during fine-tuning which results in around a magnitude less floating point operations for the proposed approach than for an equivalent setup when using back-translation. We further show that the paraphrasing objective improves translation quality considerably compared to using the autoencoder objective in isolation.",
"cite_spans": [
{
"start": 95,
"end": 115,
"text": "(Bojar et al., 2016)",
"ref_id": "BIBREF7"
},
{
"start": 187,
"end": 209,
"text": "Artetxe et al. (2018b)",
"ref_id": "BIBREF2"
},
{
"start": 212,
"end": 233,
"text": "Lample et al. (2018a)",
"ref_id": "BIBREF20"
},
{
"start": 238,
"end": 259,
"text": "Lample et al. (2018b)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given an input sequence X s in source language s the objective is to generate a sequence Y t in the target language t, which is semantically equivalent. A model NMT s\u2192t models the target function",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "arg max V t m u=1 p(y t u |y t <u ; x s 1 , ..., x s n ),",
"eq_num": "(1)"
}
],
"section": "Method",
"sec_num": "2"
},
{
"text": "with V t being the set of all possible sequences in the target language. This paper focuses on the transformer model (Vaswani et al., 2017) to solve Equation 1. The transformer consists of an encoder and a decoder module: both are initialized with weights W s\u2194t of a cross-lingual language model and a shared subword vocabulary to align languages s and t ( \u00a7 2.3).",
"cite_spans": [
{
"start": 117,
"end": 139,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "H s = ENC W (X s )",
"eq_num": "(2)"
}
],
"section": "Method",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Y t = DEC W (H s )",
"eq_num": "(3)"
}
],
"section": "Method",
"sec_num": "2"
},
{
"text": "The encoder transforms the input into a latent space while the decoder iteratively generates the output sequence\u0176 t .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "2"
},
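{
"text": "To make the roles of Equations 1-3 concrete, the following is a minimal sketch, not taken from the paper, of how an encoder-decoder transformer can realize the factorized objective of Eq. 1 with greedy decoding; the model object with encode/decode methods, bos_id, and eos_id are hypothetical placeholders for whatever seq2seq implementation is used.\n\nimport torch\n\ndef greedy_decode(model, src_ids, bos_id, eos_id, max_len=128):\n    # Greedy approximation of Eq. 1: at each step pick the most probable\n    # next subword given the encoder representation of the input.\n    memory = model.encode(src_ids)            # H^s = ENC_W(X^s), Eq. 2\n    ys = torch.tensor([[bos_id]])\n    for _ in range(max_len):\n        logits = model.decode(ys, memory)     # DEC_W(H^s), Eq. 3\n        next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)\n        ys = torch.cat([ys, next_id], dim=1)\n        if next_id.item() == eos_id:\n            break\n    return ys",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "2"
},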
{
"text": "We propose a fine-tuning approach for this model that solely relies on monolingual data of the target language and the alignment W between s and t. Due to the cross-lingual weights W , the hidden representation of the encoder for both source and target language are aligned and thus expected to be similar 1 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning Approach",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "ENC W (X s ) \u223c ENC W (Y t )",
"eq_num": "(4)"
}
],
"section": "Fine-tuning Approach",
"sec_num": "2.1"
},
{
"text": "This assumption is the essence to our approach and it applies more to closely related source and target languages than to more distant ones. Based on 1 This expectation is based on results for zero-shot classification to highlight the sentence similarity across different languages as well as the high cosine-similarity between word translation pairs shown in Conneau and Lample (2019) . Languages more similar to the one the model has been trained on have higher sentence similarity and thus achieved higher scores in their experiments.",
"cite_spans": [
{
"start": 360,
"end": 385,
"text": "Conneau and Lample (2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning Approach",
"sec_num": "2.1"
},
{
"text": "the assumption, our hypothesis is that it is sufficient to train the initialized encoder and decoder on sentence generation tasks in only the target language. More specifically, we explore meaning-preserving training objectives, that focus on monolingual sentence generation objectives so that the meaning of the input sequence is preserved for the generated sequence. We call these fidelity objectives. Thus, given a sentence P t in the target language (specified in \u00a7 2.2) with very similar/identical meaning to a sentence Q t of the same language in the monolingual training data, we optimize the NMT model by calculating the cross-entropy loss of the fidelity task over the shared subword vocabulary:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning Approach",
"sec_num": "2.1"
},
{
"text": "L fid = \u2212 <P t ,Q t >\u2208D fid log(p(Q t |P t )), (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning Approach",
"sec_num": "2.1"
},
{
"text": "where D fid is a fidelity training dataset. When confronted with an input sequence X s in the source language during inference, EN C W (X s ) generates a hidden representation H s , which is expected to be similar to the representation H t for a semantically identical sentence in the target language due to the cross-lingual LM weights W . The similarity between H s and H t trains the decoder to generate a meaning-preserving sequence based on the hidden representation of the encoder and enables DEC W (H s ) to generate a sentence in the target language similar to DEC W (H t ) while preserving the meaning of the source sentence X s .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning Approach",
"sec_num": "2.1"
},
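{
"text": "As an illustration only (not code from the paper), one fine-tuning step on a fidelity pair <P^t, Q^t> can be written as a standard teacher-forced cross-entropy update matching Eq. 5; the model object with encode/decode methods, the optimizer, and pad_id are hypothetical placeholders.\n\nimport torch.nn.functional as F\n\ndef fidelity_step(model, optimizer, src_ids, tgt_ids, pad_id):\n    # Eq. 5: the encoder reads the paraphrased or noisy sentence P^t and the\n    # decoder is trained with teacher forcing to reproduce Q^t under a\n    # cross-entropy loss over the shared subword vocabulary.\n    memory = model.encode(src_ids)                  # ENC_W(P^t)\n    logits = model.decode(tgt_ids[:, :-1], memory)  # predict Q^t_u from Q^t_{<u}\n    loss = F.cross_entropy(\n        logits.reshape(-1, logits.size(-1)),\n        tgt_ids[:, 1:].reshape(-1),\n        ignore_index=pad_id)                        # do not score padding\n    optimizer.zero_grad()\n    loss.backward()\n    optimizer.step()\n    return loss.item()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning Approach",
"sec_num": "2.1"
},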
{
"text": "We focus on two learning objectives that are solved by the model for the target language: a denoising autoencoder (Artetxe et al., 2018b) and paraphrase generation in the target language. The objectives are illustrated in Figure 1 and are learned using Eq.",
"cite_spans": [
{
"start": 114,
"end": 137,
"text": "(Artetxe et al., 2018b)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 222,
"end": 230,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Fidelity Objectives for Fine-tuning",
"sec_num": "2.2"
},
{
"text": "Denoising Autoencoder We use a straightforward autoencoder objective (Vincent et al., 2008; Artetxe et al., 2018b) to fine-tune the model so that it reconstructs the input Q t from a noisy version P t denoise (the noise prevents the model from simply copying the input). We add noise to Q t by either swapping, omitting or replacing words with a padding token. The number of noise operations on a sentence is a hyperparameter. The denoising autoencoder objective is used in most unsupervised NMT systems in combination with back-translation (Conneau and Lample, 2019) , however, in these settings, the autoencoder objective only serves the ",
"cite_spans": [
{
"start": 69,
"end": 91,
"text": "(Vincent et al., 2008;",
"ref_id": "BIBREF33"
},
{
"start": 92,
"end": 114,
"text": "Artetxe et al., 2018b)",
"ref_id": "BIBREF2"
},
{
"start": 541,
"end": 567,
"text": "(Conneau and Lample, 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "5.",
"sec_num": null
},
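{
"text": "A minimal sketch of the three noise operations described above (local word swaps, word omission, and replacement with a padding token); it follows the common shuffle/dropout/blank recipe and the default values reported in the footnote of Section 4, but it is an illustration rather than the authors' exact implementation.\n\nimport random\n\ndef add_noise(tokens, k_shuffle=3, p_drop=0.1, p_blank=0.1, pad='<pad>'):\n    # Local shuffle: each word moves at most k_shuffle positions.\n    keys = [i + random.uniform(0, k_shuffle) for i in range(len(tokens))]\n    shuffled = [t for _, t in sorted(zip(keys, tokens))]\n    noisy = []\n    for t in shuffled:\n        r = random.random()\n        if r < p_drop:\n            continue                  # omit the word\n        elif r < p_drop + p_blank:\n            noisy.append(pad)         # replace with the padding token\n        else:\n            noisy.append(t)\n    return noisy\n\nprint(add_noise('the auction seems extremely fascinating .'.split()))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fidelity Objectives for Fine-tuning",
"sec_num": "2.2"
},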
{
"text": "can seem highly PAD. Figure 1 : Illustration of the joint fine-tuning approach with denoising autoencoder and paraphrasing objectives. Note that for each sentence multiple paraphrases and noisy inputs are generated for fine-tuning the transformer.",
"cite_spans": [],
"ref_spans": [
{
"start": 21,
"end": 29,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Paraphrasing System",
"sec_num": null
},
{
"text": "function of making the model familiar with noise in the input that back-translated texts naturally have. Explicitly using alignment in the encoder for unsupervised translation in the way we propose has not yet been explored.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Artifical Noise",
"sec_num": null
},
{
"text": "Paraphrasing In previous work the autoencoder objective has only been shown to be effective in combination with back-translation as a means to make the model familiar with noise in the input (Conneau and Lample, 2019; Artetxe et al., 2018b) . This might be attributed to the limitation of the denoising autoencoder that the added noise is artificial and results in ungrammatical sentences. Thus, we additionally explore the task of reconstructing a sentence Q t from a paraphrased version P t pp which is complemented by the denoising task. Automatically generated paraphrases are expected to be more grammatical and diverse than the simple rulebased variations used in the autoencoder objective.",
"cite_spans": [
{
"start": 191,
"end": 217,
"text": "(Conneau and Lample, 2019;",
"ref_id": "BIBREF13"
},
{
"start": 218,
"end": 240,
"text": "Artetxe et al., 2018b)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Artifical Noise",
"sec_num": null
},
{
"text": "To initialize the weights W of the word embeddings, the encoder, and decoder of the transformer model we use the weights of the state-of-the-art pre-trained cross-lingual language model (XLM) of Conneau and Lample (2019) . The pre-trained XLM is essentially the encoder part of the transformer model trained on the masked language modelling (MLM) objective on a stream of text. Furthermore, XLM uses language embeddings to assist the network in recognizing different languages. Finally, XLM and subsequently the translation model make use of subword tokenized inputs, specifically bytepair encoding (BPE) (Sennrich et al., 2016b) to reduce the vocabulary size as it is shared by all languages in the model.",
"cite_spans": [
{
"start": 195,
"end": 220,
"text": "Conneau and Lample (2019)",
"ref_id": "BIBREF13"
},
{
"start": 605,
"end": 629,
"text": "(Sennrich et al., 2016b)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization",
"sec_num": "2.3"
},
{
"text": "Data: Our approach uses WMT 2007-8 training data for German\u2192English, as well as French\u2192English (Callison-Burch et al., 2007 , and the training data of WMT 2015 for Romanian\u2192English (Bojar et al., 2015) . All languages are Indo-European and therefore related to some extent, but there are differences of relatedness. French and Romanian are Italic whereas German and English are West Germanic: we therefore hold German and English to be the most closely related of our language pairs, with much that is similar in terms of lexicon and morpho-syntax and a recent shared history; followed by French and English (due to extensive lexical borrowings from language contact), and lastly Romanian\u2192English which is the most distantly related language pair.",
"cite_spans": [
{
"start": 95,
"end": 123,
"text": "(Callison-Burch et al., 2007",
"ref_id": null
},
{
"start": 181,
"end": 201,
"text": "(Bojar et al., 2015)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3.1"
},
{
"text": "Note that for all language pairs, our approach converged in the first epoch, with only a few steps difference. Therefore, our approach was implicitly fine-tuned on comparable amount of data for all language pairs. Similar to previous unsupervised MT approaches, we evaluate the models on the WMT 2016 test sets for German\u2192English and Roman\u2192English (Bojar et al., 2016) and use the WMT 2014 test set for French\u2192English (Bojar et al., 2014) .",
"cite_spans": [
{
"start": 348,
"end": 368,
"text": "(Bojar et al., 2016)",
"ref_id": "BIBREF7"
},
{
"start": 418,
"end": 438,
"text": "(Bojar et al., 2014)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3.1"
},
{
"text": "Implementation details: The experiments were run in Python 3.7 and Python 3.5 for the NMT model and paraphrasing system, respectively. The openly accessible repository of the work described in Conneau and Lample (2019) 2 was used as the basis for the implementation of this paper, and we use their provided pre-trained models 3 . The data is preprocessed into BPE tokens using FastBPE 4 with a vocabulary size of 60, 000. All models were finetuned on one GPU (NVIDIA Tesla P100). Since our experiments are conducted in the exact same ecosystem as Conneau and Lample (2019); Lample et al. 2018b, our results are directly comparable to theirs. We use paraphrases created by the model proposed by Wieting et al. (2017) 5, due to its opensource access. It uses a Seq2Seq architecture to back-translate bilingual sentence pairs. For each sentence in the training data, the most probable three paraphrases are used. It would be preferable to use a fully unsupervised paraphrasing system, e.g. (Roy and Grangier, 2019) . Nonetheless, we argue that the employed system does not violate the assumption of an unsupervised NMT system, since the paraphrases are only generated for the target language. Thus, while the target language must be part of at least one bilingual corpus, the source language can be arbitrary (as long as the cross-lingual language model between the source and target language exists).",
"cite_spans": [
{
"start": 193,
"end": 218,
"text": "Conneau and Lample (2019)",
"ref_id": "BIBREF13"
},
{
"start": 987,
"end": 1011,
"text": "(Roy and Grangier, 2019)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3.1"
},
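{
"text": "Purely as an illustration of the data layout (not the authors' code), the fidelity training set D_fid can be assembled by pairing each monolingual target-language sentence with its top paraphrases and a noisy copy; paraphrase and add_noise are assumed helper functions (e.g. the Wieting et al. (2017) system and the noise function sketched above).\n\ndef build_fidelity_pairs(mono_sentences, paraphrase, add_noise, n_para=3):\n    # Each (input, target) pair is trained with the cross-entropy loss of Eq. 5.\n    pairs = []\n    for q in mono_sentences:\n        for p in paraphrase(q, n_best=n_para):   # e.g. the three most probable paraphrases\n            pairs.append((p, q))\n        noisy = ' '.join(add_noise(q.split()))   # denoising autoencoder pair\n        pairs.append((noisy, q))\n    return pairs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3.1"
},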
{
"text": "Hyperparameters: Since for unsupervised NMT the assumption is made that only monolingual data is available, selecting the model or hyperparameters on a parallel dataset contradicts this premise. Therefore, the default hyperparameter settings from related work (Conneau and Lample, 2019) and the underlying XLM model are 2 https://github.com/facebookresearch/ XLM 3 mlm ende 1024, mlm enfr 1024, and mlm enro 1024 4 https://github.com/glample/fastBPE 5 https://github.com/vsuthichai/ paraphraser used 6 . The number of training epochs is based on the perplexity scores on the WMT 2013 test sets or, for Romanian, the WMT 2015 development set.",
"cite_spans": [
{
"start": 260,
"end": 286,
"text": "(Conneau and Lample, 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3.1"
},
{
"text": "Evaluation metrics While perplexity is used as the metric during training, the performance on the test sets are reported on the commonly used BLEU metric (Papineni et al., 2002) , specifically the MOSES evaluation script (Hoang and Koehn, 2008) . Table 1 shows the BLEU scores on the test sets for source and target language. The scores for model (1) to (5) are taken from the respective papers with our re-evaluations producing almost identical results when using the respective openly accessible repository. Although the proposed model (7) uses exclusively data of the target language, it performs competitively than the sophisticated approaches in (1), (2), (3), and (4) which all use backtranslation (Lample et al., 2018a,b; Artetxe et al., 2018b) , especially for German\u2192English. Since (3) uses the transformer architecture as well, the results highlight that the alignment achieved by the high-quality cross-lingual language models of XLM is superior to the gains achieved by the backtranslation algorithm. When combining both backtranslation and XLM, the results can, however, be further improved substantially, as shown by (5).",
"cite_spans": [
{
"start": 154,
"end": 177,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF27"
},
{
"start": 221,
"end": 244,
"text": "(Hoang and Koehn, 2008)",
"ref_id": "BIBREF17"
},
{
"start": 704,
"end": 728,
"text": "(Lample et al., 2018a,b;",
"ref_id": null
},
{
"start": 729,
"end": 751,
"text": "Artetxe et al., 2018b)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 247,
"end": 254,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3.1"
},
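{
"text": "The scores above are computed with the MOSES evaluation script; as a rough, hedged stand-in for readers who prefer a Python API, the sacrebleu library computes corpus-level BLEU in a comparable (though not byte-identical) way. The toy hypothesis and reference below are placeholders, not data from the paper.\n\nimport sacrebleu\n\nhypotheses = ['the auction seems extremely fascinating .']\nreferences = [['the auction seems extremely fascinating .']]  # one reference stream\nprint(sacrebleu.corpus_bleu(hypotheses, references).score)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3.1"
},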
{
"text": "While previous attempts to using the denoising objective without back-translation resulted in unusable results (Artetxe et al., 2018b) , we observe that our model performs reasonably well already when 6 emb dim: 1024, #layers 6, #heads 8, dropout: 0.1, attention dropout: 0.1, tokens per batch: 2000, optimizer: adam inverse sqrt.",
"cite_spans": [
{
"start": 111,
"end": 134,
"text": "(Artetxe et al., 2018b)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "Denoising autoencoder parameters: word shuffle: 3, word dropout: 0.1, word blank: 0.1, lr: 7 \u2022 10 \u22124",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "(3) (5) Table 2: Comparison of average training time between different methods on a single Tesla P100 when fine-tuning for German\u2192English. A step consists of 100K samples. Total cost reports floating-point operations for the entire training process. We use the value 9.5 TFLOP/s for the P100. Costs are shown when trained on the entire training corpus and when exclusively training on 100K sentences. The generation of paraphrases is included in (7) total cost. training exclusively on this objective (6). Joint modelling of the paraphrasing and denoising objective (7) improves scores of the unsupervised system by about 3 BLEU points over (6).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
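{
"text": "The total-cost column of Table 2 presumably follows from multiplying measured wall-clock training time by the assumed sustained throughput of 9.5 TFLOP/s for the P100; a small sketch of that conversion is given below, with placeholder numbers that are not taken from the paper.\n\ndef training_cost_flops(wall_clock_seconds, tflops_per_second=9.5):\n    # Convert training time on one P100 into total floating-point operations.\n    return wall_clock_seconds * tflops_per_second * 1e12\n\n# Placeholder example: ten hours of fine-tuning at 9.5 TFLOP/s\nprint(training_cost_flops(10 * 3600))   # about 3.4e17 FLOPs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},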
{
"text": "Furthermore, our approach (7) achieves particularly high results when a source language is related (German) to the training language, compared to a more distant one (French, Romanian). While our model (7) outperforms (3) for German\u2192English by around 3 BLEU points, it scores 2.1 points less for French\u2192English. Moreover, scores between (7) and (3) are only comparable for German, while (3) performs much better for both the less related source languages. This observation is even amplified when the source language is Romanian. Our model appears to be more susceptible to the relatedness of the source language than (5) which uses the same cross-lingual weights, scoring 2.1 points less on French than German, compared to only 1.0 for (5). While our approach solely relies on the alignments based on these weights, the backtranslation of (5) adds an important signal especially for more distant languages. This confirms the assumption made for our model: the BLEU score of our model is particularly high for the closely related German\u2192English, compared to the more distant language pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "One major advantage of the proposed method over existing methods is its efficiency in terms of computational time. In Table 2 we report the average time and cost for the German\u2192English experiments. The time to fine-tune on a full epoch is measured in the same environment under identical conditions. For our model, the measured times include the computational cost to generate the paraphrases 7 . We find that by using the paraphrasing objective instead of back-translation the computational time can be reduced by a factor of three. Moreover, the entire training process requires around an order of magnitude less floating point operations due to our approach converging much quicker.",
"cite_spans": [],
"ref_spans": [
{
"start": 118,
"end": 125,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Efficiency",
"sec_num": "5"
},
{
"text": "Our approach outperforms Lample et al. (2018b) while being much more efficient, however, one might suggest that the shown efficiency of the proposed model can also be achieved with the better scoring Conneau and Lample (2019) by trading off some of its performance advantages. Therefore, we explored to which extent either less training time or training data to achieve faster convergence improves its efficiency. Regarding training times, our model for English is already openly accessible. Moreover, the paraphrasing model uses a (Bi-LSTM) that was trained for 3 epochs on only 24,000 sentence pairs, which is less than 2% of the translation data used for the NMT models. proposed approach (7) scored consistently higher 8 than the other models until it's convergence, see figure 2. Only after our model converged, the model of Conneau and Lample (2019) surpasses its scores. Thus, stopping the training process earlier would not lead to better efficiency than our model. We then also analyzed the models' scores when modifying the amount of data used for fine-tuning. Results using 10K, 25K, 50K, 100K, and 5M training sentences are shown in Figure 3 . As seen, when using little training data, Conneau and Lample (2019) performs much worse than the proposed model since errors and noise from translating into one direction are propagated when translating back. The less monolingual data used, the stronger the effect of this error-propagation issue. The back-translation model of Conneau and Lample (2019) starts to outperform the proposed model starting from 100K",
"cite_spans": [
{
"start": 1198,
"end": 1223,
"text": "Conneau and Lample (2019)",
"ref_id": "BIBREF13"
},
{
"start": 1484,
"end": 1509,
"text": "Conneau and Lample (2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 1145,
"end": 1153,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Efficiency",
"sec_num": "5"
},
{
"text": "8 Excluding the initial phase as our model computes the paraphrases first. Since our model catches up at a BLEU of around 1.7, we ignored this special case. training sentences. Yet, even in this setting the proposed model remains much more efficient, as seen in the column cost (FLOPs) @ 100K in Table 2 . The back-translation model is thus required to train on more data and ultimately longer until convergence and cannot achieve similar efficiency with scores comparable to our approach.",
"cite_spans": [],
"ref_spans": [
{
"start": 296,
"end": 303,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Efficiency",
"sec_num": "5"
},
{
"text": "Alignment and translation quality: To investigate whether the alignment extends from a wordlevel to a phrase or even sentence level, a qualitative analysis was conducted. Example reference texts and respective translations for German\u2192English are shown in Figure 4 . It can be seen that the translation quality is high for shorter sentences but it declines with increasing sentence length. Many words and phrases are translated correctly, which generally leads to preservation of sentence meaning for simple sentences in closely-related languages like English and German. While many simple phrases are grammatical, many longer more complex structures are not. Furthermore, the model hallucinates content, especially when confronted with numbers and named entities. For example, in sentence 5 the model generates a made-up destination to northern Croatia while New Lloyd in sentence 6 also never occurs in the source sentence. In the translation for sentence 3, the name is simply omitted and the currency, as well as the amount, is wrong. This observation can be transferred to numbers as well: in sentence 3, the number of 7.5 million was changed to 67.5 million. In an extreme example, the model hallucinates an entire clause for sentence 7. We also observed some artifacts in the output: in sentence 7, oberfl\u00e4chlich is translated into Oberpublic, instead of superficial, merging an German and English term.",
"cite_spans": [],
"ref_spans": [
{
"start": 255,
"end": 263,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Model Analysis & Discussion",
"sec_num": "6"
},
{
"text": "Wrong translations frequently contain words that are still closely related to the correct translation. For instance, in sentence 2 the model generates the former prime minister of Israel Olmert, instead of Netanyahu and translates meets instead of receives. Or in sentence 5, the model translated into hurdles instead of obstacles. These very subtle differences are difficult for the model to capture.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Analysis & Discussion",
"sec_num": "6"
},
{
"text": "Paraphrase quality Although the paraphrasing objective improves the performance of the system substantially, the generated paraphrases are still far from optimal. Figure 5 shows example paraphrases.",
"cite_spans": [],
"ref_spans": [
{
"start": 163,
"end": 171,
"text": "Figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Model Analysis & Discussion",
"sec_num": "6"
},
{
"text": "X-1: Die Vorbereitung lief gut. Y: The preparation went well. Y : The training season went well. X-2: Obama empf\u00e4ngt Netanyahu. Y: Obama receives Netanyahu. Y : Obama meets Olmert. X-3: Spaniens Nationaltorh\u00fcter Iker Casillas hat f\u00fcr 7,5 Millionen Euro seine H\u00e4nde versichern lassen. Y: Spain's national goalkeeper Iker Casillas insured his hands for 7.5 million euro. Y : Spain's King has asked for \u00a3 67.5 million to reinsure his hands.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Analysis & Discussion",
"sec_num": "6"
},
{
"text": "Wir wollten gewinnen und wir haben gewonnen, obwohl ich es bedaure, dass wir nicht noch ein oder zwei weitere Tore erzielt haben\". Y: We wanted to win and we did, although I regret that we did not score one or two goals more. Y : We wanted to win and we have won, although I regret it not yet that we have scored one or two more goals.\" X-5: Zagreb k\u00fcndigt an, die Durchreise\u00fcber Slowenien nach Norden ohne H\u00fcrden zu gew\u00e4hren. Y: Zagreb has announced that they will allow passage over Slovenia towards the north without obstacles. Y : Zagreb has announced it will allow the migration through Slovenia to northern Croatia without hurdles. X-6: Er k\u00f6nnte sich vorstellen, in Startup-Firmen der Neuen Werf zu investieren und sp\u00e4ter auch welche zu akquirieren. Y: He can imagine investing in start-up business in Neuen Werft, and also later acquiring these. Y : He could also imagine investing in startup-owned companies like the New Lloyd's to invest and later in companies to quiquire. X-7: Dieser oberfl\u00e4chliche Erfolg wird von dem dominiert, was Psychologen extrinsische Werte nennen; Geld ,Image, sozialer Status, alles auf Kosten intrinsischer Werteunsere innere Sehnsucht nach pers\u00f6nlicher Entwicklung und Freundschaft, die sich auf unsere geistige Gesundheit viel tiefgehender auswirkt. Y: This superficial success is dominated by what psychologists term extrinsic values; money, image, social status, all at the expense of intrinsic values -our inner yearning for personal growth and friendship -that more deeply impacts our mental health. Y : This OberPublic success is dominated by the things that economists call extraterrestrial values to characterize; money, image, social status all on the planet of intrinsatiable values -our inner desire for personal development and friendship, which can be felt on our mental health much deeper than it is in the physical world of human development and friendship, which is a real concern for our human health and friendship, which is in effect a greater word. They show clear noise and are not always grammatical either. The quality of the paraphrases degrades R: They have not been charged or formally arrested. P1: they were not charged or officially arrested. P2: they didn't have an arrest or official.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "X-4:",
"sec_num": null
},
{
"text": "The Japanese-made tin robots have blocky heads and moveable arms and legs. P1: the japanese robots have blocky robots, and their arms and feet. P2: the japanese robots have blocky heads, their hands and feet. R: The A.P. said it hoped for a resolution so it could return to full coverage of the six-week tournament before the opening match Friday between France and Argentina. P1: the organisers said they hoped to find a resolution so he could return to full coverage of the six-week tournament before the opening -up friday between france and argentina. P2: the panasonic said that it hoped for a resolution for an order to return to full coverage of the six-week tournament to keep an eye on friday between france and argentina. substantially when using more paraphrases per sentence. More sophisticated paraphrasing systems (e.g. Witteveen and Andrews (2019) ), might further improve results. We experimented with one, three and five extracted paraphrases per reference sentence. Using more than one paraphrase per sentence boosts performance substantially, highlighting that the paraphrasing objective also serves a data augmentation function. However, there was no noticeable difference between 3 and 5 paraphrases.",
"cite_spans": [
{
"start": 834,
"end": 862,
"text": "Witteveen and Andrews (2019)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "R:",
"sec_num": null
},
{
"text": "7 Literature Review",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "R:",
"sec_num": null
},
{
"text": "Initial approaches for unsupervised MT focus on NMT systems in combination with backtranslation, denoising autoencoding, and crosslingual embeddings (Lample et al., 2018b; Artetxe et al., 2018b) . Artetxe et al. (2018a) ; Lample et al. (2018b) improve over NMT approaches by focusing on phrase-based Statistical Machine Translation (PBSMT) with phrase tables from crosslingual embedding mappings and iterative backtranslation. Lample et al. (2018b) improves on these attempts by more careful initialization and language models. Lample et al. (2018b) attempt to combine both PBSMT and NMT, by tuning the NMT model on data generated by the PBSMT model and explore additional tweaks, e.g. bytepair encodings (Sennrich et al., 2016b) . Artetxe et al. (2019) also focus on PBSMT for unsupervised machine translation. They propose a more sophisticated hybridization approach and unsupervised optimization technique for the PBSMT model. Comparable performance to Artetxe et al. (2019) was achieved by Conneau and Lample (2019) by simply initializing the NMT model of Lample et al. (2018b) with the weights of a pre-trained crosslingual transformer. An analysis on the the practicality of unsupervised machine translation systems by Kim et al. (2020) concludes that linguistic dissimilarity and a domain mismatch between source and target data pose a substantial challenge for current state-of-the-art systems. They attribute these challenges to a lack of sufficient monolingual corpora for these domains, especially if one of the languages is under-resourced. The success of unsupervised methods has led to the first WMT shared subtask on unsupervised MT in 2019 on German-Czech with system submissions being very similar to existing approaches adapted for Czech (Kvapil\u00edkov\u00e1 et al., 2019; Liu et al., 2019) . Moreover, since 2019 WMT organizes a similar language translation task for Spanish\u2192Portuguese, Czech\u2192Polish, and Hindi\u2192Nepali (Barrault et al., 2019 (Barrault et al., , 2020 .",
"cite_spans": [
{
"start": 149,
"end": 171,
"text": "(Lample et al., 2018b;",
"ref_id": "BIBREF21"
},
{
"start": 172,
"end": 194,
"text": "Artetxe et al., 2018b)",
"ref_id": "BIBREF2"
},
{
"start": 197,
"end": 219,
"text": "Artetxe et al. (2018a)",
"ref_id": "BIBREF0"
},
{
"start": 222,
"end": 243,
"text": "Lample et al. (2018b)",
"ref_id": "BIBREF21"
},
{
"start": 427,
"end": 448,
"text": "Lample et al. (2018b)",
"ref_id": "BIBREF21"
},
{
"start": 528,
"end": 549,
"text": "Lample et al. (2018b)",
"ref_id": "BIBREF21"
},
{
"start": 705,
"end": 729,
"text": "(Sennrich et al., 2016b)",
"ref_id": "BIBREF31"
},
{
"start": 732,
"end": 753,
"text": "Artetxe et al. (2019)",
"ref_id": "BIBREF1"
},
{
"start": 956,
"end": 977,
"text": "Artetxe et al. (2019)",
"ref_id": "BIBREF1"
},
{
"start": 994,
"end": 1019,
"text": "Conneau and Lample (2019)",
"ref_id": "BIBREF13"
},
{
"start": 1060,
"end": 1081,
"text": "Lample et al. (2018b)",
"ref_id": "BIBREF21"
},
{
"start": 1225,
"end": 1242,
"text": "Kim et al. (2020)",
"ref_id": "BIBREF18"
},
{
"start": 1756,
"end": 1782,
"text": "(Kvapil\u00edkov\u00e1 et al., 2019;",
"ref_id": "BIBREF19"
},
{
"start": 1783,
"end": 1800,
"text": "Liu et al., 2019)",
"ref_id": "BIBREF23"
},
{
"start": 1929,
"end": 1951,
"text": "(Barrault et al., 2019",
"ref_id": null
},
{
"start": 1952,
"end": 1976,
"text": "(Barrault et al., , 2020",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Machine Translation",
"sec_num": "7.1"
},
{
"text": "The success of fine-tuning pre-trained language models, such as GPT or BERT (Radford et al., 2019; Devlin et al., 2019) has also led to various cross-lingual versions of these pre-trained models, such as M-BERT (Devlin et al., 2019) and XLM (Conneau and Lample, 2019) . These cross-lingual transformers learn to align multiple languages in their latent space by concatenating the monolingual training data on a shared subword vocabulary. They have created state-of-the-art results in multiple lowresource languages. Although monolingual models still outperform these cross-lingual ones if enough training data are available (Virtanen et al., 2019) , a large-scale version of XLM has been shown to produce comparable results to monolingual LMs for high-resource languages (Conneau et al., 2020) .",
"cite_spans": [
{
"start": 76,
"end": 98,
"text": "(Radford et al., 2019;",
"ref_id": "BIBREF28"
},
{
"start": 99,
"end": 119,
"text": "Devlin et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 211,
"end": 232,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 241,
"end": 267,
"text": "(Conneau and Lample, 2019)",
"ref_id": "BIBREF13"
},
{
"start": 624,
"end": 647,
"text": "(Virtanen et al., 2019)",
"ref_id": "BIBREF34"
},
{
"start": 771,
"end": 793,
"text": "(Conneau et al., 2020)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual Learning with Transformers",
"sec_num": "7.2"
},
{
"text": "Paraphrase generation is concerned with the creation of phrases/sentences that use different words to express similar information (Bhagat and Hovy, 2013) . It is a heavily researched task, with recent approaches focusing on the use of deep learning treating it as a Seq2Seq problem, by either using paraphrase databases (Cao et al., 2017; Li et al., 2018; Witteveen and Andrews, 2019) , exploring NMT by pivoting between languages or pairs Wieting et al., 2017; Federmann et al., 2019) , or by treating it as a sentence simplification task (Zhang and Lapata, 2017; Niu et al., 2019) . Most paraphrasing systems create a list of k most probable paraphrases. While paraphrase generation focuses on English, paraphrase datasets for other languages exist (Ganitkevitch and Callison-Burch, 2014) . Extremely low-resource settings for paraphrasing have also been explored using parallel corpora (Maruyama and Yamamoto, 2019) or in a fully unsupervised setting (Roy and Grangier, 2019) .",
"cite_spans": [
{
"start": 130,
"end": 153,
"text": "(Bhagat and Hovy, 2013)",
"ref_id": "BIBREF5"
},
{
"start": 320,
"end": 338,
"text": "(Cao et al., 2017;",
"ref_id": "BIBREF11"
},
{
"start": 339,
"end": 355,
"text": "Li et al., 2018;",
"ref_id": "BIBREF22"
},
{
"start": 356,
"end": 384,
"text": "Witteveen and Andrews, 2019)",
"ref_id": "BIBREF36"
},
{
"start": 440,
"end": 461,
"text": "Wieting et al., 2017;",
"ref_id": "BIBREF35"
},
{
"start": 462,
"end": 485,
"text": "Federmann et al., 2019)",
"ref_id": "BIBREF15"
},
{
"start": 540,
"end": 564,
"text": "(Zhang and Lapata, 2017;",
"ref_id": "BIBREF37"
},
{
"start": 565,
"end": 582,
"text": "Niu et al., 2019)",
"ref_id": "BIBREF26"
},
{
"start": 751,
"end": 790,
"text": "(Ganitkevitch and Callison-Burch, 2014)",
"ref_id": "BIBREF16"
},
{
"start": 889,
"end": 918,
"text": "(Maruyama and Yamamoto, 2019)",
"ref_id": "BIBREF25"
},
{
"start": 954,
"end": 978,
"text": "(Roy and Grangier, 2019)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrase Generation",
"sec_num": "7.3"
},
{
"text": "This work presented a simple fine-tuning method for unsupervised NMT that solely relies on the underlying alignment of a cross-lingual language model and monolingual data in the target language. Joint learning on a denoising autoencoder and paraphrasing objective creates a competitive system to strong baselines, especially for related language pairs, while requiring much shorter training times. While this work has explored the proposed approach on commonly used language pairs for benchmarking unsupervised MT, future work includes testing the proposed method on other language pairs. This includes i) even more closely related languages (e.g. German\u2192Dutch or Spanish\u2192Portuguese), ii) language pairs without any parallel data, iii) translations between dialects. Moreover, we aim to further explore to which extent the proposed method can be combined with back-translation, especially in the context of distant languages, e.g. by adding backtranslation iterations on top of the proposed approach, similar to PBSMT in (Lample et al., 2018b) . Other training objectives should be explored, such as sentence simplification, translating between dialects, or even abstractive summarization when scaling this approach to document-level translation. Besides higher efficiency, this approach appears promising when training data for a language pair origins from different domains (e.g. Wikipedia versus News). Since our approach only requires data in the target language, domain mismatches in the training data for the language pair do not affect the proposed method. We are further aiming for human evaluation of translation quality.",
"cite_spans": [
{
"start": 1021,
"end": 1043,
"text": "(Lample et al., 2018b)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions & Future work",
"sec_num": "8"
}
],
"back_matter": [
{
"text": "We wish to thank Dr Andreas Vlachos for his support. We also thank the anonymous reviewers for their time and effort giving us feedback on our paper. This work was supported by the Engineering and Physical Sciences Research Council Doctoral Training Partnership. The second and third authors are supported by Research England via the University of Cambridge Global Challenges Research Fund.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Unsupervised Statistical Machine Translation",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. Unsupervised Statistical Machine Transla- tion. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "An effective approach to unsupervised machine translation",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1019"
]
},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2019. An effective approach to unsupervised machine translation. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, Florence, Italy.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Unsupervised neural machine translation",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2018,
"venue": "6th International Conference on Learning Representations, ICLR 2018, Vancouver",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018b. Unsupervised neural ma- chine translation. In 6th International Conference on Learning Representations, ICLR 2018, Vancou- ver, Canada.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 Conference on Machine Translation (WMT19). In Proceedings of the Fourth Conference on Machine Translation",
"authors": [
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Marta",
"middle": [
"R"
],
"last": "Costa-Juss\u00e0",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Fishel",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Mathias",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lo\u00efc Barrault, Ond\u0159ej Bojar, Marta R. Costa-juss\u00e0, Christian Federmann, Mark Fishel, Yvette Gra- ham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias M\u00fcller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 Conference on Machine Trans- lation (WMT19). In Proceedings of the Fourth Con- ference on Machine Translation (Volume 2: Shared Task Papers, Day 1), Florence, Italy.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Toshiaki Nakazawa, Santanu Pal, Matt Post, and Marcos Zampieri. 2020. Findings of the 2020 conference on machine translation (wmt20)",
"authors": [
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Magdalena",
"middle": [],
"last": "Biesialska",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Marta",
"middle": [
"R"
],
"last": "Costa-Juss\u00e0",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Grundkiewicz",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Joanis",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Kocmi",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Chi-Kiu",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Ljube\u0161i\u0107",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Morishita",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the Fifth Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "1--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lo\u00efc Barrault, Magdalena Biesialska, Ond\u0159ej Bojar, Marta R. Costa-juss\u00e0, Christian Federmann, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Matthias Huck, Eric Joanis, Tom Kocmi, Philipp Koehn, Chi-kiu Lo, Nikola Ljube\u0161i\u0107, Christof Monz, Makoto Morishita, Masaaki Nagata, Toshi- aki Nakazawa, Santanu Pal, Matt Post, and Marcos Zampieri. 2020. Findings of the 2020 conference on machine translation (wmt20). In Proceedings of the Fifth Conference on Machine Translation, pages 1-55, Online.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Squibs: What Is a Paraphrase?",
"authors": [
{
"first": "Rahul",
"middle": [],
"last": "Bhagat",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2013,
"venue": "Computational Linguistics",
"volume": "39",
"issue": "3",
"pages": "463--472",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rahul Bhagat and Eduard Hovy. 2013. Squibs: What Is a Paraphrase? Computational Linguistics, 39(3):463-472.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Proceedings of the Ninth Workshop on Statistical Machine Translation",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Buck",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Christof Monz, Matt Post, and Lucia Specia, editors. 2014. Proceedings of the Ninth Workshop on Statistical Machine Trans- lation. Association for Computational Linguistics, Baltimore, Maryland, USA.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Specia, Marco Turchi, Karin Verspoor, and Marcos Zampieri",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Rajen",
"middle": [],
"last": "Chatterjee",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Antonio",
"middle": [
"Jimeno"
],
"last": "Yepes",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Varvara",
"middle": [],
"last": "Logacheva",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Negri",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the First Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Aure- lie Neveol, Mariana Neves, Martin Popel, Matt Post, Raphael Rubino, Carolina Scarton, Lucia Spe- cia, Marco Turchi, Karin Verspoor, and Marcos Zampieri. 2016. Findings of the 2016 Conference on Machine Translation. In Proceedings of the First Conference on Machine Translation, Berlin, Ger- many.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Proceedings of the Tenth Workshop on Statistical Machine Translation",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Rajen",
"middle": [],
"last": "Chatterjee",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Hokamp",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Varvara",
"middle": [],
"last": "Logacheva",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Carolina",
"middle": [],
"last": "Scarton",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/W15-3001"
]
},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Matthias Huck, Chris Hokamp, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Carolina Scarton, Lucia Specia, and Marco Turchi. 2015. Findings of the 2015 Workshop on Statistical Machine Translation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, Lisbon, Portugal.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Proceedings of the Second Workshop on Statistical Machine Translation",
"authors": [],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Callison-Burch, Philipp Koehn, Cameron Shaw Fordyce, and Christof Monz, editors. 2007. Pro- ceedings of the Second Workshop on Statistical Ma- chine Translation. Association for Computational Linguistics, Prague, Czech Republic.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Proceedings of the Third Workshop on Statistical Machine Translation. Association for Computational Linguistics",
"authors": [],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Callison-Burch, Philipp Koehn, Christof Monz, Josh Schroeder, and Cameron Shaw Fordyce, editors. 2008. Proceedings of the Third Workshop on Statis- tical Machine Translation. Association for Compu- tational Linguistics, Columbus, Ohio.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Joint copying and restricted generation for paraphrase",
"authors": [
{
"first": "Ziqiang",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Chuwei",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2017,
"venue": "Thirty-First AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"https://dl.acm.org/doi/10.5555/3298023.3298027"
]
},
"num": null,
"urls": [],
"raw_text": "Ziqiang Cao, Chuwei Luo, Wenjie Li, and Sujian Li. 2017. Joint copying and restricted generation for paraphrase. In Thirty-First AAAI Conference on Ar- tificial Intelligence, San Francisco, CA, USA.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.747"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the As- sociation for Computational Linguistics, Online.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Crosslingual Language Model Pretraining",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau and Guillaume Lample. 2019. Cross- lingual Language Model Pretraining. In Advances in Neural Information Processing Systems 32, Van- couver, Canada.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1, Minneapolis, Minnesota.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Multilingual whispers: Generating paraphrases with translation",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Oussama",
"middle": [],
"last": "Elachqar",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 5th Workshop on Noisy User-generated Text",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Federmann, Oussama Elachqar, and Chris Quirk. 2019. Multilingual whispers: Generating paraphrases with translation. In Proceedings of the 5th Workshop on Noisy User-generated Text (W- NUT 2019), Hong Kong.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The Multilingual Paraphrase Database",
"authors": [
{
"first": "Juri",
"middle": [],
"last": "Ganitkevitch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juri Ganitkevitch and Chris Callison-Burch. 2014. The Multilingual Paraphrase Database. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), Reykjavik, Iceland.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Design of the Moses Decoder for Statistical Machine Translation",
"authors": [
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2008,
"venue": "Software Engineering, Testing, and Quality Assurance for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "58--65",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hieu Hoang and Philipp Koehn. 2008. Design of the Moses Decoder for Statistical Machine Translation. In Software Engineering, Testing, and Quality Assur- ance for Natural Language Processing, pages 58- 65, Columbus, Ohio.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "When and why is unsupervised neural machine translation useless?",
"authors": [
{
"first": "Yunsu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Gra\u00e7a",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yunsu Kim, Miguel Gra\u00e7a, and Hermann Ney. 2020. When and why is unsupervised neural machine trans- lation useless? In Proceedings of the 22nd An- nual Conference of the European Association for Machine Translation, Lisboa, Portugal.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "CUNI systems for the unsupervised news translation task in WMT 2019",
"authors": [
{
"first": "Ivana",
"middle": [],
"last": "Kvapil\u00edkov\u00e1",
"suffix": ""
},
{
"first": "Dominik",
"middle": [],
"last": "Mach\u00e1\u010dek",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5323"
]
},
"num": null,
"urls": [],
"raw_text": "Ivana Kvapil\u00edkov\u00e1, Dominik Mach\u00e1\u010dek, and Ond\u0159ej Bojar. 2019. CUNI systems for the unsupervised news translation task in WMT 2019. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), Florence, Italy.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Unsupervised machine translation using monolingual corpora only",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018a. Unsupervised machine translation using monolingual corpora only. In International Conference on Learning Represen- tations, Vancouver, Canada.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Phrase-Based & Neural Unsupervised Machine Translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Myle Ott, Alexis Conneau, Lu- dovic Denoyer, and Marc'Aurelio Ranzato. 2018b. Phrase-Based & Neural Unsupervised Machine Translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, Brussels, Belgium.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Paraphrase generation with deep reinforcement learning",
"authors": [
{
"first": "Zichao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Lifeng",
"middle": [],
"last": "Shang",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1421"
]
},
"num": null,
"urls": [],
"raw_text": "Zichao Li, Xin Jiang, Lifeng Shang, and Hang Li. 2018. Paraphrase generation with deep reinforce- ment learning. In Proceedings of the 2018 Con- ference on Empirical Methods in Natural Language Processing, Brussels, Belgium.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Incorporating word and subword units in unsupervised machine translation using language model rescoring",
"authors": [
{
"first": "Zihan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Genta",
"middle": [],
"last": "Indra Winata",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5327"
]
},
"num": null,
"urls": [],
"raw_text": "Zihan Liu, Yan Xu, Genta Indra Winata, and Pascale Fung. 2019. Incorporating word and subword units in unsupervised machine translation using language model rescoring. In Proceedings of the Fourth Con- ference on Machine Translation (Volume 2: Shared Task Papers, Day 1), Florence, Italy.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Paraphrasing revisited with neural machine translation",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Mallinson",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Mallinson, Rico Sennrich, and Mirella Lapata. 2017. Paraphrasing revisited with neural machine translation. In Proceedings of the 15th Conference of the European Chapter of the Association for Com- putational Linguistics: Volume 1, Long Papers, Va- lencia, Spain.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Extremely low resource text simplification with pre-trained transformer language model",
"authors": [
{
"first": "T",
"middle": [],
"last": "Maruyama",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Yamamoto",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 International Conference on Asian Language Processing (IALP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Maruyama and K. Yamamoto. 2019. Extremely low resource text simplification with pre-trained trans- former language model. In 2019 International Conference on Asian Language Processing (IALP), Shanghai, China.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Deleter: Leveraging BERT to perform unsupervised successive text compression",
"authors": [
{
"first": "Tong",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 1909,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tong Niu, Caiming Xiong, and Richard Socher. 2019. Deleter: Leveraging BERT to perform unsupervised successive text compression. arXiv, 1909.03223.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "BLEU: A Method for Automatic Evaluation of Machine Translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: A Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, Philadelphia, Pennsylva- nia, USA.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Language Models are Unsupervised Multitask Learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "Ope-nAI Blog",
"volume": "",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language Models are Unsupervised Multitask Learners. Ope- nAI Blog, 1(8).",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Unsupervised paraphrasing without translation",
"authors": [
{
"first": "Aurko",
"middle": [],
"last": "Roy",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1605"
]
},
"num": null,
"urls": [],
"raw_text": "Aurko Roy and David Grangier. 2019. Unsupervised paraphrasing without translation. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, Florence, Italy.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Improving Neural Machine Translation Models with Monolingual Data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving Neural Machine Translation Models with Monolingual Data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Berlin, Germany.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Neural Machine Translation of Rare Words with Subword Units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural Machine Translation of Rare Words with Subword Units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Berlin, Ger- many.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Attention is All you Need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In Advances in Neural Information Pro- cessing Systems 30, Vancouver, Canada.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Extracting and composing robust features with denoising autoencoders",
"authors": [
{
"first": "Pascal",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Larochelle",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Pierre-Antoine",
"middle": [],
"last": "Manzagol",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 25th International Conference on Machine Learning, ICML '08",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"https://dl.acm.org/doi/10.1145/1390156.1390294"
]
},
"num": null,
"urls": [],
"raw_text": "Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. 2008. Extracting and composing robust features with denoising au- toencoders. In Proceedings of the 25th Interna- tional Conference on Machine Learning, ICML '08, Helsinki, Finland.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Multilingual is not enough: BERT for Finnish",
"authors": [
{
"first": "Antti",
"middle": [],
"last": "Virtanen",
"suffix": ""
},
{
"first": "Jenna",
"middle": [],
"last": "Kanerva",
"suffix": ""
},
{
"first": "Rami",
"middle": [],
"last": "Ilo",
"suffix": ""
},
{
"first": "Jouni",
"middle": [],
"last": "Luoma",
"suffix": ""
},
{
"first": "Juhani",
"middle": [],
"last": "Luotolahti",
"suffix": ""
},
{
"first": "Tapio",
"middle": [],
"last": "Salakoski",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1912.07076"
]
},
"num": null,
"urls": [],
"raw_text": "Antti Virtanen, Jenna Kanerva, Rami Ilo, Jouni Luoma, Juhani Luotolahti, Tapio Salakoski, Filip Ginter, and Sampo Pyysalo. 2019. Multilingual is not enough: BERT for Finnish. arXiv:1912.07076 [cs].",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Learning Paraphrastic Sentence Embeddings from Back-Translated Bitext",
"authors": [
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Mallinson",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Wieting, Jonathan Mallinson, and Kevin Gimpel. 2017. Learning Paraphrastic Sentence Embeddings from Back-Translated Bitext. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Paraphrasing with Large Language Models",
"authors": [
{
"first": "Sam",
"middle": [],
"last": "Witteveen",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Andrews",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 3rd Workshop on Neural Generation and Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sam Witteveen and Martin Andrews. 2019. Paraphras- ing with Large Language Models. In Proceedings of the 3rd Workshop on Neural Generation and Trans- lation, Hong Kong.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Sentence simplification with deep reinforcement learning",
"authors": [
{
"first": "Xingxing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1062"
]
},
"num": null,
"urls": [],
"raw_text": "Xingxing Zhang and Mirella Lapata. 2017. Sentence simplification with deep reinforcement learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copen- hagen, Denmark.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Training the paraphrasing generator(Wieting et al., 2017) is not included in the costs as the employed paraphrasing BLEU scores over number of FLOPs. FLOPs include the computation of the paraphrases before our model is trained.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"text": "BLEU scores of the proposed model as well as the state-of-the-art approach of Conneau and Lample (2019) for varying number of monolingual training sentences: 10K, 25K, 50K 100K, and 5M.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF3": {
"text": "Example translations (\u0176 ) by model (7) of sentences (X) from German to English with gold-standard translations (Y).",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF4": {
"text": "Example references (R) and respective paraphrases (P) of the employed paraphrasing system(Wieting et al., 2017).",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF1": {
"text": "Table 1: BLEU scores on test sets. (1)-(5) are taken from the respective papers.(6)and (7) refers to the proposed approach. Bold numbers indicate the language pair on which the model performs best.",
"content": "<table><tr><td>Model</td><td colspan=\"2\">de-en fr-en ro-en</td></tr><tr><td>(1) Lample et al. (2018a)</td><td>13.3 14.3</td><td>-</td></tr><tr><td>(2) Artetxe et al. (2018b)</td><td>-15.6</td><td>-</td></tr><tr><td>(3) Lample et al. (2018b) Transformer</td><td colspan=\"2\">21.0 24.2 19.4</td></tr><tr><td>(4) Lample et al. (2018b) Transformer + PBSMT</td><td colspan=\"2\">25.1 27.7 23.9</td></tr><tr><td>(5) Conneau and Lample (2019)</td><td colspan=\"2\">34.3 33.3 31.8</td></tr><tr><td>(6) Transf. + autoencoder</td><td colspan=\"2\">20.9 18.9 18.7</td></tr><tr><td>(7) Transf. + autoencoder + paraphrases</td><td colspan=\"2\">24.2 22.1 21.2</td></tr></table>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF2": {
"text": "\u2022 10 18 6.31 \u2022 10 17 8.07 \u2022 10 16 cost (FLOPs) @ 100K 5.10 \u2022 10 17 3.15 \u2022 10 17 7.47 \u2022 10 16",
"content": "<table><tr><td>time per step (minutes)</td><td>89.55</td><td>92.24</td><td>29.54</td></tr><tr><td>total cost (FLOPs)</td><td>1.78</td><td/><td/></tr></table>",
"type_str": "table",
"num": null,
"html": null
}
}
}
}