{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:18:10.478143Z"
},
"title": "Cross-lingual Supervision Improves Unsupervised Neural Machine Translation",
"authors": [
{
"first": "Mingxuan",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "ByteDance AI Lab",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Hongxiao",
"middle": [],
"last": "Bai",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Shanghai Jiao Tong University",
"location": {}
},
"email": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Shanghai Jiao Tong University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Lei",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "ByteDance AI Lab",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We propose to improve unsupervised neural machine translation with cross-lingual supervision (CUNMT), which utilizes supervision signals from high resource language pairs to improve the translation of zero-source languages. Specifically, for training En-Ro system without parallel corpus, we can leverage the corpus from En-Fr and En-De to collectively train the translation from one language into many languages under one model. Simple and effective, CUNMT significantly improves the translation quality with a big margin in the benchmark unsupervised translation tasks, and even achieves comparable performance to supervised NMT. In particular, on WMT'14 En-Fr tasks CUNMT achieves 37.6 and 35.18 BLEU score, which is very close to the large scale supervised setting and on WMT'16 En-Ro tasks CUNMT achieves 35.09 BLEU score which is even better than the supervised Transformer baseline.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "We propose to improve unsupervised neural machine translation with cross-lingual supervision (CUNMT), which utilizes supervision signals from high resource language pairs to improve the translation of zero-source languages. Specifically, for training En-Ro system without parallel corpus, we can leverage the corpus from En-Fr and En-De to collectively train the translation from one language into many languages under one model. Simple and effective, CUNMT significantly improves the translation quality with a big margin in the benchmark unsupervised translation tasks, and even achieves comparable performance to supervised NMT. In particular, on WMT'14 En-Fr tasks CUNMT achieves 37.6 and 35.18 BLEU score, which is very close to the large scale supervised setting and on WMT'16 En-Ro tasks CUNMT achieves 35.09 BLEU score which is even better than the supervised Transformer baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Neural machine translation (NMT) has achieved great success and reached satisfactory translation performance for several language pairs (Bahdanau et al., 2015; Gehring et al., 2017; Vaswani et al., 2017) . Such breakthroughs heavily depend on the availability of colossal amounts of bilingual sentence pairs, such as the some 40 million parallel sentence pairs used in the training of WMT14 English French Task. As bilingual sentence pairs are costly to collect, the success of NMT has not been fully duplicated in the vast majority of language pairs, especially for zero-resource languages. Recently, (Artetxe et al., 2018b; Lample et al., 2018a; ?) tackled this challenge by training unsupervised neural machine translation (UNMT) models using only monolingual data, which achieves considerably high accuracy, but still not on par with that of the state of the art supervised models. Most previous works focused on modeling the architecture through parameter sharing or proper initialization to improve UNMT. We argue that the drawback of UNMT mainly stems from the lack of supervised signals, and it is beneficial to transfer multilingual information across languages. In this paper, we take a step towards practical unsupervised NMT with cross-lingual supervision (CUNMT) -making the most of the signal from other language. We investigate two variants of multilingual supervision for UNMT. a) CUNMT w/o Para.: a general setting where unrelated monolingual data can be introduced. For example, using monolingual Fr data to help the training of En-De (Figure 1(c) ). b) CUNMT w/ Para.: a relatively strict setting where other bilingual language pairs can be introduced. For example, we can naturally leverage parallel En-Fr data to facilitate the unsupervised En-De transla-tion ( Figure 1 ",
"cite_spans": [
{
"start": 136,
"end": 159,
"text": "(Bahdanau et al., 2015;",
"ref_id": "BIBREF5"
},
{
"start": 160,
"end": 181,
"text": "Gehring et al., 2017;",
"ref_id": "BIBREF9"
},
{
"start": 182,
"end": 203,
"text": "Vaswani et al., 2017)",
"ref_id": "BIBREF23"
},
{
"start": 602,
"end": 625,
"text": "(Artetxe et al., 2018b;",
"ref_id": "BIBREF4"
},
{
"start": 626,
"end": 647,
"text": "Lample et al., 2018a;",
"ref_id": "BIBREF14"
},
{
"start": 648,
"end": 650,
"text": "?)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1553,
"end": 1565,
"text": "(Figure 1(c)",
"ref_id": "FIGREF0"
},
{
"start": 1783,
"end": 1791,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(d)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We introduce cross-lingual supervision which aims at modeling explicit translation probabilities across languages. Taking three languages as an example, suppose the target unsupervised direction is En \u2192 De and the auxiliary language is Fr. Our target is to model the translation probability p(De|En) with the support of p(Fr|En) and p(De|Fr). For forward cross-lingual supervision, the system NMT Fr\u2192De serves as a teacher, translating the Fr part of parallel data (En, Fr) to De. The resulted synthetic data (En, Fr, De) can be used to improve our target system NMT En\u2192De . For backward cross-lingual supervision, we translate the monolingual De to Fr with NMT De\u2192Fr , and then translate Fr to En with NMT Fr\u2192En . The resulted synthetic bilingual data (De, En) can be used for NMT En\u2192De as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions can be summarized as follow: a) Empirical evaluation of CUNMT on six benchmarks verifies that it surpassed individual MT models by a large margin of more than 3.0 BLEU points on average, and also bested several strong competitors. Particularly, on WMT'16 En-Ro tasks, CUNMT surpass the supervised baseline by 0.7 BLEU, showing the great potential for UNMT. b) CUNMT is very effective in the use of additional training data. MBART or MASS introduces billions of sentences, while CUNMT only introduces tens of millions of sentences and achieves super or comparable results. It shows the importance of introducing explicit supervision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "CUNMT is based on a multilingual machine translation model involving supervised and unsupervised methods with a triangular training structure. The original unsupervised NMT depends only on monolingual corpus, therefore the performances of these translation directions cannot be guaranteed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed CUNMT",
"sec_num": "2"
},
{
"text": "Formally, given n different languages L i , x i de- notes a sentence in language L i . D i denotes a monolingual dataset of L i , and D i,j denotes a par- allel dataset of (L i , L j ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed CUNMT",
"sec_num": "2"
},
{
"text": "We use E to indicate the set of all translation directions with parallel data and W to indicate the set of all unsupervised translation directions respectively. The goal of CUNMT is to minimize the log likelihood of both unsuper- vised and supervised directions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed CUNMT",
"sec_num": "2"
},
{
"text": "Backward Crosslingual Supervision Forward Crosslingual Supervision T S J T S T S T S J P(t | s) P(t | g (s)) P(s | g (t)) P(s | g s\u2192j (s)) P( j | g j\u2192t ( j)) P( f s\u2192j (s) | s) P( f j\u2192t ( j) | j) Direct Supervision",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed CUNMT",
"sec_num": "2"
},
{
"text": "L CUNMT = i,j\u2208W L U i\u2192j + i,j\u2208E L S i\u2192j + i,j\u2208W+EL i\u2192j (1) where L U",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed CUNMT",
"sec_num": "2"
},
{
"text": "i\u2192j is the unsupervised direct supervision, and L S i\u2192j is the direct supervised supervision, andL i\u2192j is the indirect supervision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed CUNMT",
"sec_num": "2"
},
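{
"text": "To make Equation (1) concrete, below is a minimal Python sketch of one joint training step, assuming hypothetical helper functions unsup_loss, sup_loss, and indirect_loss that compute the three loss terms defined in Section 2.1; it is an illustration rather than the released implementation.\ndef cunmt_loss(model, W, E, mono, para):\n    # W: unsupervised direction pairs, E: supervised direction pairs\n    # mono[i]: monolingual batch of language i; para[(i, j)]: parallel batch for (i, j)\n    total = 0.0\n    for (i, j) in W:\n        total = total + unsup_loss(model, mono[i], mono[j], i, j)    # L^U_{i->j}\n    for (i, j) in E:\n        total = total + sup_loss(model, para[(i, j)], i, j)          # L^S_{i->j}\n    for (i, j) in set(W) | set(E):\n        total = total + indirect_loss(model, mono, para, i, j)       # indirect term\n    return total",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed CUNMT",
"sec_num": "2"
},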
{
"text": "Direct supervision We will first introduce the notion of direct supervision loss, which only consider the translation probability between two different languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Direct & Cross-lingual Supervision",
"sec_num": "2.1"
},
{
"text": "For supervised machine translation models, given parallel dataset D s,t with source language L s and target language L t , we use L S s\u2192t to denote the supervised training loss from language L s to language L t . The training loss for a single sentence can be defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Direct & Cross-lingual Supervision",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L S s\u2192t = E (xs,xt)\u223cDs,t [\u2212 log P (x t |x s )].",
"eq_num": "(2)"
}
],
"section": "Direct & Cross-lingual Supervision",
"sec_num": "2.1"
},
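{
"text": "As a small, self-contained illustration of Equation (2) (not the authors' code), the snippet below computes the token-level negative log-likelihood -log P(x_t | x_s) from decoder logits in PyTorch; the tensors are random stand-ins for the output of a real encoder-decoder.\nimport torch\nimport torch.nn.functional as F\n\nbatch, tgt_len, vocab = 2, 5, 100\nlogits = torch.randn(batch, tgt_len, vocab)          # decoder outputs for P(x_t | x_s)\ntarget = torch.randint(0, vocab, (batch, tgt_len))   # reference target token ids\n\n# Equation (2): average of -log P(x_t | x_s) over the target tokens\nnll = F.cross_entropy(logits.view(-1, vocab), target.view(-1))\nprint(nll.item())",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Direct & Cross-lingual Supervision",
"sec_num": "2.1"
},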
{
"text": "For unsupervised machine translation models, only monolingual dataset D s and D t are given. We use L U s\u2192t to denote the unsupervised training loss from language L s to language L t . We use B s\u2192t to denote this back translation procedure. After that, we can use these data to train the model with supervised method from L s to L t . The losses of the dual structural are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Direct & Cross-lingual Supervision",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L B t\u2192s =E xs\u223cDs [\u2212 log P (x s |g s\u2192t (x s )], L B s\u2192t =E xt\u223cDt [\u2212 log P (x t |g t\u2192s (x t )],",
"eq_num": "(3)"
}
],
"section": "Direct & Cross-lingual Supervision",
"sec_num": "2.1"
},
{
"text": "where g s\u2192t (x s ) translate the sentence in language L s to L t , that is, the back translation of x s . Then the total loss of an unsupervised machine translation is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Direct & Cross-lingual Supervision",
"sec_num": "2.1"
},
{
"text": "L U = L B t\u2192s + L B s\u2192t . (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Direct & Cross-lingual Supervision",
"sec_num": "2.1"
},
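{
"text": "A minimal sketch of the back-translation objective in Equations (3)-(4), assuming a hypothetical model object exposing translate(batch, src_lang, tgt_lang) for greedy generation and nll(src_batch, tgt_batch, src_lang, tgt_lang) for the negative log-likelihood; neither name comes from the paper or from a specific library.\nimport torch\n\ndef backtranslation_loss(model, mono_s, mono_t, s, t):\n    # g_{s->t}(x_s) and g_{t->s}(x_t): translate monolingual batches without tracking gradients\n    with torch.no_grad():\n        pseudo_t = model.translate(mono_s, src_lang=s, tgt_lang=t)\n        pseudo_s = model.translate(mono_t, src_lang=t, tgt_lang=s)\n    # Equation (3): reconstruct the original monolingual sentences from their translations\n    loss_t2s = model.nll(pseudo_t, mono_s, src_lang=t, tgt_lang=s)\n    loss_s2t = model.nll(pseudo_s, mono_t, src_lang=s, tgt_lang=t)\n    # Equation (4): total unsupervised loss\n    return loss_t2s + loss_s2t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Direct & Cross-lingual Supervision",
"sec_num": "2.1"
},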
{
"text": "Cross-lingual supervision When extend to the multilingual scenario, it is natural to introduce indirect supervision across languages. Given n different languages, for each language pair (L i , L j ), we can easily obtain the translation probability of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Direct & Cross-lingual Supervision",
"sec_num": "2.1"
},
{
"text": "P (x i |x j ) through the direct supervised model L S or L U .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Direct & Cross-lingual Supervision",
"sec_num": "2.1"
},
{
"text": "We useL s\u2192t to indicate the indirect supervised loss, which can be defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Direct & Cross-lingual Supervision",
"sec_num": "2.1"
},
{
"text": "L s\u2192t = n i=0,i =s,t \u03bb iLs\u2192i\u2192t (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Direct & Cross-lingual Supervision",
"sec_num": "2.1"
},
{
"text": "where \u03bb is the coefficient. T Due to the lack of triples data",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Direct & Cross-lingual Supervision",
"sec_num": "2.1"
},
{
"text": "(L i , L k , L j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Direct & Cross-lingual Supervision",
"sec_num": "2.1"
},
{
"text": ", it is difficult to directly estimate the cross translation lossL s\u2192i\u2192t . We therefor propose the backward and forward indirect supervision to calculate the cross loss:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Direct & Cross-lingual Supervision",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L s\u2192j\u2192t = E xt\u223cDt [\u2212 log P (x t |g t\u2192j\u2192s (x t ))] + E xs\u223cDs [\u2212 log P (f s\u2192j\u2192t (x s )|x s )]",
"eq_num": "(6)"
}
],
"section": "Direct & Cross-lingual Supervision",
"sec_num": "2.1"
},
{
"text": "where g t\u2192j\u2192s (x t ) is the indirect backward translation which translate x t to language L s and f s\u2192j\u2192s (x t ) is the indirect forward translation which translate x s to language L t .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Direct & Cross-lingual Supervision",
"sec_num": "2.1"
},
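{
"text": "To make the two synthetic-data routes concrete, here is a sketch of indirect forward and backward cross-lingual generation for the (En, Fr, De) example from the introduction, again assuming the hypothetical model.translate interface above; it only produces the pseudo pairs that are then fed to ordinary supervised training of NMT En->De.\ndef forward_synthetic(model, para_en_fr):\n    # forward route: translate the Fr side of parallel (En, Fr) data into De with NMT Fr->De,\n    # yielding synthetic (En, De) pairs for the target system\n    en_side, fr_side = para_en_fr\n    de_side = model.translate(fr_side, src_lang='fr', tgt_lang='de')\n    return en_side, de_side\n\ndef backward_synthetic(model, mono_de):\n    # backward route: pivot monolingual De through Fr into En (De->Fr, then Fr->En),\n    # yielding synthetic (En, De) pairs usable for NMT En->De as well\n    fr_side = model.translate(mono_de, src_lang='de', tgt_lang='fr')\n    en_side = model.translate(fr_side, src_lang='fr', tgt_lang='en')\n    return en_side, mono_de",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Direct & Cross-lingual Supervision",
"sec_num": "2.1"
},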
{
"text": "The procedure of CUNMT includes two main steps: multi-lingual pre-training and iterative multi-lingual training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Procedure of CUNMT",
"sec_num": "2.2"
},
{
"text": "Multi-lingual Pre-training Due to the ill-posed nature, it is also important to find a good initialization to associate the source side languages and the target side languages. We propose a Multi-lingual Pre-training approach, which jointly train the unsupervised auto-encoder and supervised machine translation. Intuitively, the multi-lingual joint pretraining can take advantage of transfer learning and thus benefit the low resource languages. Apart form the monolingual data, pre-training can also leverage the bilingual parallel data. We suggest the supervised data provides strong signal to optimize the network, which also advantage the unrelated unsupervised NMT pre-training. For example, it is beneficial to use the supervised En-Fr model to initialize the unsupervised De-Fr model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Procedure of CUNMT",
"sec_num": "2.2"
},
{
"text": "The goal is to train a single system that minimize the jointly loss function of L CUNMT . Generally, CUNMT can be applied to a restrict unsupervised scenario where only monolingual are provided, and also can be extended to a unrestricted scenario where parallel data are introduced. For the sake of simplicity, we describe our method on three language pairs, which can be easily extended to more language pairs. Suppose that the three languages are denoted as the triad (En, Fr, De), and we have monolingual data for all the three languages and also bilingual data for En-Fr. The target is to train an unsupervised En \u2192Fr system. The detailed method is as follows: For indirect or direct supervision, we follow the Equation (6), which will adopts one step forward translation if parallel data is provided. Since we train all directions in one model, the pseudo data will include all directions. In this setting, it contains: En \u2194 Fr, En \u2194 De, Fr \u2194 De with both direct and indirect directions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Indirect Supervised Training",
"sec_num": null
},
{
"text": "1. Sample batch of monolingual x En , x Fr , x De sentences from D En , D Fr , D",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Indirect Supervised Training",
"sec_num": null
},
{
"text": "We conduct experiments including (De, En, Fr), (Fr, En, De), and (Ro, En, Fr). For monolingual data of English, French and German, 20 million sentences from available WMT monolingual News Crawl datasets were randomly selected. For Romanian monolingual data, all of the available Romanian sentences from News Crawl dataset were used and and were supplemented with WMT16 monolingual data to yield a total of in 2.9 million sentences. For parallel data, we use the standard WMT 2014 English-French dataset consisting of about 36M sentence pairs, and the standard WMT 2014 English-German dataset consisting of about 4.5M sentence pairs. For analyses, we also introduce the standard WMT 2017 English-Chinese dataset consisting of 20M sentence pairs. Consist with previous work, we report results on newstest 2014 for English-French pair, and on newstest 2016 for English-German and English-Romanian.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and Settings",
"sec_num": "3.1"
},
{
"text": "In the experiments, CUNMT is built upon Transformer models. We use the Transformer with 6 layers, 1024 hidden units, 16 heads. We train our models with the Adam optimizer, a linear warm-up and learning rates varying from 10 \u22124 to 5 \u00d7 10 \u22124 . The model is trained on 8 NVIDIA V100 GPUs. We implement all our models in Py-Torch based on the code of (Lample and Conneau, 2019) 1 . All the results are evaluated on BLEU score with Moses scripts, which is in consist with the previous studies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and Settings",
"sec_num": "3.1"
},
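{
"text": "For reference, the reported training configuration can be summarized as a plain dictionary; the values are copied from the description above, while the field names are illustrative rather than those of a specific toolkit.\ncunmt_config = {\n    'layers': 6,                     # Transformer encoder/decoder layers\n    'hidden_size': 1024,\n    'attention_heads': 16,\n    'optimizer': 'Adam',\n    'lr_schedule': 'linear warm-up',\n    'learning_rate': (1e-4, 5e-4),   # reported range\n    'gpus': '8 x NVIDIA V100',\n    'metric': 'BLEU (Moses scripts)',\n}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and Settings",
"sec_num": "3.1"
},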
{
"text": "The main results of similar pairs are shown in Table 1. We make comparison with three strong unsupervised methods: CUNMT is very efficient in the use of multi-lingual data. While the pretrained language model is obtained through several hundred times larger monolingual or cross-lingual corpus, CUNMT achieves superior or comparable results with much less cost. The model was improved by using synthetic data of cross translation that is based on the jointly trained model. The results of \"CUNMT + Forward\" are from the model tuned by only 1 epoch with about 100K sentences. This method is fast and the performances are surprisingly effective. The \"CUNMT + Forward + Backward\" denotes that, besides forward translation, we also use monolingual data and cross translate it to the source language. This method yielded the best performance by outperforming the \"CUNMT w/o Para.\" by more than 3 BLEU score on average. The improvements show great potential for introducing indirect cross lingual supervision for unsupervised NMT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Results",
"sec_num": "3.2"
},
{
"text": "When compared with supervised approaches, CUNMT shows very promising performance. For the large scale WMT14 En-Fr tasks, the gap between CUNMT and supervised baseline is closed to 3.4 BLEU score. And for the medium WMT16 En-Ro task, CUNMT performs even better than the supervised approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Results",
"sec_num": "3.2"
},
{
"text": "In this part, we conduct several studies on CUNMT to better understand its setting. Figure 3 : Results comparison for CUNMT fine-tuning with different auxiliary data. \"Bw\" only adopts crosslingual backward translation synthetic data, and \"Fw\" only adopts cross-lingual forward translation synthetic data. The black horizontal is the baseline of UNMT. The horizontal axis is epoch and the vertical axis is the BLEU score. Epoch size is 100K sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 84,
"end": 92,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analyses",
"sec_num": "4"
},
{
"text": "Backward or Forward Here we have explored the effect of cross-lingual backward supervision and cross-lingual forward supervision, and plot the performance curves along with the training procedure in Figure 3 . The comparison system is CUNMT trained only with monolingual data.",
"cite_spans": [],
"ref_spans": [
{
"start": 199,
"end": 207,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analyses",
"sec_num": "4"
},
{
"text": "To make a fair comparison, we use \"CUNMT w/ Para.\" as the baseline model and fine-tuning it with only indirect forward supervision or indirect backward supervision. We conduct experiments on WMT16 En-De and En-Ro tasks. Clearly, the forward supervision outperforms the backward one with big margins, which shows the importance of introducing the forward supervision for multilingual UNMT. It is still interesting to find that only introducing the indirect backward translation achieves better results than the unsupervised baseline. We suppose the reasons for the performance gap is that, a) The UNMT baseline has included the traditional direct back translation, therefore the information gain from indirect backward translation is limited compared to the forward translation. b) The indirect forward translation provides a more direct way to model the relation across different languages. The results in consist with the previous research that pivot translation can help low resource language translation. scale. The results also dovetail with the unsupervised En-Fr experiments in Table 1 . As it turns out the smaller parallel data of En-De was able to significantly improve the performance of unsupervised En-Fr translation. We then reduce the scale of the parallel data En-De and surprisingly find that even with only 25% supervised data, CUNMT still works well. The experiments demonstrate that CUNMT is robust and has great potential to be applied to practical systems. Importance of the Auxiliary Language Table 3 shows effects of the auxiliary language. We first switch the parallel data from En-Fr to En-De, the performance is almost consistent. We then switch the parallen data to En \u2212 Zh, where Zh is dissimilar with Ro, the performance increases. This is in line with our expectations, that similar languages make it easier for transfer learning. Finally, we extend the parallel data to En-De and En-Fr, and achieves further benefits. Compared with , we suggest the language similarity is more important than the auxiliary data scale. of CUNMT is slightly lower than that of its state of the art counterparts. Also, some techniques such as model average are not applied, and two directions are trained in one model. In CUNMT, the performance of supervised directions drops a little, but in exchange, the performances of zero-shot directions are greatly improved and the model is convenient to serve for multiple translation directions.",
"cite_spans": [],
"ref_spans": [
{
"start": 1084,
"end": 1091,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 1506,
"end": 1523,
"text": "Language Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Analyses",
"sec_num": "4"
},
{
"text": "Strategies of Synthetic Data Generation For the synthetic data generation, the reported results are from greedy decoding for time efficiency. We compared the effects of sample strategies on the language setting of (Ro, En, De) where En-De is the supervised direction. The results based on beam search generation for En \u2192 Ro is 34.86, and 33.18 for En \u2192 Fr in terms of BLEU. Compared with greedy decoding, the performance of beam search is slightly inferior. A possible reason is that the beam search makes the synthetic data further biased on the learned pattern. The results suggest that CUNMT is exceedingly robust to the sampling strategies when performing forward and backward cross translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Robustness on Parallel Data Scale As shown in",
"sec_num": null
},
{
"text": "Multilingual NMT It has been proven low resource machine translation can adopt methods to utilize other rich resource data in order to develop a better system. These methods include multilingual translation system (Firat et al., 2016; Johnson et al., 2017) , teacher-student framework , or others (Zheng et al., 2017) . Apart from parallel data as an entry point, many attempts have been made to explore the usefulness of monolingual data, including semi-supervised methods and unsupervised methods which only monolingual data is used. Much work also has been done to attempt to marry monolingual data with supervised data to create a better system, some of which include using small amounts of parallel data and augment the system with monolingual data (Sennrich et al., 2016; He et al., 2016; Wang et al., 2018; Gu et al., 2018; Edunov et al., 2018; Yang et al., 2020) . Others also try to utilize parallel data of rich resource language pairs and also monolingual data (Ren et al., 2018; Al-Shedivat and Parikh, 2019; Lin et al., 2020) . (Ren et al., 2018) also proposed a triangular architecture, but their work still relied on parallel data of low resource language pairs. With the joint support of parallel and monolingual data, the performance of a low resource system can be improved.",
"cite_spans": [
{
"start": 214,
"end": 234,
"text": "(Firat et al., 2016;",
"ref_id": "BIBREF8"
},
{
"start": 235,
"end": 256,
"text": "Johnson et al., 2017)",
"ref_id": "BIBREF12"
},
{
"start": 297,
"end": 317,
"text": "(Zheng et al., 2017)",
"ref_id": "BIBREF28"
},
{
"start": 754,
"end": 777,
"text": "(Sennrich et al., 2016;",
"ref_id": "BIBREF21"
},
{
"start": 778,
"end": 794,
"text": "He et al., 2016;",
"ref_id": "BIBREF11"
},
{
"start": 795,
"end": 813,
"text": "Wang et al., 2018;",
"ref_id": "BIBREF24"
},
{
"start": 814,
"end": 830,
"text": "Gu et al., 2018;",
"ref_id": "BIBREF10"
},
{
"start": 831,
"end": 851,
"text": "Edunov et al., 2018;",
"ref_id": "BIBREF7"
},
{
"start": 852,
"end": 870,
"text": "Yang et al., 2020)",
"ref_id": "BIBREF27"
},
{
"start": 972,
"end": 990,
"text": "(Ren et al., 2018;",
"ref_id": "BIBREF19"
},
{
"start": 991,
"end": 1020,
"text": "Al-Shedivat and Parikh, 2019;",
"ref_id": "BIBREF0"
},
{
"start": 1021,
"end": 1038,
"text": "Lin et al., 2020)",
"ref_id": "BIBREF17"
},
{
"start": 1041,
"end": 1059,
"text": "(Ren et al., 2018)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Unsupervised NMT In 2017, pure unsupervised machine translation method with only monolingual data was proven to be feasible. On the basis of embedding alignment (Artetxe et al., 2017; Lample et al., 2018b) , (Lample et al., 2018a) and (Artetxe et al., 2018b) devised similar methods for fully unsupervised machine translation. Considerable work has been done to improve the unsupervised machine translation systems by methods such as statistical machine translation (Lample et al., 2018c; Artetxe et al., 2018a; Ren et al., 2019; Artetxe et al., 2019) , pretraining models (Lample and Conneau, 2019; Song et al., 2019) , or others (Wu et al., 2019) , and all of which greatly improve the performance of unsupervised machine translation. Our work attempts to utilize both monolingual and parallel data, and combine unsupervised and supervised machine translation through multilingual translation method into a single model CUNMT to ensure better performance for unsupervised language pairs.",
"cite_spans": [
{
"start": 161,
"end": 183,
"text": "(Artetxe et al., 2017;",
"ref_id": "BIBREF1"
},
{
"start": 184,
"end": 205,
"text": "Lample et al., 2018b)",
"ref_id": "BIBREF15"
},
{
"start": 208,
"end": 230,
"text": "(Lample et al., 2018a)",
"ref_id": "BIBREF14"
},
{
"start": 235,
"end": 258,
"text": "(Artetxe et al., 2018b)",
"ref_id": "BIBREF4"
},
{
"start": 466,
"end": 488,
"text": "(Lample et al., 2018c;",
"ref_id": "BIBREF16"
},
{
"start": 489,
"end": 511,
"text": "Artetxe et al., 2018a;",
"ref_id": "BIBREF2"
},
{
"start": 512,
"end": 529,
"text": "Ren et al., 2019;",
"ref_id": "BIBREF20"
},
{
"start": 530,
"end": 551,
"text": "Artetxe et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 573,
"end": 599,
"text": "(Lample and Conneau, 2019;",
"ref_id": "BIBREF13"
},
{
"start": 600,
"end": 618,
"text": "Song et al., 2019)",
"ref_id": "BIBREF22"
},
{
"start": 631,
"end": 648,
"text": "(Wu et al., 2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "In this work, we propose a multilingual machine translation framework CUNMT incorporating distant supervision to tackle the challenge of the unsupervised translation task. By mixing different training schemes into one model and utilizing unrelated bilingual corpus, we greatly improve the performance of the unsupervised NMT direction. By joint training, CUNMT can serve all translation directions in one model. Empirically, CUNMT has been proven to deliver substantial improvements over several strong UNMT competitors and even achieve comparable performance to supervised NMT. In the future, we plan to build a universal CUNMT system that is applicable in a wide span of languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Consistency by agreement in zero-shot neural machine translation",
"authors": [
{
"first": "Maruan",
"middle": [],
"last": "Al",
"suffix": ""
},
{
"first": "-",
"middle": [],
"last": "Shedivat",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL:HLT)",
"volume": "1",
"issue": "",
"pages": "1184--1197",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1121"
]
},
"num": null,
"urls": [],
"raw_text": "Maruan Al-Shedivat and Ankur Parikh. 2019. Con- sistency by agreement in zero-shot neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL:HLT), Volume 1 (Long and Short Papers), pages 1184-1197, Minneapolis, Min- nesota.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Learning bilingual word embeddings with (almost) no bilingual data",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "1",
"issue": "",
"pages": "451--462",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1042"
]
},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (ACL) (Volume 1: Long Papers), pages 451-462, Vancouver, Canada.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Unsupervised statistical machine translation",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "3632--3642",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1399"
]
},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. Unsupervised statistical machine transla- tion. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3632-3642, Brussels, Belgium.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "An effective approach to unsupervised machine translation",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "194--203",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2019. An effective approach to unsupervised ma- chine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics (ACL), pages 194-203, Florence, Italy.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Unsupervised neural machine translation",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018b. Unsupervised neural ma- chine translation. In International Conference on Learning Representations (ICLR).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Learning Representations (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In International Con- ference on Learning Representations (ICLR).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A teacher-student framework for zeroresource neural machine translation",
"authors": [
{
"first": "Yun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "O",
"middle": [
"K"
],
"last": "Victor",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "1",
"issue": "",
"pages": "1925--1935",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1176"
]
},
"num": null,
"urls": [],
"raw_text": "Yun Chen, Yang Liu, Yong Cheng, and Victor O.K. Li. 2017. A teacher-student framework for zero- resource neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL) (Volume 1: Long Papers), pages 1925-1935, Vancouver, Canada.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Understanding back-translation at scale",
"authors": [
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "489--500",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1045"
]
},
"num": null,
"urls": [],
"raw_text": "Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 489-500, Brussels, Belgium.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Zero-resource translation with multi-lingual neural machine translation",
"authors": [
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Baskaran",
"middle": [],
"last": "Sankaran",
"suffix": ""
},
{
"first": "Yaser",
"middle": [],
"last": "Al-Onaizan",
"suffix": ""
},
{
"first": "Fatos",
"middle": [
"T Yarman"
],
"last": "Vural",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "268--277",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1026"
]
},
"num": null,
"urls": [],
"raw_text": "Orhan Firat, Baskaran Sankaran, Yaser Al-onaizan, Fatos T. Yarman Vural, and Kyunghyun Cho. 2016. Zero-resource translation with multi-lingual neu- ral machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing (EMNLP), pages 268-277, Austin, Texas.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Convolutional sequence to sequence learning",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Gehring",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Yarats",
"suffix": ""
},
{
"first": "Yann",
"middle": [
"N"
],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning (ICML)",
"volume": "70",
"issue": "",
"pages": "1243--1252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas Gehring, Michael Auli, David Grangier, De- nis Yarats, and Yann N. Dauphin. 2017. Convolu- tional sequence to sequence learning. In Proceed- ings of the 34th International Conference on Ma- chine Learning (ICML), volume 70 of Proceedings of Machine Learning Research, pages 1243-1252, International Convention Centre, Sydney, Australia.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Universal neural machine translation for extremely low resource languages",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Hany",
"middle": [],
"last": "Hassan",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "O",
"middle": [
"K"
],
"last": "Victor",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL:HLT)",
"volume": "1",
"issue": "",
"pages": "344--354",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1032"
]
},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, Hany Hassan, Jacob Devlin, and Vic- tor O.K. Li. 2018. Universal neural machine trans- lation for extremely low resource languages. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies (NAACL:HLT), Volume 1 (Long Papers), pages 344- 354, New Orleans, Louisiana.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Dual learning for machine translation",
"authors": [
{
"first": "Di",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Yingce",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Liwei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Nenghai",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Wei-Ying",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Neural Information Processing Systems (NeurIPS)",
"volume": "29",
"issue": "",
"pages": "820--828",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. In Advances in Neural In- formation Processing Systems (NeurIPS) 29, pages 820-828.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Google's multilingual neural machine translation system: Enabling zero-shot translation",
"authors": [
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Thorat",
"suffix": ""
},
{
"first": "Fernanda",
"middle": [],
"last": "Vi\u00e9gas",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Wattenberg",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Macduff",
"middle": [],
"last": "Hughes",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics (TACL)",
"volume": "5",
"issue": "",
"pages": "339--351",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00065"
]
},
"num": null,
"urls": [],
"raw_text": "Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi\u00e9gas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine translation system: En- abling zero-shot translation. Transactions of the Association for Computational Linguistics (TACL), 5:339-351.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Crosslingual language model pretraining",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1901.07291"
]
},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample and Alexis Conneau. 2019. Cross- lingual language model pretraining. arXiv preprint arXiv:1901.07291.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Unsupervised machine translation using monolingual corpora only",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Alexis Conneau, Ludovic De- noyer, and Marc'Aurelio Ranzato. 2018a. Unsu- pervised machine translation using monolingual cor- pora only. In International Conference on Learning Representations (ICLR).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Word translation without parallel data",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Alexis Conneau, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2018b. Word translation without parallel data. In International Conference on Learning Representa- tions (ICLR).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Phrase-based & neural unsupervised machine translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "5039--5049",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1549"
]
},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Myle Ott, Alexis Conneau, Lu- dovic Denoyer, and Marc'Aurelio Ranzato. 2018c. Phrase-based & neural unsupervised machine trans- lation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5039-5049, Brussels, Belgium.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Pretraining multilingual neural machine translation by leveraging alignment information",
"authors": [
{
"first": "Zehui",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Mingxuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Jiangtao",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.03142"
]
},
"num": null,
"urls": [],
"raw_text": "Zehui Lin, Xiao Pan, Mingxuan Wang, Xipeng Qiu, Jiangtao Feng, Hao Zhou, and Lei Li. 2020. Pre- training multilingual neural machine translation by leveraging alignment information. arXiv preprint arXiv:2010.03142.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Multilingual denoising pre-training for neural machine translation",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Xian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2001.08210"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation. arXiv preprint arXiv:2001.08210.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Triangular architecture for rare language translation",
"authors": [
{
"first": "Wenhu",
"middle": [],
"last": "Shuo Ren",
"suffix": ""
},
{
"first": "Shujie",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Mu",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shuai",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "1",
"issue": "",
"pages": "56--65",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1006"
]
},
"num": null,
"urls": [],
"raw_text": "Shuo Ren, Wenhu Chen, Shujie Liu, Mu Li, Ming Zhou, and Shuai Ma. 2018. Triangular architecture for rare language translation. In Proceedings of the 56th Annual Meeting of the Association for Compu- tational Linguistics (ACL) (Volume 1: Long Papers), pages 56-65, Melbourne, Australia.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Unsupervised neural machine translation with smt as posterior regularization",
"authors": [
{
"first": "Zhirui",
"middle": [],
"last": "Shuo Ren",
"suffix": ""
},
{
"first": "Shujie",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shuai",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)",
"volume": "33",
"issue": "",
"pages": "241--248",
"other_ids": {
"DOI": [
"10.1609/aaai.v33i01.3301241"
]
},
"num": null,
"urls": [],
"raw_text": "Shuo Ren, Zhirui Zhang, Shujie Liu, Ming Zhou, and Shuai Ma. 2019. Unsupervised neural machine translation with smt as posterior regularization. Pro- ceedings of the AAAI Conference on Artificial Intel- ligence (AAAI), 33:241-248.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Improving neural machine translation models with monolingual data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "1",
"issue": "",
"pages": "86--96",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1009"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation mod- els with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (ACL) (Volume 1: Long Papers), pages 86-96, Berlin, Germany.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "MASS: Masked sequence to sequence pre-training for language generation",
"authors": [
{
"first": "Kaitao",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 36th International Conference on Machine Learning (ICML)",
"volume": "97",
"issue": "",
"pages": "5926--5936",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie- Yan Liu. 2019. MASS: Masked sequence to se- quence pre-training for language generation. In Pro- ceedings of the 36th International Conference on Machine Learning (ICML), volume 97 of Proceed- ings of Machine Learning Research, pages 5926- 5936, Long Beach, California, USA.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems (NeurIPS)",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems (NeurIPS) 30, pages 5998-6008.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Dual transfer learning for neural machine translation with marginal distribution regularization",
"authors": [
{
"first": "Yijun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yingce",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Jiang",
"middle": [],
"last": "Bian",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Guiquan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of AAAI Conference on Artificial Intelligence (AAAI)",
"volume": "",
"issue": "",
"pages": "5553--5560",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yijun Wang, Yingce Xia, Li Zhao, Jiang Bian, Tao Qin, Guiquan Liu, and Tie-Yan Liu. 2018. Dual transfer learning for neural machine translation with marginal distribution regularization. In Proceed- ings of AAAI Conference on Artificial Intelligence (AAAI), pages 5553-5560, New Orleans, USA.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Multiagent dual learning",
"authors": [
{
"first": "Yiren",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yingce",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Tianyu",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yiren Wang, Yingce Xia, Tianyu He, Fei Tian, Tao Qin, ChengXiang Zhai, and Tie-Yan Liu. 2019. Multi- agent dual learning. In International Conference on Learning Representations (ICLR).",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Extract and edit: An alternative to back-translation for unsupervised neural machine translation",
"authors": [
{
"first": "Jiawei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL:HLT)",
"volume": "1",
"issue": "",
"pages": "1173--1183",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1120"
]
},
"num": null,
"urls": [],
"raw_text": "Jiawei Wu, Xin Wang, and William Yang Wang. 2019. Extract and edit: An alternative to back-translation for unsupervised neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies (NAACL:HLT), Volume 1 (Long and Short Papers), pages 1173-1183, Minneapolis, Minnesota.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Towards making the most of bert in neural machine translation",
"authors": [
{
"first": "Jiacheng",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Mingxuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Chengqi",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Weinan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "34",
"issue": "",
"pages": "9378--9385",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiacheng Yang, Mingxuan Wang, Hao Zhou, Chengqi Zhao, Weinan Zhang, Yong Yu, and Lei Li. 2020. Towards making the most of bert in neural machine translation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 9378- 9385.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Maximum expected likelihood estimation for zeroresource neural machine translation",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI)",
"volume": "",
"issue": "",
"pages": "4251--4257",
"other_ids": {
"DOI": [
"10.24963/ijcai.2017/594"
]
},
"num": null,
"urls": [],
"raw_text": "Hao Zheng, Yong Cheng, and Yang Liu. 2017. Maximum expected likelihood estimation for zero- resource neural machine translation. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI), pages 4251-4257.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Different settings for zero-resource NMT. Full edges indicate the existence of parallel training data. Dashed blue edges indicate the target translation pair. \"CUNMT w/o Para.\" jointly train several unsupervised pairs in one model with unsupervised crosslingual supervision. \"CUNMT w/ Para.\" train unsupervised directions with supervised cross-lingual supervision, such as jointly train unsupervised En-De with supervised En-Fr.",
"num": null,
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"text": "Forward and backward cross lingual translation for auxiliary data. The dashed blue arrow indicates target unsupervised direction. The solid arrow indicates using the parallel data. The dashed black arrow indicates generating synthetic data.",
"num": null,
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"text": "MLM(Lample and Conneau, 2019) uses large scale cross-lingual data to train the mask language model and then fine-tune on unsupervised NMT.\u2022 MASS(Song et al., 2019) is a sequence to sequence model pre-trained with billions of 1 https://github.com/facebookresearch/ XLM monolingual data.\u2022 MBART(Liu et al., 2020) introduces tens of billions monolingual data to pre-train a deep Transformer model.",
"num": null,
"uris": null
},
"TABREF0": {
"text": "De 2. Sample batch of parallel sentence from D En,Fr to generate supervised data S 3. Back translate x En , x Fr , x De to generate pseudo data B 4. Indirect back translate x En , x Fr , x De to generate pseudo data B i 5. Indirect forward translate x En , x Fr , x De to generate pseudo data F i 6. Merge B, B i , F i and S to jointly train CUNMT.",
"html": null,
"num": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF2": {
"text": "Main results comparisons. MASS uses large scale pre-training and back translation during fine-tuning.",
"html": null,
"num": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF3": {
"text": "CUNMT is robust to the parallel data",
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"3\">Auxiliary Direction En-Ro Ro-En</td></tr><tr><td>En-De</td><td>34.86</td><td>33.18</td></tr><tr><td>En-De (50%)</td><td>34.72</td><td>32.85</td></tr><tr><td>En-De (25%)</td><td>34.52</td><td>32.33</td></tr></table>"
},
"TABREF4": {
"text": "Robustness of Parallel Data Scale. Mainly evaluated on unsupervised En-Ro direction with different auxiliary parallel data settings.",
"html": null,
"num": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF6": {
"text": "",
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td>: Effects of the Auxiliary Language. Mainly</td></tr><tr><td>evaluated on unsupervised En-Ro direction with differ-</td></tr><tr><td>ent parallel data settings.En-Fr,En-De and En-Zh are</td></tr><tr><td>the auxiliary parallel data for training En-Ro. En-De-</td></tr><tr><td>Fr is the combination of the En-De and En-Fr parallel</td></tr><tr><td>data.</td></tr></table>"
},
"TABREF8": {
"text": "Translation performance on supervised directions of CUNMT.",
"html": null,
"num": null,
"type_str": "table",
"content": "<table/>"
}
}
}
}