{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:42:34.865762Z"
},
"title": "The NITS-CNLP System for the Unsupervised MT Task at WMT 2020",
"authors": [
{
"first": "Salam",
"middle": [
"Michael"
],
"last": "Singh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Institute of Technology Silchar",
"location": {
"country": "India"
}
},
"email": "[email protected]"
},
{
"first": "Thoudam",
"middle": [
"Doren"
],
"last": "Singh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Institute of Technology Silchar",
"location": {
"country": "India"
}
},
"email": ""
},
{
"first": "Sivaji",
"middle": [],
"last": "Bandyopadhyay",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Institute of Technology Silchar",
"location": {
"country": "India"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We describe NITS-CNLP's submission to WMT 2020 unsupervised machine translation shared task for German language (de) to Upper Sorbian (hsb) in a constrained setting i.e, using only the data provided by the organizers. We train our unsupervised model using monolingual data from both the languages by jointly pre-training the encoder and decoder and fine-tune using backtranslation loss. The final model uses the source side (de) monolingual data and the target side (hsb) synthetic data as a pseudo-parallel data to train a pseudosupervised system which is tuned using the provided development set(dev set).",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We describe NITS-CNLP's submission to WMT 2020 unsupervised machine translation shared task for German language (de) to Upper Sorbian (hsb) in a constrained setting i.e, using only the data provided by the organizers. We train our unsupervised model using monolingual data from both the languages by jointly pre-training the encoder and decoder and fine-tune using backtranslation loss. The final model uses the source side (de) monolingual data and the target side (hsb) synthetic data as a pseudo-parallel data to train a pseudosupervised system which is tuned using the provided development set(dev set).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "This paper provides the system description of the unsupervised neural machine translation system for German to Upper Sorbian submitted by the Center for Natural Language Processing of National Institute of Technology, Silchar, India (NITS-CNLP) in the WMT 2020 shared task for Unsupervised and Very Low Resource machine translation for German and Upper-Sorbian language pair. Specifically, we made our primary submission for the unsupervised task in de \u2192 hsb direction. We use the data provided by the organisers only i.e, in a constrained manner. Our unsupervised neural machine translation (UNMT) system first pre-trains a transformer (Vaswani et al., 2017) based encoder and decoder model using masked sequence to sequence (MASS) pre-training (Song et al., 2019) and fine-tune using the back-translation (Sennrich et al., 2016a) loss. The final model trained using MASS objective is then used to translate the source side (M de ) monolingual data into a synthetic target side data (M hsb )and then train a pseudo-supervised model using {M de ,M hsb } from scratch.",
"cite_spans": [
{
"start": 637,
"end": 659,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF25"
},
{
"start": 746,
"end": 765,
"text": "(Song et al., 2019)",
"ref_id": "BIBREF23"
},
{
"start": 807,
"end": 831,
"text": "(Sennrich et al., 2016a)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remaining of the paper is arranged in following manner: Section 2 gives a brief background of an unsupervised MT. Section 3 describes the data preprocessing. In Section 4, we describe our UNMT system. The results and analysis are shown in Section 5. Finally, Section 6 concludes the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "NMT (Kalchbrenner and Blunsom, 2013; has become the de-facto MT system in recent times achieving near human level translation quality for many language pair however at the cost of millions of bi-text data. Unfortunately, bi-text data for many languages is scarce or non-existent. Unsupervised MT (Lample et al., 2018a; Artetxe et al., 2018b) is one of the techniques to handle the bi-text unavailability by exploiting monolingual data (Sennrich et al., 2016a) . Primitive unsupervised MT first maps the monolingual data into a common cross-lingual shared vector embedding space (Conneau et al., 2017; Artetxe et al., 2017) and infer a bilingual dictionary from this shared space using adversarial training (Lample et al., 2018a) or through self learning (Artetxe et al., 2018b) and further improve the model through a combination of de-noising auto-encoder and iterative or on-the-fly back-translation. Subsequently, this principle has been applied in SMT (Lample et al., 2018b; Artetxe et al., 2018a) or a combination of NMT and SMT (Marie and Fujita, 2018; Ren et al., 2019) to further improve the unsupervised MT. However, in this work, we follow a newer approach of cross-lingual language model pretraining (Lample and Conneau, 2019; Song et al., 2019) which has shown to be a stronger initialization for unsupervised MT than the cross-lingual shared vector embedding space.",
"cite_spans": [
{
"start": 4,
"end": 36,
"text": "(Kalchbrenner and Blunsom, 2013;",
"ref_id": "BIBREF8"
},
{
"start": 296,
"end": 318,
"text": "(Lample et al., 2018a;",
"ref_id": "BIBREF12"
},
{
"start": 319,
"end": 341,
"text": "Artetxe et al., 2018b)",
"ref_id": "BIBREF2"
},
{
"start": 435,
"end": 459,
"text": "(Sennrich et al., 2016a)",
"ref_id": "BIBREF19"
},
{
"start": 578,
"end": 600,
"text": "(Conneau et al., 2017;",
"ref_id": "BIBREF5"
},
{
"start": 601,
"end": 622,
"text": "Artetxe et al., 2017)",
"ref_id": "BIBREF0"
},
{
"start": 706,
"end": 728,
"text": "(Lample et al., 2018a)",
"ref_id": "BIBREF12"
},
{
"start": 754,
"end": 777,
"text": "(Artetxe et al., 2018b)",
"ref_id": "BIBREF2"
},
{
"start": 956,
"end": 978,
"text": "(Lample et al., 2018b;",
"ref_id": "BIBREF13"
},
{
"start": 979,
"end": 1001,
"text": "Artetxe et al., 2018a)",
"ref_id": "BIBREF1"
},
{
"start": 1034,
"end": 1058,
"text": "(Marie and Fujita, 2018;",
"ref_id": "BIBREF14"
},
{
"start": 1059,
"end": 1076,
"text": "Ren et al., 2019)",
"ref_id": "BIBREF18"
},
{
"start": 1211,
"end": 1237,
"text": "(Lample and Conneau, 2019;",
"ref_id": "BIBREF11"
},
{
"start": 1238,
"end": 1256,
"text": "Song et al., 2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "This section is further divided into two subsections briefing the data description and the preprocessing steps used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Preprocessing",
"sec_num": "3"
},
{
"text": "Table 1 statistics (Corpus: Sentences) -- mono de (News Crawl): 5 M; mono hsb: 756.3 K; dev/test de: 2 K; dev/test hsb: 2 K.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": null
},
{
"text": "We use a randomly sampled 5M monolingual corpus for German side from News Crawl 1 dataset, while we use all the available monolingual data 2 and the parallel side 3 of Upper Sorbian 4 as the combined monolingual data for the same and summing up 756,271 number of sentences. For tuning and evaluation 5 , we use the provided devtest 6 data with 2000 sentences for both the dev and test files as shown in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 403,
"end": 410,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Data Description",
"sec_num": "3.1"
},
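The paper states only that the 5M German sentences were randomly sampled from News Crawl. A minimal sketch of one way to do this without loading the full corpus into memory is given below; the file names and the fixed seed are illustrative assumptions.

```python
# Hypothetical sketch: sample 5M lines from a large News Crawl file without
# loading it into memory (reservoir sampling). File names are placeholders;
# the paper only states that 5M German sentences were randomly sampled.
import random

def reservoir_sample(path, k, seed=1):
    random.seed(seed)
    sample = []
    with open(path, encoding="utf-8") as f:
        for i, line in enumerate(f):
            if i < k:
                sample.append(line)
            else:
                j = random.randint(0, i)
                if j < k:
                    sample[j] = line
    return sample

if __name__ == "__main__":
    lines = reservoir_sample("news.2019.de.shuffled", k=5_000_000)
    with open("mono.de", "w", encoding="utf-8") as out:
        out.writelines(lines)
```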
{
"text": "We use Moses (Koehn et al., 2007) toolkit for preprocessing the data. The corpus underwent removal of non-printing characters and tokenization. For the Upper Sorbian, we used Czech(cs) language code for tokenization as Upper Sorbian(hsb) language code is unavailable in Moses toolkit 7 and considering the relatedness of these languages 8 . The above preprocessing is used by MASS pretrain and MASS finetune models while the pseudosupervised model uses the raw data and learns a Sentencepiece BPE. The details are described in Section 4.2.",
"cite_spans": [
{
"start": 13,
"end": 33,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3.2"
},
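The preprocessing described above uses the original Moses scripts. The sketch below substitutes the sacremoses Python port for illustration, reusing the paper's choice of the Czech (cs) tokenizer for Upper Sorbian; the use of sacremoses and the example sentence are assumptions, not the paper's actual pipeline.

```python
# Illustrative sketch of the preprocessing step. The paper uses the original
# Moses scripts (non-printing-character removal and tokenizer.perl); here we
# use the sacremoses Python port as a stand-in, which is an assumption.
from sacremoses import MosesPunctNormalizer, MosesTokenizer

# Upper Sorbian (hsb) is tokenized with the Czech (cs) rules, as in the paper,
# since Moses has no hsb support and both are West Slavic languages.
tokenizers = {"de": MosesTokenizer(lang="de"), "hsb": MosesTokenizer(lang="cs")}
normalizers = {"de": MosesPunctNormalizer(lang="de"), "hsb": MosesPunctNormalizer(lang="cs")}

def preprocess(line, lang):
    line = normalizers[lang].normalize(line.strip())
    return tokenizers[lang].tokenize(line, return_str=True, escape=False)

print(preprocess("Möchten Sie erfahren, wie sich bei uns die Unterrichtsräume mit Leben füllen?", "de"))
```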
{
"text": "Our UNMT system is a pipeline of encoderdecoder pretraining and fine-tuning using MASS (Song et al., 2019) and using the synthetic data 1 http://data.statmt.org/news-crawl/ de/ 2 http://www.statmt.org/wmt20/unsup_ and_very_low_res/ 3 http://www.statmt.org/wmt20/unsup_ and_very_low_res/train.hsb-de.hsb.gz 4 The parallel side of Upper Sorbian is allowed for Unsupervised task. 5 We use newstest2020 test set for the submission. 6 http://www.statmt.org/wmt20/unsup_ and_very_low_res/devtest.tar.gz 7 https://github.com/moses-smt/ mosesdecoder 8 Both Czech and Upper Sorbian belongs to Western Slavic language branch.",
"cite_spans": [
{
"start": 87,
"end": 106,
"text": "(Song et al., 2019)",
"ref_id": "BIBREF23"
},
{
"start": 377,
"end": 378,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "UNMT System",
"sec_num": "4"
},
{
"text": "--mass steps 'de,hsb' --encoder only false --emb dim 1024 --n layers 6 --n heads 8 --dropout 0.1 --attention dropout 0.1 --gelu activation true --tokens per batch 3000 --optimizer adam inverse sqrt, beta1=0.9,beta2=0.98,lr=0.0001 --word mass 0.5 --min len 5 Table 2 : MASS pretraining parameters generated (M hsb ) from the source monolingual data (M de ) to train a forward model from scratch. This section is further divided into two subsections, first describing the MASS pretraining and finetuning and second, the transformer based forward ( \u2212 \u2192 f ) pseudo-supervised model using the pseudoparallel ({M de ,M hsb }) data by inducing Lample et al. (2018a) style noise (word drop, word shuffle and word blank) upon the input data.",
"cite_spans": [],
"ref_spans": [
{
"start": 258,
"end": 265,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "UNMT System",
"sec_num": "4"
},
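A minimal sketch of the Lample et al. (2018a) style noise (word drop, word shuffle, word blank) mentioned above; the probabilities and the shuffle window are illustrative assumptions, since the paper does not report the exact values it used.

```python
# Sketch of Lample et al. (2018a) style perturbations applied to the input
# side: word dropout, word blanking, and local word shuffle. Probabilities and
# the shuffle window k=3 are illustrative assumptions.
import random

def add_noise(tokens, p_drop=0.1, p_blank=0.1, k=3, seed=None):
    rng = random.Random(seed)
    # word dropout: remove each token with probability p_drop (keep at least one)
    kept = [t for t in tokens if rng.random() > p_drop] or tokens[:1]
    # word blanking: replace surviving tokens with <blank> with probability p_blank
    blanked = [t if rng.random() > p_blank else "<blank>" for t in kept]
    # word shuffle: permute tokens so each moves at most ~k positions
    keys = [i + rng.uniform(0, k) for i in range(len(blanked))]
    return [t for _, t in sorted(zip(keys, blanked), key=lambda x: x[0])]

print(add_noise("wir sehen uns morgen wieder in der schule".split(), seed=0))
```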
{
"text": "We use the MASS toolkit 9 to pretrain a cross-lingual language model using the masked sequence to sequence objective. Initially, the corpus are segmented into subword units using BPE (Sennrich et al., 2016b) . A joint BPE is learnt over the monolingual data of both the languages (German and Upper Sorbian) and the vocabulary is limited to 60,000 shared vocabulary tokens. MASS Pretraining: The BPE tokenized monolingual data is used to pretrain the encoder and decoder jointly by the cross lingual MASS objective and the training is done for 100 epochs. The parameters for the MASS pretraining is shown in Table 2 . MASS Fine-tuning: The pretrained model is capable to generate translations but it is merely a copy task. So, in order to make the model more robust, it is further fine-tuned using the loss objective of back-translation. The fine-tuning is halted after the 10th epoch before being converged due to resource limitation. The parameters for fine-tuning is listed in Table 3 .",
"cite_spans": [
{
"start": 183,
"end": 207,
"text": "(Sennrich et al., 2016b)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 607,
"end": 614,
"text": "Table 2",
"ref_id": null
},
{
"start": 979,
"end": 986,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "MASS Pretrain and Finetune",
"sec_num": "4.1"
},
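A simplified, framework-agnostic sketch of the MASS masked-span objective used for pretraining: a contiguous span covering roughly half the tokens (matching --word_mass 0.5) is masked on the encoder side and reconstructed by the decoder. This is an illustration of the idea, not the MASS toolkit's own implementation.

```python
# Simplified sketch of the MASS masked-span objective (Song et al., 2019):
# a contiguous span of about `mask_ratio` of the sentence is replaced by
# [MASK] in the encoder input, and the decoder is trained to reconstruct
# exactly that span. Illustrative only; not the MASS toolkit code.
import random

MASK = "[MASK]"

def mass_example(tokens, mask_ratio=0.5, seed=None):
    rng = random.Random(seed)
    span_len = max(1, int(round(len(tokens) * mask_ratio)))
    start = rng.randint(0, len(tokens) - span_len)
    span = tokens[start:start + span_len]
    enc_input = tokens[:start] + [MASK] * span_len + tokens[start + span_len:]
    dec_input = [MASK] + span[:-1]   # shifted fragment fed to the decoder
    dec_target = span                # decoder predicts the masked fragment
    return enc_input, dec_input, dec_target

enc, dec_in, dec_out = mass_example("wir sehen uns morgen wieder".split(), seed=0)
print(enc, dec_in, dec_out, sep="\n")
```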
{
"text": "We follow Marie et al. (2019) style of using the pseudo-parallel data generated from a previous --bt steps 'de-hsb-de,hsb-de-hsb' --encoder only false --emb dim 1024 --n layers 6 --n heads 8 --dropout 0.1 --attention dropout 0.1 --gelu activation true --tokens per batch 2000 --optimizer adam inverse sqrt, beta1=0.9,beta2=0.98,lr=0.0001 --eval bleu true Table 3 : MASS finetuning parameters model to train a forward pseudo-supervised model. In our case, we first generate a synthetic data (M hsb ) from the source monolingual data (M de ) using beam search decoding with a beam size of 10 from the MASS fine tuned model. Unlike Marie et al. (2019) where back translation was applied, we use forward translation from the source side monolingual (He et al., 2020) data to generate synthetic data. The synthetic data is detokenized, and we learn a joint subword BPE from the raw M de and M hsb using Sentencepiece (Kudo and Richardson, 2018) and limit the shared vocabulary to 10 K units. Noisy Pseudo-Supervised NMT: We add perturbations or noise, specifically we apply word dropout, word shuffle and word blank to our synthetic data. This kind of perturbation is found to be effective for overcoming the local minima by enforcing local smoothness (He et al., 2020; Shen et al., 2019) . We train our pseudo-supervised NMT in a pseudo self-training approach by leveraging the source side monolingual data. This self-training is partial in the sense that we only use the pseudoparallel data which lacks any sort of real labelled data for a single iteration.",
"cite_spans": [
{
"start": 10,
"end": 29,
"text": "Marie et al. (2019)",
"ref_id": "BIBREF15"
},
{
"start": 629,
"end": 648,
"text": "Marie et al. (2019)",
"ref_id": "BIBREF15"
},
{
"start": 745,
"end": 762,
"text": "(He et al., 2020)",
"ref_id": "BIBREF7"
},
{
"start": 912,
"end": 939,
"text": "(Kudo and Richardson, 2018)",
"ref_id": "BIBREF10"
},
{
"start": 1247,
"end": 1264,
"text": "(He et al., 2020;",
"ref_id": "BIBREF7"
},
{
"start": 1265,
"end": 1283,
"text": "Shen et al., 2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 355,
"end": 362,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pseudo-Supervised NMT",
"sec_num": "4.2"
},
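A minimal sketch of learning the joint 10 K SentencePiece BPE over the raw German monolingual data and the synthetic Upper Sorbian data, as described above; the file names are placeholders.

```python
# Sketch of the joint subword model for the pseudo-supervised system: a single
# SentencePiece BPE model with a 10K shared vocabulary learned over the raw
# (detokenized) M_de and the synthetic M_hsb. File names are placeholders.
import sentencepiece as spm

spm.SentencePieceTrainer.Train(
    "--input=mono.de,synthetic.hsb "
    "--model_prefix=joint_bpe --vocab_size=10000 --model_type=bpe"
)

sp = spm.SentencePieceProcessor()
sp.Load("joint_bpe.model")
print(sp.EncodeAsPieces("Möchten Sie erfahren, wie sich bei uns die Unterrichtsräume mit Leben füllen?"))
```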
{
"text": "The pseudo-supervised NMT is trained from scratch using Fairseq (Ott et al., 2019) toolkit 10 i.e, we do not use the previous models weights rather we apply random weight initialization for our new model. The model is trained for 300 K update steps. We follow Guzm\u00e1n et al. (2019) style transformer architecture of 5 encoder and decoder layers, 512 embedding dimension, the feed-forward hidden dimension is 2048 with 4 multi-head attentions 11 . The rest of the parameters are listed in Table 4 . We 10 https://github.com/pytorch/fairseq 11 We have used 4 attention heads instead of 8 as in Guzm\u00e1n et al. (2019) --encoder-normalize-before --decoder-normalize-before --dropout 0.3 --relu-dropout 0.3 --attention-dropout 0.3 --label-smoothing 0.2 --criterion label smoothed cross entropy --weight-decay 0.0001 --lr-scheduler inverse sqrt --min-lr 1e-9 --max-tokens 4000 --warmup-updates 4000 --warmup-init-lr 1e-7 --optimizer adam --lr 0.0005 --adam-betas '(0.9, 0.98)' --share-all-embeddings Table 4 : Pseudo-supervised NMT training parameters make our primary submission of the test source generated using a beam search decoding with beam size of 5 and a length penalty of 1.2.",
"cite_spans": [
{
"start": 260,
"end": 280,
"text": "Guzm\u00e1n et al. (2019)",
"ref_id": "BIBREF6"
},
{
"start": 591,
"end": 611,
"text": "Guzm\u00e1n et al. (2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 487,
"end": 494,
"text": "Table 4",
"ref_id": null
},
{
"start": 991,
"end": 998,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pseudo-Supervised NMT",
"sec_num": "4.2"
},
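A hedged sketch of how the pseudo-supervised Fairseq run could be reproduced from the architecture and the Table 4 parameters quoted above. The data paths, the assumption of a joined dictionary from fairseq-preprocess, and the exact flag set accepted by the installed Fairseq version are our assumptions, not details given in the paper.

```python
# Hedged sketch of the pseudo-supervised fairseq run (5+5 layers, 512 emb,
# 2048 FFN, 4 heads, 300K updates; decoding with beam 5 and lenpen 1.2).
# DATA is a placeholder for data binarized with fairseq-preprocess using a
# joined dictionary (needed for --share-all-embeddings). Note: very recent
# fairseq versions rename --min-lr to --stop-min-lr.
import subprocess

DATA = "data-bin/de-hsb"  # placeholder path

train = [
    "fairseq-train", DATA,
    "--arch", "transformer",
    "--encoder-layers", "5", "--decoder-layers", "5",
    "--encoder-embed-dim", "512", "--decoder-embed-dim", "512",
    "--encoder-ffn-embed-dim", "2048", "--decoder-ffn-embed-dim", "2048",
    "--encoder-attention-heads", "4", "--decoder-attention-heads", "4",
    "--encoder-normalize-before", "--decoder-normalize-before",
    "--dropout", "0.3", "--relu-dropout", "0.3", "--attention-dropout", "0.3",
    "--criterion", "label_smoothed_cross_entropy", "--label-smoothing", "0.2",
    "--optimizer", "adam", "--adam-betas", "(0.9, 0.98)",
    "--lr", "0.0005", "--lr-scheduler", "inverse_sqrt",
    "--warmup-updates", "4000", "--warmup-init-lr", "1e-7", "--min-lr", "1e-9",
    "--weight-decay", "0.0001", "--max-tokens", "4000",
    "--share-all-embeddings", "--max-update", "300000",
]

generate = [
    "fairseq-generate", DATA,
    "--path", "checkpoints/checkpoint_best.pt",
    "--beam", "5", "--lenpen", "1.2", "--remove-bpe", "sentencepiece",
]

subprocess.run(train, check=True)
subprocess.run(generate, check=True)
```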
{
"text": "The official automatic evaluation uses the the following metrics: BLEU (Papineni et al., 2002) , TER (Snover et al., 2006) , BEER (Stanojevi\u0107 and Sima'an, 2014) , and CharactTER (Wang et al., 2016) . Our primary submission (NITS-CNLP), the pseudo-supervised NMT achieves a cased BLEU of 15.4 and 15.8 as the uncased BLEU score on the newstest2020 blind-test data. The scores are reported in Table 5 . We also present the sample input-output of our primary system (NITS-CNLP) from two randomly selected test sentences from the matrix 12 in Table 6 . We also report the Sacrebleu score of our various settings with the released test set (non blind test) in Table 7 .",
"cite_spans": [
{
"start": 71,
"end": 94,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF17"
},
{
"start": 101,
"end": 122,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF22"
},
{
"start": 130,
"end": 160,
"text": "(Stanojevi\u0107 and Sima'an, 2014)",
"ref_id": "BIBREF24"
},
{
"start": 178,
"end": 197,
"text": "(Wang et al., 2016)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 391,
"end": 398,
"text": "Table 5",
"ref_id": null
},
{
"start": 539,
"end": 546,
"text": "Table 6",
"ref_id": "TABREF1"
},
{
"start": 655,
"end": 662,
"text": "Table 7",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Result",
"sec_num": "5"
},
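A small sketch of computing the cased and uncased BLEU scores reported above with the SacreBLEU Python API on detokenized output; the file names are placeholders.

```python
# Sketch: cased / uncased corpus BLEU with the sacreBLEU Python API.
# File names are placeholders for detokenized system output and reference.
import sacrebleu

with open("newstest2020.hyp.hsb", encoding="utf-8") as f:
    hyps = [line.strip() for line in f]
with open("newstest2020.ref.hsb", encoding="utf-8") as f:
    refs = [line.strip() for line in f]

cased = sacrebleu.corpus_bleu(hyps, [refs])
uncased = sacrebleu.corpus_bleu(hyps, [refs], lowercase=True)
print(f"BLEU-cased {cased.score:.1f}  BLEU {uncased.score:.1f}")
```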
{
"text": "We report here the system description for our submission to the WMT 2020 shared task of Unsupervised MT for German-Upper Sorbian language pair. We submit our pipelined architecture of masked sequence to sequence pretraining along with finetuning and a pseudo-supervised model in German to Upper Sorbian direction. We observe that the performance of an unsupervised model improves significantly over the base MASS pretraining and System BLEU BLEU-cased TER BEER 2.0 CharactTER NITS-CNLP 15.8 15.4 0.668 0.489 0.604 Table 5 : BLEU, BLEU-cased, TER, BEER 2.0 and CharactTER scores of our final primary system NITS-CNLP for the German \u2192 Upper Sorbian language using blindtest (newstest2020).",
"cite_spans": [],
"ref_spans": [
{
"start": 514,
"end": 521,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Source-1 M\u00f6chten Sie erfahren, wie sich bei uns die Unterrichtsr\u00e4ume mit Leben f\u00fcllen? Reference-1 Chce\u0107e w\u011bd\u017ae\u0107, kak so pola nas wu\u010dbne rumnos\u0107e ze\u017eiwjenjom pjelnja? NITS-CNLP\u010cas\u0107e zhoni\u0107, kak so pola nas wu\u010dbnych rumow z\u017eiwami\u010duje?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "R\u00e4cht euch nicht selbst, sondern gebt Raum dem Zorn Gottes. Reference-2 Njewje\u0107\u0107e so sami, ale daj\u0107e m\u011bstno Bo\u017eemu hn\u011bwu. NITS-CNLP Njech wam sam, ale pomha rumnos\u0107 Bo\u017eeje s\u0142u\u017eby. finetuning after using the synthetic data to train a pseudo-supervised model using a very naive way of self-training i.e, we have just used a single iteration of our forward training. Synthetic data is the de-facto for any modern semi-supervised MT system and in this experiment we show that synthetic data in an unsupervised MT is effective and also emphasised the importance of a pseudo-supervised MT model as a refinement step to an unsupervised MT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source-2",
"sec_num": null
},
{
"text": "https://github.com/microsoft/MASS",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://matrix.statmt.org/matrix/ output/1920?run_id=7785",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Learning bilingual word embeddings with (almost) no bilingual data",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "451--462",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1042"
]
},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 451-462, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Unsupervised statistical machine translation",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3632--3642",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1399"
]
},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. Unsupervised statistical machine transla- tion. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3632-3642, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Unsupervised neural machine translation",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Sixth International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018b. Unsupervised neural ma- chine translation. In Proceedings of the Sixth Inter- national Conference on Learning Representations.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.0473"
]
},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "On the properties of neural machine translation: Encoder-decoder approaches",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation",
"volume": "",
"issue": "",
"pages": "103--111",
"other_ids": {
"DOI": [
"10.3115/v1/W14-4012"
]
},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merri\u00ebnboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder ap- proaches. In Proceedings of SSST-8, Eighth Work- shop on Syntax, Semantics and Structure in Statisti- cal Translation, pages 103-111, Doha, Qatar. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Word translation without parallel data",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1710.04087"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2017. Word translation without parallel data. arXiv preprint arXiv:1710.04087.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The FLORES evaluation datasets for low-resource machine translation: Nepali-English and Sinhala-English",
"authors": [
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Peng-Jen",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Pino",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "6098--6111",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1632"
]
},
"num": null,
"urls": [],
"raw_text": "Francisco Guzm\u00e1n, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc'Aurelio Ranzato. 2019. The FLORES evaluation datasets for low-resource ma- chine translation: Nepali-English and Sinhala- English. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 6098-6111, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Revisiting self-training for neural sequence generation",
"authors": [
{
"first": "Junxian",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junxian He, Jiatao Gu, Jiajun Shen, and Marc'Aurelio Ranzato. 2020. Revisiting self-training for neural sequence generation. In Proceedings of ICLR.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Recurrent continuous translation models",
"authors": [
{
"first": "Nal",
"middle": [],
"last": "Kalchbrenner",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1700--1709",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natu- ral Language Processing, pages 1700-1709, Seattle, Washington, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Moses: Open source toolkit for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Constantin",
"suffix": ""
},
{
"first": "Evan",
"middle": [],
"last": "Herbst",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions",
"volume": "",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond\u0159ej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the As- sociation for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Ses- sions, pages 177-180, Prague, Czech Republic. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "66--71",
"other_ids": {
"DOI": [
"10.18653/v1/D18-2012"
]
},
"num": null,
"urls": [],
"raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Crosslingual language model pretraining",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems (NeurIPS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample and Alexis Conneau. 2019. Cross- lingual language model pretraining. Advances in Neural Information Processing Systems (NeurIPS).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Unsupervised machine translation using monolingual corpora only",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018a. Unsupervised machine translation using monolingual corpora only. In International Conference on Learning Represen- tations (ICLR).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Phrase-based & neural unsupervised machine translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "5039--5049",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1549"
]
},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Myle Ott, Alexis Conneau, Lu- dovic Denoyer, and Marc'Aurelio Ranzato. 2018b. Phrase-based & neural unsupervised machine trans- lation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5039-5049, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Unsupervised neural machine translation initialized by unsupervised statistical machine translation",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Marie",
"suffix": ""
},
{
"first": "Atsushi",
"middle": [],
"last": "Fujita",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.12703"
]
},
"num": null,
"urls": [],
"raw_text": "Benjamin Marie and Atsushi Fujita. 2018. Unsuper- vised neural machine translation initialized by un- supervised statistical machine translation. arXiv preprint arXiv:1810.12703.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "NICT's unsupervised neural and statistical machine translation systems for the WMT19 news translation task",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Marie",
"suffix": ""
},
{
"first": "Haipeng",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kehai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Atsushi",
"middle": [],
"last": "Fujita",
"suffix": ""
},
{
"first": "Masao",
"middle": [],
"last": "Utiyama",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "294--301",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5330"
]
},
"num": null,
"urls": [],
"raw_text": "Benjamin Marie, Haipeng Sun, Rui Wang, Kehai Chen, Atsushi Fujita, Masao Utiyama, and Eiichiro Sumita. 2019. NICT's unsupervised neural and statistical machine translation systems for the WMT19 news translation task. In Proceedings of the Fourth Con- ference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 294-301, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "fairseq: A fast, extensible toolkit for sequence modeling",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)",
"volume": "",
"issue": "",
"pages": "48--53",
"other_ids": {
"DOI": [
"10.18653/v1/N19-4009"
]
},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics (Demonstrations), pages 48-53, Minneapolis, Min- nesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {
"DOI": [
"10.3115/1073083.1073135"
]
},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Unsupervised neural machine translation with smt as posterior regularization",
"authors": [
{
"first": "Shuo",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Zhirui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shujie",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Shuai",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "241--248",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shuo Ren, Zhirui Zhang, Shujie Liu, Ming Zhou, and Shuai Ma. 2019. Unsupervised neural machine translation with smt as posterior regularization. In Proceedings of the AAAI Conference on Artificial In- telligence, volume 33, pages 241-248.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Improving neural machine translation models with monolingual data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "86--96",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1009"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation mod- els with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The source-target domain mismatch problem in machine translation",
"authors": [
{
"first": "Jiajun",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Peng-Jen",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Junxian",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.13151"
]
},
"num": null,
"urls": [],
"raw_text": "Jiajun Shen, Peng-Jen Chen, Matt Le, Junxian He, Ji- atao Gu, Myle Ott, Michael Auli, and Marc'Aurelio Ranzato. 2019. The source-target domain mismatch problem in machine translation. arXiv preprint arXiv:1909.13151.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A study of translation edit rate with targeted human annotation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Snover",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Linnea",
"middle": [],
"last": "Micciulla",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of association for machine translation in the Americas",
"volume": "200",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Lin- nea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of association for machine transla- tion in the Americas, volume 200. Cambridge, MA.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Mass: Masked sequence to sequence pre-training for language generation",
"authors": [
{
"first": "Kaitao",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "5926--5936",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie- Yan Liu. 2019. Mass: Masked sequence to se- quence pre-training for language generation. In In- ternational Conference on Machine Learning, pages 5926-5936.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "BEER: BEtter evaluation as ranking",
"authors": [
{
"first": "Milo\u0161",
"middle": [],
"last": "Stanojevi\u0107",
"suffix": ""
},
{
"first": "Khalil",
"middle": [],
"last": "Sima'an",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "414--419",
"other_ids": {
"DOI": [
"10.3115/v1/W14-3354"
]
},
"num": null,
"urls": [],
"raw_text": "Milo\u0161 Stanojevi\u0107 and Khalil Sima'an. 2014. BEER: BEtter evaluation as ranking. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 414-419, Baltimore, Maryland, USA. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Character: Translation edit rate on character level",
"authors": [
{
"first": "Weiyue",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jan-Thorsten",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Hendrik",
"middle": [],
"last": "Rosendahl",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the First Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "505--510",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weiyue Wang, Jan-Thorsten Peter, Hendrik Rosendahl, and Hermann Ney. 2016. Character: Translation edit rate on character level. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, pages 505-510.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table/>",
"html": null,
"text": "Statistics of the monolingual and the dev/test set.",
"type_str": "table",
"num": null
},
"TABREF1": {
"content": "<table><tr><td>System</td><td>BLEU</td></tr><tr><td colspan=\"2\">MASS-PT 2.3</td></tr><tr><td colspan=\"2\">MASS-FT 8.1</td></tr><tr><td>PSNMT</td><td>14.5</td></tr></table>",
"html": null,
"text": "Sample input-output excerpted from the matrix primary submission of NITS-CNLP.",
"type_str": "table",
"num": null
},
"TABREF2": {
"content": "<table/>",
"html": null,
"text": "BLEU, scores of our three systems using the released test set: MASS-pretrain (MASS-PT), MASSfinetune (MASS-FT) and Pseudo Supervised NMT (PSNMT) for German \u2192 Upper Sorbian language.",
"type_str": "table",
"num": null
}
}
}
}