{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:44:02.198949Z"
},
"title": "Adobe AMPS's Submission for Very Low Resource Supervised Translation Task at WMT20",
"authors": [
{
"first": "Keshaw",
"middle": [],
"last": "Singh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Adobe Inc. Bengaluru",
"location": {
"country": "India"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we describe our systems submitted to the very low resource supervised translation task at WMT20. We participate in both translation directions for Upper Sorbian-German language pair. Our primary submission is a subword-level Transformer-based neural machine translation model trained on original training bitext. We also conduct several experiments with backtranslation using limited monolingual data in our postsubmission work and include our results for the same. In one such experiment, we observe jumps of up to 2.6 BLEU points over the primary system by pretraining on a synthetic, backtranslated corpus followed by fine-tuning on the original parallel training data.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we describe our systems submitted to the very low resource supervised translation task at WMT20. We participate in both translation directions for Upper Sorbian-German language pair. Our primary submission is a subword-level Transformer-based neural machine translation model trained on original training bitext. We also conduct several experiments with backtranslation using limited monolingual data in our postsubmission work and include our results for the same. In one such experiment, we observe jumps of up to 2.6 BLEU points over the primary system by pretraining on a synthetic, backtranslated corpus followed by fine-tuning on the original parallel training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "This paper describes our submissions to the shared task on Very Low Resource Supervised Machine Translation at WMT 2020. The task involved a single language pair: Upper Sorbian-German. We submit supervised neural machine translation (NMT) systems for both translation directions, Upper Sorbian\u2192German and German\u2192Upper Sorbian.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "NMT models (Sutskever et al., 2014; Bahdanau et al., 2015; Cho et al., 2014a) have achieved stateof-the-art performance on benchmark datasets for multiple language pairs. A big advantage of such systems over phrase-based statistical machine translation (PBSMT) (Koehn et al., 2003) models is that they can be trained end-to-end. The bulk of the development, however, has been limited to a handful of high-resource language pairs. The primary reason is that training a well-performing NMT system requires a large amount of parallel training data, which means a lot of equivalent investment in terms of resources. Koehn and Knowles (2017) show that when compared to PBSMT approaches, NMT models need more training data to achieve the same level of performance. 1 One of the most popular ways to increase the amount of parallel training data for supervised training is backtranslation (Sennrich et al., 2016a) . We utilize this approach to improve upon the performance of our baseline models.",
"cite_spans": [
{
"start": 11,
"end": 35,
"text": "(Sutskever et al., 2014;",
"ref_id": "BIBREF19"
},
{
"start": 36,
"end": 58,
"text": "Bahdanau et al., 2015;",
"ref_id": "BIBREF0"
},
{
"start": 59,
"end": 77,
"text": "Cho et al., 2014a)",
"ref_id": "BIBREF1"
},
{
"start": 261,
"end": 281,
"text": "(Koehn et al., 2003)",
"ref_id": "BIBREF11"
},
{
"start": 612,
"end": 636,
"text": "Koehn and Knowles (2017)",
"ref_id": "BIBREF10"
},
{
"start": 882,
"end": 906,
"text": "(Sennrich et al., 2016a)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "All of our systems follow the Transformer architecture (Vaswani et al., 2017 ). Our primary system is a supervised NMT model trained on the original training bitext. We also report our results on experiments with backtranslation, which were completed post the shared task and hence not a part of our primary submissions. We use the backtranslated data in two distinct ways -as a standalone parallel corpus, and to create a combined parallel corpus by mixing in a 1:1 ratio with the provided training data. We also report the performance of fine-tuned models originally trained only on the backtranslated data. In the following sections, we begin by briefly describing the Transformer architecture and backtranslation. We then discuss our experimental setup as well as our experiments with backtranslation. We conclude with a discussion of our results and possible future work.",
"cite_spans": [
{
"start": 55,
"end": 76,
"text": "(Vaswani et al., 2017",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The Transformer model is the dominant architecture within current NMT models due to its superior performance on several language pairs. While still a sequence-to-sequence (Sutskever et al., 2014) model composed of an encoder and a decoder, Transformer models are highly parallelizable thanks to being composed purely of feedforward and self-attention layers rather than recurrent layers (Hochreiter and Schmidhuber, 1997; Cho et al., 2014b) . The reader is encouraged to read the original paper (Vaswani et al., 2017) to gain a deeper understanding of the model. We adopt the Transformer base architecture available under the fairseq 2 (Ott et al., 2019) library for all our models.",
"cite_spans": [
{
"start": 171,
"end": 195,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF19"
},
{
"start": 387,
"end": 421,
"text": "(Hochreiter and Schmidhuber, 1997;",
"ref_id": "BIBREF6"
},
{
"start": 422,
"end": 440,
"text": "Cho et al., 2014b)",
"ref_id": "BIBREF2"
},
{
"start": 495,
"end": 517,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF20"
},
{
"start": 636,
"end": 654,
"text": "(Ott et al., 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "However, NMT models are known to be datahungry (Koehn and Knowles, 2017) ; their performance improves sharply with the availability of more parallel training data. Except for a few language pairs (e.g. English-German), most have little to no such data available. On the other hand, a far greater number of languages have a decent amount of monolingual data available online (e.g. Wikipedia).",
"cite_spans": [
{
"start": 47,
"end": 72,
"text": "(Koehn and Knowles, 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "To address this issue of lack of parallel data, Sennrich et al. (2016a) introduced the concept of backtranslation. It involves creating a synthetic parallel corpus by translating sentences from the target-side monolingual data to the source language and making corresponding pairs. A baseline target\u2192source model (PBSMT or NMT), trained with limited data, is generally used for this purpose. It enables the use of large corpora of monolingual data for several languages, the size of which is typically orders of magnitude larger than any corresponding bitext available. What is notable is that only the sourceside data is synthetic in such a scenario and the target-side still corresponds to original monolingual data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
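To make the pairing concrete, here is a minimal Python sketch of how such a synthetic parallel corpus can be assembled once the target-side monolingual data has been machine-translated into the source language. The file names and the helper function are hypothetical illustrations, not artifacts from the paper.

```python
# Minimal sketch: pair backtranslated (synthetic) source sentences with the
# original (authentic) target-side monolingual sentences, line by line.
# All file names below are hypothetical placeholders.

def build_synthetic_corpus(mono_tgt_path: str, backtranslated_src_path: str,
                           out_src_path: str, out_tgt_path: str) -> int:
    """Write a synthetic bitext in which only the source side is machine-generated."""
    n_pairs = 0
    with open(mono_tgt_path, encoding="utf-8") as tgt_in, \
         open(backtranslated_src_path, encoding="utf-8") as src_in, \
         open(out_src_path, "w", encoding="utf-8") as src_out, \
         open(out_tgt_path, "w", encoding="utf-8") as tgt_out:
        for tgt_line, src_line in zip(tgt_in, src_in):
            tgt_line, src_line = tgt_line.strip(), src_line.strip()
            if not tgt_line or not src_line:
                continue  # skip pairs where either side is empty
            src_out.write(src_line + "\n")  # synthetic source (backtranslated)
            tgt_out.write(tgt_line + "\n")  # authentic target (original monolingual)
            n_pairs += 1
    return n_pairs

# Example for the hsb->de direction: German monolingual text paired with its
# backtranslation into Upper Sorbian yields a synthetic hsb->de training corpus.
# build_synthetic_corpus("mono.de", "mono.de.bt-hsb", "synth.hsb", "synth.de")
```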
{
"text": "Some studies (Poncelas et al., 2018; Popel, 2018 ) have investigated the effects of varying the amount of backtranslated data as a proportion of the total training corpus, including training only on the synthetic dataset as a standalone corpus. We follow some of the related experiments conducted by Kocmi and Bojar (2019) on Gujarati-English (another low-resource pair) with a few exceptions. Besides, we also report performance when pretraining solely on the synthetic corpus following by finetuning on either original or mixed data. While not quite the same, one could think of this approach as having some similarities with transfer learning (Zoph et al., 2016) as well as domain adaptation (Luong and Manning, 2015; Freitag and Al-Onaizan, 2016) for machine translation. There has also been work on using sampling (Edunov et al., 2018) for generating backtranslations, but we stick to using beam search in this work.",
"cite_spans": [
{
"start": 13,
"end": 36,
"text": "(Poncelas et al., 2018;",
"ref_id": "BIBREF15"
},
{
"start": 37,
"end": 48,
"text": "Popel, 2018",
"ref_id": "BIBREF16"
},
{
"start": 300,
"end": 322,
"text": "Kocmi and Bojar (2019)",
"ref_id": "BIBREF8"
},
{
"start": 646,
"end": 665,
"text": "(Zoph et al., 2016)",
"ref_id": "BIBREF21"
},
{
"start": 819,
"end": 840,
"text": "(Edunov et al., 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "3 Experimental Setup",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We used the complete parallel training corpus for our primary systems. In addition, we also made use of monolingual data from each language for 2 https://github.com/pytorch/fairseq two purposes -learning Byte Pair Encodings (BPE) (Sennrich et al., 2016b) and backtranslation. For Upper Sorbian (hsb), we used the monolingual corpora provided by the Sorbian Institute and by the Witaj Sprachzentrum. To control the quality of the backtranslated data, we chose not to use the data scraped from the web. For the German (de) side, we made use of the News Crawl 3 2009 dataset, as it is large enough to satisfy the requirements for our experiments.",
"cite_spans": [
{
"start": 230,
"end": 254,
"text": "(Sennrich et al., 2016b)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1"
},
{
"text": "No. of sentences hsb-de, bitext 58,389 hsb, monolingual 540,994 de, monolingual 2,000,000 Moses toolkit (Koehn et al., 2007) was used for tokenization and punctuation normalization for all data. Before doing any additional preprocessing, we learned separate truecaser models using the toolkit. For this purpose, we took first 500K sentences from each of the monolingual corpora and aggregated them with the corresponding portion from the training bitext. After tokenizing and truecasing, we joined the parallel training corpus with the same monolingual data. We learned joint BPE 4 with 32K merge operations over this corpus and applied them to the parallel training data to get vocabularies for each language. Additionally, we used the clean-corpus-n.perl script within Moses to filter out sentences from the parallel corpus with more than 250 subwords as well as sentence length ratio over 1.5 in either direction. Final corpus statistics are presented in Table 1 .",
"cite_spans": [
{
"start": 104,
"end": 124,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 958,
"end": 965,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Source",
"sec_num": null
},
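The preprocessing steps above can be wired together roughly as follows. This is a hedged sketch using the Moses scripts and the fastBPE tool cited in the paper; the MOSES and FAST paths, all file names, and the exact ordering are illustrative assumptions rather than the authors' actual scripts.

```python
# Hedged sketch of the preprocessing pipeline: Moses normalization, tokenization,
# truecasing, joint 32K BPE with fastBPE, and length/ratio filtering.
import subprocess

MOSES = "mosesdecoder/scripts"  # assumed checkout of the Moses toolkit
FAST = "fastBPE/fast"           # assumed fastBPE binary

def sh(cmd: str) -> None:
    """Run a shell command and fail loudly on errors."""
    subprocess.run(cmd, shell=True, check=True)

# Normalize punctuation, tokenize, and truecase the bitext (truecase.model.* are
# assumed to have been trained beforehand with train-truecaser.perl on the first
# 500K monolingual sentences plus the corresponding portion of the bitext).
for lang in ("hsb", "de"):
    sh(f"perl {MOSES}/tokenizer/normalize-punctuation.perl -l {lang} "
       f"< train.{lang} > train.norm.{lang}")
    sh(f"perl {MOSES}/tokenizer/tokenizer.perl -l {lang} "
       f"< train.norm.{lang} > train.tok.{lang}")
    sh(f"perl {MOSES}/recaser/truecase.perl --model truecase.model.{lang} "
       f"< train.tok.{lang} > train.tc.{lang}")

# Learn joint BPE with 32K merge operations over the bitext concatenated with
# preprocessed monolingual data (mono.tc.* assumed to exist), then apply the
# codes to the parallel training data.
sh("cat train.tc.hsb mono.tc.hsb > all.hsb")
sh("cat train.tc.de mono.tc.de > all.de")
sh(f"{FAST} learnbpe 32000 all.hsb all.de > bpe.codes")
for lang in ("hsb", "de"):
    sh(f"{FAST} applybpe train.bpe.{lang} train.tc.{lang} bpe.codes")

# Drop sentence pairs with more than 250 subwords or a length ratio above 1.5.
sh(f"perl {MOSES}/training/clean-corpus-n.perl -ratio 1.5 "
   "train.bpe hsb de train.clean 1 250")
```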
{
"text": "Our primary system is a Transformer base model, trained on the parallel training corpus for both translation directions till 60 epochs. We keep most of the hyperparameters to their default values in fairseq. More precisely, we chose Adam (Kingma and Ba, 2015) as the optimizer and Adam betas were set to 0.9 and 0.98, respectively. The maximum number of tokens in each batch was set to 4096. Learning rate was set to 0.0005, with an inverse squared root decay schedule and 4000 steps of warmup updates. Label smoothing was set to 0.1 and dropout to 0.3. Label-smoothed cross-entropy was used as the training criterion. We trained all our models for a fixed number of epochs, determined separately for each system, and chose the last checkpoint for reporting BLEU (Papineni et al., 2002) scores on the test sets.",
"cite_spans": [
{
"start": 763,
"end": 786,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.3"
},
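For reference, the hyperparameters above map onto a fairseq-train invocation along the following lines. This is a sketch, not the authors' exact command: the data-bin directory (assumed to have been created beforehand with fairseq-preprocess), the save-dir, and the exact flag spellings for the installed fairseq version are assumptions.

```python
# Sketch of a fairseq-train call with the reported settings: Transformer base,
# Adam (betas 0.9/0.98), inverse square root schedule with 4000 warmup updates,
# lr 0.0005, label smoothing 0.1, dropout 0.3, 4096 tokens per batch, 60 epochs.
import subprocess

train_cmd = [
    "fairseq-train", "data-bin/hsb-de",            # binarized data dir (assumed)
    "--arch", "transformer",                       # Transformer base
    "--optimizer", "adam", "--adam-betas", "(0.9, 0.98)",
    "--lr", "0.0005", "--lr-scheduler", "inverse_sqrt", "--warmup-updates", "4000",
    "--criterion", "label_smoothed_cross_entropy", "--label-smoothing", "0.1",
    "--dropout", "0.3",
    "--max-tokens", "4096",
    "--max-epoch", "60",                           # primary system: 60 epochs
    "--save-dir", "checkpoints/primary-hsb-de",    # placeholder path
]
subprocess.run(train_cmd, check=True)
```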
{
"text": "All training was done using a single NVIDIA P100 GPU. Due to the small amount of parallel training data, each epoch of training took about 90 seconds on average for the primary system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.3"
},
{
"text": "In this section, we report our post-submission work on using monolingual data for backtranslation. We took the raw monolingual data that we describe in Section 3.1 and backtranslated with our primary submission models for the respective translation directions, i.e., hsb\u2192de for Upper Sorbian data and de\u2192hsb for German data. We used fairseq-generate function with a beam size of 5 for this purpose. Once again, we limited the number of subwords in each sentence to 250. Finally, we took all sentence pairs for backtranslated Upper Sorbian corpus and the first two million sentence pairs for the German corpus. Table 1 indicates the size of the backtranslated corpora by original language. For further experiments, we name the datasets as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 610,
"end": 617,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Additional Backtranslation Experiments",
"sec_num": "4"
},
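A hedged sketch of the backtranslation step itself is given below: fairseq-generate is run with beam size 5 over the binarized monolingual data, and the hypothesis lines are extracted from its output. The paths, the use of the monolingual data as a "test" split, and the omission of re-sorting by sentence id are simplifying assumptions.

```python
# Sketch: generate backtranslations with beam search (beam size 5) and keep only
# the hypothesis lines ("H-<id>", tab, score, tab, text) from fairseq-generate output.
import subprocess

gen_cmd = [
    "fairseq-generate", "data-bin/mono-de",                     # binarized monolingual data (assumed)
    "--path", "checkpoints/primary-de-hsb/checkpoint_last.pt",  # placeholder checkpoint
    "--beam", "5",
    "--max-tokens", "4096",
]
with open("mono.de.gen", "w", encoding="utf-8") as out:
    subprocess.run(gen_cmd, stdout=out, check=True)

# Extract the hypotheses; note that fairseq-generate emits sentences in batch
# order, so restoring the original order (by the numeric id) is left out here.
with open("mono.de.gen", encoding="utf-8") as f, \
     open("mono.bt.hsb", "w", encoding="utf-8") as out:
    for line in f:
        if line.startswith("H-"):
            out.write(line.rstrip("\n").split("\t")[-1] + "\n")
```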
{
"text": "\u2022 auth: Processed original training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Backtranslation Experiments",
"sec_num": "4"
},
{
"text": "\u2022 synth: Backtranslated de\u2192hsb and hsb\u2192de corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Backtranslation Experiments",
"sec_num": "4"
},
{
"text": "\u2022 mixed: Augmented training data obtained by mixing auth with a portion of synth in 1:1 ratio, providing a total of 116,778 sentence pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Backtranslation Experiments",
"sec_num": "4"
},
{
"text": "We define the following systems for making use of the backtranslated data. Note that the first system only differs from the primary system in the number of training epochs completed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Backtranslation Experiments",
"sec_num": "4"
},
{
"text": "\u2022 auth-from-scratch: This system has the same settings as the primary system. It was trained on the auth corpus till 80 epochs (as opposed to 60 for primary).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Backtranslation Experiments",
"sec_num": "4"
},
{
"text": "\u2022 mixed-from-scratch: We trained models on mixed data from scratch for 40 epochs. 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Backtranslation Experiments",
"sec_num": "4"
},
{
"text": "\u2022 synth-from-scratch: Models were trained only on the synth datasets. To adjust for the difference in the size of the respective backtranslated corpora, we trained hsb\u2192de system for 10 epochs and de\u2192hsb system for 30 epochs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Backtranslation Experiments",
"sec_num": "4"
},
{
"text": "\u2022 synth-auth-finetune: We took the models trained via the previous system and fine-tuned them on auth data for 20 epochs in each translation direction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Backtranslation Experiments",
"sec_num": "4"
},
{
"text": "\u2022 synth-mixed-finetune: Same as the last model, except that fine-tuning was done on mixed data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Backtranslation Experiments",
"sec_num": "4"
},
{
"text": "Fine-tuning was carried out by loading pretrained checkpoints and adding extra training flags in reset-optimizer and reset-lr-scheduler.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Backtranslation Experiments",
"sec_num": "4"
},
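The fine-tuning step can be sketched as follows: training is restarted from the pretrained checkpoint with the optimizer state and learning-rate schedule reset, as described above. Checkpoint and data paths are placeholders, and depending on the fairseq version one may also need to reset the dataloader and meters.

```python
# Sketch of fine-tuning on the auth bitext from a checkpoint pretrained on synth,
# resetting the optimizer and LR scheduler as stated in the paper.
import subprocess

finetune_cmd = [
    "fairseq-train", "data-bin/hsb-de-auth",                     # original bitext, binarized (assumed)
    "--restore-file", "checkpoints/synth-hsb-de/checkpoint_last.pt",
    "--reset-optimizer", "--reset-lr-scheduler",
    "--arch", "transformer",
    "--optimizer", "adam", "--adam-betas", "(0.9, 0.98)",
    "--lr", "0.0005", "--lr-scheduler", "inverse_sqrt", "--warmup-updates", "4000",
    "--criterion", "label_smoothed_cross_entropy", "--label-smoothing", "0.1",
    "--dropout", "0.3", "--max-tokens", "4096",
    "--max-epoch", "20",                                         # 20 fine-tuning epochs
    "--save-dir", "checkpoints/synth-auth-finetune-hsb-de",      # placeholder path
]
subprocess.run(finetune_cmd, check=True)
```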
{
"text": "The systems were evaluated on the blind test set (newstest2020) using automated metrics; no human evaluation was done. Table 2 shows cased BLEU scores for various systems. Our primary systems achieved a BLEU score of 47.6 for Upper Sorbian\u2192German and 45.2 for German\u2192Upper Sorbian translation. We achieved an improvement of 0.3 and 0.4 BLEU points, respectively, by training further till 80 epochs in each direction. We also evaluated a third system, synth-auth-finetune, as described in Section 4, which provided a jump of 2.6 points in BLEU score over the primary system for Upper Sorbian\u2192German and 2.5 for German\u2192Upper Sorbian.",
"cite_spans": [],
"ref_spans": [
{
"start": 119,
"end": 126,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "In addition to evaluating on blind test sets, we also report BLEU scores on the development test set in the same table. Two outcomes are worth highlighting:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "\u2022 Model trained only on synth data for German\u2192Upper Sorbian translation matched the performance of a similar model trained on the authentic bitext.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "\u2022 Best results were obtained by fine-tuning a model trained on synth data with either auth or mixed. The second result is notable since the regime of pretraining followed by fine-tuning improves the BLEU scores by up to 4 points on this test set when compared to training only on the original bitext. Moreover, while the model trained on synth was not able to match the performance of that trained on auth for Upper Sorbian\u2192German, it still provides the same benefits as German\u2192Upper Sorbian model when fine-tuned further. Looking at the small improvements achieved by using only the mixed corpus for training, increasing its size by combining upsampled auth data with more synth data might lead to even further jumps in the BLEU scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "In this paper, we described our Transformer model for supervised machine translation for Upper Sorbian-German language pair. We take note of relatively high BLEU scores achieved by our primary systems (and those of other participants) on this low-resource language pair, which could relate to the high quality of the training corpus. We also report results and takeaways from several experiments with backtranslated data completed post the shared task. A key result is matching the performance of a system trained on the original bitext with one trained on a limited amount of synthetic, backtranslated data. Domain mismatch and a difference in the quality of monolingual corpus might have prevented the system from achieving a similar result in the other direction. We notice big improvements in performance over the primary systems by following a \"pretraining then fine-tuning\" regime.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "An interesting future work would be to measure the applicability of this approach to other lowresource language pairs. Additional systems could be added as well. For instance, models trained on mixed data and fine-tuned on auth data might provide a meaningful comparison. Prior work (Ding et al., 2019) has shown that the number of BPE merge operations has a significant effect on the performance of NMT systems. This work was pointed out during the review process and should be an avenue for further improvement of the model performance.",
"cite_spans": [
{
"start": 283,
"end": 302,
"text": "(Ding et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "As measured by BLEU score(Papineni et al., 2002).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://data.statmt.org/news-crawl/de/ 4 https://github.com/glample/fastBPE",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We trained further till 60 epochs, but observed no improvement in BLEU scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The author would like to thank his manager for supporting this project, and the anonymous reviewers for their thoughtful comments which helped improve the presentation of this work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, California, USA.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "On the properties of neural machine translation: Encoder-decoder approaches",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation",
"volume": "",
"issue": "",
"pages": "103--111",
"other_ids": {
"DOI": [
"10.3115/v1/W14-4012"
]
},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merri\u00ebnboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014a. On the proper- ties of neural machine translation: Encoder-decoder approaches. In Proceedings of SSST-8, Eighth Work- shop on Syntax, Semantics and Structure in Statisti- cal Translation, pages 103-111, Doha, Qatar. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1724--1734",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1179"
]
},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merri\u00ebnboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014b. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 1724- 1734, Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A call for prudent choice of subword merge operations in neural machine translation",
"authors": [
{
"first": "Shuoyang",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Adithya",
"middle": [],
"last": "Renduchintala",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of Machine Translation Summit XVII",
"volume": "1",
"issue": "",
"pages": "204--213",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shuoyang Ding, Adithya Renduchintala, and Kevin Duh. 2019. A call for prudent choice of subword merge operations in neural machine translation. In Proceedings of Machine Translation Summit XVII Volume 1: Research Track, pages 204-213, Dublin, Ireland. European Association for Machine Transla- tion.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Understanding back-translation at scale",
"authors": [
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "489--500",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1045"
]
},
"num": null,
"urls": [],
"raw_text": "Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 489-500, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Fast domain adaptation for neural machine translation",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Freitag",
"suffix": ""
},
{
"first": "Yaser",
"middle": [],
"last": "Al-Onaizan",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1612.06897"
]
},
"num": null,
"urls": [],
"raw_text": "Markus Freitag and Yaser Al-Onaizan. 2016. Fast domain adaptation for neural machine translation. arXiv preprint arXiv:1612.06897.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, California, USA.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "CUNI submission for low-resource languages in WMT news 2019",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kocmi",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "234--240",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5322"
]
},
"num": null,
"urls": [],
"raw_text": "Tom Kocmi and Ond\u0159ej Bojar. 2019. CUNI submission for low-resource languages in WMT news 2019. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 234-240, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Moses: Open source toolkit for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Constantin",
"suffix": ""
},
{
"first": "Evan",
"middle": [],
"last": "Herbst",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions",
"volume": "",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond\u0159ej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the As- sociation for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Ses- sions, pages 177-180, Prague, Czech Republic. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Six challenges for neural machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Knowles",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Workshop on Neural Machine Translation",
"volume": "",
"issue": "",
"pages": "28--39",
"other_ids": {
"DOI": [
"10.18653/v1/W17-3204"
]
},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn and Rebecca Knowles. 2017. Six chal- lenges for neural machine translation. In Proceed- ings of the First Workshop on Neural Machine Trans- lation, pages 28-39, Vancouver. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Statistical phrase-based translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "127--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Human Language Technology Confer- ence of the North American Chapter of the Associa- tion for Computational Linguistics, pages 127-133.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Stanford neural machine translation systems for spoken language domains",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the International Workshop on Spoken Language Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong and Christopher D Manning. 2015. Stanford neural machine translation systems for spo- ken language domains. In Proceedings of the In- ternational Workshop on Spoken Language Transla- tion.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "fairseq: A fast, extensible toolkit for sequence modeling",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)",
"volume": "",
"issue": "",
"pages": "48--53",
"other_ids": {
"DOI": [
"10.18653/v1/N19-4009"
]
},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics (Demonstrations), pages 48-53, Minneapolis, Min- nesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {
"DOI": [
"10.3115/1073083.1073135"
]
},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Investigating backtranslation in neural machine translation",
"authors": [
{
"first": "A",
"middle": [],
"last": "Poncelas",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Shterionov",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Way",
"suffix": ""
},
{
"first": "G",
"middle": [
"M"
],
"last": "de Buy Wenniger",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Passban",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.06189"
]
},
"num": null,
"urls": [],
"raw_text": "A Poncelas, D Shterionov, A Way, GM de Buy Wen- niger, and P Passban. 2018. Investigating backtrans- lation in neural machine translation. arXiv preprint arXiv:1804.06189.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Machine translation using syntactic analysis",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Popel",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Popel. 2018. Machine translation using syntac- tic analysis. Univerzita Karlova.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Improving neural machine translation models with monolingual data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "86--96",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1009"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation mod- els with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Sys- tems, pages 3104-3112. Curran Associates, Inc.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008. Curran As- sociates, Inc.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Transfer learning for low-resource neural machine translation",
"authors": [
{
"first": "Barret",
"middle": [],
"last": "Zoph",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1568--1575",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1163"
]
},
"num": null,
"urls": [],
"raw_text": "Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 1568-1575, Austin, Texas. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table/>",
"text": "",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF2": {
"content": "<table/>",
"text": "BLEU scores for the blind test set (newstest2020) and the development test set. Bold values in a column indicate the best scores among the evaluated systems. + Additional fine-tuning for models trained with backtranslated corpora. * Only the primary systems were evaluated before deadline.",
"num": null,
"html": null,
"type_str": "table"
}
}
}
}