{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:34:06.971240Z"
},
"title": "The ADAPT Centre's Participation in WAT 2020 English-to-Odia Translation Task",
"authors": [
{
"first": "Prashanth",
"middle": [],
"last": "Nayak",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Dublin City University",
"location": {
"settlement": "Dublin",
"country": "Ireland"
}
},
"email": ""
},
{
"first": "Rejwanul",
"middle": [],
"last": "Haque",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Dublin City University",
"location": {
"settlement": "Dublin",
"country": "Ireland"
}
},
"email": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Dublin City University",
"location": {
"settlement": "Dublin",
"country": "Ireland"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes the ADAPT Centre submissions to WAT 2020 for the English-to-Odia translation task. We present the approaches that we followed to try to build competitive machine translation (MT) systems for Englishto-Odia. Our approaches include monolingual data selection for creating synthetic data and identifying optimal sets of hyperparameters for Transformer in a low-resource scenario. Our best MT system produces 4.96 BLEU points on the evaluation test set in the English-to-Odia translation task.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes the ADAPT Centre submissions to WAT 2020 for the English-to-Odia translation task. We present the approaches that we followed to try to build competitive machine translation (MT) systems for Englishto-Odia. Our approaches include monolingual data selection for creating synthetic data and identifying optimal sets of hyperparameters for Transformer in a low-resource scenario. Our best MT system produces 4.96 BLEU points on the evaluation test set in the English-to-Odia translation task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The ADAPT Centre participated in the English-to-Odia shared task at the 7th Workshop on Asian translation (WAT 2020) (Nakazawa et al., 2020) . 1 This paper presents the approaches we adopted in order to try to build competitive MT systems for this translation task. We also discuss methods that did not work for us. Our NMT systems are state-ofthe-art Transformer models (Vaswani et al., 2017) .",
"cite_spans": [
{
"start": 117,
"end": 140,
"text": "(Nakazawa et al., 2020)",
"ref_id": "BIBREF4"
},
{
"start": 143,
"end": 144,
"text": "1",
"ref_id": null
},
{
"start": 371,
"end": 393,
"text": "(Vaswani et al., 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper is organized as follows. Section 2 presents our approaches. We describe the resources we utilized for training in Section 3. Section 4 presents the results obtained, and Section 5 concludes our work with avenues for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Neural MT (NMT) (Vaswani et al., 2017) has made considerable progress in recent years, outperforming the previous state-of-the-art statistical MT in many translation tasks, particularly when there are large volumes of parallel corpora available. Building NMT systems for under-resourced languages still poses a challenge despite recent successes (Nakazawa et al., 2019) .",
"cite_spans": [
{
"start": 16,
"end": 38,
"text": "(Vaswani et al., 2017)",
"ref_id": null
},
{
"start": 346,
"end": 369,
"text": "(Nakazawa et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "2.1"
},
{
"text": "As for the task in which we are participating (English-to-Odia), the parallel data that the task organisers provided is relatively small. The organisers also provided us with monolingual data. We made use of monolingual data in training in order to improve our baseline models. The use of synthetic data to improve NMT systems is a well-accepted and popular method, especially in low-resource scenarios (Sennrich et al., 2016a) . We did not blindly use all sentences of the monolingual data; instead, we select those sentences that are similar in terms of style and domain to the sentences of the parallel data. In order to select the sentences which are similar to those of the parallel data, we use perplexity scores of the monolingual sentences according to the in-domain language model (Axelrod et al., 2011; Toral, 2013; Nayak et al., 2020; Parthasarathy et al., 2020) . The selected monolingual sentences are then back-translated to form synthetic training data.",
"cite_spans": [
{
"start": 403,
"end": 427,
"text": "(Sennrich et al., 2016a)",
"ref_id": "BIBREF8"
},
{
"start": 790,
"end": 812,
"text": "(Axelrod et al., 2011;",
"ref_id": "BIBREF0"
},
{
"start": 813,
"end": 825,
"text": "Toral, 2013;",
"ref_id": "BIBREF12"
},
{
"start": 826,
"end": 845,
"text": "Nayak et al., 2020;",
"ref_id": "BIBREF5"
},
{
"start": 846,
"end": 873,
"text": "Parthasarathy et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "2.1"
},
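To make the data-selection step concrete, below is a minimal sketch of perplexity-based selection, assuming a KenLM language model trained on the in-domain (parallel-corpus) side of the data. The file names, the kenlm bindings, and the one-million-sentence cut-off are illustrative assumptions rather than details taken from the paper.

```python
# Hypothetical sketch of perplexity-based monolingual data selection.
# Assumes a KenLM model trained on in-domain (parallel-corpus) Odia text;
# file names and the keep-count are illustrative only.
import kenlm

model = kenlm.Model("odia_indomain.lm.bin")  # in-domain language model

scored = []
with open("odia_monolingual.txt", encoding="utf-8") as f:
    for line in f:
        sentence = line.strip()
        if sentence:
            # Lower perplexity = closer in style/domain to the parallel data.
            scored.append((model.perplexity(sentence), sentence))

# Keep the sentences most similar to the parallel data, e.g. the top 1M.
scored.sort(key=lambda pair: pair[0])
selected = [sentence for _, sentence in scored[:1_000_000]]

with open("odia_selected.txt", "w", encoding="utf-8") as out:
    out.write("\n".join(selected) + "\n")
```

The retained sentences are then what gets back-translated into English to form the synthetic parallel data.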
{
"text": "We conducted a series of experiments to find the best hyperparameters for Transformer as far as lowresource translation is concerned. For our experiments we primarily used those hyperparameters that are commonly used for low-resource scenarios (Sennrich and Zhang, 2019) . Additionally, we varied a handful of parameters to see how the MT systems would perform, e.g. encoder and decoder layer sizes. We applied Byte-Pair Encoding (BPE) word segmentation (Sennrich et al., 2016b) both individually and jointly to the source and target language corpora. Since BPE when applied individually worked better for us, we stick to this setup for our system building. We found that the following hyperparameters provided us with the best result in this low-resource scenario: (i) the number of BPE merge operations: 32,000 (ii) the sizes of the encoder and decoder layers: 4 and 6, respectively, and (iii) the learning-rate: 0.02.",
"cite_spans": [
{
"start": 244,
"end": 270,
"text": "(Sennrich and Zhang, 2019)",
"ref_id": "BIBREF10"
},
{
"start": 454,
"end": 478,
"text": "(Sennrich et al., 2016b)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperparameters Search",
"sec_num": "2.2"
},
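As an illustration of this segmentation setup, the sketch below learns and applies BPE individually per language with the reported 32,000 merge operations. It uses the subword-nmt package of Sennrich et al. (2016b) as one plausible implementation; the paper does not name its BPE tool, and the file names are hypothetical.

```python
# Sketch: learn and apply BPE separately for each language (the setup that
# worked best in the paper), with 32,000 merge operations. Uses subword-nmt;
# file names are illustrative assumptions.
from subword_nmt.learn_bpe import learn_bpe
from subword_nmt.apply_bpe import BPE

for lang in ("en", "or"):  # a separate BPE model per language
    with open(f"train.{lang}", encoding="utf-8") as infile, \
         open(f"bpe.codes.{lang}", "w", encoding="utf-8") as codes:
        learn_bpe(infile, codes, num_symbols=32000)

    with open(f"bpe.codes.{lang}", encoding="utf-8") as codes:
        bpe = BPE(codes)
    with open(f"train.{lang}", encoding="utf-8") as infile, \
         open(f"train.bpe.{lang}", "w", encoding="utf-8") as outfile:
        for line in infile:
            outfile.write(bpe.process_line(line))
```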
{
"text": "We made use of both the parallel and monolingual data that were provided by the WAT 2020 task organisers. 2 Additionally, we used external monolingual data for system building. The statistics of the parallel and monolingual corpora (OpusNlp, 3 OSCAR 4 and AI4Bharat-IndicNLP) 5 are shown in Tables 1 and 2, respectively. In order to remove noisy sentences from the corpus, we used a language identifier CLD2 6 with a confidence of 95.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Used",
"sec_num": "3"
},
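A minimal sketch of the language-identification filter is shown below, assuming the pycld2 bindings to CLD2 and treating the reported confidence of 95 as a percentage threshold on the top detected language. The file names and the exact binding are assumptions; the paper only states that CLD2 was used.

```python
# Sketch: keep only sentences that CLD2 reliably identifies as Odia ("or")
# with confidence >= 95. Uses the pycld2 bindings as one plausible
# interface; file names are illustrative.
import pycld2 as cld2

kept = []
with open("odia_selected.txt", encoding="utf-8") as f:
    for line in f:
        sentence = line.strip()
        if not sentence:
            continue
        try:
            is_reliable, _, details = cld2.detect(sentence)
        except cld2.error:
            continue  # skip undecodable input
        lang_code, confidence = details[0][1], details[0][2]
        if is_reliable and lang_code == "or" and confidence >= 95:
            kept.append(sentence)

with open("odia_clean.txt", "w", encoding="utf-8") as out:
    out.write("\n".join(kept) + "\n")
```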
{
"text": "We used the state-of-the-art Transformer model in order to prepare our MT systems. For system building, we used the OpenNMT toolkit (Klein et al., 2017) . In order to evaluate our MT systems, we used the widely-used evaluation metric, BLEU (Papineni et al., 2002) .",
"cite_spans": [
{
"start": 132,
"end": 152,
"text": "(Klein et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 240,
"end": 263,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
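For illustration, BLEU can be computed with an off-the-shelf implementation such as sacrebleu. The paper specifies the metric (Papineni et al., 2002) but not the scoring tool, so the snippet below is only one plausible choice, with hypothetical file names.

```python
# Illustrative corpus-level BLEU computation with sacrebleu (one common
# implementation; the paper does not state which tool was used).
import sacrebleu

with open("hypotheses.or", encoding="utf-8") as f:
    hypotheses = [line.strip() for line in f]
with open("reference.or", encoding="utf-8") as f:
    references = [line.strip() for line in f]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU = {bleu.score:.2f}")
```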
{
"text": "We made use of the parallel corpus in order to build our baseline NMT system. The original parallel data includes many duplicate entries. There were 2 https://github.com/shantipriyap/ Odia-NLP-Resource-Catalog 3 https://object.pouta.csc.fi/ OPUS-Ubuntu/v14.10/moses 4 https://oscar-corpus.com/ 5 https://github.com/ ai4bharat-indicnlp/indicnlp_corpus 6 https://github.com/CLD2Owners/cld2 also many overlapping entries in the training, development and test sets. The duplicate entries from the training set were removed accordingly. Then we built an MT system on deduplicated training data. From now on, we call this MT system Base. We obtained the BLEU score to evaluate Base on the test set and report the score in Table 4 . Note that we built all our MT systems following the best hyperparameters setup described in Section 2.2.",
"cite_spans": [],
"ref_spans": [
{
"start": 716,
"end": 723,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "The Baseline MT System",
"sec_num": "4.1"
},
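The deduplication step might look like the following minimal sketch, which drops duplicate sentence pairs from the training set and also removes pairs that occur in the development or test sets; file names are illustrative assumptions.

```python
# Sketch: deduplicate the parallel training data and remove pairs that
# overlap with the development and test sets. File names are hypothetical.
def read_pairs(src_path, tgt_path):
    with open(src_path, encoding="utf-8") as fs, \
         open(tgt_path, encoding="utf-8") as ft:
        return list(zip((l.strip() for l in fs), (l.strip() for l in ft)))

train = read_pairs("train.en", "train.or")
held_out = set(read_pairs("dev.en", "dev.or")) | set(read_pairs("test.en", "test.or"))

seen, deduped = set(), []
for pair in train:
    if pair not in seen and pair not in held_out:
        seen.add(pair)
        deduped.append(pair)

with open("train.dedup.en", "w", encoding="utf-8") as f_en, \
     open("train.dedup.or", "w", encoding="utf-8") as f_or:
    for en, od in deduped:
        f_en.write(en + "\n")
        f_or.write(od + "\n")
```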
{
"text": "As mentioned above, since the parallel corpus is small in size, we made use of monolingual data to improve Base following the method presented in Section 2.1. For this, we built an Odia-to-English MT system and used it to translate our Odia monolingual sentences. The BLEU score of the Odia-to-English MT system (cf. Base) is shown in Table 3 . The quality of synthetic parallel data is crucial for training or fine-tuning an NMT system. As can be seen from Table 3 , since our Odia-to-English baseline MT system (i.e. Base) is also not good in quality, we tried to improve it so that we can have a better quality synthetic parallel corpus. Therefore, in addition to the parallel corpus, we used a synthetic corpus of one million sentence-pairs for training. However, we can see from Table 3 that using synthetic data causes to deteriorate the Odiato-English MT system's performance. As a result, we used our best Odia-to-English MT system, Base, for translating the Odia monolingual sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 335,
"end": 343,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 459,
"end": 466,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 785,
"end": 792,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Using Monolingual Data",
"sec_num": "4.2"
},
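A sketch of this back-translation step is given below. It assumes the Odia-to-English Base model was trained with OpenNMT-py and is run through its onmt_translate command line; the model path, file names, and batch size are hypothetical.

```python
# Sketch: back-translate the selected Odia monolingual sentences with the
# Odia-to-English Base model, then pair the output with the originals as a
# synthetic English-to-Odia corpus. Paths and flags are illustrative.
import subprocess

subprocess.run(
    [
        "onmt_translate",
        "-model", "odia_en_base.pt",   # hypothetical Base model checkpoint
        "-src", "odia_clean.txt",      # selected, filtered monolingual Odia
        "-output", "backtranslated.en",
        "-batch_size", "64",
    ],
    check=True,
)

# Back-translated English becomes the source side; the authentic Odia
# monolingual sentences become the target side.
with open("backtranslated.en", encoding="utf-8") as f_en, \
     open("odia_clean.txt", encoding="utf-8") as f_or, \
     open("synthetic.en", "w", encoding="utf-8") as out_en, \
     open("synthetic.or", "w", encoding="utf-8") as out_or:
    for en, od in zip(f_en, f_or):
        out_en.write(en)
        out_or.write(od)
```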
{
"text": "The score of the English-to-Odia MT system built on training data composed of the authentic and synthetic parallel data is shown in Table 4 . We see that adding synthetic data (one million sentence-pairs) to the original parallel data does not help in this case either.",
"cite_spans": [],
"ref_spans": [
{
"start": 132,
"end": 139,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Using Monolingual Data",
"sec_num": "4.2"
},
{
"text": "This paper presents the ADAPT Centre system description for the WAT 2020 English-to-Odia translation shared task. Our best MT model, a Transformer model prepared using an optimal set of hyperparameters, obtain 4.96 BLEU points on the evaluation test set. We selected those monolingual sentences from a large monolingual data that are similar in terms of style and domain to the sentences of the parallel corpus. We then created a synthetic parallel corpus by translating the selected Odia monolingual sentences to English. We finetuned our baseline MT system on the training data that combines of the synthetic and original parallel corpora. This strategy did not work for us since using synthetic data causes to deteriorate the performance of the English-to-Odia MT system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "As for future work, we aim to explore transfer learning and using data of other related languages in order to improve translation of the English-to-Odia MT system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "http://lotus.kuee.kyoto-u.ac.jp/WAT/ WAT2020/index.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The ADAPT Centre for Digital Content Technology is funded under the Science Foundation Ireland (SFI) Research Centres Programme (Grant No. 13/RC/2106) and is co-funded under the European Regional Development Fund. This project has partially received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\u0142odowska-Curie grant agreement No. 713567, and the publication has emanated from research supported in part by a research grant from SFI under Grant Number 13/RC/2077 and 18/CRT/6224 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Domain adaptation via pseudo in-domain data selection",
"authors": [
{
"first": "Amittai",
"middle": [],
"last": "Axelrod",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "355--362",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amittai Axelrod, Xiaodong He, and Jianfeng Gao. 2011. Domain adaptation via pseudo in-domain data selection. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 355-362, Edinburgh, Scotland, UK. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The ADAPT system description for the STA-PLE 2020 English-to-Portuguese translation task",
"authors": [
{
"first": "Rejwanul",
"middle": [],
"last": "Haque",
"suffix": ""
},
{
"first": "Yasmin",
"middle": [],
"last": "Moslem",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fourth Workshop on Neural Generation and Translation",
"volume": "",
"issue": "",
"pages": "144--152",
"other_ids": {
"DOI": [
"10.18653/v1/2020.ngt-1.17"
]
},
"num": null,
"urls": [],
"raw_text": "Rejwanul Haque, Yasmin Moslem, and Andy Way. 2020. The ADAPT system description for the STA- PLE 2020 English-to-Portuguese translation task. In Proceedings of the Fourth Workshop on Neural Gen- eration and Translation, pages 144-152, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "OpenNMT: Opensource toolkit for neural machine translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yuntian",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL 2017, System Demonstrations",
"volume": "",
"issue": "",
"pages": "67--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senel- lart, and Alexander Rush. 2017. OpenNMT: Open- source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67-72, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Overview of the 6th workshop on Asian translation",
"authors": [
{
"first": "Toshiaki",
"middle": [],
"last": "Nakazawa",
"suffix": ""
},
{
"first": "Nobushige",
"middle": [],
"last": "Doi",
"suffix": ""
},
{
"first": "Shohei",
"middle": [],
"last": "Higashiyama",
"suffix": ""
},
{
"first": "Chenchen",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Raj",
"middle": [],
"last": "Dabre",
"suffix": ""
},
{
"first": "Hideya",
"middle": [],
"last": "Mino",
"suffix": ""
},
{
"first": "Isao",
"middle": [],
"last": "Goto",
"suffix": ""
},
{
"first": "Win",
"middle": [
"Pa"
],
"last": "Pa",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Kunchukuttan",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Oda",
"suffix": ""
},
{
"first": "Shantipriya",
"middle": [],
"last": "Parida",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 6th Workshop on Asian Translation",
"volume": "",
"issue": "",
"pages": "1--35",
"other_ids": {
"DOI": [
"10.18653/v1/D19-5201"
]
},
"num": null,
"urls": [],
"raw_text": "Toshiaki Nakazawa, Nobushige Doi, Shohei Hi- gashiyama, Chenchen Ding, Raj Dabre, Hideya Mino, Isao Goto, Win Pa Pa, Anoop Kunchukut- tan, Yusuke Oda, Shantipriya Parida, Ond\u0159ej Bojar, and Sadao Kurohashi. 2019. Overview of the 6th workshop on Asian translation. In Proceedings of the 6th Workshop on Asian Translation, pages 1-35, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Overview of the 7th workshop on Asian translation",
"authors": [
{
"first": "Toshiaki",
"middle": [],
"last": "Nakazawa",
"suffix": ""
},
{
"first": "Hideki",
"middle": [],
"last": "Nakayama",
"suffix": ""
},
{
"first": "Chenchen",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Raj",
"middle": [],
"last": "Dabre",
"suffix": ""
},
{
"first": "Hideya",
"middle": [],
"last": "Mino",
"suffix": ""
},
{
"first": "Isao",
"middle": [],
"last": "Goto",
"suffix": ""
},
{
"first": "Win",
"middle": [
"Pa"
],
"last": "Pa",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Kunchukuttan",
"suffix": ""
},
{
"first": "Shantipriya",
"middle": [],
"last": "Parida",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 7th Workshop on Asian Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toshiaki Nakazawa, Hideki Nakayama, Chenchen Ding, Raj Dabre, Hideya Mino, Isao Goto, Win Pa Pa, Anoop Kunchukuttan, Shantipriya Parida, Ond\u0159ej Bojar, and Sadao Kurohashi. 2020. Overview of the 7th workshop on Asian transla- tion. In Proceedings of the 7th Workshop on Asian Translation, Suzhou, China. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The adapt's submissions to the WMT20 biomedical translation task",
"authors": [
{
"first": "Prashanth",
"middle": [],
"last": "Nayak",
"suffix": ""
},
{
"first": "Rejwanul",
"middle": [],
"last": "Haque",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Conference on Machine Translation (Shared Task Papers (Biomedical)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Prashanth Nayak, Rejwanul Haque, and Andy Way. 2020. The adapt's submissions to the WMT20 biomedical translation task. In Proceedings of the Fifth Conference on Machine Translation (Shared Task Papers (Biomedical), Punta Cana, Dominican Republic.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {
"DOI": [
"10.3115/1073083.1073135"
]
},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Rejwanul Haque, and Andy Way. 2020. The ADAPT system description for the WMT20 news translation task",
"authors": [
{
"first": "Akshai",
"middle": [],
"last": "Venkatesh Balavadhani Parthasarathy",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ramesh",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the Fifth Conference on Machine Translation (Shared Task Papers (News))",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Venkatesh Balavadhani Parthasarathy, Akshai Ramesh, Rejwanul Haque, and Andy Way. 2020. The ADAPT system description for the WMT20 news translation task. In Proceedings of the Fifth Confer- ence on Machine Translation (Shared Task Papers (News)), Punta Cana, Dominican Republic.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Improving neural machine translation models with monolingual data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "86--96",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1009"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation mod- els with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Revisiting lowresource neural machine translation: A case study",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Biao",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1021"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich and Biao Zhang. 2019. Revisiting low- resource neural machine translation: A case study.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "211--221",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "In Proceedings of the 57th Annual Meeting of the As- sociation for Computational Linguistics, pages 211- 221, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Hybrid selection of language model training data using linguistic information and perplexity",
"authors": [
{
"first": "Antonio",
"middle": [],
"last": "Toral",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the second workshop on hybrid approaches to translation",
"volume": "",
"issue": "",
"pages": "8--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antonio Toral. 2013. Hybrid selection of language model training data using linguistic information and perplexity. In Proceedings of the second workshop on hybrid approaches to translation, pages 8-12.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"html": null,
"content": "<table><tr><td colspan=\"2\">Monolingual-Corpus Sentences</td><td>Words</td></tr><tr><td>OpusNlp</td><td>30k</td><td>1,003,211</td></tr><tr><td>OSCAR</td><td colspan=\"2\">284K 14,938,567</td></tr><tr><td>AI4Bharat-IndicNLP</td><td colspan=\"2\">3.5M 53,694,876</td></tr></table>",
"text": "Statistics of the training, development and test sets.",
"num": null,
"type_str": "table"
},
"TABREF2": {
"html": null,
"content": "<table/>",
"text": "Statistics of the monolingual corpora.",
"num": null,
"type_str": "table"
},
"TABREF3": {
"html": null,
"content": "<table><tr><td colspan=\"2\">: The BLEU scores of the Odia-to-English MT</td></tr><tr><td>systems.</td><td/></tr><tr><td/><td>BLEU</td></tr><tr><td>Base</td><td>4.96</td></tr><tr><td>Base + 1M</td><td>3.53</td></tr></table>",
"text": "",
"num": null,
"type_str": "table"
},
"TABREF4": {
"html": null,
"content": "<table/>",
"text": "The BLEU scores of the English-to-Odia MT systems.",
"num": null,
"type_str": "table"
}
}
}
}