{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:33:46.204859Z"
},
"title": "IIIT Hyderabad Submission To WAT 2021: Efficient Multilingual NMT systems for Indian languages",
"authors": [
{
"first": "Sourav",
"middle": [],
"last": "Kumar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IIIT-Hyderabad",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Salil",
"middle": [],
"last": "Aggarwal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IIIT-Hyderabad",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Dipti",
"middle": [],
"last": "Misra",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IIIT-Hyderabad",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes the work and the systems submitted by the IIIT-Hyderbad team (Id: IIIT-H) in the WAT 2021 (Nakazawa et al., 2021) MultiIndicMT shared task. The task covers 10 major languages of the Indian subcontinent. For the scope of this task, we have built multilingual systems for 20 translation directions namely English-Indic (one-to-many) and Indic-English (many-to-one). Individually, Indian languages are resource poor which hampers translation quality but by leveraging multilingualism and abundant monolingual corpora, the translation quality can be substantially boosted. But the multilingual systems are highly complex in terms of time as well as computational resources. Therefore, we are training our systems by efficiently selecting data that will actually contribute to most of the learning process. Furthermore, we are also exploiting the language relatedness found in between Indian languages. All the comparisons were made using BLEU score and we found that our final multilingual system significantly outperforms the baselines by an average of 11.3 and 19.6 BLEU points for English-Indic (en-xx) and Indic-English (xx-en) directions, respectively.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes the work and the systems submitted by the IIIT-Hyderbad team (Id: IIIT-H) in the WAT 2021 (Nakazawa et al., 2021) MultiIndicMT shared task. The task covers 10 major languages of the Indian subcontinent. For the scope of this task, we have built multilingual systems for 20 translation directions namely English-Indic (one-to-many) and Indic-English (many-to-one). Individually, Indian languages are resource poor which hampers translation quality but by leveraging multilingualism and abundant monolingual corpora, the translation quality can be substantially boosted. But the multilingual systems are highly complex in terms of time as well as computational resources. Therefore, we are training our systems by efficiently selecting data that will actually contribute to most of the learning process. Furthermore, we are also exploiting the language relatedness found in between Indian languages. All the comparisons were made using BLEU score and we found that our final multilingual system significantly outperforms the baselines by an average of 11.3 and 19.6 BLEU points for English-Indic (en-xx) and Indic-English (xx-en) directions, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Good translation systems are an important requirement due to substantial government, business and social communication among people speaking different languages. Neural machine translation (Sutskever et al., 2014; Bahdanau et al., 2014; Vaswani et al., 2017) is the current state-of-the-art approach for Machine Translation in both academia and industry. The success of NMT heavily relies on substantial amounts of parallel sentences as training data (Koehn and Knowles, 2017) which is again an arduous task for low resource languages like Indian languages (Philip et al., 2021) . Many techniques have been devised to improve the translation quality of low resource languages like back translation (Sennrich et al., 2015) , dual learning (Xia et al., 2016) , transfer learning (Zoph et al., 2016; Kocmi and Bojar, 2018) , etc. Also, using the traditional approaches, one would still need to train a separate model for each translation direction. So, building multilingual neural machine translation models by means of sharing parameters with high-resource languages is a common practice to improve the performance of low-resource language pairs (Firat et al., 2017; Johnson et al., 2017; Ha et al., 2016) . Low resource language pairs perform better when combined opposed to the case where the models are trained separately due to sharing of parameters. It also enables training a single model that supports translation from multiple source languages to a single target language or from a single source language to multiple target languages. This approach mainly works by combining all the parallel data in hand which makes the training process quite complex in terms of both time and computational resources (Arivazhagan et al., 2019) . Therefore, we are training our systems by efficiently selecting data that will actually contribute to most of the learning process. Sometimes, this learning is hindered in case of language pairs that do not show any kind of relatedness among themselves. But on the other hand, Indian languages exhibit a lot of lexical and structural similarities on account of sharing a common ancestry (Kunchukuttan and Bhattacharyya, 2020) . Therefore, in this work, we have exploited the lexical similarity of these related languages to build efficient multilingual NMT systems. This paper describes our work in the WAT 2021 MultiIndicMT shared task (cite). The task (Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Oriya, Punjabi, Tamil and Telugu) and English. The objective of this shared task is to build translation models for 20 translation directions (English-Indic and Indic-English). This paper is further organized as follows. Section 2 describes the methodology behind our experiments. Section 3 talks about the experimental details like dataset pre-processing and training details. Results and analysis have been discussed in Section 4, followed by conclusion in Section 5.",
"cite_spans": [
{
"start": 189,
"end": 213,
"text": "(Sutskever et al., 2014;",
"ref_id": "BIBREF21"
},
{
"start": 214,
"end": 236,
"text": "Bahdanau et al., 2014;",
"ref_id": "BIBREF1"
},
{
"start": 237,
"end": 258,
"text": "Vaswani et al., 2017)",
"ref_id": "BIBREF22"
},
{
"start": 451,
"end": 476,
"text": "(Koehn and Knowles, 2017)",
"ref_id": "BIBREF12"
},
{
"start": 557,
"end": 578,
"text": "(Philip et al., 2021)",
"ref_id": "BIBREF18"
},
{
"start": 698,
"end": 721,
"text": "(Sennrich et al., 2015)",
"ref_id": "BIBREF19"
},
{
"start": 738,
"end": 756,
"text": "(Xia et al., 2016)",
"ref_id": "BIBREF23"
},
{
"start": 777,
"end": 796,
"text": "(Zoph et al., 2016;",
"ref_id": "BIBREF24"
},
{
"start": 797,
"end": 819,
"text": "Kocmi and Bojar, 2018)",
"ref_id": "BIBREF10"
},
{
"start": 1145,
"end": 1165,
"text": "(Firat et al., 2017;",
"ref_id": "BIBREF3"
},
{
"start": 1166,
"end": 1187,
"text": "Johnson et al., 2017;",
"ref_id": "BIBREF7"
},
{
"start": 1188,
"end": 1204,
"text": "Ha et al., 2016)",
"ref_id": "BIBREF5"
},
{
"start": 1709,
"end": 1735,
"text": "(Arivazhagan et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 2125,
"end": 2163,
"text": "(Kunchukuttan and Bhattacharyya, 2020)",
"ref_id": "BIBREF14"
},
{
"start": 2392,
"end": 2481,
"text": "(Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Oriya, Punjabi, Tamil and Telugu)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "India is one of the most linguistically diverse countries of the world but underlying this vast diversity in Indian languages are many commonalities. These languages exhibit lexical and structural similarities on account of sharing a common ancestry or being in contact for a long period of time (Bhattacharyya et al., 2016). These languages share many common cognates and therefore, it is very important to utilize the lexical similarity of these languages to build good quality multilingual NMT systems. To do this, we are using the two different approaches namely Unified Transliteration and Sub-word Segmentation proposed by (Goyal et al., 2020) .",
"cite_spans": [
{
"start": 629,
"end": 649,
"text": "(Goyal et al., 2020)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting Language Relatedness",
"sec_num": "2.1"
},
{
"text": "The major Indian languages have a long written tradition and use a variety of scripts but correspondences can be established between equivalent characters across scripts. These scripts are derived from the ancient Brahmi script. In order to achieve this, we transliterated all the Indian languages into a common Devanagari script (which in our case is the script for Hindi) to share the same surface form. This unified transliteration is a string homomorphism, replacing characters in all the languages to a single desired script.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unified Transliteration",
"sec_num": "2.1.1"
},
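{
"text": "As an illustration (not part of the original paper), the following is a minimal sketch of this step, assuming the UnicodeIndicTransliterator API of the Indic NLP library cited in Section 3.1; the input sentence is a placeholder:\nfrom indicnlp.transliterate.unicode_transliterate import UnicodeIndicTransliterator\n\n# Map a Bengali ('bn') sentence into Devanagari, the script used for Hindi ('hi'),\n# so that cognates across the two languages share the same surface form.\nbn_sentence = '<Bengali sentence here>'\ndeva_sentence = UnicodeIndicTransliterator.transliterate(bn_sentence, 'bn', 'hi')\nprint(deva_sentence)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unified Transliteration",
"sec_num": "2.1.1"
},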
{
"text": "Despite sharing a lot of cognates, Indian languages do not share many words at their non-root level. Therefore, the more efficient approach is to exploit Indian languages at their sub-word level which will ensure more vocabulary overlap. Therefore, we are converting every word to sub-word level using the very well known technique Byte Pair Encoding (BPE) (Sennrich et al., 2015) . This technique is applied after the unified transliteration in order to ensure that languages share same surface form (script). BPE units are variable length units which provide appropriate context for translation systems involving related languages. Since their vocabularies are much smaller than the morpheme and wordlevel models, data sparsity is also not a problem. In a multilingual scenario, learning BPE merge rules will not only find the common sub-words between multiple languages but it also ensures consistency of segmentation among each considered language pair.",
"cite_spans": [
{
"start": 357,
"end": 380,
"text": "(Sennrich et al., 2015)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Subword Segmentation",
"sec_num": "2.1.2"
},
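{
"text": "A minimal sketch of joint BPE learning and segmentation, assuming the subword-nmt package that implements Sennrich et al. (2015); the file names are placeholders:\nfrom subword_nmt.learn_bpe import learn_bpe\nfrom subword_nmt.apply_bpe import BPE\n\n# Learn 16K merge operations on the combined, transliterated multilingual corpus\n# so that related languages share merge rules and hence sub-words.\nwith open('train.multilingual.txt', encoding='utf-8') as infile, open('bpe.codes', 'w', encoding='utf-8') as outfile:\n    learn_bpe(infile, outfile, num_symbols=16000)\n\n# Segment a sentence with the learned merges; the segmentation is consistent\n# across all languages that share the codes.\nwith open('bpe.codes', encoding='utf-8') as codes:\n    bpe = BPE(codes)\nprint(bpe.process_line('<transliterated sentence here>'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subword Segmentation",
"sec_num": "2.1.2"
},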
{
"text": "Since the traditional approaches of training a multilingual system simply work by combining all the parallel dataset in hand, making it infeasible in terms of both time as well as computational resources. Therefore, in order to select only the relevant domains, we are incrementally adding all the domains in decreasing order of their vocab overlap with the PMI domain (Haddow and Kirefu, 2020) . Detection of dip in the BLEU score (Papineni et al., 2002) is considered as the stopping criteria for our strategy. The vocab overlap between any two domains is calculated using the formula shown below:",
"cite_spans": [
{
"start": 369,
"end": 394,
"text": "(Haddow and Kirefu, 2020)",
"ref_id": "BIBREF6"
},
{
"start": 432,
"end": 455,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Selection Strategy",
"sec_num": "2.2"
},
{
"text": "Vocab Overlap = |V ocab d1 \u2229 V ocab d2 | max(|V ocab d1 |, |V ocab d2 |) * 100",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Selection Strategy",
"sec_num": "2.2"
},
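{
"text": "A minimal sketch (not from the paper) of how this overlap can be computed, where a domain's vocabulary is taken to be the set of unique tokens in its corpus:\ndef vocab_overlap(corpus_d1, corpus_d2):\n    # Each corpus is an iterable of tokenized sentences (strings).\n    vocab_d1 = {tok for line in corpus_d1 for tok in line.split()}\n    vocab_d2 = {tok for line in corpus_d2 for tok in line.split()}\n    return len(vocab_d1 & vocab_d2) / max(len(vocab_d1), len(vocab_d2)) * 100\n\n# Domains are then added to the training pool in decreasing order of\n# vocab_overlap(domain, pmi) until the BLEU score on the dev set drops.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Selection Strategy",
"sec_num": "2.2"
},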
{
"text": "Here translation is particularly useful for low resource languages. We use back translation to augment our multilingual models. The back translation data is generated by multilingual models in the reverse direction, hence some implicit multilingual transfer is incorporated in the back translated data also. For the scope of this paper, we have used monolingual data of the PMI given on the WAT website.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Selection Strategy",
"sec_num": "2.2"
},
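{
"text": "A minimal sketch of the back-translation loop; reverse_model.translate() is a hypothetical wrapper around whatever decoder is used (e.g. an OpenNMT-py model), not a real API:\ndef back_translate(target_monolingual, reverse_model):\n    # Translate genuine target-side monolingual sentences into the source\n    # language to obtain (synthetic source, genuine target) training pairs.\n    pairs = []\n    for tgt in target_monolingual:\n        synthetic_src = reverse_model.translate(tgt)\n        pairs.append((synthetic_src, tgt))\n    return pairs\n\n# The synthetic pairs are then mixed with the genuine parallel data when\n# training the forward-direction multilingual model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Back Translation",
"sec_num": "2.3"
},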
{
"text": "Multilingual model enables us to translate to and from multiple languages using a shared word piece vocabulary, which is significantly simpler than training a different model for each language pair. We used the technique proposed by Johnson et al. (2017) where he introduced a \"language flag\" based approach that shares the attention mechanism and a single encoder-decoder network to enable multilingual models. A language flag or token is part of the input sequence to indicate which direction to translate to. The decoder learns to generate the target given this input. This approach has been shown to be simple, effective and forces the model to generalize across language boundaries during training. It is also observed that when language pairs with little available data and language pairs with abundant data are mixed into a single model, translation quality on the low resource language pair is significantly improved. Furthermore, We are also fine tuning our multilingual system on PMI (multilingual) domain by the means of transfer learning b/w the parent and the child model.",
"cite_spans": [
{
"start": 233,
"end": 254,
"text": "Johnson et al. (2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual NMT and Fine-tuning",
"sec_num": "2.4"
},
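{
"text": "A minimal sketch of the language-flag mechanism; the '<2xx>' token format follows the style of Johnson et al. (2017), and the exact token string is an assumption:\ndef add_language_flag(source_sentence, target_lang):\n    # Prepend a target-language token so that a single shared encoder-decoder\n    # knows which direction to translate.\n    return '<2{}> {}'.format(target_lang, source_sentence)\n\nprint(add_language_flag('The prime minister addressed the nation .', 'hi'))\n# -> '<2hi> The prime minister addressed the nation .'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual NMT and Fine-tuning",
"sec_num": "2.4"
},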
{
"text": "We are using the dataset provided in WAT 2021 shared task. Our experiments mainly use PMI (Haddow and Kirefu, 2020) , CVIT (Siripragada et al., 2020) and IIT-B (Kunchukuttan et al., 2017) parallel dataset, along with monolingual data of PMI for further improvements Table 2 . We used Moses (Koehn et al., 2007) toolkit for tokenization and cleaning of English and Indic NLP library (Kunchukuttan, 2020) for normalizing, tokenization and transliteration of all Indian languages. For our bilingual model we used BPE segmentation with 16K merge operation and for multilingual models we learned the Joint-BPE on source and target side with 16K merges (Sennrich et al., 2015) .",
"cite_spans": [
{
"start": 90,
"end": 115,
"text": "(Haddow and Kirefu, 2020)",
"ref_id": "BIBREF6"
},
{
"start": 123,
"end": 149,
"text": "(Siripragada et al., 2020)",
"ref_id": "BIBREF20"
},
{
"start": 154,
"end": 187,
"text": "IIT-B (Kunchukuttan et al., 2017)",
"ref_id": null
},
{
"start": 290,
"end": 310,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF11"
},
{
"start": 382,
"end": 402,
"text": "(Kunchukuttan, 2020)",
"ref_id": "BIBREF13"
},
{
"start": 647,
"end": 670,
"text": "(Sennrich et al., 2015)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 266,
"end": 273,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Dataset and Preprocessing",
"sec_num": "3.1"
},
{
"text": "For all of our experiments, we use the OpenNMTpy (Klein et al., 2017) toolkit for training the NMT systems. We used the Transformer model with 6 layers in both the encoder and decoder, each with 512 hidden units. The word embedding size is set to 512 with 8 heads. The training is done in batches of maximum 4096 tokens at a time with dropout set to 0.3. We use Adam (Kingma and Ba, 2014) optimizer to optimize model parameters. We validate the model every 5,000 steps via BLEU (Papineni et al., 2002) and perplexity on the development set. We are training all of our models with early stopping criteria based on validation set accuracy. During testing, we rejoin translated BPE segments and convert the translated sentences back to their original language scripts. Finally, we evaluate the accuracy of our translation models using BLEU.",
"cite_spans": [
{
"start": 49,
"end": 69,
"text": "(Klein et al., 2017)",
"ref_id": "BIBREF9"
},
{
"start": 478,
"end": 501,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.2"
},
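{
"text": "As an aside (the paper does not name its exact BLEU implementation), corpus-level BLEU on detokenized output can be computed with the sacrebleu package; the example strings are placeholders:\nimport sacrebleu\n\n# hypotheses: detokenized system outputs; references: one reference per hypothesis.\nhypotheses = ['the cat sat on the mat .']\nreferences = ['the cat sat on the mat .']\nbleu = sacrebleu.corpus_bleu(hypotheses, [references])\nprint(bleu.score)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.2"
},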
{
"text": "We report the Bleu score on the test set provided in the WAT 2021 MultiIndic shared task. Table 3 and Table 4 represents the results for different experiments we have performed for En-XX and XX-En directions respectively. The rows corresponding to PMI + CVIT + Back Translation + Fine tuning on PMI multilingual is our final system submitted for this shared task (Bleu scores shown in the table for this task are from automatic evaluation system).",
"cite_spans": [],
"ref_spans": [
{
"start": 90,
"end": 97,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 102,
"end": 109,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4"
},
{
"text": "We observe that Multilingual system of PMI outperforms the bilingual baseline model of PMI by significant margins. The reason for this is the abil-En-XX en-hi en-pa en-gu en-mr en-bn en-or en-kn en-ml en-ta en-te Table 4 : Results for XX-En direction ity to induce learning from multiple languages; also there is increase in vocab overlap using our technique of exploiting language relatedness. Further we tried to improve the performance of system using the relevant domains by incrementally adding different domains based on vocab overlap to the already existing system. We observed a decrease in Bleu score after adding the IIT-B corpus and therefore we stopped our incremental training at that point. Further we can see that our final multilingual model using back translation and fine tuning outperforms all other systems. Our submission also got evaluated with AMFM scores which can be found in the WAT 2021 evaluation website.",
"cite_spans": [],
"ref_spans": [
{
"start": 213,
"end": 220,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4"
},
{
"text": "This paper presents the submissions by IIIT Hyderabd on the WAT 2021 MultiIndicMT shared Task. We performed experiments by combining different pre-processing and training techniques in series to achieve competitive results. The effectiveness of each technique is demonstrated. Our final submission able to achieve the second rank in this task according to automatic evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Massively multilingual neural machine translation in the wild: Findings and challenges",
"authors": [
{
"first": "Naveen",
"middle": [],
"last": "Arivazhagan",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Bapna",
"suffix": ""
},
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Dmitry",
"middle": [],
"last": "Lepikhin",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Mia",
"middle": [
"Xu"
],
"last": "Chen",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.05019"
]
},
"num": null,
"urls": [],
"raw_text": "Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George Foster, Colin Cherry, et al. 2019. Massively multilingual neural machine translation in the wild: Findings and chal- lenges. arXiv preprint arXiv:1907.05019.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.0473"
]
},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Statistical machine translation between related languages",
"authors": [
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mitesh",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Khapra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kunchukuttan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Tutorial Abstracts",
"volume": "",
"issue": "",
"pages": "17--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pushpak Bhattacharyya, Mitesh M Khapra, and Anoop Kunchukuttan. 2016. Statistical machine translation between related languages. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Tu- torial Abstracts, pages 17-20.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Multi-way, multilingual neural machine translation",
"authors": [
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Baskaran",
"middle": [],
"last": "Sankaran",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fatos T Yarman",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Vural",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2017,
"venue": "Computer Speech & Language",
"volume": "45",
"issue": "",
"pages": "236--252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Orhan Firat, Kyunghyun Cho, Baskaran Sankaran, Fatos T Yarman Vural, and Yoshua Bengio. 2017. Multi-way, multilingual neural machine translation. Computer Speech & Language, 45:236-252.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Efficient neural machine translation for lowresource languages via exploiting related languages",
"authors": [
{
"first": "Vikrant",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Sourav",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Dipti Misra",
"middle": [],
"last": "Sharma",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop",
"volume": "",
"issue": "",
"pages": "162--168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vikrant Goyal, Sourav Kumar, and Dipti Misra Sharma. 2020. Efficient neural machine translation for low- resource languages via exploiting related languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 162-168.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Toward multilingual neural machine translation with universal encoder and decoder",
"authors": [
{
"first": "Thanh-Le",
"middle": [],
"last": "Ha",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Niehues",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.04798"
]
},
"num": null,
"urls": [],
"raw_text": "Thanh-Le Ha, Jan Niehues, and Alexander Waibel. 2016. Toward multilingual neural machine trans- lation with universal encoder and decoder. arXiv preprint arXiv:1611.04798.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Pmindia-a collection of parallel corpora of languages of india",
"authors": [
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Faheem",
"middle": [],
"last": "Kirefu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2001.09907"
]
},
"num": null,
"urls": [],
"raw_text": "Barry Haddow and Faheem Kirefu. 2020. Pmindia-a collection of parallel corpora of languages of india. arXiv preprint arXiv:2001.09907.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Google's multilingual neural machine translation system: Enabling zero-shot translation",
"authors": [
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Fernanda",
"middle": [],
"last": "Thorat",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Vi\u00e9gas",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Wattenberg",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Corrado",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "339--351",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi\u00e9gas, Martin Wattenberg, Greg Corrado, et al. 2017. Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339-351.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "OpenNMT: Opensource toolkit for neural machine translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yuntian",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL 2017, System Demonstrations",
"volume": "",
"issue": "",
"pages": "67--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senel- lart, and Alexander Rush. 2017. OpenNMT: Open- source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67-72, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Trivial transfer learning for low-resource neural machine translation",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kocmi",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1809.00357"
]
},
"num": null,
"urls": [],
"raw_text": "Tom Kocmi and Ond\u0159ej Bojar. 2018. Trivial transfer learning for low-resource neural machine translation. arXiv preprint arXiv:1809.00357.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Moses: Open source toolkit for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th annual meeting of the association for computational linguistics companion",
"volume": "",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Pro- ceedings of the 45th annual meeting of the associ- ation for computational linguistics companion vol- ume proceedings of the demo and poster sessions, pages 177-180.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Six challenges for neural machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Knowles",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1706.03872"
]
},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn and Rebecca Knowles. 2017. Six challenges for neural machine translation. arXiv preprint arXiv:1706.03872.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The IndicNLP Library",
"authors": [
{
"first": "Anoop",
"middle": [],
"last": "Kunchukuttan",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anoop Kunchukuttan. 2020. The IndicNLP Library. https://github.com/anoopkunchukuttan/ indic_nlp_library/blob/master/docs/ indicnlp.pdf.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Utilizing language relatedness to improve machine translation: A case study on languages of the indian subcontinent",
"authors": [
{
"first": "Anoop",
"middle": [],
"last": "Kunchukuttan",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2003.08925"
]
},
"num": null,
"urls": [],
"raw_text": "Anoop Kunchukuttan and Pushpak Bhattacharyya. 2020. Utilizing language relatedness to im- prove machine translation: A case study on lan- guages of the indian subcontinent. arXiv preprint arXiv:2003.08925.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The iit bombay english-hindi parallel corpus",
"authors": [
{
"first": "Anoop",
"middle": [],
"last": "Kunchukuttan",
"suffix": ""
},
{
"first": "Pratik",
"middle": [],
"last": "Mehta",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1710.02855"
]
},
"num": null,
"urls": [],
"raw_text": "Anoop Kunchukuttan, Pratik Mehta, and Pushpak Bhat- tacharyya. 2017. The iit bombay english-hindi par- allel corpus. arXiv preprint arXiv:1710.02855.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Overview of the 8th workshop on Asian translation",
"authors": [
{
"first": "Toshiaki",
"middle": [],
"last": "Nakazawa",
"suffix": ""
},
{
"first": "Hideki",
"middle": [],
"last": "Nakayama",
"suffix": ""
},
{
"first": "Chenchen",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Raj",
"middle": [],
"last": "Dabre",
"suffix": ""
},
{
"first": "Shohei",
"middle": [],
"last": "Higashiyama",
"suffix": ""
},
{
"first": "Hideya",
"middle": [],
"last": "Mino",
"suffix": ""
},
{
"first": "Isao",
"middle": [],
"last": "Goto",
"suffix": ""
},
{
"first": "Win",
"middle": [
"Pa"
],
"last": "Pa",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Kunchukuttan",
"suffix": ""
},
{
"first": "Shantipriya",
"middle": [],
"last": "Parida",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Chenhui",
"middle": [],
"last": "Chu",
"suffix": ""
},
{
"first": "Akiko",
"middle": [],
"last": "Eriguchi",
"suffix": ""
},
{
"first": "Kaori",
"middle": [],
"last": "Abe",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Oda",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 8th Workshop on Asian Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toshiaki Nakazawa, Hideki Nakayama, Chenchen Ding, Raj Dabre, Shohei Higashiyama, Hideya Mino, Isao Goto, Win Pa Pa, Anoop Kunchukut- tan, Shantipriya Parida, Ond\u0159ej Bojar, Chenhui Chu, Akiko Eriguchi, Kaori Abe, and Sadao Oda, Yusuke Kurohashi. 2021. Overview of the 8th work- shop on Asian translation. In Proceedings of the 8th Workshop on Asian Translation, Bangkok, Thailand. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting of the Association for Compu- tational Linguistics, pages 311-318.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Revisiting low resource status of indian languages in machine translation",
"authors": [
{
"first": "Jerin",
"middle": [],
"last": "Philip",
"suffix": ""
},
{
"first": "Shashank",
"middle": [],
"last": "Siripragada",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Vinay",
"suffix": ""
},
{
"first": "C",
"middle": [
"V"
],
"last": "Namboodiri",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jawahar",
"suffix": ""
}
],
"year": 2021,
"venue": "8th ACM IKDD CODS and 26th COMAD",
"volume": "",
"issue": "",
"pages": "178--187",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jerin Philip, Shashank Siripragada, Vinay P Nambood- iri, and CV Jawahar. 2021. Revisiting low resource status of indian languages in machine translation. In 8th ACM IKDD CODS and 26th COMAD, pages 178-187.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Improving neural machine translation models with monolingual data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1511.06709"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Improving neural machine translation models with monolingual data. arXiv preprint arXiv:1511.06709.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A multilingual parallel corpora collection effort for indian languages",
"authors": [
{
"first": "Shashank",
"middle": [],
"last": "Siripragada",
"suffix": ""
},
{
"first": "Jerin",
"middle": [],
"last": "Philip",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Vinay",
"suffix": ""
},
{
"first": "C",
"middle": [
"V"
],
"last": "Namboodiri",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jawahar",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2007.07691"
]
},
"num": null,
"urls": [],
"raw_text": "Shashank Siripragada, Jerin Philip, Vinay P Nambood- iri, and CV Jawahar. 2020. A multilingual parallel corpora collection effort for indian languages. arXiv preprint arXiv:2007.07691.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.3215"
]
},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. arXiv preprint arXiv:1409.3215.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1706.03762"
]
},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. arXiv preprint arXiv:1706.03762.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Dual learning for machine translation",
"authors": [
{
"first": "Yingce",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Liwei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Nenghai",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Wei-Ying",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.00179"
]
},
"num": null,
"urls": [],
"raw_text": "Yingce Xia, Di He, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. arXiv preprint arXiv:1611.00179.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Transfer learning for lowresource neural machine translation",
"authors": [
{
"first": "Barret",
"middle": [],
"last": "Zoph",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1604.02201"
]
},
"num": null,
"urls": [],
"raw_text": "Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low- resource neural machine translation. arXiv preprint arXiv:1604.02201.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"text": "74.14 72.04 70.60 65.30 47.47 42.93 31.12 29.99 22.44 22.15 16.70 16.28 14.86 10.58 10.09",
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td>Domain</td><td>PMI Cvit</td><td>IITB ocor</td><td>m2o</td><td>ufal</td><td>Wmat ALT JW</td><td>Osub Ted</td><td>Wtile nlpc</td><td>Tanz urst</td><td>Bible</td></tr><tr><td colspan=\"2\">Vocab Overlap 100</td><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>"
},
"TABREF1": {
"text": "Vocab Overlap of domains with PMI covers 10 Indic Languages",
"html": null,
"type_str": "table",
"num": null,
"content": "<table/>"
},
"TABREF2": {
"text": "Vocab d1 & Vocab d2 represents vocabulary of domain 1 and domain 2 respectively. Vocab overlap of each domain with PMI is shown in Table 1.",
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td colspan=\"2\">Dataset En-hi</td><td colspan=\"5\">En-pa En-gu En-mr En-bn En-or</td><td colspan=\"3\">En-kn En-ml En-ta</td><td>En-te</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">Parallel corpus</td><td/><td/><td/><td/><td/></tr><tr><td>PMI</td><td>50349</td><td>28294</td><td colspan=\"2\">41578 28974</td><td>23306</td><td>31966</td><td>28901</td><td colspan=\"2\">26916 32638</td><td>33380</td></tr><tr><td>CVIT</td><td>266545</td><td colspan=\"4\">101092 58264 114220 91985</td><td>94494</td><td>-</td><td colspan=\"3\">43087 115968 44720</td></tr><tr><td>IITB</td><td colspan=\"2\">1603080 -</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td/><td/><td/><td/><td colspan=\"3\">Monolingual corpus</td><td/><td/><td/><td/></tr><tr><td/><td>En</td><td>Hi</td><td>Pa</td><td>Gu</td><td>Mr</td><td>Bn</td><td>Or</td><td>Kn</td><td>Ml</td><td>Ta</td><td>Te</td></tr><tr><td>PMI</td><td>89269</td><td colspan=\"8\">151792 87804 123008 118848 116835 103331 79024 81786</td><td colspan=\"2\">90912 111325</td></tr><tr><td/><td/><td/><td/><td/><td colspan=\"4\">2.3 Back Translation</td><td/><td/></tr><tr><td/><td/><td/><td/><td/><td colspan=\"7\">Back translation (Sennrich et al., 2015)is a widely</td></tr><tr><td/><td/><td/><td/><td/><td colspan=\"7\">used data augmentation method where the reverse</td></tr><tr><td/><td/><td/><td/><td/><td colspan=\"7\">direction is used to translate sentences from target</td></tr></table>"
},
"TABREF3": {
"text": "Training dataset statistics",
"html": null,
"type_str": "table",
"num": null,
"content": "<table/>"
},
"TABREF4": {
"text": "28.29 23.85 16.74 11.71 16.79 15.63 10.71 11.85 9.18 PMI + CVIT + IITB Multilingual 32.68 23.55 22.36 15.74 8.66 13.88 13.71 8.03 9.23 7.31 PMI + CVIT + Back Translation 35.81 30.15 25.84 18.47 12.50 18.52 17.98 11.99 12.31 12.89 PMI + CVIT + Back Translation + Fine Tuning on PMI Multilingual 38.25 33.35 26.97 19.48 14.73 20.15 19.57 12.76 14.43 15.61",
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td>PMI Baselines</td><td>23.21 18.26 15.46 7.07</td><td>5.25</td><td>8.32</td><td>8.67</td><td>4.63</td><td>5.32</td><td>6.12</td></tr><tr><td>PMI Multilingual</td><td colspan=\"5\">28.22 26.00 21.19 13.37 10.53 14.78 15.39 8.99</td><td>9.38</td><td>8.57</td></tr><tr><td>PMI + CVIT Multilingual</td><td>32.86</td><td/><td/><td/><td/><td/><td/></tr></table>"
},
"TABREF5": {
"text": "Results for En-XX direction",
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td>XX-En</td><td>hi-en pa-en gu-en mr-en bn-en or-en kn-en ml-en ta-en te-en</td></tr><tr><td>PMI Baselines</td><td>24.69 19.80 20.16 11.70 10.25 13.80 13.32 11.30 9.82 13.39</td></tr><tr><td>PMI Multilingual</td><td>26.91 24.26 23.91 19.66 17.44 19.65 21.08 18.99 18.95 19.94</td></tr><tr><td>PMI + CVIT Multilingual</td><td>39.40 37.35 35.12 29.59 25.35 30.38 29.56 27.69 28.12 28.97</td></tr><tr><td colspan=\"2\">PMI + CVIT + IITB Multilingual 37.93 36.08 35.03 28.71 24.18 29.04 28.95 27.24 27.61 28.41</td></tr><tr><td>PMI + CVIT + Back Translation</td><td>41.41 39.15 37.84 32.17 26.90 32.52 32.58 28.99 29.31 30.29</td></tr><tr><td>PMI + CVIT + Back Translation+ Fine Tuning on PMI Multilingual</td><td>43.23 41.24 39.39 34.02 28.28 34.11 34.69 29.19 29.61 30.44</td></tr></table>"
}
}
}
}