{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:34:03.191855Z"
},
"title": "Multimodal Neural Machine Translation for English to Hindi",
"authors": [
{
"first": "Sahinur",
"middle": ["Rahman"],
"last": "Laskar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Institute of Technology Silchar Assam",
"location": {
"country": "India"
}
},
"email": "[email protected]"
},
{
"first": "Abdullah",
"middle": ["Faiz", "Ur", "Rahman"],
"last": "Khilji",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Institute of Technology Silchar Assam",
"location": {
"country": "India"
}
},
"email": "[email protected]"
},
{
"first": "Partha",
"middle": [],
"last": "Pakray",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Institute of Technology Silchar Assam",
"location": {
"country": "India"
}
},
"email": ""
},
{
"first": "Sivaji",
"middle": [],
"last": "Bandyopadhyay",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Institute of Technology Silchar Assam",
"location": {
"country": "India"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Machine translation (MT) focuses on the automatic translation of text from one natural language to another. Neural machine translation (NMT) achieves state-of-the-art results in machine translation by utilizing advanced deep learning techniques and handling issues such as long-term dependency and context analysis. Nevertheless, NMT still suffers from low translation quality for low-resource languages. To address this challenge, the multi-modal concept comes in: it combines textual and visual features to improve the translation quality of low-resource languages. Moreover, utilizing monolingual data in the pre-training step can improve system performance for low-resource language translation. The Workshop on Asian Translation 2020 (WAT2020) organized a translation task for multi-modal translation from English to Hindi. We participated in this task with a two-track submission, namely text-only and multi-modal translation, under the team name CNLP-NITS. The evaluation results declared at the WAT2020 translation task report that our multi-modal NMT system attained higher scores than our text-only NMT on both the challenge and evaluation test sets. For the challenge test data, our multi-modal neural machine translation system achieves a Bilingual Evaluation Understudy (BLEU) score of 33.57, a Rank-based Intuitive Bilingual Evaluation Score (RIBES) of 0.754141, and an Adequacy-Fluency Metrics (AMFM) score of 0.787320; for the evaluation test data, it achieves BLEU, RIBES, and AMFM scores of 40.51, 0.803208, and 0.820980, respectively, for English to Hindi translation.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Machine translation (MT) focuses on the automatic translation of text from one natural language to another. Neural machine translation (NMT) achieves state-of-the-art results in machine translation by utilizing advanced deep learning techniques and handling issues such as long-term dependency and context analysis. Nevertheless, NMT still suffers from low translation quality for low-resource languages. To address this challenge, the multi-modal concept comes in: it combines textual and visual features to improve the translation quality of low-resource languages. Moreover, utilizing monolingual data in the pre-training step can improve system performance for low-resource language translation. The Workshop on Asian Translation 2020 (WAT2020) organized a translation task for multi-modal translation from English to Hindi. We participated in this task with a two-track submission, namely text-only and multi-modal translation, under the team name CNLP-NITS. The evaluation results declared at the WAT2020 translation task report that our multi-modal NMT system attained higher scores than our text-only NMT on both the challenge and evaluation test sets. For the challenge test data, our multi-modal neural machine translation system achieves a Bilingual Evaluation Understudy (BLEU) score of 33.57, a Rank-based Intuitive Bilingual Evaluation Score (RIBES) of 0.754141, and an Adequacy-Fluency Metrics (AMFM) score of 0.787320; for the evaluation test data, it achieves BLEU, RIBES, and AMFM scores of 40.51, 0.803208, and 0.820980, respectively, for English to Hindi translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Multi-modal NMT aims to draw information from input data in different modalities such as text, image, and audio. By combining information from more than one modality, it attempts to improve the quality of low-resource language translation. The work undertaken by (Shah et al., 2016) merges the visual features of images from the corresponding input data with the textual features of the input bitext to translate sentences, which outperforms text-only translation. For text-only NMT, the encoder-decoder architecture is a widely used technique in the MT community because it handles issues such as variable-length phrases via sequence-to-sequence learning and long-term dependency via Long Short-Term Memory (LSTM) (Sutskever et al., 2014). However, in the case of very long sentences, the basic encoder-decoder architecture is unable to encode all the information. To resolve this issue, the attention mechanism was proposed, which attends to all source words both locally and globally (Bahdanau et al., 2015; Luong et al., 2015). For Indian language translation, attention-based NMT yields remarkable performance (Laskar et al., 2019b,a). Besides, NMT performance can be improved without modifying the system architecture by using monolingual data (Sennrich et al., 2016; Zhang and Zong, 2016), which is very effective in the case of low-resource language translation. This paper investigates English to Hindi translation using the multi-modal concept with monolingual data to improve translation quality at the WAT2020 translation task.",
"cite_spans": [
{
"start": 264,
"end": 283,
"text": "(Shah et al., 2016)",
"ref_id": "BIBREF20"
},
{
"start": 736,
"end": 760,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF21"
},
{
"start": 1015,
"end": 1038,
"text": "(Bahdanau et al., 2015;",
"ref_id": "BIBREF0"
},
{
"start": 1039,
"end": 1058,
"text": "Luong et al., 2015)",
"ref_id": "BIBREF11"
},
{
"start": 1144,
"end": 1167,
"text": "Laskar et al., 2019b,a)",
"ref_id": null
},
{
"start": 1277,
"end": 1300,
"text": "(Sennrich et al., 2016;",
"ref_id": "BIBREF19"
},
{
"start": 1301,
"end": 1322,
"text": "Zhang and Zong, 2016)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The literature survey finds very limited existing work on English-Hindi language-pair translation using multi-modal NMT (Dutta Chowdhury et al., 2018; Sanayai Meetei et al., 2019; Laskar et al., 2019c). The work by (Dutta Chowdhury et al., 2018) (Nakazawa et al., 2020; Parida et al., 2019).",
"cite_spans": [
{
"start": 125,
"end": 155,
"text": "(Dutta Chowdhury et al., 2018;",
"ref_id": "BIBREF4"
},
{
"start": 156,
"end": 184,
"text": "Sanayai Meetei et al., 2019;",
"ref_id": "BIBREF18"
},
{
"start": 185,
"end": 206,
"text": "Laskar et al., 2019c)",
"ref_id": "BIBREF10"
},
{
"start": 221,
"end": 251,
"text": "(Dutta Chowdhury et al., 2018)",
"ref_id": "BIBREF4"
},
{
"start": 252,
"end": 275,
"text": "(Nakazawa et al., 2020;",
"ref_id": "BIBREF12"
},
{
"start": 276,
"end": 296,
"text": "Parida et al., 2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "NMT settings of . The gap in the existing works on English to Hindi translation using multi-modal NMT is that they have not used monolingual data to improve the performance of multi-modal NMT (Sennrich et al., 2016). In this paper, we use a monolingual corpus in the pre-training step to enhance the performance of multi-modal NMT for English to Hindi translation.",
"cite_spans": [
{
"start": 197,
"end": 220,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "Hindi Visual Genome 1.1 consists of parallel text and image data, provided by the WAT2020 organizers (Nakazawa et al., 2020; Parida et al., 2019). Additionally, we use monolingual Hindi data from IITB 1 (Kunchukuttan et al., 2018) and monolingual English data from WMT16 2, as shown in Table 2 .",
"cite_spans": [
{
"start": 110,
"end": 133,
"text": "(Nakazawa et al., 2020;",
"ref_id": "BIBREF12"
},
{
"start": 134,
"end": 153,
"text": "Parida et al., 2019",
"ref_id": "BIBREF14"
},
{
"start": 161,
"end": 188,
"text": "(Kunchukuttan et al., 2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 243,
"end": 251,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Dataset Description",
"sec_num": "3"
},
{
"text": "We have used OpenNMT-py (Klein et al., 2017) to set up our multi-modal NMT and text-only NMT systems. The key operations include data preprocessing, system training to obtain an optimal trained model, and testing/translation, in which the trained model predicts translations on the given unseen data.",
"cite_spans": [
{
"start": 24,
"end": 44,
"text": "(Klein et al., 2017)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "4"
},
{
"text": "For multi-modal translation, a pre-trained CNN with VGG19 is used to extract global and local features from the provided image dataset; this pre-trained model is publicly available in OpenNMT-py. In both the text-only and multi-modal tasks, we used GloVe (Pennington et al., 2014) to pre-train on English and Hindi monolingual data and generate global word-embedding vectors. The OpenNMT-py tool is used to create a vocabulary of size 5004 for both source and target sentences. We did not use any word-segmentation technique.",
"cite_spans": [
{
"start": 271,
"end": 296,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "4.1"
},
{
"text": "[Table header: System, Test Set, BLEU, RIBES, AMFM]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "4.1"
},
{
"text": "The training process for each track is carried out separately. For multi-modal translation, the pretrained vectors and the visual features extracted during data preprocessing are fine-tuned with the parallel text data during training. We used a bidirectional RNN (BRNN) as the encoder and a doubly-attentive RNN as the decoder, following the default settings of . The BRNN uses two distinct RNNs, one for the forward direction and one for the backward direction, and two different attention mechanisms are incorporated over the source words and the visual features at a single RNN decoder. Two-layer LSTM networks with 500 nodes per layer are used in both the encoder and the decoder. Our multi-modal NMT is trained on a single GPU for up to 40 epochs with a dropout of 0.3 and a batch size of 40; the best model is obtained at epoch 10. For text-only translation, we did not use visual features and only used pretrained vectors of monolingual data to fine-tune with the parallel corpus during training. The text-only NMT is trained for up to 20,000 epochs, since the learning curve rises up to 18,000 and then drops; we selected the best trained model at epoch 18,000. The difference between (Laskar et al., 2019c) and this paper is that our multi-modal NMT adopts a BRNN encoder, unlike the RNN in (Laskar et al., 2019c), and utilizes pretrained word embeddings from a monolingual corpus.",
"cite_spans": [
{
"start": 1169,
"end": 1191,
"text": "(Laskar et al., 2019c)",
"ref_id": "BIBREF10"
},
{
"start": 1292,
"end": 1314,
"text": "(Laskar et al., 2019c)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4.2"
},
{
"text": "In this process, the trained models of the multi-modal and text-only NMT systems are used to translate the given test data in each track separately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Testing",
"sec_num": "4.3"
},
{
"text": "The WAT2020 translation task organizer declared the evaluation results 3 of the multi-modal translation task for English to Hindi, and our system's results are presented in Table 3 . Our team, CNLP-NITS, participated in the text-only and multi-modal submission tracks of the task. In the text-only translation submission track, a total of four teams participated for both the challenge and evaluation test data; in the multi-modal translation submission track, only our team participated. The submitted predicted translations are evaluated via standard evaluation metrics, namely BLEU (Papineni et al., 2002), RIBES (Isozaki et al., 2010), and AMFM (Banchs et al., 2015). From Table 2 , it is observed that our multi-modal NMT system obtained higher BLEU, RIBES, and AMFM scores than our text-only NMT system. This suggests that combining visual and textual features in multi-modal NMT performs better than NMT based on textual features alone. Moreover, our systems used pretrained word embeddings of monolingual data and adopted a BRNN encoder, which explains why they outperform our previous work (Laskar et al., 2019c) at WAT2019. Figure 1 and 2 present the best- and worst-performing outputs of our systems, with Google Translate output included for comparative analysis.",
"cite_spans": [
{
"start": 582,
"end": 605,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF13"
},
{
"start": 614,
"end": 636,
"text": "(Isozaki et al., 2010)",
"ref_id": "BIBREF5"
},
{
"start": 646,
"end": 667,
"text": "(Banchs et al., 2015)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 167,
"end": 174,
"text": "Table 3",
"ref_id": "TABREF6"
},
{
"start": 679,
"end": 686,
"text": "Table 2",
"ref_id": "TABREF4"
},
{
"start": 1146,
"end": 1154,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Result and Analysis",
"sec_num": "5"
},
{
"text": "This work participates in two different translation tracks of the WAT2020 multi-modal translation task for English to Hindi, namely multi-modal and text-only. In this competition, our multi-modal NMT achieves higher BLEU, RIBES, and AMFM scores than our text-only NMT. To the best of our knowledge, our multi-modal NMT achieves the best score on English to Hindi multi-modal translation. In future work, more experiments and analysis will be carried out to enhance the performance of multi-modal NMT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "http://www.cfilt.iitb.ac.in/iitb_parallel/ 2 http://www.statmt.org/wmt16/translation-task.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://lotus.kuee.kyoto-u.ac.jp/WAT/evaluation/index.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the Center for Natural Language Processing (CNLP) and the Department of Computer Science and Engineering at the National Institute of Technology, Silchar, India, for providing the requisite support and infrastructure to execute this work. We also thank the WAT2020 translation task organizers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Adequacy-fluency metrics: Evaluating mt in the continuous space model framework",
"authors": [
{
"first": "Rafael",
"middle": [
"E"
],
"last": "Banchs",
"suffix": ""
},
{
"first": "Luis",
"middle": [
"F"
],
"last": "D'haro",
"suffix": ""
},
{
"first": "Haizhou",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2015,
"venue": "IEEE/ACM Trans. Audio, Speech and Lang. Proc",
"volume": "23",
"issue": "3",
"pages": "472--482",
"other_ids": {
"DOI": [
"10.1109/TASLP.2015.2405751"
]
},
"num": null,
"urls": [],
"raw_text": "Rafael E. Banchs, Luis F. D'Haro, and Haizhou Li. 2015. Adequacy-fluency metrics: Evaluating MT in the continuous space model framework. IEEE/ACM Trans. Audio, Speech and Lang. Proc., 23(3):472-482.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Incorporating global visual features into attention-based neural machine translation",
"authors": [
{
"first": "Iacer",
"middle": [],
"last": "Calixto",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "992--1003",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1105"
]
},
"num": null,
"urls": [],
"raw_text": "Iacer Calixto and Qun Liu. 2017. Incorporating global visual features into attention-based neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 992-1003, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Doubly-attentive decoder for multi-modal neural machine translation",
"authors": [
{
"first": "Iacer",
"middle": [],
"last": "Calixto",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Campbell",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1913--1924",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1175"
]
},
"num": null,
"urls": [],
"raw_text": "Iacer Calixto, Qun Liu, and Nick Campbell. 2017. Doubly-attentive decoder for multi-modal neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 1913-1924. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Multimodal neural machine translation for low-resource language pairs using synthetic data",
"authors": [
{
"first": "Koel",
"middle": [],
"last": "Dutta Chowdhury",
"suffix": ""
},
{
"first": "Mohammed",
"middle": [],
"last": "Hasanuzzaman",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "33--42",
"other_ids": {
"DOI": [
"10.18653/v1/W18-3405"
]
},
"num": null,
"urls": [],
"raw_text": "Koel Dutta Chowdhury, Mohammed Hasanuzzaman, and Qun Liu. 2018. Multimodal neural machine translation for low-resource language pairs using synthetic data. In \".\", pages 33-42.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Automatic evaluation of translation quality for distant language pairs",
"authors": [
{
"first": "Hideki",
"middle": [],
"last": "Isozaki",
"suffix": ""
},
{
"first": "Tsutomu",
"middle": [],
"last": "Hirao",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Katsuhito",
"middle": [],
"last": "Sudoh",
"suffix": ""
},
{
"first": "Hajime",
"middle": [],
"last": "Tsukada",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "944--952",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hideki Isozaki, Tsutomu Hirao, Kevin Duh, Katsuhito Sudoh, and Hajime Tsukada. 2010. Automatic evaluation of translation quality for distant language pairs. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 944-952, Cambridge, MA. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "OpenNMT: Open-source toolkit for neural machine translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yuntian",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL 2017, System Demonstrations",
"volume": "",
"issue": "",
"pages": "67--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67-72, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The IIT Bombay English-Hindi parallel corpus",
"authors": [
{
"first": "Anoop",
"middle": [],
"last": "Kunchukuttan",
"suffix": ""
},
{
"first": "Pratik",
"middle": [],
"last": "Mehta",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anoop Kunchukuttan, Pratik Mehta, and Pushpak Bhattacharyya. 2018. The IIT Bombay English-Hindi parallel corpus. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Neural machine translation: English to Hindi",
"authors": [
{
"first": "Sahinur",
"middle": ["Rahman"],
"last": "Laskar",
"suffix": ""
},
{
"first": "Abinash",
"middle": [],
"last": "Dutta",
"suffix": ""
},
{
"first": "Partha",
"middle": [],
"last": "Pakray",
"suffix": ""
},
{
"first": "Sivaji",
"middle": [],
"last": "Bandyopadhyay",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Conference on Information and Communication Technology",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sahinur Rahman Laskar, Abinash Dutta, Partha Pakray, and Sivaji Bandyopadhyay. 2019a. Neural machine translation: English to Hindi. In 2019 IEEE Conference on Information and Communication Technology, pages 1-6.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Neural machine translation: Hindi-Nepali",
"authors": [
{
"first": "Sahinur",
"middle": ["Rahman"],
"last": "Laskar",
"suffix": ""
},
{
"first": "Partha",
"middle": [],
"last": "Pakray",
"suffix": ""
},
{
"first": "Sivaji",
"middle": [],
"last": "Bandyopadhyay",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "3",
"issue": "",
"pages": "202--207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sahinur Rahman Laskar, Partha Pakray, and Sivaji Bandyopadhyay. 2019b. Neural machine translation: Hindi-Nepali. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 202-207, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "English to Hindi multi-modal neural machine translation and Hindi image captioning",
"authors": [
{
"first": "Sahinur",
"middle": ["Rahman"],
"last": "Laskar",
"suffix": ""
},
{
"first": "Rohit",
"middle": ["Pratap"],
"last": "Singh",
"suffix": ""
},
{
"first": "Partha",
"middle": [],
"last": "Pakray",
"suffix": ""
},
{
"first": "Sivaji",
"middle": [],
"last": "Bandyopadhyay",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 6th Workshop on Asian Translation",
"volume": "",
"issue": "",
"pages": "62--67",
"other_ids": {
"DOI": [
"10.18653/v1/D19-5205"
]
},
"num": null,
"urls": [],
"raw_text": "Sahinur Rahman Laskar, Rohit Pratap Singh, Partha Pakray, and Sivaji Bandyopadhyay. 2019c. English to Hindi multi-modal neural machine translation and Hindi image captioning. In Proceedings of the 6th Workshop on Asian Translation, pages 62-67, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Effective approaches to attention-based neural machine translation",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1412--1421",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1166"
]
},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421, Lisbon, Portugal. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Overview of the 7th workshop on Asian translation",
"authors": [
{
"first": "Toshiaki",
"middle": [],
"last": "Nakazawa",
"suffix": ""
},
{
"first": "Hideki",
"middle": [],
"last": "Nakayama",
"suffix": ""
},
{
"first": "Chenchen",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Raj",
"middle": [],
"last": "Dabre",
"suffix": ""
},
{
"first": "Hideya",
"middle": [],
"last": "Mino",
"suffix": ""
},
{
"first": "Isao",
"middle": [],
"last": "Goto",
"suffix": ""
},
{
"first": "Win",
"middle": [
"Pa"
],
"last": "Pa",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Kunchukuttan",
"suffix": ""
},
{
"first": "Shantipriya",
"middle": [],
"last": "Parida",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 7th Workshop on Asian Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toshiaki Nakazawa, Hideki Nakayama, Chenchen Ding, Raj Dabre, Hideya Mino, Isao Goto, Win Pa Pa, Anoop Kunchukuttan, Shantipriya Parida, Ond\u0159ej Bojar, and Sadao Kurohashi. 2020. Overview of the 7th workshop on Asian translation. In Proceedings of the 7th Workshop on Asian Translation, Suzhou, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Bleu: A method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {
"DOI": [
"10.3115/1073083.1073135"
]
},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02, pages 311-318, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Hindi Visual Genome: A Dataset for Multimodal English-to-Hindi Machine Translation",
"authors": [
{
"first": "Shantipriya",
"middle": [],
"last": "Parida",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Satya",
"middle": ["Ranjan"],
"last": "Dash",
"suffix": ""
}
],
"year": 2019,
"venue": "Computaci\u00f3n y Sistemas",
"volume": "23",
"issue": "4",
"pages": "1499--1505",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shantipriya Parida, Ond\u0159ej Bojar, and Satya Ranjan Dash. 2019. Hindi Visual Genome: A Dataset for Multimodal English-to-Hindi Machine Translation. Computaci\u00f3n y Sistemas, 23(4):1499-1505.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Neural machine translation for Indian languages",
"authors": [
{
"first": "Amarnath",
"middle": [],
"last": "Pathak",
"suffix": ""
},
{
"first": "Partha",
"middle": [],
"last": "Pakray",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Intelligent Systems",
"volume": "",
"issue": "",
"pages": "1--13",
"other_ids": {
"DOI": [
"10.1515/jisys-2018-0065"
]
},
"num": null,
"urls": [],
"raw_text": "Amarnath Pathak and Partha Pakray. 2018. Neural machine translation for Indian languages. Journal of Intelligent Systems, pages 1-13.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "English-Mizo machine translation using neural and statistical approaches",
"authors": [
{
"first": "Amarnath",
"middle": [],
"last": "Pathak",
"suffix": ""
},
{
"first": "Partha",
"middle": [],
"last": "Pakray",
"suffix": ""
},
{
"first": "Jereemi",
"middle": [],
"last": "Bentham",
"suffix": ""
}
],
"year": 2018,
"venue": "Neural Computing and Applications",
"volume": "30",
"issue": "",
"pages": "1--17",
"other_ids": {
"DOI": [
"10.1007/s00521-018-3601-3"
]
},
"num": null,
"urls": [],
"raw_text": "Amarnath Pathak, Partha Pakray, and Jereemi Bentham. 2018. English-Mizo machine translation using neural and statistical approaches. Neural Computing and Applications, 30:1-17.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "WAT2019: English-Hindi translation on Hindi visual genome dataset",
"authors": [
{
"first": "Loitongbam Sanayai",
"middle": [],
"last": "Meetei",
"suffix": ""
},
{
"first": "Thoudam Doren",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Sivaji",
"middle": [],
"last": "Bandyopadhyay",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 6th Workshop on Asian Translation",
"volume": "",
"issue": "",
"pages": "181--188",
"other_ids": {
"DOI": [
"10.18653/v1/D19-5224"
]
},
"num": null,
"urls": [],
"raw_text": "Loitongbam Sanayai Meetei, Thoudam Doren Singh, and Sivaji Bandyopadhyay. 2019. WAT2019: English-Hindi translation on Hindi visual genome dataset. In Proceedings of the 6th Workshop on Asian Translation, pages 181-188, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Improving neural machine translation models with monolingual data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "86--96",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1009"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "SHEF-multimodal: Grounding machine translation on images",
"authors": [
{
"first": "Kashif",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Josiah",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the First Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "660--665",
"other_ids": {
"DOI": [
"10.18653/v1/W16-2363"
]
},
"num": null,
"urls": [],
"raw_text": "Kashif Shah, Josiah Wang, and Lucia Specia. 2016. SHEF-multimodal: Grounding machine translation on images. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, pages 660-665, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V."
],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 27th International Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS'14, pages 3104-3112, Cambridge, MA, USA. MIT Press.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Exploiting source-side monolingual data in neural machine translation",
"authors": [
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1535--1545",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1160"
]
},
"num": null,
"urls": [],
"raw_text": "Jiajun Zhang and Chengqing Zong. 2016. Exploiting source-side monolingual data in neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1535-1545, Austin, Texas. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Examples of our best predicted output on challenge test data.",
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"num": null,
"text": "Examples of our worst predicted output on challenge test data.",
"type_str": "figure",
"uris": null
},
"TABREF1": {
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "",
"num": null
},
"TABREF3": {
"type_str": "table",
"content": "<table><tr><td>Monolingual Data</td><td>Sentences</td><td>Tokens</td></tr><tr><td>English</td><td>107,597,494</td><td>1,832,008,594</td></tr><tr><td>Hindi</td><td>44,949,045</td><td>743,723,731</td></tr></table>",
"html": null,
"text": "The original train parallel text data consists of 28,930 sentences and 28,928 images. We have removed three duplicate sentences (id:2328549, 2385507, 2391240) from the parallel data. Also, we have removed one image (id:2326837) from the image dataset since it is not available in the train parallel data.",
"num": null
},
"TABREF4": {
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "",
"num": null
},
"TABREF6": {
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "Our system's results on English to Hindi multi-modal translation Task.",
"num": null
}
}
}
}