|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:20:59.961002Z" |
|
}, |
|
"title": "IRLAB-DAIICT@DravidianLangTech-EACL2021: Neural Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Raj", |
|
"middle": [], |
|
"last": "Prajapati", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Vedant", |
|
"middle": [], |
|
"last": "Parikh", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Prasenjit", |
|
"middle": [], |
|
"last": "Majumder", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper describes our team's submission of the EACL DravidianLangTech-2021's shared task on Machine Translation of Dravidian languages.We submitted our translations for English to Malayalam , Tamil , and Telugu. The submissions mainly focus on having adequate amount of data backed up by good pre-processing of it to produce quality translations,which includes some custom made rules to remove unnecessary sentences. We conducted several experiments on these models by tweaking the architecture, Byte Pair Encoding (BPE) and other hyperparameters.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper describes our team's submission of the EACL DravidianLangTech-2021's shared task on Machine Translation of Dravidian languages.We submitted our translations for English to Malayalam , Tamil , and Telugu. The submissions mainly focus on having adequate amount of data backed up by good pre-processing of it to produce quality translations,which includes some custom made rules to remove unnecessary sentences. We conducted several experiments on these models by tweaking the architecture, Byte Pair Encoding (BPE) and other hyperparameters.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "We participated in the shared task on Machine Translation in Dravidian languages Dravidian-LangTech, EACL 2021 .The advancement of technology has increased our internet usage and majority of the languages have acclimatised to the growing digital world. However, there are many regional languages which are under-resourced languages and still lack development.One such language family is the Dravidian languages , these languages are majorly spoken in south India ,Nepal, Pakistan, Sri Lanka and South Asia, we have submitted our translations for three language pairs namely:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "1. English-Malayalam 2. English-Tamil", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our implementations uses Transformer architecture and for that we have used OpenNMT-py (Klein et al., 2017) framework and BLEU (Papineni et al., 2002) score as the evaluation metric for our translation system.", |
|
"cite_spans": [ |
|
{ |
|
"start": 87, |
|
"end": 107, |
|
"text": "(Klein et al., 2017)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 127, |
|
"end": 150, |
|
"text": "(Papineni et al., 2002)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "English-Telugu", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Our main focus was on proper pre-processing of the data and often we have seen that improper preprocessing has led to horrendous translations. We have done extensive data pre-processing starting basic cleaning of punctuation symbols to language specific script normalization , apart from this we have added some custom rules as well. Which is followed by tokenization , truecasing and byte pair encoding (BPE).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "English-Telugu", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "For Indic languages espacially Dravidian langauges we often face the problem of Out of Vocabulary word (OOV) which is taken care by word segmentation using BPE ,so we deal with subwords instead of words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "English-Telugu", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "This paper is arranged as follows : First we describe the task undertaken which is followed by in-depth explanation of the model architecture, then next we have described the experimental setup which includes provided data set information , preprocessing steps and clean data statistics. After that , we describe the experiments conducted on different language pairs and analysis of the results produced. At last we draw some conclusions and propose some future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "English-Telugu", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "The task focuses on improvement to access and production of information for speakers of Dravidian languages. Due to low resources available , the research community has not developed much of an interest in this domain , the main focus of this task is to promote research in this area and build machine translation systems for native monolingual speakers of these group of languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task Description", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In the era of digitization there is a large population who are not fully connected to the digital world because of their inability to access the digital world in their native language, which is what this task tries to accomplish.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task Description", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The experiment setup contains the detailed information about our experiments,data and vision.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task Description", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "3 Architecture", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task Description", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Given two parallel sentences (a , b), the NMT model tries to learn the parameters \u03b8 by maximizing the probability P( b | a ; \u03b8 ) . The Encoder generates a mapping from the input sentence to a hidden set of representations h and the decoder generates a target token b t using the previously generated target tokens b k where k<t and source representations h.Both encoder and decoder can be individually RNN/LSTM/GRU models as adopted by (Bahdanau et al., 2014) along with that self attention mechanism explained by (Vaswani et al., 2017) which is a vital combination for NMT systems.", |
|
"cite_spans": [ |
|
{ |
|
"start": 436, |
|
"end": 459, |
|
"text": "(Bahdanau et al., 2014)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 514, |
|
"end": 536, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Encoder Decoder Frame work", |
|
"sec_num": "3.1" |
|
}, |
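
{

"text": "For concreteness, the standard training objective implied by this encoder-decoder factorization can be written as \\theta^{*} = \\arg\\max_{\\theta} \\sum_{(a,b)} \\log P(b \\mid a; \\theta), with P(b \\mid a; \\theta) = \\prod_{t=1}^{|b|} P(b_{t} \\mid b_{<t}, h; \\theta), where b_{<t} denotes the previously generated target tokens and h the encoder representations. This is the usual NMT formulation, stated here for clarity rather than quoted from the paper.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Encoder-Decoder Framework",

"sec_num": "3.1"

},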
|
{ |
|
"text": "Introduction of Transformer models has increased the interests of researchers in NMT , transformers have preserved the idea of an encoder -decoder framework , with an addition of attention mechanism as explained in (Vaswani et al., 2017) it increases its worth . Transformer is one of its kind model which only uses self attention mechanisms to generate intermediate representation of input data . Transformers were initially tested on English-French dataset and were pretty successful achieving state of the art results.Unlike English-French language pair, Indian languages are a bit difficult to model because of certain reasons like richness in morphology, free word ordering. So more often we get poor translations. Figure 1 is the architecture used by almost every recent NMT paper, the biggest challenges in any NMT system are : Missing words , data sparsity . To overcome these challenges subword models were introduced to understand the subwords and how can we utilise them to increase our translation quality. Byte pair encoding is one way to compute subwords , initially introduced as a compression format but has been very efficient in word segmentation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 215, |
|
"end": 237, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 720, |
|
"end": 728, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Transformers", |
|
"sec_num": "3.2" |
|
}, |
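
{

"text": "As a hedged illustration of the encoder-decoder Transformer described above (our own sketch, not the authors' code), the architecture can be instantiated with PyTorch's built-in nn.Transformer; the dimensions below are the base configuration from Vaswani et al. (2017):\nimport torch\nimport torch.nn as nn\n\n# Base Transformer: 6 encoder and 6 decoder layers, 8 attention heads, d_model = 512.\nmodel = nn.Transformer(d_model=512, nhead=8, num_encoder_layers=6, num_decoder_layers=6)\n\n# Toy inputs of shape (sequence length, batch, d_model); a real NMT system would feed\n# embedded, positionally encoded subword tokens instead of random tensors.\nsrc = torch.rand(10, 2, 512)\ntgt = torch.rand(9, 2, 512)\nout = model(src, tgt)  # shape (9, 2, 512): one representation per target position",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Transformers",

"sec_num": "3.2"

},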
|
{ |
|
"text": "We have tried to perform experiments on three different language pairs as mentioned in the introduction section. Below mentioned is the detailed explanation of our approaches.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Data is the key for any Neural Machine Translation system, this is something which is a driving factor.The language pairs are very resource scarce and the official training data (Chakravarthi et al., 2021) is not sufficient, so we took some additional parallel data as well. Table 1 contains the data statistics for each language pair we have taken into consideration.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 275, |
|
"end": 282, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We have taken parallel data for language pairs from different sources. So for English-Tamil pair we have used 1 , similarly for English-Telugu we have used OPUS 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Preprocessing is one of the main steps in any Machine translation system. In our experiment we have perform several steps which are listed below:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "No. of sentences English-Malayalam 380K English-Tamil 169K English-Telugu 110K \u2022 The sentences were normalized for punctuation by using Indic NLP Library 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language Pair", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Some of the sentences consists numerical either only in source sentences or target sentences.We removed these sentences from dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language Pair", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 We also removed sentences which contains of repetition of words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language Pair", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 The words which were not of same language they where transliterated using Indic NLP's transliteration tool.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language Pair", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Some specific special character(s) were also manually removed from the sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language Pair", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Many sentences were not aligned properly , so they were removed directly.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language Pair", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 In many sentences either source or target or both sentences were blank, which were also removed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language Pair", |
|
"sec_num": null |
|
}, |
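
{

"text": "The following is a minimal sketch of the custom filtering rules listed above, written for this description rather than taken from our actual scripts; the regular expressions and the length-ratio threshold are illustrative assumptions:\nimport re\n\ndef keep_pair(src, tgt):\n    # Drop pairs where either side is blank.\n    if not src.strip() or not tgt.strip():\n        return False\n    # Drop pairs where numerals appear on only one side.\n    if bool(re.search(r'\\\\d', src)) != bool(re.search(r'\\\\d', tgt)):\n        return False\n    # Drop sentences with immediate word repetitions (e.g. 'the the').\n    if re.search(r'\\\\b(\\\\w+)( \\\\1\\\\b)+', src.lower()):\n        return False\n    # Crude misalignment check with an assumed length-ratio threshold of 3.\n    if len(src.split()) > 3 * max(1, len(tgt.split())):\n        return False\n    return True\n\n# Hypothetical usage on a list of (source, target) tuples:\n# pairs = [p for p in pairs if keep_pair(*p)]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Language Pair",

"sec_num": null

},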
|
{ |
|
"text": "After all this prepossessing the final data statistics are explained pairwise in Tables 2 and 3 for training and validation.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 81, |
|
"end": 95, |
|
"text": "Tables 2 and 3", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Language Pair", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "An NMT system relies on mapping each word into a vector space, and we have a word vector corresponding to each word in a fixed vocabulary. The pertaining issues of data scarcity and inability of the system to learn high quality representations for rarely occuring words, (Sennrich et al., 2016) proposed to learn subwords and perform translation at a subword level.Subword segmentation is achieved using Byte Pair Encoding (BPE), by using BPE the vocabulary size is reduced drastically therefore we see a reduction in out-of-vocabulary words error, but it adds an overhead post processing step to convert the subwords back to the original word.", |
|
"cite_spans": [ |
|
{ |
|
"start": 271, |
|
"end": 294, |
|
"text": "(Sennrich et al., 2016)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BPE segmentation", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We have used google's SentencePiece 4 to perform word segmentations using BPE in which we kept a uniform vocabulary size between 2K and 3K.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BPE segmentation", |
|
"sec_num": "4.3" |
|
}, |
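
{

"text": "A minimal sketch of this segmentation step with the SentencePiece Python API; this is our own illustration, and the file names are placeholders (the vocabulary size of 2000 matches most of our pairs):\nimport sentencepiece as spm\n\n# Train a BPE model on the cleaned training corpus (one sentence per line).\nspm.SentencePieceTrainer.train(input='train.en', model_prefix='bpe.en', vocab_size=2000, model_type='bpe')\n\n# Segment into subwords, then restore the original text afterwards.\nsp = spm.SentencePieceProcessor(model_file='bpe.en.model')\npieces = sp.encode('unavailability', out_type=str)  # e.g. ['\u2581un', 'avail', 'ability']\ntext = sp.decode(pieces)  # back to 'unavailability'",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "BPE segmentation",

"sec_num": "4.3"

},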
|
{ |
|
"text": "This work used BLEU (Papineni et al., 2002) score as evaluation metrics. A BLEU score compares a machine-translated sentence with the actual reference sentence by matching thier n-grams. The higher the number of n-grams matches, the closer are the two sentences.However, there are several implementations of BLEU available online, we have used multi-bleu 5 script from Mosesdecoder 6 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 20, |
|
"end": 43, |
|
"text": "(Papineni et al., 2002)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Metrics", |
|
"sec_num": "4.4" |
|
}, |
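
{

"text": "To make the metric concrete, below is a small self-contained sketch of sentence-level BLEU (clipped n-gram precisions combined with a brevity penalty); it is for illustration only, and all reported scores come from the multi-bleu script:\nimport math\nfrom collections import Counter\n\ndef bleu(candidate, reference, max_n=4):\n    cand, ref = candidate.split(), reference.split()\n    log_precisions = []\n    for n in range(1, max_n + 1):\n        cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))\n        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))\n        # Clipped precision: a candidate n-gram counts at most as often as it occurs in the reference.\n        overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())\n        log_precisions.append(math.log(max(overlap, 1e-9) / max(1, sum(cand_ngrams.values()))))\n    # Brevity penalty discourages candidates shorter than the reference.\n    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(1, len(cand)))\n    return bp * math.exp(sum(log_precisions) / max_n)\n\nprint(bleu('the cat sat on the mat', 'the cat sat on the mat'))  # 1.0",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Metrics",

"sec_num": "4.4"

},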
|
{ |
|
"text": "For all the experiments we used OpenNMT-py Klein et al. (2017) toolkit. Table 4 describes the model configuration used in this experiment. Table 5 describes the training parameters used by us to model data. We validate the model for every 5000 steps on BLEU and perplexity on validation set. We used 2000 as vocab size for English-Malayalam, English-Tamil, Tamil-Telugu and 2500 for English-Telugu language pairs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 43, |
|
"end": 62, |
|
"text": "Klein et al. (2017)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 72, |
|
"end": 79, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 139, |
|
"end": 146, |
|
"text": "Table 5", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Modelling", |
|
"sec_num": "4.5" |
|
}, |
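
{

"text": "As a hedged sketch of how the reported hyperparameters map onto an OpenNMT-py training configuration: the option names below follow the OpenNMT-py 2.x documentation and may differ in the toolkit version we used, and all file paths are placeholders:\nimport yaml\n\nconfig = {\n    'data': {'corpus_1': {'path_src': 'train.bpe.en', 'path_tgt': 'train.bpe.ml'},\n             'valid': {'path_src': 'valid.bpe.en', 'path_tgt': 'valid.bpe.ml'}},\n    'src_vocab': 'vocab.en', 'tgt_vocab': 'vocab.ml',\n    'encoder_type': 'transformer', 'decoder_type': 'transformer',\n    'optim': 'adam', 'learning_rate': 0.0005, 'warmup_steps': 8000,\n    'label_smoothing': 0.1, 'batch_type': 'tokens', 'batch_size': 12800,\n    'src_seq_length': 80, 'tgt_seq_length': 80, 'valid_steps': 5000,\n}\nwith open('en-ml.yaml', 'w') as f:\n    yaml.safe_dump(config, f)\n# Then (assumed CLI): onmt_build_vocab -config en-ml.yaml && onmt_train -config en-ml.yaml",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Modelling",

"sec_num": "4.5"

},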
|
{ |
|
"text": "Value maximum sentence length 80 learning rate 0.0005 label-smoothing 0.1 optimizer Adam learning rate warmup 8000 training batch size 12800 tokens ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parameters", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We made several models with different parameters and vocabulary sizes, Table 6 and Table 7 shows the results produced by the best models in each language pair for validation and test data respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 71, |
|
"end": 78, |
|
"text": "Table 6", |
|
"ref_id": "TABREF7" |
|
}, |
|
{ |
|
"start": 83, |
|
"end": 90, |
|
"text": "Table 7", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In this paper we describe our submission to Machine Translation for Dravidian Languages (EACL 2021). As the quality of the sentences was not good, we had do a lot of preprocessing steps. So we also added other open source parallel corpora for our training. Our models are performing good on validation data but somewhat good on test data .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Works", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "For future works, we would like to try pivoting methods and transfer learning methods. We would also like to introduce semantic features such Part of Speech Tags(POS), Named Entity Tags(NER), Lemmas etc. We can also use the language models for feature injection processes. Apart from this, we would also like to employ semi-supervised and unsupervised methods into these language pairs. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Works", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "http://ufal.mff.cuni.cz/\u02dcramasamy/ parallel/html/ 2 http://opus.nlpl.eu/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://anoopkunchukuttan.github.io/ indic_nlp_library", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/google/ sentencepiece 5 https://github.com/marian-nmt/ moses-scripts/blob/master/scripts/ generic/multi-bleu.perl 6 https://github.com/moses-smt/ mosesdecoder", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Neural machine translation by jointly learning to align and translate", |
|
"authors": [ |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Findings of the shared task on Offensive Language Identification in Tamil, Malayalam, and Kannada", |
|
"authors": [ |
|
{

"first": "Bharathi Raja",

"middle": [],

"last": "Chakravarthi",

"suffix": ""

},

{

"first": "Ruba",

"middle": [],

"last": "Priyadharshini",

"suffix": ""

},

{

"first": "Navya",

"middle": [],

"last": "Jose",

"suffix": ""

},

{

"first": "Anand",

"middle": ["Kumar"],

"last": "M",

"suffix": ""

},

{

"first": "Thomas",

"middle": [],

"last": "Mandl",

"suffix": ""

},

{

"first": "Prasanna",

"middle": ["Kumar"],

"last": "Kumaresan",

"suffix": ""

},

{

"first": "Rahul",

"middle": [],

"last": "Ponnusamy",

"suffix": ""

},

{

"first": "Hariharan",

"middle": [],

"last": "V",

"suffix": ""

},

{

"first": "Elizabeth",

"middle": [],

"last": "Sherly",

"suffix": ""

},

{

"first": "John",

"middle": ["Philip"],

"last": "McCrae",

"suffix": ""

}
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages. Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bharathi Raja Chakravarthi, Ruba Priyadharshini, Navya Jose, Anand Kumar M, Thomas Mandl, Prasanna Kumar Kumaresan, Rahul Ponnusamy, Hariharan V, Elizabeth Sherly, and John Philip Mc- Crae. 2021. Findings of the shared task on Offen- sive Language Identification in Tamil, Malayalam, and Kannada. In Proceedings of the First Workshop on Speech and Language Technologies for Dravid- ian Languages. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "OpenNMT: Opensource toolkit for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuntian", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "Senellart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Rush", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of ACL 2017, System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "67--72", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senel- lart, and Alexander Rush. 2017. OpenNMT: Open- source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67-72, Vancouver, Canada. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Bleu: a method for automatic evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kishore", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Todd", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Jing", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "311--318", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/1073083.1073135" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Neural machine translation of rare words with subword units", |
|
"authors": [ |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1715--1725", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P16-1162" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "30", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, volume 30. Curran Associates, Inc.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Transformer Architecture from(Vaswani et al., 2017)", |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"type_str": "table", |
|
"text": "", |
|
"num": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Cleaned training data statistics", |
|
"num": null, |
|
"content": "<table><tr><td>Language Pair</td><td>No. of sentences</td></tr><tr><td>English-Malayalam</td><td>2K</td></tr><tr><td>English-Tamil</td><td>1.5K</td></tr><tr><td>English-Telugu</td><td>1.3K</td></tr></table>" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Cleaned validation data statistics", |
|
"num": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"type_str": "table", |
|
"text": "The main model configuration", |
|
"num": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF6": { |
|
"html": null, |
|
"type_str": "table", |
|
"text": "", |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"2\">: Training Parameters</td></tr><tr><td>Language Pair</td><td>BLEU Score</td></tr><tr><td>English-Malayalam</td><td>24.89</td></tr><tr><td>English-Tamil</td><td>7.00</td></tr><tr><td>English-Telugu</td><td>15.79</td></tr></table>" |
|
}, |
|
"TABREF7": { |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Results on Validation data", |
|
"num": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF9": { |
|
"html": null, |
|
"type_str": "table", |
|
"text": "", |
|
"num": null, |
|
"content": "<table/>" |
|
} |
|
} |
|
} |
|
} |