{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:13:35.394057Z"
},
"title": "CoMeT: Towards Code-Mixed Translation Using Parallel Monolingual Sentences",
"authors": [
{
"first": "Devansh",
"middle": [],
"last": "Gautam",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "International Institute of Information Technology Hyderabad \u2021 Indraprastha Institute of Information Technology Delhi \u2020 \u2020 Guru Gobind Singh Indraprastha University",
"location": {
"settlement": "Delhi"
}
},
"email": "[email protected]"
},
{
"first": "Prashant",
"middle": [],
"last": "Kodali",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "International Institute of Information Technology Hyderabad \u2021 Indraprastha Institute of Information Technology Delhi \u2020 \u2020 Guru Gobind Singh Indraprastha University",
"location": {
"settlement": "Delhi"
}
},
"email": "[email protected]"
},
{
"first": "Kshitij",
"middle": [],
"last": "Gupta",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "International Institute of Information Technology Hyderabad \u2021 Indraprastha Institute of Information Technology Delhi \u2020 \u2020 Guru Gobind Singh Indraprastha University",
"location": {
"settlement": "Delhi"
}
},
"email": "[email protected]"
},
{
"first": "Anmol",
"middle": [],
"last": "Goel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "International Institute of Information Technology Hyderabad \u2021 Indraprastha Institute of Information Technology Delhi \u2020 \u2020 Guru Gobind Singh Indraprastha University",
"location": {
"settlement": "Delhi"
}
},
"email": ""
},
{
"first": "Manish",
"middle": [],
"last": "Shrivastava",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "International Institute of Information Technology Hyderabad \u2021 Indraprastha Institute of Information Technology Delhi \u2020 \u2020 Guru Gobind Singh Indraprastha University",
"location": {
"settlement": "Delhi"
}
},
"email": "[email protected]"
},
{
"first": "Ponnurangam",
"middle": [],
"last": "Kumaraguru",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "International Institute of Information Technology Hyderabad \u2021 Indraprastha Institute of Information Technology Delhi \u2020 \u2020 Guru Gobind Singh Indraprastha University",
"location": {
"settlement": "Delhi"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Code-mixed languages are very popular in multilingual societies around the world, yet the resources lag behind to enable robust systems on such languages. A major contributing factor is the informal nature of these languages which makes it difficult to collect codemixed data. In this paper, we propose our system for Task 1 of CACLS 2021 1 to generate a machine translation system for English to Hinglish in a supervised setting. Translating in the given direction can help expand the set of resources for several tasks by translating valuable datasets from high resource languages. We propose to use mBART, a pre-trained multilingual sequence-to-sequence model, and fully utilize the pre-training of the model by transliterating the roman Hindi words in the code-mixed sentences to Devanagri script. We evaluate how expanding the input by concatenating Hindi translations of the English sentences improves mBART's performance. Our system gives a BLEU score of 12.22 on test set. Further, we perform a detailed error analysis of our proposed systems and explore the limitations of the provided dataset and metrics.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Code-mixed languages are very popular in multilingual societies around the world, yet the resources lag behind to enable robust systems on such languages. A major contributing factor is the informal nature of these languages which makes it difficult to collect codemixed data. In this paper, we propose our system for Task 1 of CACLS 2021 1 to generate a machine translation system for English to Hinglish in a supervised setting. Translating in the given direction can help expand the set of resources for several tasks by translating valuable datasets from high resource languages. We propose to use mBART, a pre-trained multilingual sequence-to-sequence model, and fully utilize the pre-training of the model by transliterating the roman Hindi words in the code-mixed sentences to Devanagri script. We evaluate how expanding the input by concatenating Hindi translations of the English sentences improves mBART's performance. Our system gives a BLEU score of 12.22 on test set. Further, we perform a detailed error analysis of our proposed systems and explore the limitations of the provided dataset and metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Code-mixing 2 is the mixing of two or more languages where words from different languages are interleaved with each other in the same conversation. It is a common phenomenon in multilingual societies across the globe. In the last decade, due to the increase in the popularity of social media and various online messaging platforms, there has been an increase in various forms of informal writing, such as emojis, slang, and the usage of code-mixed languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1 https://code-switching.github.io/2021 2 Code-switching is another term that slightly differs in its meaning but is often used interchangeably with code-mixing in the research community. We will also be following the same convention and use both the terms interchangeably in our paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Due to the informal nature of code-mixing, codemixed languages do not follow a prescriptively defined structure, and the structure often varies with the speaker. Nevertheless, some linguistic constraints (Poplack, 1980; Belazi et al., 1994) have been proposed that attempt to determine how languages mix with each other.",
"cite_spans": [
{
"start": 204,
"end": 219,
"text": "(Poplack, 1980;",
"ref_id": "BIBREF28"
},
{
"start": 220,
"end": 240,
"text": "Belazi et al., 1994)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given the increasing use of code-mixed languages by people around the globe, there is a growing need for research related to code-mixed languages. A significant challenge to research is that there are no formal sources like books or news articles in code-mixed languages, and studies have to rely on sources like Twitter or messaging platforms. Another challenge with Hinglish, in particular, is that there is no standard system of transliteration for Hindi words, and individuals provide a rough phonetic transcription of the intended word, which often varies with individuals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we describe our systems for Task 1 of CALCS 2021, which focuses on translating English sentences to English-Hindi code-mixed sentences. The code-mixed language is often called Hinglish. It is commonly used in India because many bilingual speakers use both Hindi and English frequently in their personal and professional lives. The translation systems could be used to augment datasets for various Hinglish tasks by translating datasets from English to Hinglish. An example of a Hinglish sentence from the provided dataset (with small modifications) is shown below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Hinglish Sentence: Bahut strange choice thi ye.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Gloss of Hinglish Sentence: Very [strange choice] was this.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 English Sentence: This was a very strange choice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose to fine-tune mBART for the given task by first transliterating the Hindi words in the target sentences from Roman script to Devanagri script to utilize its pre-training. We further translate the English input to Hindi using pre-existing models and show improvements in the translation using parallel sentences as input to the mBART model. The code for our systems, along with error analysis, is public 3 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main contributions of our work are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We explore the effectiveness of fine-tuning mBART to translate to code-mixed sentences by utilizing the Hindi pre-training of the model in Devanagri script. We further explore the effectiveness of using parallel sentences as input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose a normalized BLEU score metric to better account for the spelling variations in the code-mixed sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Along with BLEU scores, we analyze the code-mixing quality of the reference translations along with the generated outputs and propose that for assessing code-mixed translations, measures of code-mixing should be part of evaluation and analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows. We discuss prior work related to code-mixed language processing, machine translation, and synthetic generation of code-mixed data. We describe our translation systems and compare the performances of our approaches. We discuss the amount of codemixing in the translations predicted by our systems and discuss some issues present in the provided dataset. We conclude with a direction for future work and highlight our main findings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Code-mixing occurs when a speaker switches between two or more languages in the context of the same conversation. It has become popular in multilingual societies with the rise of social media applications and messaging platforms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "In attempts to progress the field of code-mixed data, several code-switching workshops (Diab et al., 2014 Aguilar et al., 2018b) have been organized in notable conferences. Most of the workshops include shared tasks on various of the lan-guage understanding tasks like language identification (Solorio et al., 2014; Molina et al., 2016) , NER (Aguilar et al., 2018a; Rao and Devi, 2016) , IR (Roy et al., 2013; Banerjee et al., 2018) , PoS tagging (Jamatia et al., 2016) , sentiment analysis (Patra et al., 2018; Patwa et al., 2020) , and question answering (Chandu et al., 2018) .",
"cite_spans": [
{
"start": 87,
"end": 105,
"text": "(Diab et al., 2014",
"ref_id": "BIBREF13"
},
{
"start": 106,
"end": 128,
"text": "Aguilar et al., 2018b)",
"ref_id": "BIBREF1"
},
{
"start": 293,
"end": 315,
"text": "(Solorio et al., 2014;",
"ref_id": "BIBREF13"
},
{
"start": 316,
"end": 336,
"text": "Molina et al., 2016)",
"ref_id": "BIBREF23"
},
{
"start": 343,
"end": 366,
"text": "(Aguilar et al., 2018a;",
"ref_id": "BIBREF0"
},
{
"start": 367,
"end": 386,
"text": "Rao and Devi, 2016)",
"ref_id": "BIBREF31"
},
{
"start": 389,
"end": 410,
"text": "IR (Roy et al., 2013;",
"ref_id": null
},
{
"start": 411,
"end": 433,
"text": "Banerjee et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 448,
"end": 470,
"text": "(Jamatia et al., 2016)",
"ref_id": "BIBREF17"
},
{
"start": 492,
"end": 512,
"text": "(Patra et al., 2018;",
"ref_id": "BIBREF26"
},
{
"start": 513,
"end": 532,
"text": "Patwa et al., 2020)",
"ref_id": "BIBREF27"
},
{
"start": 558,
"end": 579,
"text": "(Chandu et al., 2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Although these workshops have gained traction, the field lacks standard datasets to build robust systems. The small size of the datasets is a major factor that limits the scope of code-mixed systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Machine Translation refers to the use of software to translate text from one language to another. In the current state of globalization, translation systems have widespread applications and are consequently an active area of research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Neural machine translation has gained popularity only in the last decade, while earlier works focused on statistical or rule-based approaches. Kalchbrenner and Blunsom (2013) first proposed a DNN model for translation, following which transformerbased approaches (Vaswani et al., 2017) have taken the stage. Some approaches utilize multilingual pre-training (Song et al., 2019; Conneau and Lample, 2019; ; however, these works focus only on monolingual language pairs.",
"cite_spans": [
{
"start": 263,
"end": 285,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF42"
},
{
"start": 358,
"end": 377,
"text": "(Song et al., 2019;",
"ref_id": "BIBREF40"
},
{
"start": 378,
"end": 403,
"text": "Conneau and Lample, 2019;",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Although a large number of multilingual speakers in a highly populous country like India use English-Hindi code-mixed language, only a few studies (Srivastava and Singh, 2020; Singh and Solorio, 2018; Dhar et al., 2018) have attempted the problem. Enabling translation systems in the following pair can bridge the communication gap between several people and further improve the state of globalization in the world.",
"cite_spans": [
{
"start": 147,
"end": 175,
"text": "(Srivastava and Singh, 2020;",
"ref_id": "BIBREF41"
},
{
"start": 176,
"end": 200,
"text": "Singh and Solorio, 2018;",
"ref_id": "BIBREF37"
},
{
"start": 201,
"end": 219,
"text": "Dhar et al., 2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Synthetic code-mixed data generation is a plausible option to build resources for code-mixed language research and is a very similar task to translation. While translation focuses on retaining the meaning of the source sentence, generation is a simpler task requiring focus only on the quality of the synthetic data generated. Pratapa et al. (2018) started by exploring linguistic theories to generate code-mixed data. Later works attempt the problem using several approaches including Generative Adversarial Networks (Chang et al., 2019) , an encoder-decoder framework (Gupta et al., 2020) , pointer-generator networks (Winata et al., 2019), and a two-level variational autoencoder (Samanta et al., 2019) . Recently, Rizvi et al. (2021) released a tool to generate code-mixed data using parallel sentences as input.",
"cite_spans": [
{
"start": 327,
"end": 348,
"text": "Pratapa et al. (2018)",
"ref_id": "BIBREF30"
},
{
"start": 518,
"end": 538,
"text": "(Chang et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 570,
"end": 590,
"text": "(Gupta et al., 2020)",
"ref_id": "BIBREF16"
},
{
"start": 683,
"end": 705,
"text": "(Samanta et al., 2019)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "In this section, we describe our proposed systems for the task, which use mBART to translate English to Hinglish.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "3"
},
{
"text": "We use the dataset provided by the task organizers for our systems, the statistics of the datasets are provided in Table 1 . Since the target sentences in the dataset contain Hindi words in Roman script, we use the CSNLI library 4 (Bhat et al., 2017 as a preprocessing step. It transliterates the Hindi words to Devanagari and also performs text normalization. We use the provided train:validation:test split, which is in the ratio 8:1:1.",
"cite_spans": [
{
"start": 231,
"end": 249,
"text": "(Bhat et al., 2017",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 115,
"end": 122,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "3.1"
},
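{
"text": "To make this preprocessing concrete, the following is a minimal sketch, assuming the indic-trans Transliterator as a stand-in for CSNLI's transliteration component (CSNLI additionally performs normalization); the per-token language mask is a hypothetical input, e.g. from a token-level language identifier.\n\nfrom indictrans import Transliterator\n\n# Roman script (indic-trans treats romanized Hindi as 'eng') -> Devanagari ('hin')\ntrn = Transliterator(source='eng', target='hin')\n\ndef roman_hindi_to_devanagari(tokens, is_hindi):\n    # is_hindi: boolean mask marking the romanized Hindi tokens\n    return [trn.transform(t) if h else t for t, h in zip(tokens, is_hindi)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "3.1"
},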
{
"text": "We fine-tune mBART, which is a multilingual sequence-to-sequence denoising auto-encoder pretrained using the BART objective on large-scale monolingual corpora of 25 languages including English and Hindi. It uses a standard sequence-to-sequence Transformer architecture (Vaswani et al., 2017) , with 12 encoder and decoder layers each and a model dimension of 1024 on 16 heads resulting in \u223c680 million parameters. To train our systems efficiently, we prune mBART's vocabulary by removing the tokens which are not present in the provided dataset or the dataset released by Kunchukuttan et al. (2018) which contains 1,612,709 parallel sentences for English and Hindi. We compare the following two strategies for finetuning mBART:",
"cite_spans": [
{
"start": 269,
"end": 291,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF42"
},
{
"start": 572,
"end": 598,
"text": "Kunchukuttan et al. (2018)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3.2"
},
{
"text": "\u2022 mBART-en: We fine-tune mBART on the train set, feeding the English sentences to the encoder and decoding Hinglish sentences. We use beam search with a beam size of 5 for decoding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3.2"
},
{
"text": "\u2022 mBART-hien: We fine-tune mBART on the train set, feeding the English sentences along with their parallel Hindi translations to the encoder and decoding Hinglish sentences. For feeding the data to the encoder, we concatenate the Hindi translations, followed by a separator token '##', followed by the English sentence. We use the Google NMT system 5 (Wu et al., 2016) to translate the English source sentences to Hindi. We again use beam search with a beam size of 5 for decoding.",
"cite_spans": [
{
"start": 351,
"end": 368,
"text": "(Wu et al., 2016)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3.2"
},
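{
"text": "As a concrete illustration of the two encoder input formats, a minimal sketch; whether the '##' separator is surrounded by spaces is our assumption, and the Hindi translation is obtained separately via the Google NMT system.\n\ndef build_input(english, hindi=None):\n    # mBART-en: the English sentence alone\n    if hindi is None:\n        return english\n    # mBART-hien: Hindi translation, the '##' separator, then the English source\n    return hindi + ' ## ' + english",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3.2"
},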
{
"text": "We transliterate the Hindi words in our predicted translations from Devanagari to Roman. We use the following methods to transliterate a given Devanagari token (we use the first method which provides us with the transliteration):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-Processing",
"sec_num": "3.3"
},
{
"text": "1. When we transliterate the Hindi words in the target sentences from Roman to Devanagari (as discussed in Section 3.1), we store the most frequent Roman transliteration for each Hindi word in the train set. If the current Devanagari token's transliteration is available, we use it directly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-Processing",
"sec_num": "3.3"
},
{
"text": "2. We use the publicly available Dakshina Dataset (Roark et al., 2020) which has 25,000 Hindi words in Devanagari script along with their attested romanizations. If the current Devanagari token is available in the dataset, we use the transliteration with the maximum number of attestations from the dataset.",
"cite_spans": [
{
"start": 50,
"end": 70,
"text": "(Roark et al., 2020)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Post-Processing",
"sec_num": "3.3"
},
{
"text": "3. We use the indic-trans library 6 (Bhat et al., 2015) to transliterate the token from Devanagari to Roman.",
"cite_spans": [
{
"start": 36,
"end": 55,
"text": "(Bhat et al., 2015)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Post-Processing",
"sec_num": "3.3"
},
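{
"text": "A minimal sketch of this fallback cascade; train_lookup and dakshina_lookup are hypothetical dictionaries built offline from the train-set transliterations (method 1) and the Dakshina attestation counts (method 2), with indic-trans as the final fallback (method 3).\n\nfrom indictrans import Transliterator\n\ntrn = Transliterator(source='hin', target='eng')  # Devanagari -> Roman\n\ndef romanize(token, train_lookup, dakshina_lookup):\n    # 1) most frequent Roman spelling of this token in the train set\n    if token in train_lookup:\n        return train_lookup[token]\n    # 2) most-attested romanization in the Dakshina dataset\n    if token in dakshina_lookup:\n        return dakshina_lookup[token]\n    # 3) fall back to indic-trans transliteration\n    return trn.transform(token)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-Processing",
"sec_num": "3.3"
},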
{
"text": "4 Experimental Setup",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-Processing",
"sec_num": "3.3"
},
{
"text": "We use the implementation of mBART available in the fairseq library 7 (Ott et al., 2019) . We finetune on 4 Nvidia GeForce RTX 2080 Ti GPUs with an effective batch size of 1024 tokens per GPU. We use the Adam optimizer ( = 10 \u22126 , \u03b2 1 = 0.9, \u03b2 2 = 0.98) (Kingma and Ba, 2015) with 0.3 dropout, 0.1 attention dropout, 0.2 label smoothing and polynomial decay learning rate scheduling. We fine-tune the model for 10,000 steps with 2,500 warm-up steps and a learning rate of 3 * 10 \u22125 . We validate the models for every epoch and select the best checkpoint based on the best BLEU score on the validation set. To train our systems efficiently, we prune mBART's vocabulary by removing the tokens which are not present in any of the datasets mentioned in the previous section.",
"cite_spans": [
{
"start": 70,
"end": 88,
"text": "(Ott et al., 2019)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "4.1"
},
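{
"text": "A sketch of the implementation described above, assuming fairseq's CLI (flag names follow fairseq's public mBART fine-tuning example); all file paths are hypothetical, and the pre-trained checkpoint additionally expects --langs with its full 25-language list. The first half prunes the dictionary to tokens seen in the task corpora; the second launches fine-tuning with the stated hyperparameters.\n\nimport subprocess\n\n# Prune the mBART dictionary ('<token> <count>' per line) to observed tokens\nseen = set()\nfor path in ['train.spm.en_XX', 'train.spm.hi_IN', 'iitb.spm.en_XX', 'iitb.spm.hi_IN']:\n    with open(path, encoding='utf-8') as f:\n        for line in f:\n            seen.update(line.split())\nwith open('dict.mbart25.txt', encoding='utf-8') as src, open('dict.pruned.txt', 'w', encoding='utf-8') as dst:\n    for line in src:\n        if line.split()[0] in seen:\n            dst.write(line)\n\n# Adam(eps=1e-6, betas=(0.9, 0.98)), polynomial decay, lr 3e-5, 10k updates\n# with 2.5k warm-up, dropout 0.3, attention dropout 0.1, label smoothing 0.2,\n# and 1024 tokens per GPU, as stated above.\nsubprocess.run([\n    'fairseq-train', 'data-bin',\n    '--arch', 'mbart_large', '--task', 'translation_from_pretrained_bart',\n    '--restore-file', 'mbart.cc25/model.pt',\n    '--source-lang', 'en_XX', '--target-lang', 'hi_IN',\n    '--optimizer', 'adam', '--adam-eps', '1e-06', '--adam-betas', '(0.9, 0.98)',\n    '--lr-scheduler', 'polynomial_decay', '--lr', '3e-05',\n    '--warmup-updates', '2500', '--total-num-update', '10000',\n    '--dropout', '0.3', '--attention-dropout', '0.1',\n    '--criterion', 'label_smoothed_cross_entropy', '--label-smoothing', '0.2',\n    '--max-tokens', '1024',\n], check=True)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "4.1"
},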
{
"text": "We use the following two evaluation metrics for comparing our systems:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.2"
},
{
"text": "1. BLEU: The BLEU score (Papineni et al., 2002) is the official metric used in the leader board. We calculate the score using the SacreBLEU library 8 (Post, 2018) after lowercasing and tokenization using the TweetTokenizer available with the NLTK library 9 (Bird et al., 2009) .",
"cite_spans": [
{
"start": 24,
"end": 47,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF25"
},
{
"start": 150,
"end": 162,
"text": "(Post, 2018)",
"ref_id": "BIBREF29"
},
{
"start": 257,
"end": 276,
"text": "(Bird et al., 2009)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.2"
},
{
"text": "2. BLEU normalized : Instead of calculating the BLEU scores on the texts where the Hindi words are transliterated to Roman, we calculate the score on texts where Hindi words are in Devanagari and English words in Roman. We transliterate the target sentences using the CSNLI library and we use the outputs of our system before performing the post-processing (Section 3.3). We again use the SacreBLEU library after lowercasing and tokenization using the TweetTokenizer available with the NLTK library. Word. These spelling variations can cause the BLEU score to be low, even if the correct Hindi word is predicted. Table 2 shows the BLEU scores of the outputs generated by our models described in Section 3.2. In Hinglish sentences, Hindi tokens are often transliterated to roman script, and that results in spelling variation. Since BLEU score compares token/ngram overlap between source and target, lack of canonical spelling for transliterated words, reduces BLEU score and can mischaracterize the quality of translation. To estimate the variety in roman spellings for a Hindi word, we perform normalization by back transliterating the Hindi words in a code-mixed sentence to Devanagari and aggregated the number of different spellings for a single Devanagari token. Figure 1 shows the extent of this phenomena in the dataset released as part of this shared task, and it is evident that there are Hindi words that have multiple roman spellings. Thus, even if the model is generating the correct Devanagari token, the BLEU scores will be understated due to the spelling variation in the transliterated reference sentence. By back-transliterating Hindi tokens to Devanagari, BLEU normalized score thus provides a better representation of translation quality.",
"cite_spans": [],
"ref_spans": [
{
"start": 613,
"end": 620,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 1268,
"end": 1276,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.2"
},
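{
"text": "A minimal sketch of both metrics, assuming SacreBLEU's corpus_bleu and NLTK's TweetTokenizer as described above; back_transliterate() is a hypothetical stand-in for the CSNLI normalization step.\n\nimport sacrebleu\nfrom nltk.tokenize import TweetTokenizer\n\n_tok = TweetTokenizer()\n\ndef _prepare(sentence):\n    # lowercase, then re-join the TweetTokenizer tokens with spaces\n    return ' '.join(_tok.tokenize(sentence.lower()))\n\ndef bleu(hypotheses, references):\n    hyps = [_prepare(h) for h in hypotheses]\n    refs = [[_prepare(r) for r in references]]\n    # tokenization is already applied, so disable SacreBLEU's own tokenizer\n    return sacrebleu.corpus_bleu(hyps, refs, tokenize='none').score\n\ndef bleu_normalized(hyps_devanagari, refs_roman, back_transliterate):\n    # score in Devanagari space: normalize the references instead of\n    # romanizing the system outputs\n    return bleu(hyps_devanagari, [back_transliterate(r) for r in refs_roman])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.2"
},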
{
"text": "Since BLEU score primarily look at n-gram overlaps, it does not provide any insight into the quality of generated output or the errors therein. To Table 3 : Error Analysis of 100 randomly sampled translations from test set for both mBART-en and mBARThien model Figure 2 : Code Mixing Index(CMI) for the generated translation of dev and test set .",
"cite_spans": [],
"ref_spans": [
{
"start": 147,
"end": 154,
"text": "Table 3",
"ref_id": null
},
{
"start": 261,
"end": 269,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error Analysis of Translations of Test set",
"sec_num": "5.1"
},
{
"text": "analyse the quality of translations on the test set, we randomly sampled 100 sentences (> 10% of test set) from the outputs generated by the two models: mBART-en and mBART-hien, and bucketed them into various categories. Table 3 shows the categories of errors and their corresponding frequency. Mistranslated/partially translated category indicates that the generated translation has no or very less semantic resemblance with the source sentence. Sentences, where Multi-Word Expressions/Named Entities are wrongly translated, is the second category. Morphology/Case Marking/Agreement/Syntax Issues category indicates sentences where most of the semantic content is faithfully captured in the generated output. However, the errors on a grammatical level render the output less fluent. mBART-hien makes fewer errors when compared to mBART-en, but that can possibly be attributed to the fact that this model generates a higher number of Hindi tokens while being low in code-mixing quality, and makes lesser grammatical errors. A more extensive and finegrained analysis of these errors will undoubtedly help improve the models' characterization, and we leave it for future improvements. mBART-hien # of English tokens 2,462 (18.5%) 2,519 (18.8%) # of Hindi tokens 9,471 (71.3%) 9,616 (72.0%) # of 'Other' tokens 1,356 (10.2%) 1,233 (9.2%) Table 5 : The number of tokens of each language in our predicted translations. The language tags are based on the script of the token.",
"cite_spans": [],
"ref_spans": [
{
"start": 221,
"end": 228,
"text": "Table 3",
"ref_id": null
},
{
"start": 1335,
"end": 1342,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error Analysis of Translations of Test set",
"sec_num": "5.1"
},
{
"text": "In the code-mixed machine translation setting, it is essential to observe the quality of the code-mixing in the generated translations. While BLEU scores indicate how close we are to the target translation in terms of n-gram overlap, a measure like Code-Mixing Index (CMI) provides us means to assess if the generated output is a mix of two languages or not. Relying on just the BLEU score for assessing translations can misrepresent the quality of translations, as models could generate monolingual outputs and still have a basic BLEU score due to n-gram overlap. If a measure of code mixing intensity, like CMI, is also part of the evaluation regime, we would be able to assess the code mixing quality of generated outputs as well. Figure 2 shows us that the distribution of CMI for outputs generated by our various models (mBART-en and mBART-hien) for both validation and test set. Figure 2 and Table 4 show that the code mixing quality of the two models is is more or less similar across the validation and test set. The high",
"cite_spans": [],
"ref_spans": [
{
"start": 734,
"end": 742,
"text": "Figure 2",
"ref_id": null
},
{
"start": 885,
"end": 893,
"text": "Figure 2",
"ref_id": null
},
{
"start": 898,
"end": 905,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Code Mixing Quality of generated translations",
"sec_num": "5.2"
},
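{
"text": "A minimal sketch of the sentence-level CMI computation, following the standard formulation (cf. Gamb\u00e4ck and Das, 2016) as we apply it; the per-token language tags are a hypothetical input (in our analysis they are based on the token's script).\n\nfrom collections import Counter\n\ndef cmi(lang_tags):\n    # lang_tags: one tag per token, e.g. 'en', 'hi', or 'other'\n    n = len(lang_tags)\n    u = sum(1 for t in lang_tags if t == 'other')  # language-independent tokens\n    if n == 0 or n == u:\n        return 0.0  # no language-tagged tokens -> no code-mixing\n    counts = Counter(t for t in lang_tags if t != 'other')\n    # 100 * (1 - max_language_share); 0 for monolingual, higher = more mixed\n    return 100.0 * (1.0 - max(counts.values()) / (n - u))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Code Mixing Quality of generated translations",
"sec_num": "5.2"
},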
{
"text": "Meaning of target similar to source 759 Meaning of target distored compared to source 141 Total 900 Table 6 : Statistics of the errors in randomly sampled subset of train + dev.",
"cite_spans": [],
"ref_spans": [
{
"start": 100,
"end": 107,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Num of Pairs",
"sec_num": null
},
{
"text": "percentages of sentences having a 0 CMI score shows that in a lot of sentences, the model does not actually perform code-mixing. We also find that even though the outputs generated by the mBARThien model have a higher BLEU normalized score, the average CMI is lower and the percentage of sentences with a 0 CMI score is higher. This suggests that mBART-hien produces sentences with a lower amount of code-mixing. This observation, we believe, can be attributed to the mBART-hien model's propensity to generate a higher percentage of Hindi words, as shown in Table 5 . We also find that in the train set, more than 20% of the sentences have a CMI score of 0. Replacing such samples with sentence pairs with have a higher degree of code mixing will help train the model to generate better code mixed outputs. Further analysis using different measures of code-mixing can provide deeper insights. We leave this for future work.",
"cite_spans": [],
"ref_spans": [
{
"start": 558,
"end": 565,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Num of Pairs",
"sec_num": null
},
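{
"text": "A sketch of the train-set filtering suggested above, reusing the cmi() helper from the previous sketch; lang_tagger is a hypothetical token-level language tagger returning one tag per token.\n\ndef filter_code_mixed(pairs, lang_tagger):\n    # keep only (source, target) pairs whose target is actually code-mixed\n    return [(src, tgt) for src, tgt in pairs if cmi(lang_tagger(tgt)) > 0]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Num of Pairs",
"sec_num": null
},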
{
"text": "We randomly sampled \u223c10% (900 sentence pairs) of the parallel sentences from the train and validation set and annotated them for translation errors. For annotation, we classified the sentence pairs into one of two classes : 1) Error -semantic content in the target is distorted as compared to source; 2) No Error -semantic content of source and target are similar and the target might have minor errors. Minor errors in translations that are attributable to agreement issues, case markers issues, pronoun errors etc were classified into the No Error bucket. Out of the 900 samples that were manually annoatated, 141 samples, i.e 15% of annotated pairs, had targets whose meaning was distorted as compared to source sentence. One such example is shown below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Erroneous Reference Translations in the dataset",
"sec_num": "5.3"
},
{
"text": "\u2022 English Sentence: I think I know the football player it was based on.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Erroneous Reference Translations in the dataset",
"sec_num": "5.3"
},
{
"text": "\u2022 Hinglish Sentence: Muje lagtha ki yeh football player ke baare mein hein.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Erroneous Reference Translations in the dataset",
"sec_num": "5.3"
},
{
"text": "\u2022 Translation of Hinglish Sentence: I thought that this is about football player. Table 6 shows the analysis of these annotated subset. The annotated file with all 900 examples can be found in our code repository. Filtering such erroneous examples from training and validation datasets, and augmenting the dataset with better quality translations will certainly help in improving the translation quality.",
"cite_spans": [],
"ref_spans": [
{
"start": 82,
"end": 89,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Erroneous Reference Translations in the dataset",
"sec_num": "5.3"
},
{
"text": "In this paper, we presented our approaches for English to Hinglish translation using mBART. We analyse our model's outputs and show that the translation quality can be improved by including parallel Hindi translations, along with the English sentences, while translating English sentences to Hinglish. We also discuss the limitations of using BLEU scores for evaluating code-mixed outputs and propose using BLEU normalized -a slightly modified version of BLEU. To understand the codemixing quality of the generated translations, we propose that a code-mixing measure, like CMI, should also be part of the evaluation process. Along with the working models, we have analysed the model's shortcomings by doing error analysis on the outputs generated by the models. Further, we have also presented an analysis on the shared dataset : percentage of sentences in the dataset which are not code-mixed, the erroneous reference translations. Removing such pairs and replacing them with better samples will help improve the translation quality of the models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "As part of future work, we would like to improve our translation quality by augmenting the current dataset with parallel sentences with a higher degree of code-mixing and good reference translations. We would also like to further analyse the nature of code-mixing in the generated outputs, and study the possibility of constraining the models to generated translations with a certain degree of code-mixing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "https://github.com/devanshg27/cm_ translation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/irshadbhat/csnli",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://cloud.google.com/translate 6 https://github.com/libindic/ indic-trans 7 https://github.com/pytorch/fairseq",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/mjpost/sacrebleu 9 https://www.nltk.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Named entity recognition on code-switched data: Overview of the CALCS 2018 shared task",
"authors": [
{
"first": "Gustavo",
"middle": [],
"last": "Aguilar",
"suffix": ""
},
{
"first": "Fahad",
"middle": [],
"last": "Alghamdi",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Soto",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hirschberg",
"suffix": ""
},
{
"first": "Thamar",
"middle": [],
"last": "Solorio",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching",
"volume": "",
"issue": "",
"pages": "138--147",
"other_ids": {
"DOI": [
"10.18653/v1/W18-3219"
]
},
"num": null,
"urls": [],
"raw_text": "Gustavo Aguilar, Fahad AlGhamdi, Victor Soto, Mona Diab, Julia Hirschberg, and Thamar Solorio. 2018a. Named entity recognition on code-switched data: Overview of the CALCS 2018 shared task. In Proceedings of the Third Workshop on Compu- tational Approaches to Linguistic Code-Switching, pages 138-147, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching. Association for Computational Linguistics",
"authors": [
{
"first": "Gustavo",
"middle": [],
"last": "Aguilar",
"suffix": ""
},
{
"first": "Fahad",
"middle": [],
"last": "Alghamdi",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Soto",
"suffix": ""
},
{
"first": "Thamar",
"middle": [],
"last": "Solorio",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gustavo Aguilar, Fahad AlGhamdi, Victor Soto, Thamar Solorio, Mona Diab, and Julia Hirschberg, editors. 2018b. Proceedings of the Third Workshop on Computational Approaches to Linguistic Code- Switching. Association for Computational Linguis- tics, Melbourne, Australia.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Overview of the mixed script information retrieval (msir) at fire-2016",
"authors": [
{
"first": "Somnath",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Kunal",
"middle": [],
"last": "Chakma",
"suffix": ""
},
{
"first": "Sudip",
"middle": [],
"last": "Kumar Naskar",
"suffix": ""
},
{
"first": "Amitava",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
},
{
"first": "Sivaji",
"middle": [],
"last": "Bandyopadhyay",
"suffix": ""
},
{
"first": "Monojit",
"middle": [],
"last": "Choudhury",
"suffix": ""
}
],
"year": 2018,
"venue": "Text Processing",
"volume": "",
"issue": "",
"pages": "39--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Somnath Banerjee, Kunal Chakma, Sudip Kumar Naskar, Amitava Das, Paolo Rosso, Sivaji Bandy- opadhyay, and Monojit Choudhury. 2018. Overview of the mixed script information retrieval (msir) at fire-2016. In Text Processing, pages 39-49, Cham. Springer International Publishing.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Code switching and x-bar theory: The functional head constraint",
"authors": [
{
"first": "Hedi",
"middle": [
"M"
],
"last": "Belazi",
"suffix": ""
},
{
"first": "Edward",
"middle": [
"J"
],
"last": "Rubin",
"suffix": ""
},
{
"first": "Almeida Jacqueline",
"middle": [],
"last": "Toribio",
"suffix": ""
}
],
"year": 1994,
"venue": "Linguistic Inquiry",
"volume": "25",
"issue": "2",
"pages": "221--237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hedi M. Belazi, Edward J. Rubin, and Almeida Jacque- line Toribio. 1994. Code switching and x-bar theory: The functional head constraint. Linguistic Inquiry, 25(2):221-237.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Joining hands: Exploiting monolingual treebanks for parsing of code-mixing data",
"authors": [
{
"first": "Irshad",
"middle": [],
"last": "Bhat",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Riyaz",
"suffix": ""
},
{
"first": "Manish",
"middle": [],
"last": "Bhat",
"suffix": ""
},
{
"first": "Dipti",
"middle": [],
"last": "Shrivastava",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sharma",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "324--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Irshad Bhat, Riyaz A. Bhat, Manish Shrivastava, and Dipti Sharma. 2017. Joining hands: Exploiting monolingual treebanks for parsing of code-mixing data. In Proceedings of the 15th Conference of the European Chapter of the Association for Computa- tional Linguistics: Volume 2, Short Papers, pages 324-330, Valencia, Spain. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Universal Dependency parsing for Hindi-English code-switching",
"authors": [
{
"first": "Irshad",
"middle": [],
"last": "Bhat",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Riyaz",
"suffix": ""
},
{
"first": "Manish",
"middle": [],
"last": "Bhat",
"suffix": ""
},
{
"first": "Dipti",
"middle": [],
"last": "Shrivastava",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sharma",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "987--998",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1090"
]
},
"num": null,
"urls": [],
"raw_text": "Irshad Bhat, Riyaz A. Bhat, Manish Shrivastava, and Dipti Sharma. 2018. Universal Dependency parsing for Hindi-English code-switching. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Pa- pers), pages 987-998, New Orleans, Louisiana. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Iiit-h system submission for fire2014 shared task on transliterated search",
"authors": [
{
"first": "Ahmad",
"middle": [],
"last": "Irshad",
"suffix": ""
},
{
"first": "Vandan",
"middle": [],
"last": "Bhat",
"suffix": ""
},
{
"first": "Aniruddha",
"middle": [],
"last": "Mujadia",
"suffix": ""
},
{
"first": "Riyaz",
"middle": [
"Ahmad"
],
"last": "Tammewar",
"suffix": ""
},
{
"first": "Manish",
"middle": [],
"last": "Bhat",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Shrivastava",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Forum for Information Retrieval Evaluation, FIRE '14",
"volume": "",
"issue": "",
"pages": "48--53",
"other_ids": {
"DOI": [
"10.1145/2824864.2824872"
]
},
"num": null,
"urls": [],
"raw_text": "Irshad Ahmad Bhat, Vandan Mujadia, Aniruddha Tam- mewar, Riyaz Ahmad Bhat, and Manish Shrivastava. 2015. Iiit-h system submission for fire2014 shared task on transliterated search. In Proceedings of the Forum for Information Retrieval Evaluation, FIRE '14, pages 48-53, New York, NY, USA. ACM.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Natural Language Processing with Python",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Ewan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python. O'Reilly Media.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Code-mixed question answering challenge: Crowdsourcing data and techniques",
"authors": [
{
"first": "Khyathi",
"middle": [],
"last": "Chandu",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Loginova",
"suffix": ""
},
{
"first": "Vishal",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Van Genabith",
"suffix": ""
},
{
"first": "G\u00fcnter",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Manoj",
"middle": [],
"last": "Chinnakotla",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Nyberg",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching",
"volume": "",
"issue": "",
"pages": "29--38",
"other_ids": {
"DOI": [
"10.18653/v1/W18-3204"
]
},
"num": null,
"urls": [],
"raw_text": "Khyathi Chandu, Ekaterina Loginova, Vishal Gupta, Josef van Genabith, G\u00fcnter Neumann, Manoj Chin- nakotla, Eric Nyberg, and Alan W. Black. 2018. Code-mixed question answering challenge: Crowd- sourcing data and techniques. In Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching, pages 29-38, Mel- bourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Code-Switching Sentence Generation by Generative Adversarial Networks and its Application to Data Augmentation",
"authors": [
{
"first": "Ching-Ting",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Shun-Po",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "Hung-Yi",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "554--558",
"other_ids": {
"DOI": [
"10.21437/Interspeech.2019-3214"
]
},
"num": null,
"urls": [],
"raw_text": "Ching-Ting Chang, Shun-Po Chuang, and Hung-Yi Lee. 2019. Code-Switching Sentence Generation by Generative Adversarial Networks and its Appli- cation to Data Augmentation. In Proc. Interspeech 2019, pages 554-558.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Crosslingual language model pretraining",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau and Guillaume Lample. 2019. Cross- lingual language model pretraining. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Enabling code-mixed translation: Parallel corpus creation and MT augmentation approach",
"authors": [
{
"first": "Mrinal",
"middle": [],
"last": "Dhar",
"suffix": ""
},
{
"first": "Vaibhav",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Manish",
"middle": [],
"last": "Shrivastava",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Linguistic Resources for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "131--140",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mrinal Dhar, Vaibhav Kumar, and Manish Shrivastava. 2018. Enabling code-mixed translation: Parallel cor- pus creation and MT augmentation approach. In Proceedings of the First Workshop on Linguistic Resources for Natural Language Processing, pages 131-140, Santa Fe, New Mexico, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Julia Hirschberg, and Thamar Solorio",
"authors": [
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
},
{
"first": "Mahmoud",
"middle": [],
"last": "Ghoneim",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Second Workshop on Computational Approaches to Code Switching",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/W16-58"
]
},
"num": null,
"urls": [],
"raw_text": "Mona Diab, Pascale Fung, Mahmoud Ghoneim, Ju- lia Hirschberg, and Thamar Solorio, editors. 2016. Proceedings of the Second Workshop on Computa- tional Approaches to Code Switching. Association for Computational Linguistics, Austin, Texas.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Proceedings of the First Workshop on Computational Approaches to Code Switching. Association for Computational Linguistics",
"authors": [
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hirschberg",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
},
{
"first": "Thamar",
"middle": [],
"last": "Solorio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.3115/v1/W14-39"
]
},
"num": null,
"urls": [],
"raw_text": "Mona Diab, Julia Hirschberg, Pascale Fung, and Thamar Solorio, editors. 2014. Proceedings of the First Workshop on Computational Approaches to Code Switching. Association for Computational Lin- guistics, Doha, Qatar.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Pre-trained language model representations for language generation",
"authors": [
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4052--4059",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1409"
]
},
"num": null,
"urls": [],
"raw_text": "Sergey Edunov, Alexei Baevski, and Michael Auli. 2019. Pre-trained language model representations for language generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4052-4059, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Comparing the level of code-switching in corpora",
"authors": [
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Gamb\u00e4ck",
"suffix": ""
},
{
"first": "Amitava",
"middle": [],
"last": "Das",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "1850--1855",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bj\u00f6rn Gamb\u00e4ck and Amitava Das. 2016. Comparing the level of code-switching in corpora. In Proceed- ings of the Tenth International Conference on Lan- guage Resources and Evaluation (LREC'16), pages 1850-1855, Portoro\u017e, Slovenia. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A semi-supervised approach to generate the code-mixed text using pre-trained encoder and transfer learning",
"authors": [
{
"first": "Deepak",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Asif",
"middle": [],
"last": "Ekbal",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "2267--2280",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.206"
]
},
"num": null,
"urls": [],
"raw_text": "Deepak Gupta, Asif Ekbal, and Pushpak Bhattacharyya. 2020. A semi-supervised approach to generate the code-mixed text using pre-trained encoder and trans- fer learning. In Findings of the Association for Com- putational Linguistics: EMNLP 2020, pages 2267- 2280, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Collecting and annotating indian social media code-mixed corpora",
"authors": [
{
"first": "Anupam",
"middle": [],
"last": "Jamatia",
"suffix": ""
},
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Gamb\u00e4ck",
"suffix": ""
},
{
"first": "Amitava",
"middle": [],
"last": "Das",
"suffix": ""
}
],
"year": 2016,
"venue": "International Conference on Intelligent Text Processing and Computational Linguistics",
"volume": "",
"issue": "",
"pages": "406--417",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anupam Jamatia, Bj\u00f6rn Gamb\u00e4ck, and Amitava Das. 2016. Collecting and annotating indian social me- dia code-mixed corpora. In International Confer- ence on Intelligent Text Processing and Computa- tional Linguistics, pages 406-417. Springer.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Recurrent continuous translation models",
"authors": [
{
"first": "Nal",
"middle": [],
"last": "Kalchbrenner",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1700--1709",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natu- ral Language Processing, pages 1700-1709, Seattle, Washington, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The IIT Bombay English-Hindi parallel corpus",
"authors": [
{
"first": "Anoop",
"middle": [],
"last": "Kunchukuttan",
"suffix": ""
},
{
"first": "Pratik",
"middle": [],
"last": "Mehta",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anoop Kunchukuttan, Pratik Mehta, and Pushpak Bhat- tacharyya. 2018. The IIT Bombay English-Hindi parallel corpus. In Proceedings of the Eleventh In- ternational Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. Euro- pean Language Resources Association (ELRA).",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal ; Abdelrahman Mohamed",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7871--7880",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.703"
]
},
"num": null,
"urls": [],
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre- training for natural language generation, translation, and comprehension. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Multilingual denoising pre-training for neural machine translation",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Xian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Overview for the second shared task on language identification in code-switched data",
"authors": [
{
"first": "Giovanni",
"middle": [],
"last": "Molina",
"suffix": ""
},
{
"first": "Fahad",
"middle": [],
"last": "Alghamdi",
"suffix": ""
},
{
"first": "Mahmoud",
"middle": [],
"last": "Ghoneim",
"suffix": ""
},
{
"first": "Abdelati",
"middle": [],
"last": "Hawwari",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Rey-Villamizar",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Thamar",
"middle": [],
"last": "Solorio",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Second Workshop on Computational Approaches to Code Switching",
"volume": "",
"issue": "",
"pages": "40--49",
"other_ids": {
"DOI": [
"10.18653/v1/W16-5805"
]
},
"num": null,
"urls": [],
"raw_text": "Giovanni Molina, Fahad AlGhamdi, Mahmoud Ghoneim, Abdelati Hawwari, Nicolas Rey- Villamizar, Mona Diab, and Thamar Solorio. 2016. Overview for the second shared task on language identification in code-switched data. In Proceedings of the Second Workshop on Computa- tional Approaches to Code Switching, pages 40-49, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "fairseq: A fast, extensible toolkit for sequence modeling",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)",
"volume": "",
"issue": "",
"pages": "48--53",
"other_ids": {
"DOI": [
"10.18653/v1/N19-4009"
]
},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics (Demonstrations), pages 48-53, Minneapolis, Min- nesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {
"DOI": [
"10.3115/1073083.1073135"
]
},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Sentiment analysis of code-mixed indian languages: An overview of sail_code-mixed shared task @icon",
"authors": [
{
"first": "Dipankar",
"middle": [],
"last": "Braja Gopal Patra",
"suffix": ""
},
{
"first": "Amitava",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Das",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Braja Gopal Patra, Dipankar Das, and Amitava Das. 2018. Sentiment analysis of code-mixed indian lan- guages: An overview of sail_code-mixed shared task @icon-2017.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "SemEval-2020 task 9: Overview of sentiment analysis of code-mixed tweets",
"authors": [
{
"first": "Parth",
"middle": [],
"last": "Patwa",
"suffix": ""
},
{
"first": "Gustavo",
"middle": [],
"last": "Aguilar",
"suffix": ""
},
{
"first": "Sudipta",
"middle": [],
"last": "Kar",
"suffix": ""
},
{
"first": "Suraj",
"middle": [],
"last": "Pandey",
"suffix": ""
},
{
"first": "Srinivas",
"middle": [],
"last": "PYKL",
"suffix": ""
},
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Gamb\u00e4ck",
"suffix": ""
},
{
"first": "Tanmoy",
"middle": [],
"last": "Chakraborty",
"suffix": ""
},
{
"first": "Thamar",
"middle": [],
"last": "Solorio",
"suffix": ""
},
{
"first": "Amitava",
"middle": [],
"last": "Das",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fourteenth Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "774--790",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Parth Patwa, Gustavo Aguilar, Sudipta Kar, Suraj Pandey, Srinivas PYKL, Bj\u00f6rn Gamb\u00e4ck, Tanmoy Chakraborty, Thamar Solorio, and Amitava Das. 2020. SemEval-2020 task 9: Overview of senti- ment analysis of code-mixed tweets. In Proceed- ings of the Fourteenth Workshop on Semantic Eval- uation, pages 774-790, Barcelona (online). Interna- tional Committee for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Sometimes i'll start a sentence in spanish y termino en espa\u00d1ol: toward a typology of code-switching 1",
"authors": [
{
"first": "Shana",
"middle": [],
"last": "Poplack",
"suffix": ""
}
],
"year": 1980,
"venue": "Linguistics",
"volume": "18",
"issue": "",
"pages": "581--618",
"other_ids": {
"DOI": [
"10.1515/ling.1980.18.7-8.581"
]
},
"num": null,
"urls": [],
"raw_text": "Shana Poplack. 1980. Sometimes i'll start a sentence in spanish y termino en espa\u00d1ol: toward a typology of code-switching 1. Linguistics, 18:581-618.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A call for clarity in reporting BLEU scores",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
"volume": "",
"issue": "",
"pages": "186--191",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6319"
]
},
"num": null,
"urls": [],
"raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186- 191, Brussels, Belgium. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Language modeling for code-mixing: The role of linguistic theory based synthetic data",
"authors": [
{
"first": "Adithya",
"middle": [],
"last": "Pratapa",
"suffix": ""
},
{
"first": "Gayatri",
"middle": [],
"last": "Bhat",
"suffix": ""
},
{
"first": "Monojit",
"middle": [],
"last": "Choudhury",
"suffix": ""
},
{
"first": "Sunayana",
"middle": [],
"last": "Sitaram",
"suffix": ""
},
{
"first": "Sandipan",
"middle": [],
"last": "Dandapat",
"suffix": ""
},
{
"first": "Kalika",
"middle": [],
"last": "Bali",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1543--1553",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1143"
]
},
"num": null,
"urls": [],
"raw_text": "Adithya Pratapa, Gayatri Bhat, Monojit Choudhury, Sunayana Sitaram, Sandipan Dandapat, and Kalika Bali. 2018. Language modeling for code-mixing: The role of linguistic theory based synthetic data. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1543-1553, Melbourne, Aus- tralia. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Cmee-il: Code mix entity extraction in indian languages from social media text @ fire 2016 -an overview",
"authors": [
{
"first": "Pattabhi",
"middle": [
"R",
"K"
],
"last": "Rao",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Devi",
"suffix": ""
}
],
"year": 2016,
"venue": "FIRE",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pattabhi R. K. Rao and S. Devi. 2016. Cmee-il: Code mix entity extraction in indian languages from social media text @ fire 2016 -an overview. In FIRE.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "GCM: A toolkit for generating synthetic code-mixed text",
"authors": [
{
"first": "Mohd",
"middle": [],
"last": "Sanad Zaki Rizvi",
"suffix": ""
},
{
"first": "Anirudh",
"middle": [],
"last": "Srinivasan",
"suffix": ""
},
{
"first": "Tanuja",
"middle": [],
"last": "Ganu",
"suffix": ""
},
{
"first": "Monojit",
"middle": [],
"last": "Choudhury",
"suffix": ""
},
{
"first": "Sunayana",
"middle": [],
"last": "Sitaram",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "205--211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohd Sanad Zaki Rizvi, Anirudh Srinivasan, Tanuja Ganu, Monojit Choudhury, and Sunayana Sitaram. 2021. GCM: A toolkit for generating synthetic code-mixed text. In Proceedings of the 16th Con- ference of the European Chapter of the Association for Computational Linguistics: System Demonstra- tions, pages 205-211, Online. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Processing South Asian languages written in the Latin script: the dakshina dataset",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Roark",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Wolf-Sonkin",
"suffix": ""
},
{
"first": "Christo",
"middle": [],
"last": "Kirov",
"suffix": ""
},
{
"first": "Sabrina",
"middle": [
"J"
],
"last": "Mielke",
"suffix": ""
},
{
"first": "Cibu",
"middle": [],
"last": "Johny",
"suffix": ""
},
{
"first": "Isin",
"middle": [],
"last": "Demirsahin",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Hall",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "2413--2423",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian Roark, Lawrence Wolf-Sonkin, Christo Kirov, Sabrina J. Mielke, Cibu Johny, Isin Demirsahin, and Keith Hall. 2020. Processing South Asian languages written in the Latin script: the dakshina dataset. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 2413-2423, Mar- seille, France. European Language Resources Asso- ciation.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Overview of the fire 2013 track on transliterated search",
"authors": [
{
"first": "Rishiraj",
"middle": [],
"last": "Saha Roy",
"suffix": ""
},
{
"first": "Monojit",
"middle": [],
"last": "Choudhury",
"suffix": ""
},
{
"first": "Prasenjit",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "Komal",
"middle": [],
"last": "Agarwal",
"suffix": ""
}
],
"year": 2013,
"venue": "Post-Proceedings of the 4th and 5th Workshops of the Forum for Information Retrieval Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/2701336.2701636"
]
},
"num": null,
"urls": [],
"raw_text": "Rishiraj Saha Roy, Monojit Choudhury, Prasenjit Ma- jumder, and Komal Agarwal. 2013. Overview of the fire 2013 track on transliterated search. In Post- Proceedings of the 4th and 5th Workshops of the Fo- rum for Information Retrieval Evaluation, FIRE '12",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Association for Computing Machinery",
"authors": [],
"year": null,
"venue": "",
"volume": "13",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "& '13, New York, NY, USA. Association for Com- puting Machinery.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "A deep generative model for code switched text",
"authors": [
{
"first": "Bidisha",
"middle": [],
"last": "Samanta",
"suffix": ""
},
{
"first": "Sharmila",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "Hussain",
"middle": [],
"last": "Jagirdar",
"suffix": ""
},
{
"first": "Niloy",
"middle": [],
"last": "Ganguly",
"suffix": ""
},
{
"first": "Soumen",
"middle": [],
"last": "Chakrabarti",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19",
"volume": "",
"issue": "",
"pages": "5175--5181",
"other_ids": {
"DOI": [
"10.24963/ijcai.2019/719"
]
},
"num": null,
"urls": [],
"raw_text": "Bidisha Samanta, Sharmila Reddy, Hussain Jagirdar, Niloy Ganguly, and Soumen Chakrabarti. 2019. A deep generative model for code switched text. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI- 19, pages 5175-5181. International Joint Confer- ences on Artificial Intelligence Organization.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Towards translating mixed-code comments from social media",
"authors": [
{
"first": "Thoudam Doren",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Thamar",
"middle": [],
"last": "Solorio",
"suffix": ""
}
],
"year": 2018,
"venue": "Computational Linguistics and Intelligent Text Processing",
"volume": "",
"issue": "",
"pages": "457--468",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thoudam Doren Singh and Thamar Solorio. 2018. To- wards translating mixed-code comments from social media. In Computational Linguistics and Intelligent Text Processing, pages 457-468, Cham. Springer In- ternational Publishing.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Overview for the first shared task on language identification in code-switched data",
"authors": [
{
"first": "Thamar",
"middle": [],
"last": "Solorio",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Blair",
"suffix": ""
},
{
"first": "Suraj",
"middle": [],
"last": "Maharjan",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Mahmoud",
"middle": [],
"last": "Ghoneim",
"suffix": ""
},
{
"first": "Abdelati",
"middle": [],
"last": "Hawwari",
"suffix": ""
},
{
"first": "Fahad",
"middle": [],
"last": "Alghamdi",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hirschberg",
"suffix": ""
},
{
"first": "Alison",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the First Workshop on Computational Approaches to Code Switching",
"volume": "",
"issue": "",
"pages": "62--72",
"other_ids": {
"DOI": [
"10.3115/v1/W14-3907"
]
},
"num": null,
"urls": [],
"raw_text": "Ghoneim, Abdelati Hawwari, Fahad AlGhamdi, Ju- lia Hirschberg, Alison Chang, and Pascale Fung. 2014. Overview for the first shared task on language identification in code-switched data. In Proceedings of the First Workshop on Computational Approaches to Code Switching, pages 62-72, Doha, Qatar. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "MASS: Masked sequence to sequence pre-training for language generation",
"authors": [
{
"first": "Kaitao",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 36th International Conference on Machine Learning",
"volume": "97",
"issue": "",
"pages": "5926--5936",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie- Yan Liu. 2019. MASS: Masked sequence to se- quence pre-training for language generation. In Pro- ceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Ma- chine Learning Research, pages 5926-5936. PMLR.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "PHINC: A parallel Hinglish social media code-mixed corpus for machine translation",
"authors": [
{
"first": "Vivek",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Mayank",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020)",
"volume": "",
"issue": "",
"pages": "41--49",
"other_ids": {
"DOI": [
"10.18653/v1/2020.wnut-1.7"
]
},
"num": null,
"urls": [],
"raw_text": "Vivek Srivastava and Mayank Singh. 2020. PHINC: A parallel Hinglish social media code-mixed cor- pus for machine translation. In Proceedings of the Sixth Workshop on Noisy User-generated Text (W- NUT 2020), pages 41-49, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, volume 30. Curran Associates, Inc.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Code-switched language models using neural based synthetic data from parallel sentences",
"authors": [
{
"first": "Genta Indra",
"middle": [],
"last": "Winata",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Madotto",
"suffix": ""
},
{
"first": "Chien-Sheng",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "271--280",
"other_ids": {
"DOI": [
"10.18653/v1/K19-1026"
]
},
"num": null,
"urls": [],
"raw_text": "Genta Indra Winata, Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. 2019. Code-switched lan- guage models using neural based synthetic data from parallel sentences. In Proceedings of the 23rd Con- ference on Computational Natural Language Learn- ing (CoNLL), pages 271-280, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Klingner",
"suffix": ""
},
{
"first": "Apurva",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Xiaobing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Yoshikiyo",
"middle": [],
"last": "Kato",
"suffix": ""
},
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "Hideto",
"middle": [],
"last": "Kazawa",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Stevens",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Kurian",
"suffix": ""
},
{
"first": "Nishant",
"middle": [],
"last": "Patil",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Cliff",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Riesa",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Rudnick",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Macduff",
"middle": [],
"last": "Hughes",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2016,
"venue": "Oriol Vinyals",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin John- son, Xiaobing Liu, \u0141ukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rud- nick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Multiple roman spellings for the same Hindi",
"uris": null,
"num": null
},
"TABREF1": {
"text": "The statistics of the dataset. We use the language tags predicted by the CSNLI library 4 . Since the target sentences of the test set are not public, we do not provide its statistics.",
"html": null,
"content": "<table/>",
"type_str": "table",
"num": null
},
"TABREF3": {
"text": "Performance of our systems on the validation set and test set of the dataset. Since the target sentences of the test set are not public, we do not calculate the scores ourselves. We report the BLEU scores of our systems on the test set from the official leader board.",
"html": null,
"content": "<table/>",
"type_str": "table",
"num": null
},
"TABREF6": {
"text": "Avg. CMI scores, Percentage of sentences with CMI = 0. Train Gold and Dev Gold are calculated on the target sentences given in the dataset. Rest are calculated on the outputs generated by our models.",
"html": null,
"content": "<table><tr><td/><td>Validation Set</td><td>Test Set</td></tr><tr><td>mBART-en</td><td/></tr><tr><td colspan=\"3\"># of English tokens 3,282 (25.5%) 3,571 (27.6%)</td></tr><tr><td># of Hindi tokens</td><td colspan=\"2\">8,155 (63.4%) 8,062 (62.3%)</td></tr><tr><td colspan=\"3\"># of 'Other' tokens 1,435 (11.1%) 1,302 (10.1%)</td></tr></table>",
"type_str": "table",
"num": null
}
}
}
}