{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:13:50.821135Z"
},
"title": "Gated Convolutional Sequence to Sequence Based Learning for English-Hingilsh Code-Switched Machine Translation",
"authors": [
{
"first": "Suman",
"middle": [],
"last": "Dowlagar",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Radhika",
"middle": [],
"last": "Mamidi",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Code-Switching is the embedding of linguistic units or phrases from two or more languages in a single sentence. This phenomenon is practiced in all multilingual communities and is prominent in social media. Consequently, there is a growing need to understand codeswitched translations by translating the codeswitched text into one of the standard languages or vice versa. Neural Machine translation is a well-studied research problem in the monolingual text. In this paper, we have used the gated convolutional sequences to sequence networks for English-Hinglish translation. The convolutions in the model help to identify the compositional structure in the sequences more easily. The model relies on gating and performs multiple attention steps at encoder and decoder layers.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Code-Switching is the embedding of linguistic units or phrases from two or more languages in a single sentence. This phenomenon is practiced in all multilingual communities and is prominent in social media. Consequently, there is a growing need to understand codeswitched translations by translating the codeswitched text into one of the standard languages or vice versa. Neural Machine translation is a well-studied research problem in the monolingual text. In this paper, we have used the gated convolutional sequences to sequence networks for English-Hinglish translation. The convolutions in the model help to identify the compositional structure in the sequences more easily. The model relies on gating and performs multiple attention steps at encoder and decoder layers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Language is a social phenomenon. The day-to-day interactions are made possible via language. The adaptive nature of languages and the flexibility to use multiple languages in one text message might help the speakers to communicate efficiently. This form of language interaction/contact is considered to be an essential phenomenon, especially in multilingual societies. In bilingual or multilingual communities, speakers use their native tongue and their second language during interactions. This form of alternation of two or more languages is called Code-Switching (CS) (Muysken et al., 2000) .",
"cite_spans": [
{
"start": 571,
"end": 593,
"text": "(Muysken et al., 2000)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Through the advent of social media, people from around the world can connect and exchange information instantly. Users from a Multilingual community often express their thoughts or opinions on social media by mixing different languages in the same utterance (Dowlagar and Mamidi, 2021) . This mixing or alteration of two or more languages is known as code-mixing or code-switching (Wardhaugh, 2011).",
"cite_spans": [
{
"start": 258,
"end": 285,
"text": "(Dowlagar and Mamidi, 2021)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are no standard grammar rules that are meant to be practiced in the code-switched text. The code-switched data often contain variations of spellings and grammar. The computational processing of code-mixed or code-switched data is challenging due to the nature of the mixing and the presence of non-standard variations in spellings and grammar, and transliteration (Bali et al., 2014) . Because of such linguistic complexities, code-switching poses several unseen difficulties in fundamental fields of natural language processing (NLP) tasks such as language identification, part-of-speech tagging, shallow parsing, Named entity recognition, sentiment analysis, offensive language identification etc.",
"cite_spans": [
{
"start": 370,
"end": 389,
"text": "(Bali et al., 2014)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To encourage research on code-mixing text, the Computational Approaches to Linguistic Code-Switching (CALCS) community has organized several workshops on language identification, Named Entity Recognition (Aguilar et al., 2018) . This task focus on machine translation in the code-switched environment in multiple language combinations and directions 1 . This paper presents a gated convolutional sequence to sequence encoder and decoder models (Gehring et al., 2017) for machine translation on the code-mixed text. We have used the convolutional model because of its sliding window concept to deal with contextual words and the convolutions to extract rich representations.",
"cite_spans": [
{
"start": 204,
"end": 226,
"text": "(Aguilar et al., 2018)",
"ref_id": "BIBREF0"
},
{
"start": 444,
"end": 466,
"text": "(Gehring et al., 2017)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is organized as follows. Section 2 provides related work on the code-switched text for machine translation. Section 3 provides information on the task and dataset. Section 4 describes the proposed work. Section 5 presents the experimental setup and the performance of the model. Section 6 concludes our work. There is relatively less research in the field of the machine translation of the code-switched text, partially due to the relative lack of structured corpora and also potentially because it also poses significant linguistic challenges such as ambiguity in language identification, spelling variations, informal style of writing, Misplaced/skipped punctuation, etc. Nonetheless, some researchers have provided datasets to enable research in code-mixed machine translation, specifically in Hindi-English codeswitched scenario (Srivastava and Singh, 2020; Dhar et al., 2018) . Dhar et al. (2018) presented a parallel corpus of the 13,738 code-mixed Hindi-English sentences and their corresponding human translation in English. In addition, they also provided a translation pipeline built on top of Google Translate. The pipeline fragments the input sentence into multiple chunks and identifies the language of each word in the chunk before feeding it to google-translate. The pipeline gives a BLEU-1 metric of 0.153 on the given English dataset. Dhar et al. (2018) translated the 6,096 code-mixed English-Hindi sentences into English and presented a translation augmentation pipeline. The pipeline is presented as a pre-processing step and can be plugged into any existing MT system. The preprocessed data is then given to translation systems like Moses, Google Neural Machine Translation System (NMTS), and Bing Translator, where the pre-processed data with NMTS has outperformed all the baselines with a BLEU score of 28.4.",
"cite_spans": [
{
"start": 843,
"end": 871,
"text": "(Srivastava and Singh, 2020;",
"ref_id": "BIBREF9"
},
{
"start": 872,
"end": 890,
"text": "Dhar et al., 2018)",
"ref_id": "BIBREF4"
},
{
"start": 893,
"end": 911,
"text": "Dhar et al. (2018)",
"ref_id": "BIBREF4"
},
{
"start": 1362,
"end": 1380,
"text": "Dhar et al. (2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The goal of this task is the machine translation for code-switching settings in multiple language combinations and directions, such as involving English, Hinglish, Spanish, Spanglish, Modern Standard Arabic, and Egyptian Arabic languages. The codemixed dataset is obtained from comments/posts from social media. In this paper, we have focussed on the English-Hinglish dataset. The English-Hinglish code-mixed dataset has 8060 train, 952 dev, and 960 test with source, and target translations. The task is to translate the given English sentence into a code-mixed Hindi-English sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Description",
"sec_num": "3"
},
{
"text": "The examples of the given English-Hinglish translation are given in the table 1. In the first translation, one can see that the Hinglish sentence has a mixture of non-standard variations of words such as komedee(comedy), edavenchar(adventure), eneemeshan(animation), and the second translation exhibits the switching of English and Hindi phrases. In the third translation, the sentence is completely translated to Hindi (with roman script). The fourth translation shows that no translation is followed. The above sentences depict the diversity of the code-mixed translations, thus making the research and translation of the codemixed text a complex task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Description",
"sec_num": "3"
},
{
"text": "This section presents the proposed gated convolutional neural networks with encoder and decoder models for machine translation from English to code-mixed Hinglish text. The encoder model encode the source sentence into a vector and the decoder model takes the encoder information and decodes the given target sentences. The encoded vector is also known context vector. The context vector can be visualized as an abstract representation of the entire input sentence. The vector is decoded by a decoder model that learns to output the target sentence. The context needs to contain all of the information about the source sentence. It can be done by using attention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The proposed work",
"sec_num": "4"
},
{
"text": "Our encoder and decoder attention-based models use convolutions to encode the source sentence and to decode it into the target sentence. The convolutional layer uses filters. These filters have a window size. For example, if a filter has a window size of 3, then it can process three consecutive tokens. This window helps in determining the context. The convolutional layer has many of these filters, where each filter will slide across the entire sequence by looking at all three consecutive tokens at a time. These filters will help extract different features in the given text and aid the machine translation model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The proposed work",
"sec_num": "4"
},
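{
"text": "As a minimal illustration of this windowing (a hypothetical sketch, not the released model code), the following PyTorch snippet applies a single convolutional layer with a window size of 3 over a toy sequence of token embeddings; each output position is computed from three consecutive tokens:\n\nimport torch\nimport torch.nn as nn\n\nbatch, seq_len, emb_dim, n_filters = 2, 7, 16, 32\nx = torch.randn(batch, seq_len, emb_dim)  # toy token embeddings\nconv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)\n# Conv1d expects (batch, channels, time), so move the embedding dim first\nfeatures = conv(x.permute(0, 2, 1)).permute(0, 2, 1)\nprint(features.shape)  # torch.Size([2, 7, 32]) - one feature vector per token position",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The proposed work",
"sec_num": "4"
},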
{
"text": "The description of the encoder and decoder convolutional models is given in the subsequent subsections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The proposed work",
"sec_num": "4"
},
{
"text": "In the encoder model, each token in the source sentence is passed through an embedding layer. As the convolutional model has no recurrent connections, the model has no idea about the order of the tokens within a sequence. So it is necessary to add the positional embedding layer. In the positional embedding, the position of the tokens, including the start of the sequence and the end of the sequence, are encoded. Next, the token and positional embeddings are combined by elementwise sum. The obtained embedding vector contains the token and also its position within the sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder",
"sec_num": "4.1"
},
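{
"text": "A minimal sketch of this embedding step, assuming learned positional embeddings and a fixed maximum length; the names tok_emb and pos_emb are illustrative and not taken from the released code:\n\nimport torch\nimport torch.nn as nn\n\nvocab_size, max_len, emb_dim = 8000, 400, 256\ntok_emb = nn.Embedding(vocab_size, emb_dim)   # token embedding layer\npos_emb = nn.Embedding(max_len, emb_dim)      # positional embedding layer\n\nsrc = torch.randint(0, vocab_size, (2, 10))   # (batch, seq_len) token ids\npos = torch.arange(src.size(1)).unsqueeze(0).expand_as(src)  # position ids per token\nembedded = tok_emb(src) + pos_emb(pos)        # elementwise sum of token and position embeddings",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder",
"sec_num": "4.1"
},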
{
"text": "The given embedding vector is passed through a series of convolutional blocks. We follow the (Gehring et al., 2017) paper to implement the gated convolutional block architecture. It is formulated as,",
"cite_spans": [
{
"start": 93,
"end": 115,
"text": "(Gehring et al., 2017)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder",
"sec_num": "4.1"
},
{
"text": "h l i = v W l h l\u22121 i\u2212k/2 , . . . , h l\u22121 i+k/2 + b l w + h l\u22121 i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder",
"sec_num": "4.1"
},
{
"text": "(1) Where h l i is the output of the i th sequence in l th block. v is the gated gated linear units (GLU) (Dauphin et al., 2016) activation function. h l\u22121 i\u2212k/2 , . . . , h l\u22121 i+k/2 are convolutional transformations of previous layer, W l and b l w are learnable parameters and h l\u22121 i is the residual output from the previous layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder",
"sec_num": "4.1"
},
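{
"text": "A minimal PyTorch sketch of one gated convolutional block following Eq. (1): the convolution produces twice the hidden channels, the GLU activation halves them again, and the block input is added back as a residual connection (the hyperparameter values here are illustrative):\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nclass GatedConvBlock(nn.Module):\n    def __init__(self, hid_dim, kernel_size=3):\n        super().__init__()\n        # 2 * hid_dim output channels: one half for values, one half for gates\n        self.conv = nn.Conv1d(hid_dim, 2 * hid_dim, kernel_size, padding=(kernel_size - 1) // 2)\n\n    def forward(self, h):             # h: (batch, hid_dim, seq_len)\n        conved = self.conv(h)         # (batch, 2 * hid_dim, seq_len)\n        gated = F.glu(conved, dim=1)  # GLU activation v(.) of Eq. (1)\n        return gated + h              # residual connection h^{l-1}_i\n\nblock = GatedConvBlock(hid_dim=64)\nout = block(torch.randn(2, 64, 10))   # shape preserved: (2, 64, 10)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder",
"sec_num": "4.1"
},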
{
"text": "Passing the embedding vector through the convolutional blocks gives the convolved vector for each token in the given source sentence. The embedding vector is added as a residual connection is added to the convolved vector to get a combined vector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoder",
"sec_num": "4.1"
},
{
"text": "The decoder is similar to the encoder, with a few additional paddings to both the main model and the convolutional blocks inside the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "4.2"
},
{
"text": "In the decoder, the encoder convolved and combined outputs are used with attention. Finally, the output of the decoder is passed through a feedforward layer to match the output target dimension in order to get the translated sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "4.2"
},
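{
"text": "A rough sketch of this attention step, assuming dot-product attention in which the decoder states are scored against the encoder's convolved outputs and the encoder's combined outputs serve as values, in the spirit of (Gehring et al., 2017); the tensor names and shapes are illustrative:\n\nimport torch\nimport torch.nn.functional as F\n\ndec_state = torch.randn(2, 5, 256)     # decoder states (batch, trg_len, hid)\nenc_conved = torch.randn(2, 9, 256)    # encoder convolved outputs, used as keys\nenc_combined = torch.randn(2, 9, 256)  # encoder convolved + embedded outputs, used as values\n\nscores = torch.matmul(dec_state, enc_conved.transpose(1, 2))  # (batch, trg_len, src_len)\nattn = F.softmax(scores, dim=-1)            # attention weights over source positions\ncontext = torch.matmul(attn, enc_combined)  # (batch, trg_len, hid) context for decoding",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": "4.2"
},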
{
"text": "Here, we demonstrate the performance of the machine translation systems on the code-mixed text. We experiment with the popular RNN based encoder-decoder machine translation and vanilla transformer models and evaluate their performance on the given English-Hinglish machine translation task. We use BLEU metrics to evaluate system performance (Papineni et al., 2002) .",
"cite_spans": [
{
"start": 342,
"end": 365,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
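{
"text": "BLEU can be computed, for example, with the sacrebleu package; this is one possible choice of implementation and may differ from the shared task's official evaluation script:\n\nimport sacrebleu\n\n# toy hypothesis/reference pair taken from Table 3\nhypotheses = ['han , ye good hai']\nreferences = [['han , ye accha hai']]  # one reference stream aligned with the hypotheses\nbleu = sacrebleu.corpus_bleu(hypotheses, references)\nprint(bleu.score)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},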
{
"text": "RNN based encoder-decoder model (Bahdanau et al., 2014) The model uses RNN blocks to encode and decode the given sequence. The model allows the decoder to look at the entire source sentence at each decoding step by using attention.",
"cite_spans": [
{
"start": 32,
"end": 55,
"text": "(Bahdanau et al., 2014)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline MT models",
"sec_num": "5.1"
},
{
"text": "Transformer We have implemented the Transformer model from the paper Vaswani et al. (2017) . The transformer model uses multi-headed attention, layer normalizations, and feed-forward networks to implement the transformer models. The positional embeddings are used to remember the sequence of the sentence.",
"cite_spans": [
{
"start": 69,
"end": 90,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline MT models",
"sec_num": "5.1"
},
{
"text": "The parameters used to train our neural machine translation model are: the number of epochs used to train the model is 10. The Adam optimizer is used with cross-entropy loss with the gradient clipping of 0.1. The embedding and hidden dimensionalities are set as 256 and 512. The number of encoder and decoder convolutional layers used is 10. The default kernel window of size three is maintained. The dropout is kept at 0.25, and the maximum length used for the positional embeddings is 400. The Pytorch library is used to implement the model and is made publicly accessible 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperparameters and libraries",
"sec_num": "5.2"
},
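{
"text": "A condensed sketch of the training loop implied by these settings (Adam, cross-entropy loss, gradient clipping at 0.1, 10 epochs); the model and data iterator are placeholders rather than the released implementation:\n\nimport torch\nimport torch.nn as nn\n\ndef train(model, iterator, pad_idx, epochs=10, clip=0.1):\n    optimizer = torch.optim.Adam(model.parameters())\n    criterion = nn.CrossEntropyLoss(ignore_index=pad_idx)  # cross-entropy over target tokens\n    for _ in range(epochs):\n        model.train()\n        for src, trg in iterator:                 # batches of source/target token ids\n            optimizer.zero_grad()\n            output = model(src, trg[:, :-1])      # predict each next target token\n            loss = criterion(output.reshape(-1, output.size(-1)), trg[:, 1:].reshape(-1))\n            loss.backward()\n            torch.nn.utils.clip_grad_norm_(model.parameters(), clip)  # gradient clipping at 0.1\n            optimizer.step()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperparameters and libraries",
"sec_num": "5.2"
},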
{
"text": "The results are guven in the table 2. From the table, it is clear that the convolutional model has obtained a better accuracy when compared to the vanilla transformer and encoder-decoder models. The use of convolutions and using the window size helped the convolutional model understand its context. We have even observed that the small length sequences and code-switching points are detected better by a convolutional model. As there is no",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Error analysis",
"sec_num": "5.3"
},
{
"text": "BLEU score (based on validation data) Encoder-Decoder RNN model 1.52 Transformer model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "2.51 proposed model (Conv seq2seq) 2.58 recurrence in the convolutions, the computations are performed faster than the RNN's. Compared to recurrent networks, our convolutional approach allows discovering compositional structure in the sequences more easily since representations. Our model relies on gating and performs multiple attention steps. The vanilla transformer model did not perform well on this task because of the limited dataset used by the model. The vanilla transformer model is designed to be trained on larger datasets. It might be possible that the pre-trained transformer models can achieve better results when the dataset is finetuned on such models. The encoder-decoder RNN model performed worst and were very slow when compared to the other models. During the error analysis we have found that there were repetitions in translations for the long sentences that are incorrectly translated by our model. This often lead to decrease in BLEU metric. The example is given table 3. Where as the short sentences are correctly translated by the given model. This is due to the low dependencies exhibited because of low window size and also due to presence of more Out of vocabulary (OOV) words because of the limited dataset. The improvement in the size of datasets will improve the translation accuracy of the proposed model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "This paper presents the performance of a neural machine translation model for the shared task on codeswitched English-Hinglish translation. The model uses the convolutional sequence to sequence-based neural network architecture to translate the given sequence. The contextual window and the stateof-the-art convolution model helped the model learn better representations from the text and improved the model's performance compared to RNN encoder-decoder and vanilla transformer models. In the future, we wish to use the pre-trained BERT models and their ensembles and also consider other code-mixing factors such as pre-processing of the code-switched text to improve the quality of the code-switched machine translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https\\protect\\leavevmode@ifvmode\\ kern+.2222em\\relax//code-switching. github.io/2021#shared-task",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/suman101112/CMMT. git",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Named entity recognition on code-switched data: Overview of the CALCS 2018 shared task",
"authors": [
{
"first": "Gustavo",
"middle": [],
"last": "Aguilar",
"suffix": ""
},
{
"first": "Fahad",
"middle": [],
"last": "Alghamdi",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Soto",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hirschberg",
"suffix": ""
},
{
"first": "Thamar",
"middle": [],
"last": "Solorio",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching",
"volume": "",
"issue": "",
"pages": "138--147",
"other_ids": {
"DOI": [
"10.18653/v1/W18-3219"
]
},
"num": null,
"urls": [],
"raw_text": "Gustavo Aguilar, Fahad AlGhamdi, Victor Soto, Mona Diab, Julia Hirschberg, and Thamar Solorio. 2018. Named entity recognition on code-switched data: Overview of the CALCS 2018 shared task. In Proceedings of the Third Workshop on Compu- tational Approaches to Linguistic Code-Switching, pages 138-147, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.0473"
]
},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "i am borrowing ya mixing?\" an analysis of english-hindi code mixing in facebook",
"authors": [
{
"first": "Kalika",
"middle": [],
"last": "Bali",
"suffix": ""
},
{
"first": "Jatin",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Monojit",
"middle": [],
"last": "Choudhury",
"suffix": ""
},
{
"first": "Yogarshi",
"middle": [],
"last": "Vyas",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the First Workshop on Computational Approaches to Code Switching",
"volume": "",
"issue": "",
"pages": "116--126",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kalika Bali, Jatin Sharma, Monojit Choudhury, and Yo- garshi Vyas. 2014. \"i am borrowing ya mixing?\" an analysis of english-hindi code mixing in facebook. In Proceedings of the First Workshop on Computa- tional Approaches to Code Switching, pages 116- 126.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Language modeling with gated convolutional networks",
"authors": [
{
"first": "Angela",
"middle": [],
"last": "Yann N Dauphin",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Grangier",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. 2016. Language modeling with gated con- volutional networks. arxiv.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Enabling code-mixed translation: Parallel corpus creation and mt augmentation approach",
"authors": [
{
"first": "Mrinal",
"middle": [],
"last": "Dhar",
"suffix": ""
},
{
"first": "Vaibhav",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Manish",
"middle": [],
"last": "Shrivastava",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Linguistic Resources for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "131--140",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mrinal Dhar, Vaibhav Kumar, and Manish Shrivas- tava. 2018. Enabling code-mixed translation: Par- allel corpus creation and mt augmentation approach. In Proceedings of the First Workshop on Linguistic Resources for Natural Language Processing, pages 131-140.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Graph convolutional networks with multi-headed attention for code-mixed sentiment analysis",
"authors": [
{
"first": "Suman",
"middle": [],
"last": "Dowlagar",
"suffix": ""
},
{
"first": "Radhika",
"middle": [],
"last": "Mamidi",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages",
"volume": "",
"issue": "",
"pages": "65--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suman Dowlagar and Radhika Mamidi. 2021. Graph convolutional networks with multi-headed attention for code-mixed sentiment analysis. In Proceedings of the First Workshop on Speech and Language Tech- nologies for Dravidian Languages, pages 65-72.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Convolutional sequence to sequence learning",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Gehring",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Yarats",
"suffix": ""
},
{
"first": "Yann N",
"middle": [],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1243--1252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning. In International Conference on Machine Learning, pages 1243-1252. PMLR.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Bilingual speech: A typology of code-mixing",
"authors": [
{
"first": "Pieter",
"middle": [],
"last": "Muysken",
"suffix": ""
},
{
"first": "Pieter",
"middle": [],
"last": "Cornelis Muysken",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pieter Muysken, Pieter Cornelis Muysken, et al. 2000. Bilingual speech: A typology of code-mixing. Cam- bridge University Press.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting of the Association for Compu- tational Linguistics, pages 311-318.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Phinc: a parallel hinglish social media code-mixed corpus for machine translation",
"authors": [
{
"first": "Vivek",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Mayank",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.09447"
]
},
"num": null,
"urls": [],
"raw_text": "Vivek Srivastava and Mayank Singh. 2020. Phinc: a parallel hinglish social media code-mixed cor- pus for machine translation. arXiv preprint arXiv:2004.09447.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1706.03762"
]
},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. arXiv preprint arXiv:1706.03762.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "An introduction to sociolinguistics",
"authors": [
{
"first": "Ronald",
"middle": [],
"last": "Wardhaugh",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "28",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronald Wardhaugh. 2011. An introduction to sociolin- guistics, volume 28. John Wiley & Sons.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"type_str": "table",
"text": "",
"content": "<table><tr><td>: Example translations</td></tr></table>",
"num": null,
"html": null
},
"TABREF2": {
"type_str": "table",
"text": "BLEU metrics of the proposed model when compared to baselines.",
"content": "<table><tr><td/><td>Incorrect Translation</td></tr><tr><td>Source Sentence (English)</td><td>Plus how many times are you going to leave your kid behind. I see</td></tr><tr><td/><td>they added another scene. How the robbers got caught and Kevin</td></tr><tr><td/><td>reuniting with his mom and family</td></tr><tr><td colspan=\"2\">Target Translation (Hinglish) aur kitne baar apne bete ko chod doge. maine dekha hai ki woh log</td></tr><tr><td/><td>aur ek scene add kar diya. Robbers kaise pakde gaye aur kevin apni</td></tr><tr><td/><td>mom aur family se mila.</td></tr><tr><td>Translation by our model</td><td>mujhe us kahani pasand hein, jho tho yeh sach ko bahut pasand hein</td></tr><tr><td/><td>jab mein kabhi kabhi ko dekhna nahi hein lekin mein kabhi kabhi ko</td></tr><tr><td/><td>apney kabhi ko nahi nahi hein jab mujhe yakeen hai ki yah kuchh</td></tr><tr><td/><td>daraatee hai ki mujhe kabhi ko apney kabhi ko nahi</td></tr><tr><td/><td>Proper Translation</td></tr><tr><td>Source Sentence (English)</td><td>yes, it is good</td></tr><tr><td colspan=\"2\">Target Translation (Hinglish) han, ye accha hai</td></tr><tr><td>Translation by our model</td><td>han, ye good hai</td></tr></table>",
"num": null,
"html": null
},
"TABREF3": {
"type_str": "table",
"text": "Output of our convolutional sequence to sequence model on English-Hinglish text",
"content": "<table/>",
"num": null,
"html": null
}
}
}
}