{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:07:17.862640Z"
},
"title": "Towards Emotion Recognition in Hindi-English Code-Mixed Data: A Transformer Based Approach",
"authors": [
{
"first": "Anshul",
"middle": [],
"last": "Wadhawan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Netaji Subhas University of Technology",
"location": {
"settlement": "Dwarka, New Delhi"
}
},
"email": "[email protected]"
},
{
"first": "Akshita",
"middle": [],
"last": "Aggarwal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Netaji Subhas University of Technology",
"location": {
"settlement": "Dwarka, New Delhi"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In the last few years, emotion detection in social-media text has become a popular problem due to its wide ranging application in better understanding the consumers, in psychology, in aiding human interaction with computers, designing smart systems etc. Because of the availability of huge amounts of data from social-media, which is regularly used for expressing sentiments and opinions, this problem has garnered great attention. In this paper, we present a Hinglish dataset labelled for emotion detection. We highlight a deep learning based approach for detecting emotions in Hindi-English code mixed tweets, using bilingual word embeddings derived from FastText and Word2Vec approaches, as well as transformer based models. We experiment with various deep learning models, including CNNs, LSTMs, Bi-directional LSTMs (with and without attention), along with transformers like BERT, RoBERTa, and ALBERT. The transformer based BERT model outperforms all other models giving the best performance with an accuracy of 71.43%.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "In the last few years, emotion detection in social-media text has become a popular problem due to its wide ranging application in better understanding the consumers, in psychology, in aiding human interaction with computers, designing smart systems etc. Because of the availability of huge amounts of data from social-media, which is regularly used for expressing sentiments and opinions, this problem has garnered great attention. In this paper, we present a Hinglish dataset labelled for emotion detection. We highlight a deep learning based approach for detecting emotions in Hindi-English code mixed tweets, using bilingual word embeddings derived from FastText and Word2Vec approaches, as well as transformer based models. We experiment with various deep learning models, including CNNs, LSTMs, Bi-directional LSTMs (with and without attention), along with transformers like BERT, RoBERTa, and ALBERT. The transformer based BERT model outperforms all other models giving the best performance with an accuracy of 71.43%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "With the growth of social networking sites like Facebook and Twitter, humans have started communicating online much more than ever before. This leads to the generation of huge amounts of textual data which introduces interesting challenges in the domain of NLP. Automatic detection of various linguistic expressions like irony, hate, sarcasm, aggression etc. is being widely explored. Another problem that has drawn keen interest of NLP researchers is detecting emotions of a human via the texts they have produced. In order to aid humancomputer interaction, determining the emotions via texts becomes significant (Greaves et al., 2009) . There are multiple ways of detecting emotions, including but not limited to speech (Schmitt et al., 2016) , facial expressions recognition (Ko, 2018) and text-based approaches.",
"cite_spans": [
{
"start": 614,
"end": 636,
"text": "(Greaves et al., 2009)",
"ref_id": "BIBREF11"
},
{
"start": 722,
"end": 744,
"text": "(Schmitt et al., 2016)",
"ref_id": "BIBREF29"
},
{
"start": 778,
"end": 788,
"text": "(Ko, 2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Text-based emotion detection is based on the assumption that when a person is happy, they would use positive words. Likewise, when they are angry, frustrated or upset, negative emotions will be depicted by a certain kind of words carrying negative connotation. Contrary to popular belief, emotions are not only significant in human creativity, but they also play an instrumental part in making rational decisions. With the rise of artificial intelligence and increased focus on human-computer interaction, smart machines that will communicate naturally and intelligently with humans, need to recognize their emotions effectively. Affective computing has emerged as an exciting field with recent focus on emotion detection (Picard, 2000) .",
"cite_spans": [
{
"start": 722,
"end": 736,
"text": "(Picard, 2000)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most of earlier work has been carried out on a mono-lingual dataset due to easy availability of a large corpus of annotated data (Chen et al., 2010; Canales and Mart\u00ednez-Barco, 2014) . However, in multilingual cultures, use of multiple languages while exchanging information on social media is quite common. Studies show that as many as 314.9 million people in India are bilingual 1 . This leads to the issue of code-mixing and code-switching especially while communicating on social media platforms like Twitter, Facebook and Reddit (Gupta et al., 2016; M\u00f3nica et al., 2009) . Code-mixing occurs when lexicons and grammatical features of multiple languages are used in the same sentence (Poplack and Walker, 2003; Auer and Wei, 2007; 10, 2009) . The major issue in dealing with code-mixed problems is the absence of sufficiently annotated datasets (Nguyen and Dogruoz, 2013) .",
"cite_spans": [
{
"start": 129,
"end": 148,
"text": "(Chen et al., 2010;",
"ref_id": "BIBREF7"
},
{
"start": 149,
"end": 182,
"text": "Canales and Mart\u00ednez-Barco, 2014)",
"ref_id": "BIBREF6"
},
{
"start": 534,
"end": 554,
"text": "(Gupta et al., 2016;",
"ref_id": "BIBREF12"
},
{
"start": 555,
"end": 575,
"text": "M\u00f3nica et al., 2009)",
"ref_id": "BIBREF23"
},
{
"start": 688,
"end": 714,
"text": "(Poplack and Walker, 2003;",
"ref_id": "BIBREF27"
},
{
"start": 715,
"end": 734,
"text": "Auer and Wei, 2007;",
"ref_id": "BIBREF3"
},
{
"start": 735,
"end": 744,
"text": "10, 2009)",
"ref_id": null
},
{
"start": 849,
"end": 875,
"text": "(Nguyen and Dogruoz, 2013)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present our findings on one of the most challenging problems in the domain of Natural Language Processing, 'emotion detection'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While a lot of work has been carried out for the English language (Aman and Szpakowicz, 2007) , the domain of Hindi-English code-mixed texts remains relatively new and not much explored. We present an annotated Hindi-English code-mixed dataset of 150k tweets for addressing this issue and for enabling future researchers to contribute to this domain. Our aim in this paper is to compare multiple deep learning models including CNNs, LSTMs, Bi-directional LSTMs (with and without attention) with the aid of bilingual self-trained word embeddings on a code-mixed dataset, along with transformer based models like BERT, RoBERTa, and ALBERT.",
"cite_spans": [
{
"start": 66,
"end": 93,
"text": "(Aman and Szpakowicz, 2007)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is organized as follows -Section 2 details about the background and related work in this domain. Section 3 enumerates the methodology we used to perform the experiments including data annotation, pre-processing, embeddings and models used. Section 4 lists down the experimental settings to replicate the work done. Section 5 contains details of the results obtained and section 6 consists of conclusions drawn from the results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "With the huge growth of micro-blogging platforms like Facebook and Twitter, there has been an increased interest in detecting sentiments and emotions in large text corpus (Kouloumpis et al., 2011; Pak and Paroubek, 2010) . In initial work aimed at emotion detection in textual data, experiments have been carried out with text-based emotion classification in fairy tales for kids on the lines of basic emotions (Alm et al., 2005; Ekman, 1992) . In another related work (Liu et al., 2003) , the authors work on real-world knowledge bases highlighting human's natural reactions towards various situations, aimed at identifying emotions at the sentence-level. With the increase of non-native English speakers on social media, sentiment analysis on regional languages and code-mixed data has gained momentum.",
"cite_spans": [
{
"start": 171,
"end": 196,
"text": "(Kouloumpis et al., 2011;",
"ref_id": "BIBREF17"
},
{
"start": 197,
"end": 220,
"text": "Pak and Paroubek, 2010)",
"ref_id": "BIBREF25"
},
{
"start": 411,
"end": 429,
"text": "(Alm et al., 2005;",
"ref_id": "BIBREF1"
},
{
"start": 430,
"end": 442,
"text": "Ekman, 1992)",
"ref_id": "BIBREF9"
},
{
"start": 469,
"end": 487,
"text": "(Liu et al., 2003)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "A pivotal work of sentiment analysis in Hindi corpus was done, where the authors were successful in extracting sentiment lexons from HindiWord-Net and were able to achieve an accuracy of 87% in the domain of movie (Joshi et al., 2010) . In a detailed analysis of data of English-Hindi bilingual users on Facebook, it was shown that 17.2% of all posts, which accounted for around one-fourth of the words in their dataset, revealed some form of code-mixing (Bali et al., 2014) . Sub-word level LSTM architecture for performing sentiment analysis was introduced on Hindi-English code-mixed dataset (Prabhu et al., 2016) . Experiments were conducted with supervised learning (SVM) on a Hindi-English code-mixed corpus for emotion detection (Vijay et al., 2018) .",
"cite_spans": [
{
"start": 214,
"end": 234,
"text": "(Joshi et al., 2010)",
"ref_id": "BIBREF14"
},
{
"start": 455,
"end": 474,
"text": "(Bali et al., 2014)",
"ref_id": "BIBREF4"
},
{
"start": 595,
"end": 616,
"text": "(Prabhu et al., 2016)",
"ref_id": "BIBREF28"
},
{
"start": 736,
"end": 756,
"text": "(Vijay et al., 2018)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "This section describes the series of steps that constitute the methodology proposed, including detailed descriptions of dataset creation, annotation, preprocessing, embeddings, and the deep learning models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Methodology",
"sec_num": "3"
},
{
"text": "The dataset annotated by paper (Vijay et al., 2018) contains 2866 tweets. This data being insufficient for doing any meaningful work with deep learning due to the issue of overfitting, we created a self-annotated class-balanced dataset using TwitterScraper API 2 with relevant search tags like #happy, #sad, #angry, #fear, #disgust, #wow along with some commonly used hindi words to obtain Hinglish data.",
"cite_spans": [
{
"start": 31,
"end": 51,
"text": "(Vijay et al., 2018)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Creation",
"sec_num": "3.1"
},
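{
"text": "As an illustration of this collection step, the following is a minimal sketch (not our exact script) of gathering labelled tweets with the twitterscraper package; the query_tweets call and the .text attribute reflect that package's documented interface, which may vary across versions:\nfrom twitterscraper import query_tweets\n\n# Hashtags double as (noisy) emotion labels; mixing in common Hindi\n# words when querying biases the results towards Hinglish tweets.\nSEARCH_TAGS = ['#happy', '#sad', '#angry', '#fear', '#disgust', '#wow']\n\ndef scrape(tag, limit=5000):\n    # query_tweets returns Tweet objects whose .text field holds the body\n    return [t.text for t in query_tweets(tag, limit=limit)]\n\ncorpus = {tag: scrape(tag) for tag in SEARCH_TAGS}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Creation",
"sec_num": "3.1"
},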
{
"text": "We scraped around 250k tweets for analysis. After dropping the noisy instances containing unknown characters, we filtered the dataset down to a class balanced corpus of 150k tweets. The tweets were annotated with six standard emotions, including, happiness, sadness, anger, fear, disgust and surprise (Ekman, 1992) . The hashtags which were used as searching criteria for scraping the tweets, were used for annotation. All examples which were fetched using hashtags like #yayy were marked to have a positive happiness label. This process was repeated for all the 6 emotions under consideration. The number of tweets per class is depicted in Table 1 . Initially, embeddings were trained on just Hinglish tweets, however, English tweets were added later because of excess of hindi words in Hinglish tweets, causing a lack of English specific words. The labelled emotion detection dataset along with the deep learning classification models is made available online 3 to Examples of some annotated data :",
"cite_spans": [
{
"start": 301,
"end": 314,
"text": "(Ekman, 1992)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 641,
"end": 648,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Dataset Annotation and Analysis",
"sec_num": "3.2"
},
{
"text": "TWEET: Great darshan today at siddhi vinayak along wid aarti!! #happiness @dollydas261 @vishal71182 @vishalbti .. TRANSLATION: Had a great experience in Siddhi Vinayak Temple, along with the ceremonies #happiness EMOTION: Happy TWEET : Jindagi me Maut sabse bada loss nahi, sabse bada loss tab hota hai jab Do logo ke jinda rehte hue unke beech aapsi riste toot jaye.#Sad :( TRANSLATION: The biggest loss in life is not death. The biggest loss is banishment of relations between loved ones even when alive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Annotation and Analysis",
"sec_num": "3.2"
},
{
"text": "EMOTION: Sad",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Annotation and Analysis",
"sec_num": "3.2"
},
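{
"text": "A minimal sketch of the hashtag-based labelling described above; the tag-to-emotion mapping shown here is an illustrative assumption (in practice, further tags like #yayy were added per class):\n# Map seed hashtags to Ekman's six basic emotions\nTAG_TO_EMOTION = {\n    '#happy': 'happiness', '#yayy': 'happiness',\n    '#sad': 'sadness', '#angry': 'anger',\n    '#fear': 'fear', '#disgust': 'disgust', '#wow': 'surprise',\n}\n\ndef label_tweet(text):\n    # A tweet inherits the label of the first seed hashtag it contains\n    for tag, emotion in TAG_TO_EMOTION.items():\n        if tag in text.lower():\n            return emotion\n    return None  # unlabelled tweets are dropped from the corpus",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Annotation and Analysis",
"sec_num": "3.2"
},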
{
"text": "We preprocessed the scraped data by retaining only Hinglish tweets while removing tweets in pure English and Devanagari. We also removed rare words (words having occurrence of less than 10 in the entire dataset), mentions, '#' symbols, URLs, punctuations and keywords used for scraping (like happy, sad, etc.) in order to feed our models with cleaner data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "3.3"
},
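{
"text": "A minimal sketch of these cleaning steps using regular expressions; the keyword list and the rare-word threshold of 10 follow the description above, while the helper names are illustrative:\nimport re\nfrom collections import Counter\n\nKEYWORDS = {'happy', 'sad', 'angry', 'fear', 'disgust', 'wow'}\n\ndef clean(tweet):\n    tweet = re.sub(r'https?://\\S+', ' ', tweet)  # URLs\n    tweet = re.sub(r'@\\w+', ' ', tweet)          # mentions\n    tweet = tweet.replace('#', ' ')              # '#' symbols\n    tweet = re.sub(r'[^\\w\\s]', ' ', tweet)       # punctuation\n    return [w for w in tweet.lower().split() if w not in KEYWORDS]\n\ndef drop_rare(tokenised_tweets, min_count=10):\n    # Remove words occurring fewer than min_count times in the entire dataset\n    counts = Counter(w for t in tokenised_tweets for w in t)\n    return [[w for w in t if counts[w] >= min_count] for t in tokenised_tweets]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "3.3"
},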
{
"text": "This being a multi-label text classification problem, it is required that the text be first converted to a form understandable by the various machine learning algorithms. Word embeddings are numerical representation of words. Specifically, word embeddings are vector representations of words that are learned in an unsupervised manner where their relative similarities are directly related to their se-mantic similarity (Mandelbaum and Shalev, 2016) . Due to unavailability of pre-trained Hindi-English bilingual word embeddings, we created our own embeddings by scrapping 427k Hinglish tweets and 300k English tweets using TwitterScrapper API. Processing was carried out by removing pure English and pure Devanagari tweets along with rare words, hashtags and mentions for obtaining better training results. We chose 2 types of word embeddings for our problem, each of which was trained on two kinds of datasets, after processing (removing hashtags, user mentions, URLs, punctuations and keywords used for scraping), one which had only Hinglish tweets, the other which had a mix of English and Hinglish tweets. In order to get the right co-relation between the words of the two languages, we experimented with a mixture of Hinglish and English tweets. Word2Vec: In this kind of embedding, words are converted to vector representations where words having common context are placed in vicinity amidst the vector space (Mikolov et al., 2013) . Taking a huge corpus of words as input, it generates a vector space with each word being assigned a unique vector value in that space. Since the available Word2Vec embeddings are pre-trained on English datasets only, we trained our embeddings on custom Hindi-English code-mixed dataset, to obtain the desired code-mixed embeddings.",
"cite_spans": [
{
"start": 420,
"end": 449,
"text": "(Mandelbaum and Shalev, 2016)",
"ref_id": "BIBREF21"
},
{
"start": 1416,
"end": 1438,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Creation of Hindi-English Bi-lingual Word Embeddings",
"sec_num": "3.4"
},
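{
"text": "A minimal sketch of training such custom code-mixed Word2Vec embeddings with gensim (assuming gensim >= 4); the hyperparameter values shown are illustrative rather than the exact ones we used:\nfrom gensim.models import Word2Vec\n\n# tokenised_tweets: list of token lists from the cleaned Hinglish (+ English) corpus\nmodel = Word2Vec(\n    sentences=tokenised_tweets,\n    vector_size=300,  # dimensionality of the embedding space\n    window=5,         # context window around each word\n    min_count=10,     # consistent with the rare-word filter above\n    sg=1,             # skip-gram variant\n    workers=4,\n)\nmodel.wv.save('hinglish_w2v.kv')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Creation of Hindi-English Bi-lingual Word Embeddings",
"sec_num": "3.4"
},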
{
"text": "FastText: FastText is a modification to the Word2Vec embeddings that was developed by Facebook in 2016 . FastText assumes a word to be composed of character n-grams and hence breaks a given word into various sub-words (Example: light, li, ig, igt, gt) unlike word2vec which feeds individual words to the network. The training session of a FastText model involves learning of weights for not only the whole word, but also for each of the character n-grams. Unlike Word2vec, it can not only approximate rare words but also give representation of words not present in the corpus, as now it is highly possible that some of their n-grams are present in other words. This is particularly useful for messages on social networks where multiple representations are used for similar words (like pyar, pyaar, pyaaar).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Creation of Hindi-English Bi-lingual Word Embeddings",
"sec_num": "3.4"
},
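{
"text": "Because FastText composes a word's vector from its character n-grams, it can produce embeddings even for unseen spelling variants; a minimal gensim sketch (again assuming gensim >= 4, with illustrative hyperparameters):\nfrom gensim.models import FastText\n\nmodel = FastText(\n    sentences=tokenised_tweets,\n    vector_size=300,\n    window=5,\n    min_count=10,\n    min_n=2, max_n=5,  # character n-gram lengths used for sub-words\n)\n# Spelling variants common on social media map to nearby vectors,\n# even if a variant never occurred in the training corpus.\nprint(model.wv.similarity('pyar', 'pyaar'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Creation of Hindi-English Bi-lingual Word Embeddings",
"sec_num": "3.4"
},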
{
"text": "We introduce seven deep learning based models for solving the task of emotion detection in textual We trained FastText and Word2vec word representations on two types of data, one which solely consisted of hinglish text, the other which was a mixture of hinglish and english text. These embeddings were then used to predict the emotion of the tweet by serving as input to all the proposed models except the transformer based models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deep Learning Models",
"sec_num": "3.5"
},
{
"text": "CNNs have been proven to be successful for multi class classification problems, where images are provided as inputs (Ezat et al., 2020) . In our case, word embeddings are given as input, from which features are extracted and final classification is performed. The network architecture we employed has been depicted in Fig. 2 . Embedding layer serves as the first layer, which is used to transfer the word vector representations of select words in the tweet under consideration, to the model structure. Four convolutional layers in parallel receive the output of the embedding layer, followed by a global max pooling layer, upon which dropout is applied. Three dense fully connected layers follow in which the last layer is responsible for classification. Application of dropout led to better convergence and decreased difference in the training and validation accuracies.",
"cite_spans": [
{
"start": 116,
"end": 135,
"text": "(Ezat et al., 2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 318,
"end": 324,
"text": "Fig. 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Convolutional Neural Networks (CNNs)",
"sec_num": "3.5.1"
},
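{
"text": "A minimal Keras sketch of the architecture described above; MAX_LEN, VOCAB_SIZE and embedding_matrix are assumed to come from the preprocessing and embedding steps, and the layer widths and kernel sizes are illustrative assumptions:\nfrom tensorflow.keras import layers, Model\n\ninp = layers.Input(shape=(MAX_LEN,))\nemb = layers.Embedding(VOCAB_SIZE, 300, weights=[embedding_matrix], trainable=False)(inp)\n# Four parallel convolutional branches over the embedded tweet\nbranches = []\nfor k in (2, 3, 4, 5):\n    c = layers.Conv1D(128, kernel_size=k, activation='relu')(emb)\n    branches.append(layers.GlobalMaxPooling1D()(c))\nx = layers.Dropout(0.5)(layers.Concatenate()(branches))\nx = layers.Dense(256, activation='relu')(x)\nx = layers.Dense(64, activation='relu')(x)\nout = layers.Dense(6, activation='sigmoid')(x)  # one unit per emotion class\nmodel = Model(inp, out)\nmodel.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional Neural Networks (CNNs)",
"sec_num": "3.5.1"
},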
{
"text": "The context in which a word is used, determines the meaning of the word, which in-turn may play a significant role in determining the overall sentiment of the sentence. For example, Sentence 1 : There are multiple kinds of human beings in this huge world.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Neural Networks (RNNs)",
"sec_num": "3.5.2"
},
{
"text": "Sentence 2 : She is very generous and kindhearted. The context in which the word 'kind' is used, is different in both the sentences, thus the word carries different meanings in different scenarios. RNNs are helpful in modelling the context of a word, by having unique ways to capture the context of words using the surrounding words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Neural Networks (RNNs)",
"sec_num": "3.5.2"
},
{
"text": "Long Short-Term Memory (LSTM): LSTMs have been shown to capture the relevant context for words (Tran et al., 2017) as well as address the issue of vanishing gradients (Hochreiter and Schmidhuber, 1997) . The words which precede a particular word, determine the context of the word. LSTMs inculcate memory cells in the network which serve to record the meaning of words that occurred previously. In order to model this scenario, an LSTM based network is constructed. In our model, an embedding layer, followed by an LSTM layer, further followed by 2 fully connected layers constitute the network. The last layer, consisting of 6 neurons, is responsible for the classification of the tweet's emotion.",
"cite_spans": [
{
"start": 95,
"end": 114,
"text": "(Tran et al., 2017)",
"ref_id": "BIBREF30"
},
{
"start": 167,
"end": 201,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Neural Networks (RNNs)",
"sec_num": "3.5.2"
},
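{
"text": "A minimal Keras sketch of this LSTM network, reusing the embedding setup from the CNN sketch; the unit counts are illustrative:\nfrom tensorflow.keras import layers, Model\n\ninp = layers.Input(shape=(MAX_LEN,))\nx = layers.Embedding(VOCAB_SIZE, 300, weights=[embedding_matrix], trainable=False)(inp)\nx = layers.LSTM(128)(x)  # memory cells summarise the preceding words\nx = layers.Dense(64, activation='relu')(x)\nout = layers.Dense(6, activation='sigmoid')(x)  # 6 neurons, one per emotion\nmodel = Model(inp, out)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Neural Networks (RNNs)",
"sec_num": "3.5.2"
},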
{
"text": "Bi-directional LSTM: Bi-directional LSTMs have been proven successful in capturing the context for text classification tasks (Wang et al., 2016) . The words that precede as well as follow a particular word, determine the context of the word under consideration. Thus, memory cells must exist in both directions in order to maintain the track of words that surround a particular word. This is achieved by appending 2 LSTM layers to the embedding layer, whose concatenated output ( \u2212 \u2192 h T , \u2190 \u2212 h 1 ) is flattened and fed to 2 fully connected (FC) layers. The last layer carries out classification, as is done in all other proposed models.",
"cite_spans": [
{
"start": 125,
"end": 144,
"text": "(Wang et al., 2016)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Neural Networks (RNNs)",
"sec_num": "3.5.2"
},
{
"text": "Attention based Bi-directional LSTM: The technique of attention is based on learning the words which contribute the most towards the overall emotion of the sentence, and filtering out the words which contribute the least, i.e. noise. Attention based BiLSTM differs in the manner of concatenation of states, which is fed to the fully connected (FC) layers. Apart from using concate-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Neural Networks (RNNs)",
"sec_num": "3.5.2"
},
{
"text": "nated { \u2212 \u2192 h T , \u2190 \u2212 h 1 } ( \u2212 \u2192 h T denoting forward directed final hidden state representation,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Neural Networks (RNNs)",
"sec_num": "3.5.2"
},
{
"text": "\u2190 \u2212 h 1 denoting backward directed first hidden state representation) as inputs to the fully connected layers, attention based BiL-STMs also take into consideration the weighted summation of all time steps ( \u2212 \u2192 h t , \u2190 \u2212 h t ). Hence, all hidden states serve as inputs to the 2 dense fully connected layers, out of which the final layer performs the classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Neural Networks (RNNs)",
"sec_num": "3.5.2"
},
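{
"text": "A minimal Keras sketch of such an attention based BiLSTM; the attention layer below is a simple learned softmax-weighted sum over all hidden states, an illustrative stand-in for the exact formulation:\nimport tensorflow as tf\nfrom tensorflow.keras import layers, Model\n\nclass Attention(layers.Layer):\n    # Scores every time step, then returns the softmax-weighted sum of states\n    def build(self, input_shape):\n        self.w = self.add_weight(shape=(input_shape[-1], 1), initializer='glorot_uniform')\n    def call(self, h):  # h: (batch, time, features)\n        scores = tf.nn.softmax(tf.matmul(h, self.w), axis=1)\n        return tf.reduce_sum(scores * h, axis=1)\n\ninp = layers.Input(shape=(MAX_LEN,))\nx = layers.Embedding(VOCAB_SIZE, 300, weights=[embedding_matrix], trainable=False)(inp)\nx = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)  # all hidden states\nx = Attention()(x)\nx = layers.Dense(64, activation='relu')(x)\nout = layers.Dense(6, activation='sigmoid')(x)\nmodel = Model(inp, out)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Neural Networks (RNNs)",
"sec_num": "3.5.2"
},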
{
"text": "BERT (bert-base-uncased): (Devlin et al., 2018) Being a bidirectional transformer based model pretrained on a large Wikipedia and Toronto Book Corpus, BERT makes use of a combination of objectives which are meant for the next sentence prediction and masked language modeling tasks. RoBERTa (roberta-base): (Liu et al., 2019) With some modifications to the parameters of BERT, i.e. changing key hyperparameters, removing the next sentence prediction objective, and training with larger learning rate and batch size values, RoBERTa is built on top of BERT. ALBERT (albert-base-v2): (Lan et al., 2019) Trying to increase the training speed and decrease the memory utilization of BERT, ALBERT is another variation of BERT which repeats layers which are split among groups and splits the embedding matrix into two. ",
"cite_spans": [
{
"start": 26,
"end": 47,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF8"
},
{
"start": 306,
"end": 324,
"text": "(Liu et al., 2019)",
"ref_id": null
},
{
"start": 580,
"end": 598,
"text": "(Lan et al., 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer Based Models",
"sec_num": "3.5.3"
},
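{
"text": "A minimal sketch of loading one of these checkpoints for six-way emotion classification with the Hugging Face transformers library (roberta-base and albert-base-v2 are swapped in analogously); fine-tuning then proceeds with a standard training loop or the Trainer API:\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification\n\ntokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')\nmodel = AutoModelForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=6)\n\nbatch = tokenizer(['Great darshan today at siddhi vinayak'], padding=True, truncation=True, return_tensors='pt')\noutputs = model(**batch)  # outputs.logits has shape (batch_size, 6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer Based Models",
"sec_num": "3.5.3"
},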
{
"text": "A split of ten percent was made on the total training dataset and the model was trained for a total of 20 epochs. At each epoch, we saved the model checkpoints, and particularly used that checkpoint which was saved before the model begins to overfit to calculate the metrics on the ten percent test dataset split.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4"
},
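{
"text": "A minimal Keras sketch of this checkpointing protocol (the 10% split and 20 epochs follow the description above; the file naming is illustrative):\nfrom tensorflow.keras.callbacks import ModelCheckpoint\n\n# Save a checkpoint every epoch; the one saved just before overfitting\n# sets in (rising validation loss) is later used for test-set metrics.\nckpt = ModelCheckpoint('model_epoch_{epoch:02d}.h5', save_freq='epoch')\nmodel.fit(X_train, y_train, validation_split=0.1, epochs=20, callbacks=[ckpt])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4"
},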
{
"text": "Different hyper parameters are involved for the task of training embeddings as well as the models. After working with several optimizers, loss functions and activation functions, the adam optimizer with categorical cross entropy loss function produced the best results for all stated deep learning models. We used relu activation function for all the layers except the output layer, which has sigmoid activation function. We evaluated the performance of CNN models with different values for kernel sizes, activation functions, number of kernels, dropouts, strides and optimizers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4"
},
{
"text": "We use pre-trained models like bert-baseuncased, roberta-base, and albert-base-v2, to fine tune the transformer based models on our dataset. Hugging-face 4 API was used to fine tune all the transformer based models. Table 2 denotes the hyperparameter combinations used in the training of embeddings, CNN, RNN and fine tuning transformer based models.",
"cite_spans": [],
"ref_spans": [
{
"start": 216,
"end": 223,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4"
},
{
"text": "Using all features, (Vijay et al., 2018) show that the baseline model i.e. the SVM classifier with RBF kernel, presented an accuracy of 58.2%, when dealing with the same emotion labels as we deal with in this paper. In the domain of emotion detection in Hindi-English code-mixed data, as far as we know, we are the first to compare transformer based models and word representations. In table 4, the results of CNN, RNN based models for both Word2Vec and FastText based word representations, along with those of transformer based models have been presented. All proposed deep learning models yield better results than state-of-the-art models which deal with these six emotion labels. The best accuracy of 71.43% is achieved with BERT, as expected. All models utilizing embeddings trained on Hinglish plus English data, perform better than those using embeddings trained on Hinglish data. One conceivable reason for this observation can be the extra coverage of semantics and correlation between the word vectors of English information, which can be utilized for code blended Hinglish information, hence serving as prior data for Hinglish embeddings information. The method works practically equivalent to a knowledge transfer step in which embeddings for English information are utilized as earlier information for embeddings of Hinglish information. Additionally, increased accuracies of all models in case of Fast-Text embeddings, as compared to the Word2Vec 4 https://huggingface.co/transformers/ embeddings, is observed. One possible reason for this could be the existence of code blended information where FastText enables the coverage of code-mixed vocabulary as against word2Vec which works only on the basis of overall context of word.",
"cite_spans": [
{
"start": 20,
"end": 40,
"text": "(Vijay et al., 2018)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "The transformer based BERT model clearly outperforms both CNN and RNN based models, majorly because of its profound efficiency and its ability to process the input out of order. The major obstacles in the task of detecting emotions in Hindi-English code-mixed data are handling the linguistic complexities associated with code-mixed data and absence of clean data. Thus, we require even more class-specific cleaner data, in order to reduce the effect of noise, which comes from spelling mistakes, stemming words and the presence of multiple contexts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "As recent years have seen the rise in usage of social media for open expression of stance and opinions, sentiment analysis and opinion mining have gained attention as problems and become primary areas of research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "In this paper, we present an openly available class-balanced dataset of Hindi-English codemixed data, consisting of tweets belonging to 6 types of emotions, which are happiness, sadness, anger, surprise, fear and disgust. We contrast the performance of two types of word representations, both trained on relevant scraped tweets from scratch. We develop two different types of embeddings, one which is trained on solely Hinglish tweets, the other which is trained on a mix of Hinglish and English tweets, and present the performance in both cases. Also, we present deep learning based models including CNNs, RNNs, and transformers, where BERT performs the best among all.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "As future scope, the problem can be solved to obtain even better results by carrying out a comparison of MUSE aligned vectors, pre-aligned Fast-Text word embeddings and language specific transformer based word embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://en.wikipedia.org/wiki/ Multilingualism_in_India",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/taspinar/ twitterscraper 3 https://github.com/anshulwadhawan/ emotion_detection",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The Cambridge Handbook of Linguistic Codeswitching. Cambridge Handbooks in Language and Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1017/CBO9780511576331"
]
},
"num": null,
"urls": [],
"raw_text": "The Cambridge Handbook of Linguistic Code- switching. Cambridge Handbooks in Language and Linguistics. Cambridge University Press.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Emotions from text: Machine learning for text-based emotion prediction",
"authors": [
{
"first": "Cecilia",
"middle": [],
"last": "Alm",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Sproat",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "579--586",
"other_ids": {
"DOI": [
"10.3115/1220575.1220648"
]
},
"num": null,
"urls": [],
"raw_text": "Cecilia Alm, Dan Roth, and Richard Sproat. 2005. Emotions from text: Machine learning for text-based emotion prediction. pages 579--586.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Identifying expressions of emotion in text",
"authors": [
{
"first": "Saima",
"middle": [],
"last": "Aman",
"suffix": ""
},
{
"first": "Stan",
"middle": [],
"last": "Szpakowicz",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 10th International Conference on Text, Speech and Dialogue, TSD'07",
"volume": "",
"issue": "",
"pages": "196--205",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saima Aman and Stan Szpakowicz. 2007. Identifying expressions of emotion in text. In Proceedings of the 10th International Conference on Text, Speech and Dialogue, TSD'07, page 196-205, Berlin, Hei- delberg. Springer-Verlag.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Handbook of Multilingualism and Multilingual Communication",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Auer",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Wei",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Auer and Li Wei. 2007. Handbook of Multi- lingualism and Multilingual Communication. De Gruyter Mouton, Berlin, Boston.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "I am borrowing ya mixing ?\" an analysis of English-Hindi code mixing in Facebook",
"authors": [
{
"first": "Kalika",
"middle": [],
"last": "Bali",
"suffix": ""
},
{
"first": "Jatin",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Monojit",
"middle": [],
"last": "Choudhury",
"suffix": ""
},
{
"first": "Yogarshi",
"middle": [],
"last": "Vyas",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the First Workshop on Computational Approaches to Code Switching",
"volume": "",
"issue": "",
"pages": "116--126",
"other_ids": {
"DOI": [
"10.3115/v1/W14-3914"
]
},
"num": null,
"urls": [],
"raw_text": "Kalika Bali, Jatin Sharma, Monojit Choudhury, and Yo- garshi Vyas. 2014. \"I am borrowing ya mixing ?\" an analysis of English-Hindi code mixing in Facebook. In Proceedings of the First Workshop on Computa- tional Approaches to Code Switching, pages 116- 126, Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Emotion detection from text: A survey",
"authors": [
{
"first": "Lea",
"middle": [],
"last": "Canales",
"suffix": ""
},
{
"first": "Patricio",
"middle": [],
"last": "Mart\u00ednez-Barco",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lea Canales and Patricio Mart\u00ednez-Barco. 2014. Emo- tion detection from text: A survey.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Emotion cause detection with linguistic constructions",
"authors": [
{
"first": "Ying",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Shoushan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Chu-Ren",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "2",
"issue": "",
"pages": "179--187",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ying Chen, Sophia Lee, Shoushan Li, and Chu-Ren Huang. 2010. Emotion cause detection with linguis- tic constructions. volume 2, pages 179-187.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "An argument for basic emotions",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Ekman",
"suffix": ""
}
],
"year": 1992,
"venue": "Cognition and Emotion",
"volume": "6",
"issue": "3-4",
"pages": "169--200",
"other_ids": {
"DOI": [
"10.1080/02699939208411068"
]
},
"num": null,
"urls": [],
"raw_text": "Paul Ekman. 1992. An argument for basic emotions. Cognition and Emotion, 6(3-4):169-200.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Multi-class image classification using deep learning algorithm",
"authors": [
{
"first": "Wael",
"middle": [],
"last": "Ezat",
"suffix": ""
},
{
"first": "Mohamed",
"middle": [],
"last": "Dessouky",
"suffix": ""
},
{
"first": "Nabil",
"middle": [],
"last": "Ismail",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of Physics: Conference Series",
"volume": "1447",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1088/1742-6596/1447/1/012021"
]
},
"num": null,
"urls": [],
"raw_text": "Wael Ezat, Mohamed Dessouky, and Nabil Ismail. 2020. Multi-class image classification using deep learning algorithm. Journal of Physics: Conference Series, 1447:012021.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Emotional Intelligence 2.0. CA : TalentSmart",
"authors": [
{
"first": "Jean",
"middle": [],
"last": "Greaves",
"suffix": ""
},
{
"first": "Travis",
"middle": [],
"last": "Bradberry",
"suffix": ""
},
{
"first": "Patrick",
"middle": [
"M"
],
"last": "Lencioni",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean Greaves, Travis Bradberry, and Patrick M. Lencioni. 2009. Emotional Intelligence 2.0. CA : TalentSmart, San Diego.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Resource creation for hindi-english code mixed social media text",
"authors": [
{
"first": "Sakshi",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Radhika",
"middle": [],
"last": "Mamidi",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sakshi Gupta, Piyush Bansal, and Radhika Mamidi. 2016. Resource creation for hindi-english code mixed social media text.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Comput",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {
"DOI": [
"10.1162/neco.1997.9.8.1735"
]
},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735-1780.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A fall-back strategy for sentiment analysis in hindi: a case study",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Balamurali",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 8th ICON",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aditya Joshi, Balamurali A R, and Pushpak Bhat- tacharyya. 2010. A fall-back strategy for sentiment analysis in hindi: a case study. In Proceedings of the 8th ICON.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Bag of tricks for efficient text classification",
"authors": [
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter",
"volume": "2",
"issue": "",
"pages": "427--431",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Con- ference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Pa- pers, pages 427-431, Valencia, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A brief review of facial emotion recognition based on visual information",
"authors": [
{
"first": "Chul",
"middle": [],
"last": "Byoung",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ko",
"suffix": ""
}
],
"year": 2018,
"venue": "Sensors",
"volume": "18",
"issue": "2",
"pages": "1--20",
"other_ids": {
"DOI": [
"10.3390/s18020401"
]
},
"num": null,
"urls": [],
"raw_text": "Byoung Chul Ko. 2018. A brief review of facial emo- tion recognition based on visual information. Sen- sors, 18(2):1-20.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Twitter sentiment analysis: The good the bad and the omg! In ICWSM",
"authors": [
{
"first": "Efthymios",
"middle": [],
"last": "Kouloumpis",
"suffix": ""
},
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Johanna",
"middle": [
"D"
],
"last": "Moore",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "538--541",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Efthymios Kouloumpis, Theresa Wilson, and Jo- hanna D. Moore. 2011. Twitter sentiment analysis: The good the bad and the omg! In ICWSM, pages 538-541.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Albert: A lite bert for self-supervised learning of language representations",
"authors": [
{
"first": "Zhenzhong",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Mingda",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learn- ing of language representations.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A model of textual affect sensing using real-world knowledge",
"authors": [
{
"first": "Hugo",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Lieberman",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Selker",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 8th International Conference on Intelligent User Interfaces, IUI '03",
"volume": "",
"issue": "",
"pages": "125--132",
"other_ids": {
"DOI": [
"10.1145/604045.604067"
]
},
"num": null,
"urls": [],
"raw_text": "Hugo Liu, Henry Lieberman, and Ted Selker. 2003. A model of textual affect sensing using real-world knowledge. In Proceedings of the 8th International Conference on Intelligent User Interfaces, IUI '03, page 125-132, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Word embeddings and their use in sentence classification tasks",
"authors": [
{
"first": "Amit",
"middle": [],
"last": "Mandelbaum",
"suffix": ""
},
{
"first": "Adi",
"middle": [],
"last": "Shalev",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amit Mandelbaum and Adi Shalev. 2016. Word em- beddings and their use in sentence classification tasks.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 26th International Conference on Neural Information Processing Systems",
"volume": "2",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- rado, and Jeffrey Dean. 2013. Distributed represen- tations of words and phrases and their composition- ality. In Proceedings of the 26th International Con- ference on Neural Information Processing Systems -Volume 2, NIPS'13, page 3111-3119, Red Hook, NY, USA. Curran Associates Inc.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Code switching and code mixing in internet chating: betwen \"yes\", \"ya\", and \"si\" a case study",
"authors": [
{
"first": "Stella",
"middle": [],
"last": "M\u00f3nica",
"suffix": ""
},
{
"first": "M\u00f3nica",
"middle": [],
"last": "C\u00e1rdenas-Claros",
"suffix": ""
},
{
"first": "Neny",
"middle": [],
"last": "Isharyanti",
"suffix": ""
}
],
"year": 2009,
"venue": "The jaltcall Journal",
"volume": "5",
"issue": "",
"pages": "67--78",
"other_ids": {
"DOI": [
"10.29140/jaltcall.v5n3.87"
]
},
"num": null,
"urls": [],
"raw_text": "Stella M\u00f3nica, M\u00f3nica C\u00e1rdenas-Claros, and Neny Isharyanti. 2009. Code switching and code mixing in internet chating: betwen \"yes\", \"ya\", and \"si\" a case study. The jaltcall Journal, Vol 5:67-78.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Word level language identification in online multilingual communication",
"authors": [
{
"first": "Dong-Phuong",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Seza Dogruoz",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "857--862",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dong-Phuong Nguyen and A. Seza Dogruoz. 2013. Word level language identification in online multi- lingual communication. In Proceedings of the 2013 Conference on Empirical Methods in Natural Lan- guage Processing, pages 857-862, United States. Association for Computational Linguistics (ACL).",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Twitter as a corpus for sentiment analysis and opinion mining",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Pak",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Paroubek",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "10",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Pak and Patrick Paroubek. 2010. Twitter as a corpus for sentiment analysis and opinion mining. volume 10.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Affective computing",
"authors": [
{
"first": "W",
"middle": [],
"last": "Rosalind",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Picard",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rosalind W Picard. 2000. Affective computing. MIT press.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Pieter muysken, bilingual speech: a typology of codemixing",
"authors": [
{
"first": "Shana",
"middle": [],
"last": "Poplack",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Walker",
"suffix": ""
}
],
"year": 2000,
"venue": "Journal of Linguistics",
"volume": "39",
"issue": "",
"pages": "678--683",
"other_ids": {
"DOI": [
"10.1017/S0022226703272297"
]
},
"num": null,
"urls": [],
"raw_text": "Shana Poplack and James Walker. 2003. Pieter muysken, bilingual speech: a typology of code- mixing. cambridge: Cambridge university press, 2000. pp. xvi+306. Journal of Linguistics, 39:678 -683.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Towards sub-word level compositions for sentiment analysis of hindi-english code mixed text",
"authors": [
{
"first": "Ameya",
"middle": [],
"last": "Prabhu",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Manish",
"middle": [],
"last": "Shrivastava",
"suffix": ""
},
{
"first": "Vasudeva",
"middle": [],
"last": "Varma",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ameya Prabhu, Aditya Joshi, Manish Shrivastava, and Vasudeva Varma. 2016. Towards sub-word level compositions for sentiment analysis of hindi-english code mixed text.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "At the border of acoustics and linguistics: Bag-of-audio-words for the recognition of emotions in speech",
"authors": [
{
"first": "Maximilian",
"middle": [],
"last": "Schmitt",
"suffix": ""
},
{
"first": "Fabien",
"middle": [],
"last": "Ringeval",
"suffix": ""
},
{
"first": "Bj\u00f6rn",
"middle": [
"W"
],
"last": "Schuller",
"suffix": ""
}
],
"year": 2016,
"venue": "INTERSPEECH",
"volume": "",
"issue": "",
"pages": "495--499",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maximilian Schmitt, Fabien Ringeval, and Bj\u00f6rn W. Schuller. 2016. At the border of acoustics and lin- guistics: Bag-of-audio-words for the recognition of emotions in speech. In INTERSPEECH, pages 495- 499.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A lstm based framework for handling multiclass imbalance in dga botnet detection",
"authors": [
{
"first": "Duc",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Mac",
"suffix": ""
},
{
"first": "Van",
"middle": [],
"last": "Tong",
"suffix": ""
},
{
"first": "Hai-Anh",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Giang",
"middle": [],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2017,
"venue": "Neurocomputing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1016/j.neucom.2017.11.018"
]
},
"num": null,
"urls": [],
"raw_text": "Duc Tran, Hieu Mac, Van Tong, Hai-Anh Tran, and Giang Nguyen. 2017. A lstm based framework for handling multiclass imbalance in dga botnet detec- tion. Neurocomputing, 275.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Corpus creation and emotion prediction for Hindi-English code-mixed social media text",
"authors": [
{
"first": "Deepanshu",
"middle": [],
"last": "Vijay",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Bohra",
"suffix": ""
},
{
"first": "Vinay",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Manish",
"middle": [],
"last": "Syed Sarfaraz Akhtar",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Shrivastava",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop",
"volume": "",
"issue": "",
"pages": "128--135",
"other_ids": {
"DOI": [
"10.18653/v1/N18-4018"
]
},
"num": null,
"urls": [],
"raw_text": "Deepanshu Vijay, Aditya Bohra, Vinay Singh, Syed Sarfaraz Akhtar, and Manish Shrivastava. 2018. Corpus creation and emotion prediction for Hindi-English code-mixed social media text. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Compu- tational Linguistics: Student Research Workshop, pages 128-135, New Orleans, Louisiana, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Attention-based LSTM for aspectlevel sentiment classification",
"authors": [
{
"first": "Yequan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "606--615",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1058"
]
},
"num": null,
"urls": [],
"raw_text": "Yequan Wang, Minlie Huang, Xiaoyan Zhu, and Li Zhao. 2016. Attention-based LSTM for aspect- level sentiment classification. In Proceedings of the 2016 Conference on Empirical Methods in Nat- ural Language Processing, pages 606-615, Austin, Texas. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "CNN model architecture code-mixed data. The models proposed are CNN, LSTM, Bi-directional LSTM, attention based Bidirectional LSTM and transformer based models like BERT, RoBERTa and ALBERT."
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "LSTM block structure"
},
"TABREF1": {
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Tweet count per class promote additional research."
},
"TABREF3": {
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table",
"text": ""
},
"TABREF5": {
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Accuracy of Deep Learning Models"
}
}
}
}