{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:14:36.584422Z"
},
"title": "SemEval-2020 Task 9: Overview of Sentiment Analysis of Code-Mixed Tweets",
"authors": [
{
"first": "Parth",
"middle": [],
"last": "Patwa",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Gustavo",
"middle": [],
"last": "Aguilar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Houston",
"location": {
"region": "TX"
}
},
"email": ""
},
{
"first": "Sudipta",
"middle": [],
"last": "Kar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Houston",
"location": {
"region": "TX"
}
},
"email": ""
},
{
"first": "Suraj",
"middle": [],
"last": "Pandey",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IIIT Delhi",
"location": {
"country": "India"
}
},
"email": ""
},
{
"first": "Srinivas",
"middle": [],
"last": "Pykl",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Gamb\u00e4ck",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NTNU",
"location": {
"country": "Norway"
}
},
"email": "[email protected]"
},
{
"first": "Tanmoy",
"middle": [],
"last": "Chakraborty",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IIIT Delhi",
"location": {
"country": "India"
}
},
"email": "[email protected]"
},
{
"first": "Thamar",
"middle": [],
"last": "Solorio",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Houston",
"location": {
"region": "TX"
}
},
"email": "[email protected]"
},
{
"first": "Amitava",
"middle": [],
"last": "Das",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Wipro AI Labs",
"location": {
"country": "India"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we present the results of the SemEval-2020 Task 9 on Sentiment Analysis of Code-Mixed Tweets (SentiMix 2020). 1 We also release and describe our Hinglish (Hindi-English) and Spanglish (Spanish-English) corpora annotated with word-level language identification and sentence-level sentiment labels. These corpora are comprised of 20K and 19K examples, respectively. The sentiment labels are-Positive, Negative, and Neutral. SentiMix attracted 89 submissions in total including 61 teams that participated in the Hinglish contest and 28 submitted systems to the Spanglish competition. The best performance achieved was 75.0% F1 score for Hinglish and 80.6% F1 for Spanglish. We observe that BERT-like models and ensemble methods are the most common and successful approaches among the participants.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we present the results of the SemEval-2020 Task 9 on Sentiment Analysis of Code-Mixed Tweets (SentiMix 2020). 1 We also release and describe our Hinglish (Hindi-English) and Spanglish (Spanish-English) corpora annotated with word-level language identification and sentence-level sentiment labels. These corpora are comprised of 20K and 19K examples, respectively. The sentiment labels are-Positive, Negative, and Neutral. SentiMix attracted 89 submissions in total including 61 teams that participated in the Hinglish contest and 28 submitted systems to the Spanglish competition. The best performance achieved was 75.0% F1 score for Hinglish and 80.6% F1 for Spanglish. We observe that BERT-like models and ensemble methods are the most common and successful approaches among the participants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The evolution of social media texts such as blogs, micro-blogs (e.g., Twitter), and chats (e.g., WhatsApp and Facebook messages) has created many new opportunities for information access and language technologies. However, it has also posed many new challenges making it one of the current prime research areas in Natural Language Processing (NLP).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Current language technologies primarily focus on English (Young, 2020 ), yet social media platforms demand methods that can also process other languages as they are inherently multilingual environments. 2 Besides, multilingual communities around the world regularly express their thoughts in social media employing and alternating different languages in the same utterance. This mixing of languages, also known as code-mixing or code-switching, 3 is a norm in multilingual societies and is one of the many NLP challenges that social media has facilitated.",
"cite_spans": [
{
"start": 57,
"end": 69,
"text": "(Young, 2020",
"ref_id": "BIBREF70"
},
{
"start": 203,
"end": 204,
"text": "2",
"ref_id": null
},
{
"start": 445,
"end": 446,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In addition to the writing aspects in social media, such as flexible grammar, permissive spelling, arbitrary punctuation, slang, and informal abbreviations (Baldwin et al., 2015; Eisenstein, 2013) , code-mixing has introduced a diverse set of linguistic challenges. For instance, multilingual speakers tend to code-mix using a single alphabet regardless of whether the languages involved belong to different writing systems (i.e., language scripts). This behavior is known as transliteration, and code-mixers rely on the phonetic patterns of their writing (i.e., the actual sound) to convey their thoughts in the foreign language (i.e., the language adapted to a new script) (Sitaram et al., 2019) . Another common pattern in code-mixing is the alternation of languages at the word level. This behavior often happens by inflecting words from one language with the rules of another language (Solorio and Liu, 2008) . For instance, in the second example below, the word pushes is the result of conjugating the English verb push according to Spanish grammar rules for the present tense in third person (in this case, the inflection -es). The Hinglish example shows that phonetic Latin script typing is a popular practice in India, instead of using Devanagari script to write Hindi words. We capture both transliteration and word-level code-mixing inflections in the Hinglish and Spanglish corpora of this competition, respectively.",
"cite_spans": [
{
"start": 156,
"end": 178,
"text": "(Baldwin et al., 2015;",
"ref_id": "BIBREF4"
},
{
"start": 179,
"end": 196,
"text": "Eisenstein, 2013)",
"ref_id": "BIBREF24"
},
{
"start": 675,
"end": 697,
"text": "(Sitaram et al., 2019)",
"ref_id": "BIBREF56"
},
{
"start": 890,
"end": 913,
"text": "(Solorio and Liu, 2008)",
"ref_id": "BIBREF57"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Code-Mixing Challenges",
"sec_num": "1.1"
},
{
"text": "Aye HI aur HI enjoy EN kare HI Eng. Trans.: come and enjoy No SP me SP pushes EN please EN Eng. Trans.: Don't push me, please",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Code-Mixing Challenges",
"sec_num": "1.1"
},
{
"text": "Considering the previous challenges, code-mixing demands new research methods where the focus goes beyond simply combining monolingual resources to address this linguistic phenomenon. Codemixing poses difficulties in a variety of language pairs and on multiple tasks along the NLP stack, such as word-level language identification, part-of-speech tagging, dependency parsing, machine translation, and semantic processing (Sitaram et al., 2019) . Conventional NLP systems heavily rely on monolingual resources to address code-mixed text, limiting them when properly handling issues such as phonetic typing and word-level code-mixing.",
"cite_spans": [
{
"start": 421,
"end": 443,
"text": "(Sitaram et al., 2019)",
"ref_id": "BIBREF56"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Code-Mixing Challenges",
"sec_num": "1.1"
},
{
"text": "Naturally, code-mixing is more common in geographical regions with a high percentage of bi-or multilingual speakers, such as in Texas and California in the US, Hong Kong and Macao in China, many European and African countries, and the countries in South-East Asia. Multilingualism and code-mixing are also widespread in India, which has more than 400 languages (Eberhard et al., 2020) with about 30 languages having more than 1 million speakers. Language diversity and dialect changes trigger Indians to frequently change and mix languages, particularly in speech and social media contexts. As of 2020, Hindi and Spanish have over 630 million and over 530 million speakers (Eberhard et al., 2020) , respectively, ranking them in 3rd and 4th place based on the number of speakers worldwide, which speaks of the relevancy of using these languages in our code-mixing competition.",
"cite_spans": [
{
"start": 361,
"end": 384,
"text": "(Eberhard et al., 2020)",
"ref_id": null
},
{
"start": 673,
"end": 696,
"text": "(Eberhard et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Code-Mixing as a Global Linguistic Phenomenon",
"sec_num": "1.2"
},
{
"text": "This paper provides an overview of the SemEval-2020 Task 9 competition on sentiment analysis of codemixed social media text (SentiMix). Specifically, we provide code-mixed text annotated with word-level language identification and sentence-level sentiment labels (negative, neutral, and positive). We release our Hinglish (Hindi-English) and Spanglish (Spanish-English) corpora, which are comprised of 20K and 19K tweets, respectively. We describe general statistics of the corpora as well as the baseline for the competition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SentiMix Overview",
"sec_num": "1.3"
},
{
"text": "We received 61 final submissions for Hinglish and 28 for Spanglish, adding to a total number of 89 submissions. We received 33 system description papers. We provide an overview of the participants' results and describe their methods at a high level. Notably, the majority of these methods employed BERT-like and ensemble models to reach competitive results, with the best performers reaching 75.0% and 80.6% F1 scores for Hinglish and Spanglish on held-out test data, respectively. We hope that this shared task will continue to catch the NLP community's attention on the linguistic code-mixing phenomenon.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SentiMix Overview",
"sec_num": "1.3"
},
{
"text": "Linguists (Verma, 1976; Bokamba, 1988; Singh, 1985) studied the phenomena of code-mixing and intrasentential code-switching and found that processing code-mixed language is much more complicated than monolingual text. Code-mixing is often found on social media which contains a lot of nonstandard spellings of words and unnecessary capitalization (Das and Gamb\u00e4ck, 2014) , making the task more difficult. Naturally, the difficulty will increase as the amount of code-mixing increases. To quantify the level of code-switching between languages in a sentence, Gamb\u00e4ck and Das (2016) introduced a measure called Code Mixing Index (CMI) which considers the number of tokens of each language in a sentence and the number of tokens where the language switches.",
"cite_spans": [
{
"start": 10,
"end": 23,
"text": "(Verma, 1976;",
"ref_id": "BIBREF65"
},
{
"start": 24,
"end": 38,
"text": "Bokamba, 1988;",
"ref_id": "BIBREF15"
},
{
"start": 39,
"end": 51,
"text": "Singh, 1985)",
"ref_id": "BIBREF55"
},
{
"start": 347,
"end": 370,
"text": "(Das and Gamb\u00e4ck, 2014)",
"ref_id": "BIBREF20"
},
{
"start": 558,
"end": 580,
"text": "Gamb\u00e4ck and Das (2016)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
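The Code Mixing Index mentioned above can be made concrete with a small sketch. Below is a minimal Python illustration of the token-fraction form of CMI (Das and Gambäck, 2014); the Gambäck and Das (2016) refinement additionally factors in the number of language switch points, which is omitted here. The word-level tag set (HIN, ENG, O) is assumed for illustration and mirrors the Hinglish annotation scheme described later.

```python
from collections import Counter

def cmi(lang_tags):
    """Fraction-based Code Mixing Index (Das and Gambäck, 2014) for one utterance.

    lang_tags: word-level language labels, e.g. ["HIN", "ENG", "HIN", "O"];
               "O" marks language-independent tokens (assumed tag set).
    Returns a value in [0, 100]; 0 means the utterance is monolingual.
    """
    n = len(lang_tags)
    counts = Counter(t for t in lang_tags if t != "O")
    u = n - sum(counts.values())          # language-independent tokens
    if not counts or n == u:              # nothing language-tagged
        return 0.0
    return 100.0 * (1.0 - max(counts.values()) / (n - u))

# Example: a tweet with 3 Hindi tokens and 1 English token
print(cmi(["HIN", "HIN", "ENG", "HIN"]))  # 25.0
```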
{
"text": "Finding the sentiment from code-mixed text has been attempted by some researchers. Mohammad et al. (2013) used SVM-based classifiers to detect sentiment in tweets and text messages using semantic information. Bojanowski et al. (2017) proposed a skip-gram based word representation model that classifies the sentiment of tweets and provides an extensive vocabulary list for language. Giatsoglou et al. (2017) trained lexicon-based document vectors, word embedding, and hybrid systems with the polarity of words to classify the sentiment of a tweet. Sharma et al. (2016) attempted shallow parsing of code-mixed data obtained from online social media, and tried word-level identification of code-mixed data to classify the sentiment. Some researchers also tried normalizing the text with lexicon lookup for sentiment analysis of code-mixed data (Sharma et al., 2015) .",
"cite_spans": [
{
"start": 83,
"end": 105,
"text": "Mohammad et al. (2013)",
"ref_id": "BIBREF42"
},
{
"start": 209,
"end": 233,
"text": "Bojanowski et al. (2017)",
"ref_id": "BIBREF14"
},
{
"start": 383,
"end": 407,
"text": "Giatsoglou et al. (2017)",
"ref_id": "BIBREF27"
},
{
"start": 548,
"end": 568,
"text": "Sharma et al. (2016)",
"ref_id": "BIBREF52"
},
{
"start": 842,
"end": 863,
"text": "(Sharma et al., 2015)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "To advance research in code-mixed language processing, few workshops have also been conducted. Four successful series of Mixed Script Information Retrieval have been organized at the Forum for Information Retrieval Evaluation (FIRE) (SahaRoy et al., 2013; Sequiera et al., 2015; Banerjee et al., 2016) . Three workshops on Computational Approaches to Linguistic Code-Switching (CALCS) have been conducted which included shared tasks on language identification and Named Entity Recognition (NER) in code-mixed data (Solorio et al., 2014a; Molina et al., 2016; Aguilar et al., 2018) . For our SentiMix Spanglish dataset, we adopt the SentiStrength (Vilares et al., 2015) annotation mechanism and conduct the annotation process over the unified corpus from the three CALCS workshops.",
"cite_spans": [
{
"start": 233,
"end": 255,
"text": "(SahaRoy et al., 2013;",
"ref_id": "BIBREF48"
},
{
"start": 256,
"end": 278,
"text": "Sequiera et al., 2015;",
"ref_id": "BIBREF50"
},
{
"start": 279,
"end": 301,
"text": "Banerjee et al., 2016)",
"ref_id": "BIBREF5"
},
{
"start": 514,
"end": 537,
"text": "(Solorio et al., 2014a;",
"ref_id": "BIBREF58"
},
{
"start": 538,
"end": 558,
"text": "Molina et al., 2016;",
"ref_id": "BIBREF43"
},
{
"start": 559,
"end": 580,
"text": "Aguilar et al., 2018)",
"ref_id": "BIBREF1"
},
{
"start": 646,
"end": 668,
"text": "(Vilares et al., 2015)",
"ref_id": "BIBREF66"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Although code-mixing has received some attention recently, properly annotated data is still scarce. We run a shared task to perform sentiment analysis of code-mixed tweets crawled from social media. Each tweet is classified into one of the three polarity classes -Positive, Negative, Neutral. Each tweet also has word-level language marking. We release two datasets -Spanglish and Hinglish.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Description",
"sec_num": "3"
},
{
"text": "We used CodaLab 4,5 to release the datasets and evaluate submissions. Initially, the participants had access only to train and validation data. They could check their system's performance on the validation set on a public leaderboard. Later, a previously unseen test set was released, and the performance on the test set was used to rank the participants. Only the first three submissions on the test set by each participant were considered, to avoid over-fitting on the test set. The ranking was done based on the best out of the three submissions. There was no distinction between constrained and unconstrained systems, but the participants were asked to report what additional resources they have used for each submitted run.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Description",
"sec_num": "3"
},
{
"text": "We release 20k labeled tweets for Hinglish and \u2248 19k labeled tweets for Spanglish. In both the datasets, 6 in addition to the tweet level sentiment label, each tweet also has a word-level language label. The detailed distribution is provided in Table 1 . Some annotated examples are provided in Table 2 . Although this task focuses on sentiment analysis, the data has word-level language marking and can be used for other NLP tasks.",
"cite_spans": [],
"ref_spans": [
{
"start": 245,
"end": 252,
"text": "Table 1",
"ref_id": null
},
{
"start": 295,
"end": 302,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Task Description",
"sec_num": "3"
},
{
"text": "To evaluate the performance and rank the participants, we use weighted F1 score on the test data, across the positives, negatives, and neutral examples. The F1 scores are calculated for each class and then their average is weighted by support (number of true instances for each class). We use a weighted F1 score since the number of instances per class is not equal. Other than the F1 score, we also calculate precision and recall for each class to analyze and have a better understanding of false positives and false negatives.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "3.1"
},
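As a concrete reference for the ranking metric, the following minimal sketch computes the weighted F1 score exactly as described: per-class F1 averaged with weights proportional to each class's support, plus per-class precision and recall for error analysis. It assumes scikit-learn and toy label lists; it is not the organizers' official scorer.

```python
from sklearn.metrics import f1_score, precision_recall_fscore_support

# Hypothetical gold and predicted sentiment labels
y_true = ["positive", "negative", "neutral", "positive", "neutral"]
y_pred = ["positive", "neutral",  "neutral", "positive", "negative"]

# Ranking metric: per-class F1 weighted by class support
weighted_f1 = f1_score(y_true, y_pred, average="weighted")

# Per-class precision / recall / F1 for error analysis
p, r, f, support = precision_recall_fscore_support(
    y_true, y_pred, labels=["positive", "negative", "neutral"]
)

print(f"weighted F1 = {weighted_f1:.3f}")
for lbl, pi, ri, fi in zip(["positive", "negative", "neutral"], p, r, f):
    print(f"{lbl}: P={pi:.2f} R={ri:.2f} F1={fi:.2f}")
```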
{
"text": "The datasets consist of tweets labeled into one of the three classes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
{
"text": "\u2022 Positive (Pos): Tweets which express happiness, praise a person, group, country or a product, or applaud something. Hinglish example: \"bholy bhayaa. Ufffff dil jeet liya ap ne. Love you imran bhai. Mind blowing ap ki acting hai.\" (bholy bhayaa, you won hearts. love you imran bhai your acting is mind blowing). Spanglish example: \"We all here waiting pa ke juege mex :)\" (We all here waiting for Mexico to play :)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
{
"text": "\u2022 Negative (Neg): Tweets which attack a person, group, product or country, express disgust or unhappiness towards something, or criticize something. Hinglish example: \"You efficiency of anchoring a program is continuously deteriorating. Ab to dekhne ki himmat hi nahi\" (Your efficiency of anchoring is continuously deteriorating. Now can't even dare to watch it) Spanglish example: \"Eres una cualkiera yes u are.\" (You are a tramp, yes you are.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
{
"text": "\u2022 Neutral (Neu): Tweets which state facts, give news or are advertisements. In general those which don't fall into the above 2 categories. Hinglish example: \"Nahi wo is news ko defend kerne ki koshesh ker rhe hain h\" (No, they are trying to defend this news). Spanglish example: \"My phone looks ratchet todo crack\" (My phone looks ratchet all crack). Table 1 : Class-wise statistics of the dataset for train, validation, and test set. We put special care to make a balanced class-wise distribution for Hinglish.",
"cite_spans": [],
"ref_spans": [
{
"start": 351,
"end": 358,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
{
"text": "Both the Hinglish and Spanglish datasets are released using the previous sentiment label scheme. However, each dataset has been annotated separately as the studies were independent before the organization of this competition. We provide the data collection and annotation details in the following subsections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Split",
"sec_num": null
},
{
"text": "Data Collection: First, we make a list of all the Hindi tokens from the dataset provided by (Patra et al., 2018) . From that list, we remove those tokens which are common to Hindi and English (example 'the' can be used in both the languages). Then we use Twitter API 7 to crawl those tweets from twitter which have at least one word from the list. The list has 10786 tokens. Some words from the list are: kuch, tu, gaya, raha, aaj, apne, tum, gaye, sath etc.",
"cite_spans": [
{
"start": 92,
"end": 112,
"text": "(Patra et al., 2018)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hinglish",
"sec_num": "4.1"
},
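A minimal sketch of the collection filter described above, assuming a small stand-in for the seed list of Hindi tokens and the set of tokens shared with English; the actual crawling goes through the Twitter API and is omitted here.

```python
# Hypothetical seed lists; the real keyword list has 10,786 tokens.
hindi_tokens = {"kuch", "tu", "gaya", "raha", "aaj", "apne", "tum", "gaye", "sath", "the"}
shared_with_english = {"the"}            # tokens valid in both languages are dropped
keywords = hindi_tokens - shared_with_english

def keep_tweet(text):
    """Keep a crawled tweet only if it contains at least one keyword from the list."""
    words = set(text.lower().split())
    return bool(words & keywords)

print(keep_tweet("aaj the weather is great"))  # True  (contains 'aaj')
print(keep_tweet("the weather is great"))      # False ('the' was removed as ambiguous)
```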
{
"text": "Language and Sentiment Annotation: For word-level language marking we use an automated tool released by Bhat et al. (2014) . The tokens are labeled into HIN -Hindi, ENG -English, or O -other. For tweet level sentiment labels, we took the help of around 60 annotators who were bilingual/multilingual, proficient in Hindi and had Hindi as their first or second language. Each tweet was shown to two annotators, and it was selected if their annotations matched, else it was discarded. They used a simple website designed for this purpose to annotate the data. Each tweet was shown on a page that had a radio button for each label. The annotators first had to enter their unique id, then they could either select a sentiment option for a tweet and send or choose to skip the tweet.",
"cite_spans": [
{
"start": 104,
"end": 122,
"text": "Bhat et al. (2014)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hinglish",
"sec_num": "4.1"
},
{
"text": "Statistics: Table 1 gives detailed class-wise distribution of the tweets. Although Neutral is the majority class for Hinglish, the dataset is not too imbalanced. The class-wise distribution is similar for all three splits. Table 2 shows some examples of tweets marked with language and sentiment tags. The average CMI for Hinglish train, validation, and test set is 25.32, 25.53, and 25.13 respectively. The inter-annotator agreement is 55%.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 1",
"ref_id": null
},
{
"start": 223,
"end": 230,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Hinglish",
"sec_num": "4.1"
},
{
"text": "Data Collection: We use the Spanish-English data from the CALCS workshops (Solorio et al., 2014b; Molina et al., 2016; Aguilar et al., 2018) . In the first workshop (Solorio et al., 2014b) , the data was collected by crawling tweets from specific locations with a strong presence of Spanish and English speakers (e.g., California and Texas). The collection process was conducted using common words from each language through the Twitter API. 7 In the second workshop (Molina et al., 2016) , the organizers provided a new test set collected with a more elaborated process. They selected big cities where bilingual speakers are common (e.g., New York and Miami). Then, they localized Spanish radio stations that showed code-mixed tweets. Such radio stations led to users that also practice code-mixing. Similar to the third workshop (Aguilar et al., 2018) , we take the CALCS data and extend it for sentiment analysis. It is worth noting that a large number of tweets in the corpora only contain monolingual text (i.e., no code-mixing). Considering that, and after merging the two corpora, we prioritize the tweets that show code-mixed text to build the SentiMix corpus. We ended up incorporating 280 monolingual tweets per language (English, Spanish) in the test set.",
"cite_spans": [
{
"start": 74,
"end": 97,
"text": "(Solorio et al., 2014b;",
"ref_id": "BIBREF59"
},
{
"start": 98,
"end": 118,
"text": "Molina et al., 2016;",
"ref_id": "BIBREF43"
},
{
"start": 119,
"end": 140,
"text": "Aguilar et al., 2018)",
"ref_id": "BIBREF1"
},
{
"start": 165,
"end": 188,
"text": "(Solorio et al., 2014b)",
"ref_id": "BIBREF59"
},
{
"start": 467,
"end": 488,
"text": "(Molina et al., 2016)",
"ref_id": "BIBREF43"
},
{
"start": 831,
"end": 853,
"text": "(Aguilar et al., 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Spanglish",
"sec_num": "4.2"
},
{
"text": "Annotation: Since we use the data from the previous CALCS workshops, we did not need to undergo the token-level annotation process for language identification (LID). We adopted the CALCS LID label scheme, which is comprised of the following eight classes: lang1 (English), lang2 (Spanish), mixed (partially in both languages), ambiguous (either one or the other language), fw (a language different than lang1 and lang2), ne (named entities), other, and unk (unrecognizable words). For the annotations of the sentiment labels, we follow the SentiStrength 8 strategy (Thelwall et al., 2010; Vilares et al., 2015) . That is, we provide positive and negative sliders to the annotators. Each slider denotes the strength for the corresponding sentiment, and the annotators can choose the level of the sentiment they perceived from the text (see Figure 1) . The range of the sliders is discrete and included strengths from 1 to 5 with 1 being no strength (i.e., no positive or negative sentiment) and 5 the strongest level. Using two independent sliders allowed the annotators to process the positive and negative signals without excluding one from the other, letting them provide mixed sentiments for the given text (Berrios et al., 2015) . Once the sentiment strengths were specified, we converted them into a 3-way sentiment scale (i.e., positive, negative, and neutral). We simply subtract the negative strength from the positive strength, and mark the text as positive if the result was greater than zero, negative if less than zero, or neutral otherwise.",
"cite_spans": [
{
"start": 565,
"end": 588,
"text": "(Thelwall et al., 2010;",
"ref_id": "BIBREF64"
},
{
"start": 589,
"end": 610,
"text": "Vilares et al., 2015)",
"ref_id": "BIBREF66"
},
{
"start": 1210,
"end": 1232,
"text": "(Berrios et al., 2015)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 839,
"end": 848,
"text": "Figure 1)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Spanglish",
"sec_num": "4.2"
},
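The strength-to-label conversion described above can be stated in a few lines; the sketch below assumes integer slider values from 1 to 5 for each polarity, as in the annotation interface.

```python
def to_three_way(pos_strength, neg_strength):
    """Collapse SentiStrength-style sliders (1-5 each) into a 3-way sentiment label."""
    diff = pos_strength - neg_strength
    if diff > 0:
        return "positive"
    if diff < 0:
        return "negative"
    return "neutral"

print(to_three_way(4, 1))  # positive
print(to_three_way(2, 5))  # negative
print(to_three_way(3, 3))  # neutral (covers mixed sentiment of equal strength)
```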
{
"text": "We annotate each tweet with the help of three annotators from Amazon Mechanical Turk. 9 We regulate the annotations by using quality questions within every assignment 10 of a HIT (Human Intelligence Task). Every assignment has ten tweets, two of them were for quality control (i.e., the annotation was already known) and the other eight tweets were the ones to annotate. 11 The annotators had to have at least one quality control tweet right so that the assignment (i.e., the ten tweets) was not automatically rejected. Since the sentiment analysis task is arguably arbitrary, we provided multiple valid levels of strength for the quality control tweets. If an assignment was rejected, then another annotator was automatically required to complete the HIT until three annotations were accepted. Also, we automatically approved HITs if their 3-way sentiment inter-annotator agreement was over 66%. 12 Otherwise, we evaluated manually the annotations and decide whether to extend the assignments or mark the sentiment labels ourselves for the trivial cases. After merging the annotations, we gave a pass over the data and manually corrected annotations that were unambiguously wrong.",
"cite_spans": [
{
"start": 86,
"end": 87,
"text": "9",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Spanglish",
"sec_num": "4.2"
},
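A small sketch of the agreement check used to auto-approve HITs: with three accepted annotations per tweet, a 3-way inter-annotator agreement above 66% means at least two of the three labels coincide, in which case the majority label can be taken. The manual review and assignment-extension step for disagreements is omitted; the function and its interface are assumptions for illustration.

```python
from collections import Counter

def merge_annotations(labels):
    """labels: three 3-way sentiment labels from independent annotators.

    Returns (majority_label, agreed), where agreed is True when at least
    two annotators gave the same label (i.e., agreement > 66%)."""
    label, count = Counter(labels).most_common(1)[0]
    agreed = count >= 2
    return (label if agreed else None), agreed

print(merge_annotations(["positive", "positive", "neutral"]))  # ('positive', True)
print(merge_annotations(["positive", "negative", "neutral"]))  # (None, False) -> manual review
```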
{
"text": "Statistics: The Spanglish class-level distribution of the partitions appear in Table 1 . Notably, the data is highly imbalanced towards the positive class covering about 56% in the entire Spanglish corpus, while the negative and neutral classes account for around 16% and 27%, respectively. The reason for this imbalance distribution is that we did not collect the data following a sentiment-oriented crawling strategy (e.g., searching by sentiment-related keywords). Instead, we just extended the original corpus, which happens to be mostly positive. The intention to proceed in this way is to enrich the original corpus annotations with sentiment-level labels. Moreover, the splits do not share the same distribution (i.e., development and test are more skewed than training) because we were annotating data on-demand rather than having available the entire corpus at any stage of the competition. Some annotated examples are provided in Table 2 . The average CMI for the train, validation, and test sets are 21.84, 20.52, and 17.23, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 79,
"end": 86,
"text": "Table 1",
"ref_id": null
},
{
"start": 940,
"end": 947,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Spanglish",
"sec_num": "4.2"
},
{
"text": "We develop our baseline system using the pre-trained multilingual BERT (M-BERT; Devlin et al. (2019) ). M-BERT was trained on 104 languages' entire Wikipedia dump and the WordPiece (Wu et al., 2016) vocabulary of this model contains 110K sub-word tokens from these 104 languages. To balance the risk of low-resource languages being under-represented or over-fitted due to small training resources during pretraining, exponentially smoothed weighting was performed on the data during pre-training data creation and vocabulary creation. Although M-BERT was trained on monolingual data from different languages, it is capable of multilingual generalization in code-switching scenarios (Pires et al., 2019) .",
"cite_spans": [
{
"start": 80,
"end": 100,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF21"
},
{
"start": 181,
"end": 198,
"text": "(Wu et al., 2016)",
"ref_id": "BIBREF68"
},
{
"start": 682,
"end": 702,
"text": "(Pires et al., 2019)",
"ref_id": "BIBREF47"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "5"
},
{
"text": "We use the Transformers (Wolf et al., 2019) library to implement our framework and we fine-tune the pre-trained BERT-Base, Multilingual Cased model separately for each of the two languages. Based on our observation on the training split for each dataset, we set the highest sequence length to 40 and 56 tokens for Spanglish and Hinglish, respectively. Then, we fine-tune the model for three epochs using AdamW (Loshchilov and Hutter, 2019) Table 2 : Examples of labeled tweets. Code-mixing often refers to the juxtaposition of linguistic units from two or more languages in a single conversation or sometimes even a single utterance. These examples emphasize on the fact that people don't do only phrase, or tag-mixing as it was a belief in the linguistic forum until now.",
"cite_spans": [
{
"start": 24,
"end": 43,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF67"
},
{
"start": 410,
"end": 439,
"text": "(Loshchilov and Hutter, 2019)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [
{
"start": 440,
"end": 447,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Baseline",
"sec_num": "5"
},
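A minimal sketch of the baseline setup with the HuggingFace Transformers library: fine-tuning bert-base-multilingual-cased as a 3-way classifier with the stated sequence lengths, three epochs, and AdamW. This is not the organizers' exact training script; hyperparameters not given in the paper (batch construction, learning rate) are placeholders, and the toy batch stands in for the SentiMix training split.

```python
import torch
from torch.optim import AdamW
from transformers import BertTokenizer, BertForSequenceClassification

MAX_LEN = 56            # 56 for Hinglish, 40 for Spanglish
NUM_LABELS = 3          # positive / negative / neutral

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=NUM_LABELS
)

# Hypothetical toy batch; in practice this comes from the SentiMix training split.
texts = ["bholy bhayaa ufffff dil jeet liya ap ne", "eres una cualkiera yes u are"]
labels = torch.tensor([0, 1])          # 0 = positive, 1 = negative (assumed label mapping)

batch = tokenizer(texts, padding="max_length", truncation=True,
                  max_length=MAX_LEN, return_tensors="pt")

optimizer = AdamW(model.parameters(), lr=2e-5)   # learning rate is a placeholder

model.train()
for epoch in range(3):                 # fine-tune for three epochs
    optimizer.zero_grad()
    out = model(**batch, labels=labels)
    out.loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss = {out.loss.item():.4f}")
```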
{
"text": "We received an overwhelming response for both Hinglish and Spanglish. 61 teams submitted their systems for Hinglish and 28 teams submitted their systems for Spanglish. 16 teams submitted to both Hinglish and Spanglish. We received 33 system description papers in total. The embeddings and techniques used by the participants are tabulated in Table 5 . The team names, Codalab names, and their corresponding description papers are provided in Appendix (Table 6 ). We provide a summary of the top teams below (Codalab usernames are mentioned in parentheses) : Top Hinglish Systems @ SentiMix",
"cite_spans": [],
"ref_spans": [
{
"start": 342,
"end": 349,
"text": "Table 5",
"ref_id": null
},
{
"start": 451,
"end": 459,
"text": "(Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Participation and Top Performing Systems",
"sec_num": "6"
},
{
"text": "\u2022 KK2018 (kk2018) used pre-trained XLM-R which was trained with 100 languages. They trained it with adversarial (intentionally designed to make model cause a mistake) examples. To create adversarial examples, they used the formula proposed by (Miyato et al., 2016) where the perturbation is created using the gradient of the loss function.",
"cite_spans": [
{
"start": 243,
"end": 264,
"text": "(Miyato et al., 2016)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Participation and Top Performing Systems",
"sec_num": "6"
},
{
"text": "\u2022 MSR India (genius1237) used embeddings from XLM-R as inputs to a classification layer. They also do so with multiligual BERT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participation and Top Performing Systems",
"sec_num": "6"
},
{
"text": "\u2022 Reed (gopalvinay) Finetuned BERT and claimed that pre-training of BERT is not of much use. They also tried bag-of-words based feedforward networks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participation and Top Performing Systems",
"sec_num": "6"
},
{
"text": "\u2022 BAKSA (ayushk) used XLM-R multilingual embeddings ( a transformerbased masked language model trained on 100 languages) followed by ensemble model of CNN and self attention architecture.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participation and Top Performing Systems",
"sec_num": "6"
},
{
"text": "Top Spanglish Systems @ SentiMix",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participation and Top Performing Systems",
"sec_num": "6"
},
{
"text": "\u2022 XLP (LiangZhao) augmented the data using machine translation. Then they used pre-trained embeddings made by Facebook Research (XLMs)(Lample and Conneau, 2019) followed by CNN classifier of linear classifier (fully connected layer). They optimized a weighted loss function based on the complexity of code-mixing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participation and Top Performing Systems",
"sec_num": "6"
},
{
"text": "\u2022 Voice@SRIB (asking28) applied multiple pre-processing steps and used Ensemble model by combining CNN, self-attention and LSTM based model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participation and Top Performing Systems",
"sec_num": "6"
},
{
"text": "\u2022 Palomino-Ochoa (dpalominop) combined a transfer learning scheme based on ULMFit (Howard and Ruder, 2018) with the-state-of-the-art language model BERT.",
"cite_spans": [
{
"start": 82,
"end": 106,
"text": "(Howard and Ruder, 2018)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Participation and Top Performing Systems",
"sec_num": "6"
},
{
"text": "\u2022 HPCC-YNU (kongjun) used word and character embeddings as input to BiLSTM with attention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participation and Top Performing Systems",
"sec_num": "6"
},
{
"text": "7 Results and Analysis (Avg.) scores of the Postive, Neutral, and Negative classes. We report Precision (P), Recall (R), and F1 score for each class separately. In each column, the boldfaced scores are the highest score in that column.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participation and Top Performing Systems",
"sec_num": "6"
},
{
"text": "In the previous section, we briefly described the top systems. Here, we group and summarize various techniques used by the systems (Codalab usernames are mentioned in parentheses) :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participation and Top Performing Systems",
"sec_num": "6"
},
{
"text": "\u2022 Word Embedding: Three popular word embedding ways explored by participants. Word2Vec, Glove, FastText. Some participants used character-embedding. Additional resources were also used by participants to train their own embeddings. Table 4 : Top 15 results for the Spanglish dataset. The systems are ordered by the Weighted Average F1 (Avg.) scores of the Postive, Neutral, and Negative classes. We report Precision (P), Recall (R), and F1 score for each class separately. In each column, the boldfaced score is the highest score in that column.",
"cite_spans": [],
"ref_spans": [
{
"start": 232,
"end": 239,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Participation and Top Performing Systems",
"sec_num": "6"
},
{
"text": "was tried by Zyy1510 (zyy1510). Teams like TueMix (guzimanis), WESSA (ahmed0sultan), C1 (lakshadvani) reported their experiments with Logistic Regression, whereas yet another popular choice Random Forest has been used by IRLab DAIICT (apurva19), C1 (lakshadvani). SVM was tried by quite a few teams -IUST (Taha), JUNLP (sainik.mahata), WESSA (ahmed0sultan), C1 (lakshadvani), IIITG-ADBU (abaruah).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participation and Top Performing Systems",
"sec_num": "6"
},
{
"text": "\u2022 RNN: RNN, GRU, LSTM, along with their bi-directional varients were explored by several teams. Some of them are gundapusunil (gundapusinil), Team Swift (aditya malte), CS-Embed (francesita), GULD@NUIG (koustava), IIT Gandhinagar (vivek IITGN) etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participation and Top Performing Systems",
"sec_num": "6"
},
{
"text": "\u2022 CNN for text: Although RNN is the more popular choices for NLP tasks, quite a few teams also used CNN for text. Some of them are IUST (Taha), FII-UAIC (Lavinia AP), NLP-CIC (ajason08), NITS-Hinglish-SentiMix (rns2020), Zyy1510 (zyy1510), HCMS (the0ne, talent404).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participation and Top Performing Systems",
"sec_num": "6"
},
{
"text": "\u2022 Transformer, BERT and related language models: The recent trend in NLP is to use highly capable language models like BERT and XLNet. The popular choice, BERT, was tried by MeisterMorxrc (MeisterMorxrc), HinglishNLP (nirantk), IRLab DAIICT (apurva19), WESSA (ahmed0sultan), C1 (lakshadvani), IIITG-ADBU (abaruah). Some researchers like Deep Learning Brasil -NLP (verissimo.manoel) experimented with XLNet. XLmR was used by Will go (will go) , kk2018 (kk2018), FiSSA (jupitter) etc. These type of models gave the best results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participation and Top Performing Systems",
"sec_num": "6"
},
{
"text": "\u2022 Ensembles: Some teams like Voice@SRIB (asking28), UPB (eduardgzaharia, clementincercel) etc. used ensemble methods. in all cases, ensembles performed better than the their individual models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participation and Top Performing Systems",
"sec_num": "6"
},
{
"text": "\u2022 Special Mentions: Apart from common practices and architectures quite a few participants explored interesting dimensions and added significant value to this endeavor. We strongly believe these dimensions need to be explored and discussed further.:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participation and Top Performing Systems",
"sec_num": "6"
},
{
"text": "XLP (LiangZhao) used Cross-lingual embeddings which could an interesting way for code-mixed language processing where we have scarcity of annotated data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participation and Top Performing Systems",
"sec_num": "6"
},
{
"text": "UPB (eduardgzaharia, clementincercel) used capsule network with biGRU and showed promising results. The use of capsule networks in NLP tasks need further exploration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participation and Top Performing Systems",
"sec_num": "6"
},
{
"text": "ULD@NUIG (koustava) explored an interesting way to phoneme based Generative Morphemes learning approach. Sub-word based embedding is an interesting new way in the NLP community, but what is the best sub-word unit to choose is still unresolved. Morpheme based approach could be a good alternative, especially for highly spelling variant code-mixed data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participation and Top Performing Systems",
"sec_num": "6"
},
{
"text": "IIT Gandhinagar (vivek IITGN) tried a new direction by generating sentences using language modeling. Language modeling for code-mixed data is still an under-researched problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participation and Top Performing Systems",
"sec_num": "6"
},
{
"text": "HPCC-YNU (kongjun) used a Bilingual Vector Gating Mechanism. Vector gating technique got certain success in document classification kinds of applications, but its applications in other NLP dimension demands further exploration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participation and Top Performing Systems",
"sec_num": "6"
},
{
"text": "Will go (will go) used Bert and Pseudo labeling. Pseudo Labeling can be a useful strategy for code-mixed languages especially when annotated data is scarce. .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participation and Top Performing Systems",
"sec_num": "6"
},
{
"text": "kk2018 (kk2018) reported unique ways to apply adversarial network and its usage in code-mixing. They got very good results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participation and Top Performing Systems",
"sec_num": "6"
},
{
"text": "LIMSI UPV (somban) gave a way to merge RNN and CNN architecture together for the betterment of sentiment analysis. This could be an interesting way to explore in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participation and Top Performing Systems",
"sec_num": "6"
},
{
"text": "SentiMix, sentiment analysis of code-mixed tweets at SemEval 2020 received an overwhelming response for both Hinglish and Spanglish. 61 teams submitted their systems for Hinglish and 28 teams submitted their systems for Spanglish. The best performance achieved was 75.0 % F1 score for Hinglish and 80.6% for Spanglish. We received a total of 33 system description papers. BERT-like models were the most successful among participants. Although the SentiMix task mainly focused on sentiment analysis, the data will serve the NLP community or whoever is interested in the code-mixing problem for these particular languages and in general. Properly annotated code-mixed data is still scarce. The success of SentiMix motivates us to go further and organize similar events in the future. We plan to add more languages, especially from regions that have a high percentage of bi-or multilingual speakers. We also plan to enrich our datasets with annotations for other tasks (NER, emotion recognition, translation etc). We strongly believe that codemixing is a new horizon of interest in the NLP community and needs to be further explored in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "8"
},
{
"text": "Hinglish: https://competitions.codalab.org/competitions/20654 5 Spanglish: https://competitions.codalab.org/competitions/20789 6 Both the datasets are available at https://zenodo.org/record/3974927#.XyxAZCgzZPZ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://developer.twitter.com/ 8 http://sentistrength.wlv.ac.uk/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://requester.mturk.com/ 10 An assignment is done by a single annotator.11 We use the assignment review policy ScoreMyKnownAnswers/2011-09-01.12 We use the HIT review policy SimplePlurality/2011-09-01.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Results for all the participants are available at https://ritual-uh.github.io/sentimix2020/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": " Table 6 : The teams that participated in Sentimix-2020 and submitted system description papers with the corresponding reference thereof. Teams are sorted alphabetically.",
"cite_spans": [],
"ref_spans": [
{
"start": 1,
"end": 8,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "2020. C1 at SemEval-2020 task 9: Sentimix: Sentiment analysis for code-mixed social media text using feature engineering",
"authors": [
{
"first": "Laksh",
"middle": [],
"last": "Advani",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Suraj",
"middle": [],
"last": "Maharjan",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laksh Advani, Clement Lu, and Suraj Maharjan. 2020. C1 at SemEval-2020 task 9: Sentimix: Sentiment analysis for code-mixed social media text using feature engineering. In Proceedings of the 14th International Work- shop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, December. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Named Entity Recognition on Code-Switched Data: Overview of the CALCS 2018 Shared Task",
"authors": [
{
"first": "Gustavo",
"middle": [],
"last": "Aguilar",
"suffix": ""
},
{
"first": "Fahad",
"middle": [],
"last": "Alghamdi",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Soto",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hirschberg",
"suffix": ""
},
{
"first": "Thamar",
"middle": [],
"last": "Solorio",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching",
"volume": "",
"issue": "",
"pages": "138--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gustavo Aguilar, Fahad AlGhamdi, Victor Soto, Mona Diab, Julia Hirschberg, and Thamar Solorio. 2018. Named Entity Recognition on Code-Switched Data: Overview of the CALCS 2018 Shared Task. In Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching, pages 138-147, Melbourne, Australia, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "NLP-CIC at SemEval-2020 Task 9: Analysing sentiment in code-switching language using a simple deep-learning classifier",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Angel",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Segun Taofeek Aroyehun",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Tamayo",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gelbukh",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Angel, Segun Taofeek Aroyehun, Antonio Tamayo, and Alexander Gelbukh. 2020. NLP-CIC at SemEval- 2020 Task 9: Analysing sentiment in code-switching language using a simple deep-learning classifier. In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, December. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "FII-UAIC at SemEval-2020 Task 9: Sentiment analysis for code-mixed social media text using cnn",
"authors": [
{
"first": "Lavinia",
"middle": [],
"last": "Aparaschivei",
"suffix": ""
},
{
"first": "Andrei",
"middle": [],
"last": "Palihovici",
"suffix": ""
},
{
"first": "Daniela",
"middle": [],
"last": "G\u00eefu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lavinia Aparaschivei, Andrei Palihovici, and Daniela G\u00eefu. 2020. FII-UAIC at SemEval-2020 Task 9: Sentiment analysis for code-mixed social media text using cnn. In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, December. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Shared tasks of the 2015 workshop on noisy user-generated text: Twitter lexical normalization and named entity recognition",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Marie",
"middle": [],
"last": "Catherine De Marneffe",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Young-Bum",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Workshop on Noisy User-generated Text",
"volume": "",
"issue": "",
"pages": "126--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Baldwin, Marie Catherine de Marneffe, Bo Han, Young-Bum Kim, Alan Ritter, and Wei Xu. 2015. Shared tasks of the 2015 workshop on noisy user-generated text: Twitter lexical normalization and named entity recognition. In Proceedings of the Workshop on Noisy User-generated Text, pages 126-135, Beijing, China, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Overview of the mixed script information retrieval (MSIR)",
"authors": [
{
"first": "Somnath",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Kunal",
"middle": [],
"last": "Chakma",
"suffix": ""
},
{
"first": "Sudip",
"middle": [],
"last": "Kumar Naskar",
"suffix": ""
},
{
"first": "Amitava",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
},
{
"first": "Sivaji",
"middle": [],
"last": "Bandyopadhyay",
"suffix": ""
},
{
"first": "Monojit",
"middle": [],
"last": "Choudhury",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of FIRE 2016. FIRE",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Somnath Banerjee, Kunal Chakma, Sudip Kumar Naskar, Amitava Das, Paolo Rosso, Sivaji Bandyopadhyay, and Monojit Choudhury. 2016. Overview of the mixed script information retrieval (MSIR). In Proceedings of FIRE 2016. FIRE, December.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "LIMSI UPV at SemEval-2020 Task 9: Recurrent convolutional neural network for code-mixed sentiment analysis",
"authors": [
{
"first": "Somnath",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Sahar",
"middle": [],
"last": "Ghannay",
"suffix": ""
},
{
"first": "Sophie",
"middle": [],
"last": "Rosset",
"suffix": ""
},
{
"first": "Anne",
"middle": [],
"last": "Vilnat",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Somnath Banerjee, Sahar Ghannay, Sophie Rosset, Anne Vilnat, and Paolo Rosso. 2020. LIMSI UPV at SemEval- 2020 Task 9: Recurrent convolutional neural network for code-mixed sentiment analysis. In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, December. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Will go at SemEval-2020 Task 9: An accurate approach for sentiment analysis on hindi-english tweets based on bert and pseudo label strategy",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Bao",
"suffix": ""
},
{
"first": "Weilong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Zhuang",
"suffix": ""
},
{
"first": "Mingyuan",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Bao, Weilong Chen, Wei Bai, Yan Zhuang, Mingyuan Cheng, and Xiangyu Ma. 2020. Will go at SemEval- 2020 Task 9: An accurate approach for sentiment analysis on hindi-english tweets based on bert and pseudo label strategy. In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, December. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "NITS-Hinglish-SentiMix at SemEval-2020 Task 9: Sentiment analysis for code-mixed social media text using an ensemble model",
"authors": [
{
"first": "Jyoti",
"middle": [],
"last": "Subhra",
"suffix": ""
},
{
"first": "Nivedita",
"middle": [],
"last": "Baroi",
"suffix": ""
},
{
"first": "Ringki",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Thoudam Doren",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Subhra Jyoti Baroi, Nivedita Singh, Ringki Das, and Thoudam Doren Singh. 2020. NITS-Hinglish-SentiMix at SemEval-2020 Task 9: Sentiment analysis for code-mixed social media text using an ensemble model. In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, December. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "IIITG-ADBU at SemEval-2020 task 9: Svm for sentiment analysis of english-hindi code-mixed text",
"authors": [
{
"first": "Arup",
"middle": [],
"last": "Baruah",
"suffix": ""
},
{
"first": "Ferdous",
"middle": [
"Ahmed"
],
"last": "Kaushik Amar Das",
"suffix": ""
},
{
"first": "Kuntal",
"middle": [],
"last": "Barbhuiya",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dey",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arup Baruah, Kaushik Amar Das, Ferdous Ahmed Barbhuiya, and Kuntal Dey. 2020. IIITG-ADBU at SemEval- 2020 task 9: Svm for sentiment analysis of english-hindi code-mixed text. In Proceedings of the 14th In- ternational Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, December. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "TueMix at SemEval-2020 Task 9: Logistic regression with linguistic feature set for sentiment analysis of code-mixed social media text",
"authors": [
{
"first": "Elizabeth",
"middle": [],
"last": "Bear",
"suffix": ""
},
{
"first": "Diana",
"middle": [
"Constantina"
],
"last": "H\u00f6fels",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Manolescu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elizabeth Bear, Diana Constantina H\u00f6fels, and Mihai Manolescu. 2020. TueMix at SemEval-2020 Task 9: Logis- tic regression with linguistic feature set for sentiment analysis of code-mixed social media text. In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, December. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Eliciting mixed emotions: a meta-analysis comparing models, types, and measures",
"authors": [
{
"first": "Raul",
"middle": [],
"last": "Berrios",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Totterdell",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Kellett",
"suffix": ""
}
],
"year": 2015,
"venue": "Frontiers in Psychology",
"volume": "6",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raul Berrios, Peter Totterdell, and Stephen Kellett. 2015. Eliciting mixed emotions: a meta-analysis comparing models, types, and measures. Frontiers in Psychology, 6:428.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "HinglishNLP at SemEval-2020 Task 9: Fine-tuned language models for hinglish sentiment detection",
"authors": [
{
"first": "Meghana",
"middle": [],
"last": "Bhange",
"suffix": ""
},
{
"first": "Nirant",
"middle": [],
"last": "Kasliwal",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meghana Bhange and Nirant Kasliwal. 2020. HinglishNLP at SemEval-2020 Task 9: Fine-tuned language models for hinglish sentiment detection. In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, December. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Iiit-h system submission for fire2014 shared task on transliterated search",
"authors": [
{
"first": "Ahmad",
"middle": [],
"last": "Irshad",
"suffix": ""
},
{
"first": "Vandan",
"middle": [],
"last": "Bhat",
"suffix": ""
},
{
"first": "Aniruddha",
"middle": [],
"last": "Mujadia",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tammewar",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Forum for Information Retrieval Evaluation, FIRE '14",
"volume": "",
"issue": "",
"pages": "48--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Irshad Ahmad Bhat, Vandan Mujadia, Aniruddha Tammewar, Riyaz Ahmad Bhat, and Manish Shrivastava. 2014. Iiit-h system submission for fire2014 shared task on transliterated search. In Proceedings of the Forum for Information Retrieval Evaluation, FIRE '14, page 48-53, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Code-mixing, language variation, and linguistic theory:: Evidence from bantu languages",
"authors": [
{
"first": "G",
"middle": [],
"last": "Eyamba",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bokamba",
"suffix": ""
}
],
"year": 1988,
"venue": "Lingua",
"volume": "76",
"issue": "1",
"pages": "21--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eyamba G. Bokamba. 1988. Code-mixing, language variation, and linguistic theory:: Evidence from bantu languages. Lingua, 76(1):21 -62.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "FiSSA at SemEval-2020 Task 9: Fine-tuned for feelings",
"authors": [
{
"first": "Bertelt",
"middle": [],
"last": "Braaksma",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Scholtens",
"suffix": ""
},
{
"first": "Stan",
"middle": [],
"last": "Van Suijlekom",
"suffix": ""
},
{
"first": "Remy",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ahmet\u00fcst\u00fcn",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bertelt Braaksma, Richard Scholtens, Stan van Suijlekom, Remy Wang, and Ahmet\u00dcst\u00fcn. 2020. FiSSA at SemEval-2020 Task 9: Fine-tuned for feelings. In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, December. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Word-level language identification using CRF: Code-switching shared task report of MSR India system",
"authors": [
{
"first": "Gokul",
"middle": [],
"last": "Chittaranjan",
"suffix": ""
},
{
"first": "Yogarshi",
"middle": [],
"last": "Vyas",
"suffix": ""
},
{
"first": "Kalika",
"middle": [],
"last": "Bali",
"suffix": ""
},
{
"first": "Monojit",
"middle": [],
"last": "Choudhury",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the First Workshop on Computational Approaches to Code Switching",
"volume": "",
"issue": "",
"pages": "73--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gokul Chittaranjan, Yogarshi Vyas, Kalika Bali, and Monojit Choudhury. 2014. Word-level language identifica- tion using CRF: Code-switching shared task report of MSR India system. In Proceedings of the First Workshop on Computational Approaches to Code Switching, pages 73-79, Doha, Qatar, October. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Overview of FIRE 2014 track on transliterated search",
"authors": [
{
"first": "Monojit",
"middle": [],
"last": "Choudhury",
"suffix": ""
},
{
"first": "Gokul",
"middle": [],
"last": "Chittaranjan",
"suffix": ""
},
{
"first": "Parth",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Amitava",
"middle": [],
"last": "Das",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of FIRE 2014. FIRE",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Monojit Choudhury, Gokul Chittaranjan, Parth Gupta, and Amitava Das. 2014. Overview of FIRE 2014 track on transliterated search. In Proceedings of FIRE 2014. FIRE, December.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Unsupervised crosslingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.02116"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross- lingual representation learning at scale. arXiv preprint arXiv:1911.02116.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Identifying languages at the word level in code-mixed Indian social media text",
"authors": [
{
"first": "Amitava",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Gamb\u00e4ck",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 11th International Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "378--387",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amitava Das and Bj\u00f6rn Gamb\u00e4ck. 2014. Identifying languages at the word level in code-mixed Indian social media text. In Proceedings of the 11th International Conference on Natural Language Processing, pages 378- 387, Goa, India, December. NLP Association of India.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirec- tional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Deep Learning Brasil -NLP at SemEval-2020 Task 9: Sentiment analysis of codemixed tweets using ensemble of language models",
"authors": [
{
"first": "Manoel",
"middle": [],
"last": "Verissimo",
"suffix": ""
},
{
"first": "Santos",
"middle": [],
"last": "Neto",
"suffix": ""
},
{
"first": "Ayrton",
"middle": [],
"last": "Denner Da",
"suffix": ""
},
{
"first": "Silva",
"middle": [],
"last": "Amaral",
"suffix": ""
},
{
"first": "F F Da",
"middle": [],
"last": "N\u00e1dia",
"suffix": ""
},
{
"first": "Anderson",
"middle": [],
"last": "Silva",
"suffix": ""
},
{
"first": "Silva",
"middle": [],
"last": "Da",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Soares",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manoel Verissimo dos Santos Neto, Ayrton Denner da Silva Amaral, N\u00e1dia F F da Silva, and Anderson da Silva Soares. 2020. Deep Learning Brasil -NLP at SemEval-2020 Task 9: Sentiment analysis of code- mixed tweets using ensemble of language models. In Proceedings of the 14th International Workshop on Se- mantic Evaluation (SemEval-2020), Barcelona, Spain, December. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Phonological factors in social media writing",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Workshop on Language Analysis in Social Media",
"volume": "",
"issue": "",
"pages": "11--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Eisenstein. 2013. Phonological factors in social media writing. In Proceedings of the Workshop on Lan- guage Analysis in Social Media, pages 11-19, Atlanta, Georgia, June. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Comparing the level of code-switching in corpora",
"authors": [
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Gamb\u00e4ck",
"suffix": ""
},
{
"first": "Amitava",
"middle": [],
"last": "Das",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "1850--1855",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bj\u00f6rn Gamb\u00e4ck and Amitava Das. 2016. Comparing the level of code-switching in corpora. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1850-1855, Portoro\u017e, Slovenia, May. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "JUNLP at SemEval-2020 Task 9: Sentiment analysis of hindi-english code mixed data using grid search cross validation",
"authors": [
{
"first": "Avishek",
"middle": [],
"last": "Garain",
"suffix": ""
},
{
"first": "Sainik",
"middle": [],
"last": "Kumar Mahata",
"suffix": ""
},
{
"first": "Dipankar",
"middle": [],
"last": "Das",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Avishek Garain, Sainik Kumar Mahata, and Dipankar Das. 2020. JUNLP at SemEval-2020 Task 9: Sentiment analysis of hindi-english code mixed data using grid search cross validation. In Proceedings of the 14th In- ternational Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, December. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Sentiment analysis leveraging emotions and word embeddings",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Giatsoglou",
"suffix": ""
},
{
"first": "Manolis",
"middle": [
"G."
],
"last": "Vozalis",
"suffix": ""
},
{
"first": "Konstantinos",
"middle": [],
"last": "Diamantaras",
"suffix": ""
},
{
"first": "Athena",
"middle": [],
"last": "Vakali",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Sarigiannidis",
"suffix": ""
},
{
"first": "Konstantinos",
"middle": [
"Ch."
],
"last": "Chatzisavvas",
"suffix": ""
}
],
"year": 2017,
"venue": "Expert Systems with Applications",
"volume": "69",
"issue": "",
"pages": "214--224",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Giatsoglou, Manolis G. Vozalis, Konstantinos Diamantaras, Athena Vakali, George Sarigiannidis, and Kon- stantinos Ch. Chatzisavvas. 2017. Sentiment analysis leveraging emotions and word embeddings. Expert Systems with Applications, 69:214 -224.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Reed at SemEval-2020 Task 9: Fine-tuning and bag-of-words approaches to code-mixed sentiment analysis",
"authors": [
{
"first": "Vinay",
"middle": [],
"last": "Gopalan",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Hopkins",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vinay Gopalan and Mark Hopkins. 2020. Reed at SemEval-2020 Task 9: Fine-tuning and bag-of-words ap- proaches to code-mixed sentiment analysis. In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, December. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "ULD@NUIG at SemEval-2020 Task 9: Generative morphemes with an attention model for sentiment analysis in code-mixed text",
"authors": [
{
"first": "Koustava",
"middle": [],
"last": "Goswami",
"suffix": ""
},
{
"first": "Priya",
"middle": [],
"last": "Rani",
"suffix": ""
},
{
"first": "Bharathi",
"middle": [
"Raja"
],
"last": "Chakravarthi",
"suffix": ""
},
{
"first": "Theodorus",
"middle": [],
"last": "Fransen",
"suffix": ""
},
{
"first": "John",
"middle": [
"P"
],
"last": "McCrae",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koustava Goswami, Priya Rani, Bharathi Raja Chakravarthi, Theodorus Fransen, and John P McCrae. 2020. ULD@NUIG at SemEval-2020 Task 9: Generative morphemes with an attention model for sentiment analysis in code-mixed text. In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval- 2020), Barcelona, Spain, December. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "2020. gundapusunil at SemEval-2020 Task 9: Syntactic semantic lstm architecture for sentiment analysis of code-mixed data",
"authors": [
{
"first": "Sunil",
"middle": [],
"last": "Gundapu",
"suffix": ""
},
{
"first": "Radhika",
"middle": [],
"last": "Mamidi",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sunil Gundapu and Radhika Mamidi. 2020. gundapusunil at SemEval-2020 Task 9: Syntactic semantic lstm architecture for sentiment analysis of code-mixed data. In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, December. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Universal language model fine-tuning for text classification",
"authors": [
{
"first": "Jeremy",
"middle": [],
"last": "Howard",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1801.06146"
]
},
"num": null,
"urls": [],
"raw_text": "Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "IUST at SemEval-2020 Task 9: Sentiment analysis for code-mixed social media text using deep neural networks and linear baselines",
"authors": [
{
"first": "Soroush",
"middle": [],
"last": "Javdan",
"suffix": ""
},
{
"first": "Taha",
"middle": [
"Shangipour"
],
"last": "ataei",
"suffix": ""
},
{
"first": "Behrouz",
"middle": [],
"last": "Minaei-Bidgoli",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soroush Javdan, Taha Shangipour ataei, and Behrouz Minaei-Bidgoli. 2020. IUST at SemEval-2020 Task 9: Sen- timent analysis for code-mixed social media text using deep neural networks and linear baselines. In Proceed- ings of the 14th International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, December. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "HPCC-YNU at SemEval-2020 Task 9: A bilingual vector gating mechanism for sentiment analysis of code-mixed text",
"authors": [
{
"first": "Jun",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Jin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xuejie",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun Kong, Jin Wang, and Xuejie Zhang. 2020. HPCC-YNU at SemEval-2020 Task 9: A bilingual vector gating mechanism for sentiment analysis of code-mixed text. In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, December. Association for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "BAKSA at SemEval-2020 Task 9: Bolstering cnn with self-attention for sentiment analysis of code mixed text",
"authors": [
{
"first": "Ayush",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Harsh",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "Keshav",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Ashutosh",
"middle": [],
"last": "Modi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ayush Kumar, Harsh Agarwal, Keshav Bansal, and Ashutosh Modi. 2020. BAKSA at SemEval-2020 Task 9: Bolstering cnn with self-attention for sentiment analysis of code mixed text. In Proceedings of the 14th In- ternational Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, December. Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Cross-lingual language model pretraining",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1901.07291"
]
},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample and Alexis Conneau. 2019. Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "CS-Embed at SemEval-2020 Task 9: The effectiveness of code-switched word embeddings for sentiment analysis",
"authors": [
{
"first": "Frances",
"middle": [
"A"
],
"last": "Laureano De Leon",
"suffix": ""
},
{
"first": "Florimond",
"middle": [],
"last": "Gu\u00e9niat",
"suffix": ""
},
{
"first": "Harish Tayyar",
"middle": [],
"last": "Madabushi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frances A. Laureano De Leon, Florimond Gu\u00e9niat, and Harish Tayyar Madabushi. 2020. CS-Embed at SemEval- 2020 Task 9: The effectiveness of code-switched word embeddings for sentiment analysis. In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, December. Association for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "2020. kk2018 at SemEval-2020 Task 9: Adversarial training for code-mixing sentiment classification",
"authors": [
{
"first": "Jiaxiang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xuyi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Shikun",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Shuohuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xuan",
"middle": [],
"last": "Ouyang",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Zhengjie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Weiyue",
"middle": [],
"last": "Su",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiaxiang Liu, Xuyi Chen, Shikun Feng, Shuohuan Wang, Xuan Ouyang, Yu Sun, Zhengjie Huang, and Weiyue Su. 2020. kk2018 at SemEval-2020 Task 9: Adversarial training for code-mixing sentiment classification. In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, December. Association for Computational Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Decoupled weight decay regularization",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Loshchilov",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Hutter",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In International Conference on Learning Representations.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "XLP at SemEval-2020 Task 9: Cross-lingual models with focal loss for sentiment analysis of code-mixing language",
"authors": [
{
"first": "Yili",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Hao",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yili Ma, Liang Zhao, and Jie Hao. 2020. XLP at SemEval-2020 Task 9: Cross-lingual models with focal loss for sentiment analysis of code-mixing language. In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, December. Association for Computational Linguistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Team Swift at SemEval-2020 Task 9: Tiny data specialists through domain-specific pre-training on code-mixed data",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Malte",
"suffix": ""
},
{
"first": "Pratik",
"middle": [],
"last": "Bhavsar",
"suffix": ""
},
{
"first": "Sushant",
"middle": [],
"last": "Rathi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aditya Malte, Pratik Bhavsar, and Sushant Rathi. 2020. Team Swift at SemEval-2020 Task 9: Tiny data spe- cialists through domain-specific pre-training on code-mixed data. In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, December. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Virtual adversarial training for semi-supervised text classification",
"authors": [
{
"first": "Takeru",
"middle": [],
"last": "Miyato",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"M"
],
"last": "Dai",
"suffix": ""
},
{
"first": "Ian",
"middle": [
"J"
],
"last": "Goodfellow",
"suffix": ""
}
],
"year": 2016,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takeru Miyato, Andrew M. Dai, and Ian J. Goodfellow. 2016. Virtual adversarial training for semi-supervised text classification. ArXiv, abs/1605.07725.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "NRC-canada: Building the state-of-the-art in sentiment analysis of tweets",
"authors": [
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventh International Workshop on Semantic Evaluation",
"volume": "2",
"issue": "",
"pages": "321--327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif Mohammad, Svetlana Kiritchenko, and Xiaodan Zhu. 2013. NRC-canada: Building the state-of-the-art in sentiment analysis of tweets. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 321-327, Atlanta, Georgia, USA, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Overview for the second shared task on language identification in code-switched data",
"authors": [
{
"first": "Giovanni",
"middle": [],
"last": "Molina",
"suffix": ""
},
{
"first": "Fahad",
"middle": [],
"last": "Alghamdi",
"suffix": ""
},
{
"first": "Mahmoud",
"middle": [],
"last": "Ghoneim",
"suffix": ""
},
{
"first": "Abdelati",
"middle": [],
"last": "Hawwari",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Rey-Villamizar",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Thamar",
"middle": [],
"last": "Solorio",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Second Workshop on Computational Approaches to Code Switching",
"volume": "",
"issue": "",
"pages": "40--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Giovanni Molina, Fahad AlGhamdi, Mahmoud Ghoneim, Abdelati Hawwari, Nicolas Rey-Villamizar, Mona Diab, and Thamar Solorio. 2016. Overview for the second shared task on language identification in code-switched data. In Proceedings of the Second Workshop on Computational Approaches to Code Switching, pages 40-49, Austin, Texas, November. Association for Computational Linguistics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Palomino-Ochoa at SemEval-2020 Task 9: Robust system based on transformer for code-mixed sentiment classification",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Palomino",
"suffix": ""
},
{
"first": "Jos\u00e9",
"middle": [],
"last": "Ochoa-Luna",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Palomino and Jos\u00e9 Ochoa-Luna. 2020. Palomino-Ochoa at SemEval-2020 Task 9: Robust system based on transformer for code-mixed sentiment classification. In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, December. Association for Computational Linguistics.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "IRlab DAIICT at SemEval-2020 Task 9: Machine learning and deep learning methods for sentiment analysis of code-mixed tweets",
"authors": [
{
"first": "Apurva",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Abhimanyu",
"middle": [],
"last": "Singh Bisht",
"suffix": ""
},
{
"first": "Prasenjit",
"middle": [],
"last": "Majumder",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Apurva Parikh, Abhimanyu Singh Bisht, and Prasenjit Majumder. 2020. IRlab DAIICT at SemEval-2020 Task 9: Machine learning and deep learning methods for sentiment analysis of code-mixed tweets. In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, December. Association for Computational Linguistics.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Sentiment analysis of code-mixed indian languages: An overview of sail code-mixed shared task @icon-2017",
"authors": [
{
"first": "Dipankar",
"middle": [],
"last": "Braja Gopal Patra",
"suffix": ""
},
{
"first": "Amitava",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Das",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Braja Gopal Patra, Dipankar Das, and Amitava Das. 2018. Sentiment analysis of code-mixed indian languages: An overview of sail code-mixed shared task @icon-2017. CoRR, abs/1803.06745.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "How multilingual is multilingual BERT?",
"authors": [
{
"first": "Telmo",
"middle": [],
"last": "Pires",
"suffix": ""
},
{
"first": "Eva",
"middle": [],
"last": "Schlinger",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Garrette",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4996--5001",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996-5001, Florence, Italy, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Overview and datasets of FIRE 2013 track on transliterated search",
"authors": [
{
"first": "Rishiraj",
"middle": [],
"last": "Saharoy",
"suffix": ""
},
{
"first": "Monojit",
"middle": [],
"last": "Choudhury",
"suffix": ""
},
{
"first": "Prasenjit",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "Komal",
"middle": [],
"last": "Agarwal",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of FIRE 2013. FIRE",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rishiraj SahaRoy, Monojit Choudhury, Prasenjit Majumder, and Komal Agarwal. 2013. Overview and datasets of FIRE 2013 track on transliterated search. In Proceedings of FIRE 2013. FIRE, December.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Half of messages on Twitter aren't in English",
"authors": [
{
"first": "Stan",
"middle": [],
"last": "Schroeder",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stan Schroeder. 2010. Half of messages on Twitter aren't in English [STATS], February. http://mashable.com/2010/02/24/half-messages-twitter-english/.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Overview of FIRE-2015 shared task on mixed script information retrieval",
"authors": [
{
"first": "Royal",
"middle": [],
"last": "Sequiera",
"suffix": ""
},
{
"first": "Monojit",
"middle": [],
"last": "Choudhury",
"suffix": ""
},
{
"first": "Parth",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
},
{
"first": "Shubham",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Somnath",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Sudip",
"middle": [],
"last": "Kumar Naskar",
"suffix": ""
},
{
"first": "Sivaji",
"middle": [],
"last": "Bandyopadhyay",
"suffix": ""
},
{
"first": "Gokul",
"middle": [],
"last": "Chittaranjan",
"suffix": ""
},
{
"first": "Amitava",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Kunal",
"middle": [],
"last": "Chakma",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of FIRE 2015. FIRE",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Royal Sequiera, Monojit Choudhury, Parth Gupta, Paolo Rosso, Shubham Kumar, Somnath Banerjee, Sudip Ku- mar Naskar, Sivaji Bandyopadhyay, Gokul Chittaranjan, Amitava Das, and Kunal Chakma. 2015. Overview of FIRE-2015 shared task on mixed script information retrieval. In Proceedings of FIRE 2015. FIRE, December.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Text normalization of code mix and sentiment analysis",
"authors": [
{
"first": "S",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Srinivas",
"suffix": ""
},
{
"first": "R",
"middle": [
"C"
],
"last": "Balabantaray",
"suffix": ""
}
],
"year": 2015,
"venue": "2015 International Conference on Advances in Computing, Communications and Informatics (ICACCI)",
"volume": "",
"issue": "",
"pages": "1468--1473",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Sharma, P. Srinivas, and R. C. Balabantaray. 2015. Text normalization of code mix and sentiment analysis. In 2015 International Conference on Advances in Computing, Communications and Informatics (ICACCI), pages 1468-1473.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Shallow parsing pipeline -Hindi-English code-mixed social media text",
"authors": [
{
"first": "Arnav",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Sakshi",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Raveesh",
"middle": [],
"last": "Motlani",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Manish",
"middle": [],
"last": "Shrivastava",
"suffix": ""
},
{
"first": "Radhika",
"middle": [],
"last": "Mamidi",
"suffix": ""
},
{
"first": "Dipti",
"middle": [
"M"
],
"last": "Sharma",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1340--1345",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arnav Sharma, Sakshi Gupta, Raveesh Motlani, Piyush Bansal, Manish Shrivastava, Radhika Mamidi, and Dipti M. Sharma. 2016. Shallow parsing pipeline -Hindi-English code-mixed social media text. In Proceed- ings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1340-1345, San Diego, California, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Cross-lingual embeddings for sentiment analysis of hinglish social media text",
"authors": [],
"year": null,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "9",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pranaydeep Singh and Els Lefever. 2020. LT3 at SemEval-2020 Task 9: Cross-lingual embeddings for senti- ment analysis of hinglish social media text. In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, December. Association for Computational Linguistics.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Voice@SRIB at SemEval-2020 Task 9 and 12: Stacked ensembling method for sentiment and offensiveness detection in social media",
"authors": [
{
"first": "Abhishek",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Surya Pratap Singh",
"middle": [],
"last": "Parmar",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhishek Singh and Surya Pratap Singh Parmar. 2020. Voice@SRIB at SemEval-2020 Task 9 and 12: Stacked ensembling method for sentiment and offensiveness detection in social media. In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, December. Association for Computational Linguistics.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Grammatical constraints on code-mixing: Evidence from Hindi-English. Canadian Journal of Linguistics/Revue canadienne de linguistique",
"authors": [
{
"first": "Rajendra",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 1985,
"venue": "",
"volume": "30",
"issue": "",
"pages": "33--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rajendra Singh. 1985. Grammatical constraints on code-mixing: Evidence from Hindi-English. Canadian Jour- nal of Linguistics/Revue canadienne de linguistique, 30(1):33-45.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "A survey of code-switched speech and language processing",
"authors": [
{
"first": "Sunayana",
"middle": [],
"last": "Sitaram",
"suffix": ""
},
{
"first": "Khyathi",
"middle": [
"Raghavi"
],
"last": "Chandu",
"suffix": ""
},
{
"first": "Sai",
"middle": [
"Krishna"
],
"last": "Rallabandi",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.00784"
]
},
"num": null,
"urls": [],
"raw_text": "Sunayana Sitaram, Khyathi Raghavi Chandu, Sai Krishna Rallabandi, and Alan W Black. 2019. A survey of code-switched speech and language processing. arXiv preprint arXiv:1904.00784.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Learning to predict code-switching points",
"authors": [
{
"first": "Thamar",
"middle": [],
"last": "Solorio",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2008,
"venue": "Empirical Methods on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "973--981",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thamar Solorio and Yang Liu. 2008. Learning to predict code-switching points. In Empirical Methods on Nat- ural Language Processing, EMNLP-2008, pages 973-981, Honolulu, Hawaii. Association for Computational Linguistics.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Overview for the first shared task on language identification in code-switched data",
"authors": [
{
"first": "Thamar",
"middle": [],
"last": "Solorio",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Blair",
"suffix": ""
},
{
"first": "Suraj",
"middle": [],
"last": "Maharjan",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Mahmoud",
"middle": [],
"last": "Ghoneim",
"suffix": ""
},
{
"first": "Abdelati",
"middle": [],
"last": "Hawwari",
"suffix": ""
},
{
"first": "Fahad",
"middle": [],
"last": "Alghamdi",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hirschberg",
"suffix": ""
},
{
"first": "Alison",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the First Workshop on Computational Approaches to Code Switching",
"volume": "",
"issue": "",
"pages": "62--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thamar Solorio, Elizabeth Blair, Suraj Maharjan, Steven Bethard, Mona Diab, Mahmoud Ghoneim, Abdelati Hawwari, Fahad AlGhamdi, Julia Hirschberg, Alison Chang, and Pascale Fung. 2014a. Overview for the first shared task on language identification in code-switched data. In Proceedings of the First Workshop on Com- putational Approaches to Code Switching, pages 62-72, Doha, Qatar, October. Association for Computational Linguistics.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Overview for the first shared task on language identification in code-switched data",
"authors": [
{
"first": "Thamar",
"middle": [],
"last": "Solorio",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Blair",
"suffix": ""
},
{
"first": "Suraj",
"middle": [],
"last": "Maharjan",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Mahmoud",
"middle": [],
"last": "Ghoneim",
"suffix": ""
},
{
"first": "Abdelati",
"middle": [],
"last": "Hawwari",
"suffix": ""
},
{
"first": "Fahad",
"middle": [],
"last": "Alghamdi",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hirschberg",
"suffix": ""
},
{
"first": "Alison",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the First Workshop on Computational Approaches to Code Switching",
"volume": "",
"issue": "",
"pages": "62--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thamar Solorio, Elizabeth Blair, Suraj Maharjan, Steven Bethard, Mona Diab, Mahmoud Ghoneim, Abdelati Hawwari, Fahad AlGhamdi, Julia Hirschberg, Alison Chang, and Pascale Fung. 2014b. Overview for the first shared task on language identification in code-switched data. In Proceedings of the First Workshop on Computational Approaches to Code Switching, pages 62-72. ACL.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "MSR India at SemEval-2020 Task 9: Multilingual models can do code-mixing too",
"authors": [
{
"first": "Anirudh",
"middle": [],
"last": "Srinivasan",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anirudh Srinivasan. 2020. MSR India at SemEval-2020 Task 9: Multilingual models can do code-mixing too. In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, December. Association for Computational Linguistics.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "IIT Gandhinagar at SemEval-2020 Task 9: Code-mixed sentiment classification using candidate sentence generation and selection",
"authors": [
{
"first": "Vivek",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Mayank",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vivek Srivastava and Mayank Singh. 2020. IIT Gandhinagar at SemEval-2020 Task 9: Code-mixed sentiment classification using candidate sentence generation and selection. In Proceedings of the 14th International Work- shop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, December. Association for Computational Linguistics.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "HCMS at SemEval-2020 Task 9: A neural approach to sentiment analysis for code-mixed texts",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Harsha Vardhan",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aditya Srivastava and V.Harsha Vardhan. 2020. HCMS at SemEval-2020 Task 9: A neural approach to sentiment analysis for code-mixed texts. In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, December. Association for Computational Linguistics.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "WESSA at SemEval-2020 Task 9: Code-mixed sentiment analysis using transformers",
"authors": [
{
"first": "Ahmed",
"middle": [],
"last": "Sultan",
"suffix": ""
},
{
"first": "Mahmoud",
"middle": [],
"last": "Salim",
"suffix": ""
},
{
"first": "Amina",
"middle": [],
"last": "Gaber",
"suffix": ""
},
{
"first": "Islam",
"middle": [
"El"
],
"last": "Hosary",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ahmed Sultan, Mahmoud Salim, Amina Gaber, and Islam El Hosary. 2020. WESSA at SemEval-2020 Task 9: Code-mixed sentiment analysis using transformers. In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, December. Association for Computational Linguistics.",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "Sentiment strength detection in short informal text",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Thelwall",
"suffix": ""
},
{
"first": "Kevan",
"middle": [],
"last": "Buckley",
"suffix": ""
},
{
"first": "Georgios",
"middle": [],
"last": "Paltoglou",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Arvid",
"middle": [],
"last": "Kappas",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of the American society for information science and technology",
"volume": "61",
"issue": "12",
"pages": "2544--2558",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Thelwall, Kevan Buckley, Georgios Paltoglou, Di Cai, and Arvid Kappas. 2010. Sentiment strength detection in short informal text. Journal of the American society for information science and technology, 61(12):2544-2558.",
"links": null
},
"BIBREF65": {
"ref_id": "b65",
"title": "Code-switching: Hindi-english",
"authors": [
{
"first": "S",
"middle": [
"K"
],
"last": "Verma",
"suffix": ""
}
],
"year": 1976,
"venue": "Lingua",
"volume": "38",
"issue": "2",
"pages": "153--165",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S.K. Verma. 1976. Code-switching: Hindi-english. Lingua, 38(2):153 -165.",
"links": null
},
"BIBREF66": {
"ref_id": "b66",
"title": "Sentiment analysis on monolingual, multilingual and code-switching twitter corpora",
"authors": [
{
"first": "David",
"middle": [],
"last": "Vilares",
"suffix": ""
},
{
"first": "Miguel",
"middle": [
"A"
],
"last": "Alonso",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "G\u00f3mez-Rodr\u00edguez",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 6th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "2--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Vilares, Miguel A. Alonso, and Carlos G\u00f3mez-Rodr\u00edguez. 2015. Sentiment analysis on monolingual, mul- tilingual and code-switching twitter corpora. In Proceedings of the 6th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 2-8, Lisboa, Portugal, September. Association for Computational Linguistics.",
"links": null
},
"BIBREF67": {
"ref_id": "b67",
"title": "Huggingface's transformers: State-of-theart natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R'emi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R'emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers: State-of-the- art natural language processing. ArXiv, abs/1910.03771.",
"links": null
},
"BIBREF68": {
"ref_id": "b68",
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Klingner",
"suffix": ""
},
{
"first": "Apurva",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Xiaobing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Yoshikiyo",
"middle": [],
"last": "Kato",
"suffix": ""
},
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "Hideto",
"middle": [],
"last": "Kazawa",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Stevens",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Kurian",
"suffix": ""
},
{
"first": "Nishant",
"middle": [],
"last": "Patil",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2016,
"venue": "Oriol Vinyals",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144.",
"links": null
},
"BIBREF69": {
"ref_id": "b69",
"title": "MeisterMorxrc at SemEval-2020 Task 9: Fine-tune bert and multitask learning for sentiment analysis of code-mixed tweets",
"authors": [
{
"first": "Qi",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chenghao",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qi Wu, Peng Wang, and Chenghao Huang. 2020. MeisterMorxrc at SemEval-2020 Task 9: Fine-tune bert and multitask learning for sentiment analysis of code-mixed tweets. In Proceedings of the 14th International Work- shop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, December. Association for Computational Linguistics.",
"links": null
},
"BIBREF70": {
"ref_id": "b70",
"title": "The digital language divide. The Guardian. Retrieved",
"authors": [
{
"first": "Holly",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Holly Young. 2020. The digital language divide. The Guardian. Retrieved July 28, 2020. http://labs.theguardian.com/digital-language-divide/.",
"links": null
},
"BIBREF71": {
"ref_id": "b71",
"title": "UPB at SemEval-2020 Task 9: Identifying sentiment in code-mixed social media texts using transformers and multi-task learning",
"authors": [
{
"first": "George-Eduard",
"middle": [],
"last": "Zaharia",
"suffix": ""
},
{
"first": "George-Alexandru",
"middle": [],
"last": "Vlad",
"suffix": ""
},
{
"first": "Dumitru-Clementin",
"middle": [],
"last": "Cercel",
"suffix": ""
},
{
"first": "Traian",
"middle": [],
"last": "Rebedea",
"suffix": ""
},
{
"first": "Costin-Gabriel",
"middle": [],
"last": "Chiru",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George-Eduard Zaharia, George-Alexandru Vlad, Dumitru-Clementin Cercel, Traian Rebedea, and Costin-Gabriel Chiru. 2020. UPB at SemEval-2020 Task 9: Identifying sentiment in code-mixed social media texts using trans- formers and multi-task learning. In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, December. Association for Computational Linguistics.",
"links": null
},
"BIBREF72": {
"ref_id": "b72",
"title": "Zyy1510 team at SemEval-2020 Task 9: Sentiment analysis for code-mixed social media text with sub-word level representations",
"authors": [
{
"first": "Yueying",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Xiaobing",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Hongling",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Kunjie",
"middle": [],
"last": "Dong",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yueying Zhu, Xiaobing Zhou, Hongling Li, and Kunjie Dong. 2020. Zyy1510 team at SemEval-2020 Task 9: Sentiment analysis for code-mixed social media text with sub-word level representations. In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, December. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"type_str": "figure",
"text": "A screenshot of the Spanglish annotation interface.",
"num": null,
"uris": null
},
"TABREF1": {
"html": null,
"content": "<table><tr><td colspan=\"2\">Language Tweet</td><td>Class</td></tr><tr><td>Spanglish</td><td colspan=\"2\">@username Negative</td></tr><tr><td>Spanglish</td><td>Tengo lang2 hungry lang1 mhm unk (I'm hungry mhm)</td><td>Neutral</td></tr><tr><td/><td>Congratulations ENG</td><td/></tr><tr><td>Hinglish</td><td/><td/></tr></table>",
"text": "optimizer (\u03b7 = 2e \u22125 ). other ha lang2 pos ambiguous have lang1 fun lang1 its lang1 pretty lang1 te lang2 subes lang2 al lang2 horse lang1 its lang1 cute lang1 lol lang1 (@username ah, then have fun, it's pretty, you ride the horse, it's cute lol)Positive Spanglish Cuando lang2 Mis lang2 parents lang1 me lang2 dejan lang2 ir lang2 el lang2 date lang1 me ambiguous Keda lang2 Mal lang2 / other . otherother No lang2 MAMEN lang2 (When my parents let me go, my date is cancelled / . -You're kidding me!) Sir ENG we ENG proud ENG of ENG you ENG .. O Aap HIN pr HIN pura HIN jakeen HIN hai HIN .. O aap HIN bohat HIN achaa HIN n home HIN minister ENG Honga HIN .. O ) O (Congratulations sir we are proud of you.. We believe in you.. You will be a very good home minister.. ) Positive Hinglsih Hostelite ENG k ENG naam HIN pe HIN dhabba HIN ho HIN tum HIN (you are a blot on the name of a hostelite) Negative Hinglish Warm ENG up ENG match ENG to ENG theek HIN thaak HIN chal HIN ra HIN hai",
"type_str": "table",
"num": null
},
"TABREF2": {
"html": null,
"content": "<table><tr><td colspan=\"2\">Rank System</td><td/><td>Positive</td><td/><td/><td>Neutral</td><td/><td/><td>Negative</td><td>Avg.</td></tr><tr><td/><td/><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>F1</td></tr><tr><td>1</td><td>KK2018</td><td colspan=\"3\">84.3 76.0 79.9</td><td colspan=\"3\">65.2 73.1 68.9</td><td colspan=\"3\">78.5 75.4 76.9</td><td>75.0</td></tr><tr><td>2</td><td>Genius1237</td><td colspan=\"3\">81.0 77.8 79.3</td><td colspan=\"3\">65.7 64.3 65.0</td><td colspan=\"3\">72.0 77.0 74.4</td><td>72.6</td></tr><tr><td>3</td><td>olenet</td><td colspan=\"3\">78.2 74.4 76.2</td><td colspan=\"3\">62.8 65.3 64.0</td><td colspan=\"3\">75.2 75.7 75.5</td><td>71.5</td></tr><tr><td>4</td><td>gopalanvinay</td><td colspan=\"3\">80.7 74.6 77.5</td><td colspan=\"3\">61.4 67.5 64.3</td><td colspan=\"3\">74.5 71.6 73.0</td><td>71.3</td></tr><tr><td>5</td><td>ayushk</td><td colspan=\"3\">78.8 73.8 76.2</td><td colspan=\"3\">60.9 67.5 64.0</td><td colspan=\"3\">75.3 70.6 72.9</td><td>70.7</td></tr><tr><td>6</td><td>Taha</td><td colspan=\"3\">78.6 72.8 75.6</td><td colspan=\"3\">60.6 70.1 65.0</td><td colspan=\"3\">76.2 67.9 71.8</td><td>70.6</td></tr><tr><td>7</td><td>Miriam</td><td colspan=\"3\">78.0 77.3 77.6</td><td colspan=\"3\">62.6 60.1 61.3</td><td colspan=\"3\">70.7 74.9 72.7</td><td>70.2</td></tr><tr><td>8</td><td colspan=\"4\">HugoLerogeron 79.2 74.7 76.9</td><td colspan=\"3\">60.3 63.9 62.1</td><td colspan=\"3\">70.6 70.0 70.3</td><td>69.5</td></tr><tr><td>9</td><td>somban</td><td colspan=\"3\">78.6 72.9 75.6</td><td colspan=\"3\">59.4 65.0 62.1</td><td colspan=\"3\">71.8 69.3 70.5</td><td>69.1</td></tr><tr><td>10</td><td>aditya malte</td><td colspan=\"3\">80.3 69.0 74.2</td><td colspan=\"3\">57.0 73.5 64.2</td><td colspan=\"3\">77.3 62.2 69.0</td><td>69.0</td></tr><tr><td>11</td><td colspan=\"4\">MeisterMorxrc 79.9 70.1 74.7</td><td colspan=\"3\">59.5 65.0 62.1</td><td colspan=\"3\">70.2 71.9 71.0</td><td>69.0</td></tr><tr><td>12</td><td>nirantk</td><td colspan=\"3\">78.9 70.8 74.6</td><td colspan=\"3\">58.3 67.4 62.5</td><td colspan=\"3\">73.2 67.6 70.2</td><td>68.9</td></tr><tr><td>13</td><td>apurva19</td><td colspan=\"3\">78.8 75.8 77.3</td><td colspan=\"3\">61.2 60.8 61.0</td><td colspan=\"3\">67.4 70.8 69.1</td><td>68.8</td></tr><tr><td>14</td><td>c1pher</td><td colspan=\"3\">79.7 69.7 74.4</td><td colspan=\"3\">56.5 73.5 63.9</td><td colspan=\"3\">78.3 60.7 68.4</td><td>68.7</td></tr><tr><td>15</td><td>will go</td><td colspan=\"3\">77.2 70.5 73.7</td><td colspan=\"3\">57.8 70.2 63.4</td><td colspan=\"3\">75.9 63.4 69.1</td><td>68.6</td></tr><tr><td>45</td><td>Baseline</td><td colspan=\"3\">72.8 68.8 70.7</td><td colspan=\"3\">56.2 60.2 58.1</td><td colspan=\"3\">69.1 67.4 68.3</td><td>65.4</td></tr></table>",
"text": "Table 4show top 15 participants 13 for Hinglish and Spanglish respectively. For Hinglish the top 15 participants lie between 75% and 68.6% F1 score. The participants in the middle of the table are quite close to each other. 44 participants beat the baseline whereas 17 could not. For Spanglish, the top 15 F1 scores lie between 80.6% and 71.0% and most are in mid 70s. 22 teams were able to beat the baseline whereas 6 could not. The results are much better for positive than for other two classes due to the data imbalance.",
"type_str": "table",
"num": null
},
"TABREF3": {
"html": null,
"content": "<table/>",
"text": "Top 15 Results for the Hinglish dataset. The systems are ordered by the Weighted Average F1",
"type_str": "table",
"num": null
}
}
}
}
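The "Avg. F1" column in the results table above is a support-weighted average of the per-class F1 scores, and systems are ranked by it. Below is a minimal sketch, assuming scikit-learn is available, of how such per-class and weighted-average scores can be computed; the example gold/predicted labels are illustrative only and are not taken from the task's official scorer.

```python
# Hedged sketch: per-class P/R/F1 and the weighted-average F1 used for ranking.
# The example data and label names are hypothetical, not from the official scorer.
from sklearn.metrics import precision_recall_fscore_support, f1_score

# Hypothetical gold and predicted sentence-level sentiment labels.
gold = ["positive", "negative", "neutral", "positive", "neutral"]
pred = ["positive", "negative", "positive", "positive", "neutral"]

labels = ["positive", "neutral", "negative"]

# Per-class precision, recall, and F1 (the P/R/F1 columns of the results tables).
prec, rec, f1, support = precision_recall_fscore_support(
    gold, pred, labels=labels, zero_division=0
)
for label, p, r, f in zip(labels, prec, rec, f1):
    print(f"{label:>8}: P={p:.3f} R={r:.3f} F1={f:.3f}")

# Weighted-average F1 (the "Avg. F1" column): each class weighted by its support.
print("weighted F1:", f1_score(gold, pred, average="weighted", zero_division=0))
```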