|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:15:24.810949Z" |
|
}, |
|
"title": "XLP at SemEval-2020 Task 9: Cross-lingual Models with Focal Loss for Sentiment Analysis of Code-Mixing Language", |
|
"authors": [ |
|
{ |
|
"first": "Yili", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "OPPO Research Institute", |
|
"location": { |
|
"settlement": "Beijing", |
|
"country": "China" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Liang", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "OPPO Research Institute", |
|
"location": { |
|
"settlement": "Beijing", |
|
"country": "China" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Jie", |
|
"middle": [], |
|
"last": "Hao", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "OPPO Research Institute", |
|
"location": { |
|
"settlement": "Beijing", |
|
"country": "China" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper, we present an approach for sentiment analysis in code-mixed language on twitter defined in SemEval-2020 Task 9. Our team (referred as LiangZhao) employ different multilingual models with weighted loss focused on complexity of code-mixing in sentence, in which the best model achieved f1-score of 0.806 and ranked 1st of subtask-Sentimix Spanglish. The performance of method is analyzed and each component of our architecture is demonstrated.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper, we present an approach for sentiment analysis in code-mixed language on twitter defined in SemEval-2020 Task 9. Our team (referred as LiangZhao) employ different multilingual models with weighted loss focused on complexity of code-mixing in sentence, in which the best model achieved f1-score of 0.806 and ranked 1st of subtask-Sentimix Spanglish. The performance of method is analyzed and each component of our architecture is demonstrated.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Sentiment analysis is in the area of research that perform the automatic comprehension of the subjective information from user-generated data, which helps to gain the views on certain topics. Due to the rise of social media such as micro-blogs (e.g., Twitter) and the trend of global communications, they have accelerated the use of multilingual expressions, raising the concerns on code-mixing behavior (Patwa et al., 2020) . To develop cross-lingual encoders that can encode any sentence into a shared embedding space, by using monolingual transfer learning, multilingual extensions of pretrained (Lample et al., 2019) encoders have been shown effective.", |
|
"cite_spans": [ |
|
{ |
|
"start": 404, |
|
"end": 424, |
|
"text": "(Patwa et al., 2020)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 599, |
|
"end": 620, |
|
"text": "(Lample et al., 2019)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "As for code-mixed text, more complicated than cross-lingual sentence, it is crucial to consider the complexity of texts written in several different languages because different types of integration correlate with different social contexts (Gualberto A. et al., 2016) . Sometimes, the user may post blogs in non-native language with grammar mistakes or even prefer to express the sentiment in the native language. The phenomena has encouraged the researchers to analyze the sentiment from multilingual code-mixed texts. Because Spanish and English share a lot of words with Latin roots, sometimes words with the same origin take a separate path in each language, or words with different origins resemble each other by coincidence, but have different meanings. For example,\u00e9xito from Spanish means success, which resembles exit from English, with different meaning and sentiment. In the task, the number of words in a sentence vary from different languages dramatically. Intuitively, the language that has a bigger presence in the tweet would contain the sentiment of the sentence. To tackle the problem, we adopt the focal loss through calculating the ratio of each language in code-mixing text.", |
|
"cite_spans": [ |
|
{ |
|
"start": 239, |
|
"end": 266, |
|
"text": "(Gualberto A. et al., 2016)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The rest of the paper is structured as follows: Section 2 provides the detailed implementation method. Section 3 presents the results and performance of our models as well as experiment settings. Concluded remarks and future directions of our work are summarized in Section 4.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Normally, deep learning models have a simple data processing pipeline, while in the task data is very messy. Therefore we have used a more detailed method according to characteristic of the code-mixing data (URLs, emoji, hash symbol etc.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation details 2.1 Preprocessing", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "First, user name mentioned and URL are all removed because they are useless for sentiment prediction. Special characters like \"RT\" representing re-tweet is also deleted. Moreover, we also remove the hash Figure 1 : Illustration of our model symbol from hash-tags as it can be problematic for tokenizers to work with. As for non-text symbol like emoji and emoticon, we use the (emoji, 2019) library from python and emoticon dictionary from wiki (List of emoticons, 2020) respectively to transform the symbols to text. Next all characters into lowercase and stop words are removed. Afterwords, we employ fastBPE to generate and apply BPE codes to get post-BPE vocabulary using vocabulary of XLM model for 100 languages including Hindi, Spanish, and English. Sentence size is limited to 256. This is enough for nearly all of the tweets after processing.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 204, |
|
"end": 212, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Implementation details 2.1 Preprocessing", |
|
"sec_num": "2" |
|
}, |
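The cleaning steps described above can be sketched as follows. This is a minimal illustration under our own assumptions (the function name and regular expressions are ours, and the emoji/emoticon-to-text step via the emoji library is only indicated by a comment), not the authors' exact code.

```python
import re

def clean_tweet(tweet):
    """Rough sketch of the tweet preprocessing described above (not the authors' code)."""
    tweet = re.sub(r"@\w+", "", tweet)           # drop mentioned user names
    tweet = re.sub(r"https?://\S+", "", tweet)   # drop URLs
    tweet = re.sub(r"\bRT\b", "", tweet)         # drop the re-tweet marker
    tweet = tweet.replace("#", "")               # keep hashtag text, drop the symbol
    # (emoji/emoticon-to-text conversion with the emoji library would go here)
    return " ".join(tweet.lower().split())       # lowercase, collapse whitespace
```

BPE segmentation with fastBPE and stop-word removal would follow these steps; they are omitted here because they depend on external vocabulary files.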
|
{ |
|
"text": "In order to get more training data and based on the statistics of dataset, we have utilized machine translation (Sennrich et al., 2016) for generating more text to boost up the performance. After the original code-mixed text is translated to the target language Spanish, both source sentences and translated sentences are mixed to train a model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data augmentation", |
|
"sec_num": "2.2" |
|
}, |
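A sketch of this mixing scheme, assuming a translate callable backed by some machine-translation system (the function and its target_lang parameter are our placeholders, not an API from the paper):

```python
def augment_with_translations(samples, translate):
    """Mix original code-mixed samples with their Spanish translations.

    samples: list of (text, label) pairs; translate: callable returning a translation.
    """
    augmented = list(samples)                 # keep every source sentence
    for text, label in samples:
        # add the translated sentence with the same sentiment label
        augmented.append((translate(text, target_lang="es"), label))
    return augmented
```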
|
{ |
|
"text": "To extract valid representation features of tweet, two state-of-the-art pre-trained sentence embedding models are utilized. Details are deliberated in the following section.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-trained Models for Feature Encoding", |
|
"sec_num": "2.3.1" |
|
}, |
|
{ |
|
"text": "\u2022 XLMs: We use pretrained embeddings made available by Facebook research (Lample et al., 2019), which is unsupervised that only relies on monolingual data, and support 100 languages including English and Spanish. After fine-tuning an XLM model on the training corpus, the model is still able to make accurate predictions at test time in code-mixed languages, for which there is not enough training data. This approach is usually referred to as \"zero-shot cross-lingual classification\". Based on the pretrained XLM model, the sentence is indexed by vocabulary and then independently fed into the pretrained transformer model, which is also optimized during training. The single column of last hidden layer of transformer model is used as the representation of sentence, fed into a projection layer using linear transformation. While for CNN model, all columns of last hidden layer are utilized as the sentence embedding.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-trained Models for Feature Encoding", |
|
"sec_num": "2.3.1" |
|
}, |
|
{ |
|
"text": "\u2022 MUSE: MUSE are multilingual embeddings based on fastText (Conneau et al., 2017) , available in different languages, where the words are mapped into the same vector space across languages. We use the average representations of all words in a sentence, which is modified during training as well.", |
|
"cite_spans": [ |
|
{ |
|
"start": 59, |
|
"end": 81, |
|
"text": "(Conneau et al., 2017)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-trained Models for Feature Encoding", |
|
"sec_num": "2.3.1" |
|
}, |
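The sentence representation described here is plain mean pooling over the per-word vectors; a minimal sketch with toy vectors (not real MUSE embeddings, and the function name is ours):

```python
def muse_sentence_embedding(word_vectors):
    """Average per-word vectors (lists of floats) into one sentence vector."""
    dim = len(word_vectors[0])
    n = len(word_vectors)
    # component-wise mean over all word vectors in the sentence
    return [sum(vec[i] for vec in word_vectors) / n for i in range(dim)]
```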
|
{ |
|
"text": "Two models are examined with MUSE and XLM respectively: CNN based and linear layer based. The output is global max pooled and fed into the fully connected layer with dropout rate 0.5 and the final layer is a softmax layer.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Output layer", |
|
"sec_num": "2.3.2" |
|
}, |
|
{ |
|
"text": "\u2022 Linear Classifier: The pretrained embeddings are just directly fed into a linear layer, also referred as fully connected layer and softmax afterwards to get the final predictions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Output layer", |
|
"sec_num": "2.3.2" |
|
}, |
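A minimal sketch of such a linear-plus-softmax head in pure Python (toy dimensions; our illustration, not the authors' implementation):

```python
import math

def linear_softmax(embedding, W, b):
    """Fully connected layer followed by softmax over the sentiment classes.

    embedding: list of floats; W: weight rows (len(embedding) x n_classes); b: biases.
    """
    n_classes = len(b)
    logits = [sum(e * W[i][j] for i, e in enumerate(embedding)) + b[j]
              for j in range(n_classes)]
    m = max(logits)                            # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]               # class probabilities summing to 1
```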
|
{ |
|
"text": "As analysis above, non-native English speaker may misuse English due to the culture differences and lack of vocabulary, and so on. The monolingual corpus in Spanish will be more accurate in the expression of the sentiment than multilingual. On the other hand, the quality of monolingual sample may be decreased due to error from augmentation data from translation. In view of data analysis of training and test corpus, we also found that the percentage of each language e.g., English and Spanish is biased. According to the statistics, the percentage of Spanish words is twice more than English in training dataset, and almost three times in valid and test data. The test data also have 560 monolingual sentences, in which half are in English and the other are in Spanish. In this case, the model is prone to learn the unbalanced semantic information.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Optimized loss", |
|
"sec_num": "2.3.3" |
|
}, |
|
{ |
|
"text": "To benefit the gain from the samples and focus on the majority language model, we weighed the loss L W based on the complexity of code-mixing (Gamb ack et al., 2014) .The formula is listed as followings, where \u03b2 is the percentage of Spanish words in a sentence, CE is the initial cross entropy, \u03b3 > 0 and \u03b1 is a constant positive scaling factor. To better explore the trend, weighted loss with different hyper-parameters is shown in shown in Figure 2 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 142, |
|
"end": 165, |
|
"text": "(Gamb ack et al., 2014)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 442, |
|
"end": 450, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Optimized loss", |
|
"sec_num": "2.3.3" |
|
}, |
|
{ |
|
"text": "L W = \u03b1 * CE * (1 \u2212 \u03b2) \u03b3 + \u03b1 * CE * \u03b2 \u03b3", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Optimized loss", |
|
"sec_num": "2.3.3" |
|
}, |
|
{ |
|
"text": "The \u03b3 is a focusing parameter that control the loss. Larger values of \u03b3 correspond to large losses for low complexity of code-mixing sentences. When \u03b3 < 1, the model is prone to learn the multilingual data and on the contrary, if \u03b3 > 1, it's more likely to learn the monolingual data. When \u03b3 equals 1, the loss is just the cross entropy as default.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Optimized loss", |
|
"sec_num": "2.3.3" |
|
}, |
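Under the definitions above, the weighted loss can be written directly; a sketch (the function name is ours, and the default gamma corresponds to the paper's best-performing setting):

```python
def weighted_loss(ce, beta, alpha=1.0, gamma=0.25):
    """L_W = alpha*CE*(1-beta)^gamma + alpha*CE*beta^gamma.

    ce:    per-sentence cross-entropy
    beta:  fraction of Spanish words in the sentence (0 <= beta <= 1)
    """
    return alpha * ce * ((1.0 - beta) ** gamma + beta ** gamma)
```

With gamma = 1 the weight is (1 - beta) + beta = 1, so the loss reduces to alpha times the plain cross-entropy; gamma < 1 upweights heavily mixed sentences (beta near 0.5), while gamma > 1 relatively favors near-monolingual ones.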
|
{ |
|
"text": "Subtask in Spanish of the SemEval-2020 task 9 is to predict the sentiment of a given code-mixed tweet. The sentiment labels are positive, negative, or neutral, and the code-mixed languages will be English-Spanish. Besides the sentiment labels, also the language labels at the word level are provided. The word-level language tags are en (English), spa (Spanish), hi (Hindi), mixed, and univ (e.g., symbols, @ mentions, hashtags). Hyper-parameter optimization is performed using a simple grid search. All models are trained with epochs with a batch size of 8 and an initial learning rate 0.000005 by Adam optimizer. The linear layers are dropped out with a probability of 0.5. Unless otherwise stated, default settings are used for other parameters. In the process of searching for optimal architecture and parameters, we experimented CNN and fully connected layer (marked as FC) respectively with MUSE and XLM.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and Results evaluation 3.1 Dataset", |
|
"sec_num": "3" |
|
}, |
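Given these word-level language tags, the Spanish ratio beta used by the weighted loss can be estimated as below (the helper name and the choice to ignore univ, mixed, and hi tokens are our assumptions, not specified in the paper):

```python
def spanish_ratio(lang_tags):
    """beta = share of 'spa' tokens among the en/spa content tokens of a tweet."""
    content = [t for t in lang_tags if t in ("en", "spa")]
    if not content:
        return 0.0                      # no content words tagged en/spa
    return sum(t == "spa" for t in content) / len(content)
```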
|
{ |
|
"text": "To explore and compare the optimal parameters \u03b1 and \u03b3, as shown in Figure 3 , there is an obvious increasing tendency of f1 score until \u03b1 >1.5 when \u03b3 <= 1.0, and reaches the highest score as \u03b3 = 0.25 and second highest as \u03b3 = 1.0, which indicates that the model has found optimal parameters prone to high level of code-mixing data. Based on the results of validation set, to select best model, we expect that the best performance is always achieved in optimal parameters as above which are \u03b3= 0.25 or \u03b3= 1.0. The scores are summarized in Table 1 . XLM model with a fully connected layer achieved best when \u03b3= 0.25, and from its class-wise scores, we conclude that the model performs best in classification of positive samples, while worst in neutral samples. The result can be caused by unbalanced distribution of data and complexity of code-mixing, such as the expression of positive sentiment mainly focused in specific language. CNN based model has not shown significant increase in performance compared to linear classifier.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 67, |
|
"end": 75, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 538, |
|
"end": 545, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments and Results evaluation 3.1 Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "ConclusionIn this paper, we have introduced a novel approach with weighted loss of different multilingual models with weighted loss focused on complexity of code-mixing sentences for sentiment analysis task in SemEval-2020. The method is effective in situation where the distribution of different languages is unbalanced, and has a better control of language preference for sentiment by the level of how languages mix. Moreover, we conclude that the quality of word representations used has a significant impact on the performance of a model. Results indicate the potency of XLM on code-mixed lingual classification, leading to 4-5 % increase in f1 score compared to MUSE. In the future, we will continue to do model optimization and also try ensemble models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Overview of Sentiment Analysis of Code-Mixed Tweets", |
|
"authors": [ |
|
{ |
|
"first": "Patwa", |
|
"middle": [], |
|
"last": "Parth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aguilar", |
|
"middle": [], |
|
"last": "Gustavo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kar", |
|
"middle": [], |
|
"last": "Sudipta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pandey", |
|
"middle": [], |
|
"last": "Suraj", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gamb\u00e4ck", |
|
"middle": [], |
|
"last": "Srinivas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chakraborty", |
|
"middle": [], |
|
"last": "Bj\u00f6rn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Solorio", |
|
"middle": [], |
|
"last": "Tanmoy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Das", |
|
"middle": [], |
|
"last": "Thamar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Amitava", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Patwa Parth, Aguilar Gustavo, Kar Sudipta, Pandey Suraj, PYKL Srinivas, Gamb\u00e4ck Bj\u00f6rn ,Chakraborty Tanmoy, Solorio Thamar and Das Amitava. December, 2020. SemEval-2020 Task 9: Overview of Sentiment Analysis of Code-Mixed Tweets. In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval- 2020). Barcelona, Spain. Association for Computational Linguistics,", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "XNLI: Evaluating cross-lingual sentence representations", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Rinott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Williams", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Bowman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of EMNLP 2018", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2475--2485", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Conneau A., Rinott R., Lample G., Williams A., Bowman S., Schwenk H., and Stoyanov V. 2018. XNLI: Evalu- ating cross-lingual sentence representations. In Proceedings of EMNLP 2018, pp. 2475-2485, 2018b.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Cross-lingual Language Model Pretraining", |
|
"authors": [ |
|
{ |
|
"first": "Lample", |
|
"middle": [], |
|
"last": "Guillaume", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Conneau", |
|
"middle": [], |
|
"last": "Alexis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "32", |
|
"issue": "", |
|
"pages": "7059--7069", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lample Guillaume and Conneau Alexis. 2019. Cross-lingual Language Model Pretraining. In Advances in Neural Information Processing Systems 32, pages 7059-7069", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Improving Neural Machine Translation Models with Monolingual Data", |
|
"authors": [ |
|
{ |
|
"first": "Sennrich", |
|
"middle": [], |
|
"last": "Rico", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haddow", |
|
"middle": [], |
|
"last": "Barry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Birch", |
|
"middle": [], |
|
"last": "Alexandra", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "86--96", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sennrich Rico, Haddow Barry, and Birch Alexandra. 2016. Improving Neural Machine Translation Models with Monolingual Data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86-96.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "What is the effect of Importance Weighting in Deep Learning?", |
|
"authors": [ |
|
{ |
|
"first": "Jonathon", |
|
"middle": [], |
|
"last": "Byrd", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zachary", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Lipton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 36 th International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathon Byrd and Zachary C. Lipton. 2019. What is the effect of Importance Weighting in Deep Learning? In Proceedings of the 36 th International Conference on Machine Learning, Long Beach, California, PMLR 97, 2019", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6000--6010", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000-6010.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "List of emoticons -Wikipedia, The Free Encyclopedia", |
|
"authors": [], |
|
"year": 2020, |
|
"venue": "Wikipedia contributors", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wikipedia contributors. List of emoticons -Wikipedia, The Free Encyclopedia. 2020. URL https://en. wikipedia.org/w/index.php?title=List_of_emoticons&oldid=949712309", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Character-level Convolutional Networks for Text Classification", |
|
"authors": [ |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Lecun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 28th International Conference on Neural Information Processing Systems", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "649--657", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "X. Zhang, J. Zhao and Y. LeCun. 2015. Character-level Convolutional Networks for Text Classification. In Proceedings of the 28th International Conference on Neural Information Processing Systems, Volume 1, pages 649-657", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Unsupervised Data Augmentation for Consistency Training", |
|
"authors": [ |
|
{ |
|
"first": "Xie", |
|
"middle": [], |
|
"last": "Qizhe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dai", |
|
"middle": [], |
|
"last": "Zihang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hovy", |
|
"middle": [], |
|
"last": "Eduard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luong", |
|
"middle": [], |
|
"last": "Minh-Thang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Le", |
|
"middle": [], |
|
"last": "Quoc", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1904.12848" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xie Qizhe, Dai Zihang, Hovy Eduard, Luong Minh-Thang and Le Quoc V. 2019 Unsupervised Data Augmenta- tion for Consistency Training. arXiv preprint arXiv:1904.12848", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Benefits of Data Augmentation for NMTbased Text Normalization of User-Generated Content", |
|
"authors": [ |
|
{ |
|
"first": "Claudia", |
|
"middle": [], |
|
"last": "Matos Veliz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hoste", |
|
"middle": [], |
|
"last": "De Clercq Orphee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Veronique", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 5th Workshop on Noisy Usergenerated Text", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matos Veliz Claudia, de clercq Orphee and Hoste Veronique. 2019. Benefits of Data Augmentation for NMT- based Text Normalization of User-Generated Content. In Proceedings of the 5th Workshop on Noisy User- generated Text (W-NUT 2019)", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Noise or music? Investigating the usefulness of normalisation for robust sentiment analysis on social media data", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Van Hee Cynthia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Van De Kauter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Marjan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "De Clercq Orphee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Revue Traitement Automatique des Langues", |
|
"volume": "58", |
|
"issue": "1", |
|
"pages": "63--87", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Van Hee Cynthia, Van de Kauter, Marjan, de clercq Orphee, Lefever Els and Hoste Veronique. 2018. Noise or music? Investigating the usefulness of normalisation for robust sentiment analysis on social media data. Revue Traitement Automatique des Langues , 58(1):63-87.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Qanet: Combining local convolution with global self-attention for reading comprehension", |
|
"authors": [ |
|
{ |
|
"first": "Adams", |
|
"middle": [ |
|
"Wei" |
|
], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Dohan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Minh-Thang", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rui", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Norouzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc V", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1804.09541" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V Le. 2018. Qanet: Combining local convolution with global self-attention for reading comprehension. arXiv preprint arXiv:1804.09541, 2018.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Understanding Language Preference for Expression of Opinion and Sentiment: What do Hindi-English Speakers do on Twitter?", |
|
"authors": [ |
|
{ |
|
"first": "Rudra", |
|
"middle": [], |
|
"last": "Koustav", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rijhwani", |
|
"middle": [], |
|
"last": "Shruti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Begum", |
|
"middle": [], |
|
"last": "Rafiya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Choudhury", |
|
"middle": [], |
|
"last": "Monojit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bali", |
|
"middle": [], |
|
"last": "Kalika", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ganguly", |
|
"middle": [], |
|
"last": "Niloy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of EMNLP 2016", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1131--1141", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rudra Koustav, Rijhwani Shruti, Begum Rafiya, Choudhury Monojit, Bali Kalika and Ganguly Niloy. 2016. Understanding Language Preference for Expression of Opinion and Sentiment: What do Hindi-English Speakers do on Twitter? In Proceedings of EMNLP 2016, pages 1131-1141", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Simple Tools for Exploring Variation in Code-switching for Linguists", |
|
"authors": [ |
|
{ |
|
"first": "Guzman", |
|
"middle": [], |
|
"last": "Gualberto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Serigos", |
|
"middle": [], |
|
"last": "Jacqueline", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barbara", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Bullock", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Almeida", |
|
"middle": [], |
|
"last": "Toribio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Jacqueline", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Second Workshop on Computational Approaches to Code Switching", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "12--20", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guzman Gualberto A. , Serigos Jacqueline, Bullock Barbara E. , and Toribio, Almeida Jacqueline. 2016. Simple Tools for Exploring Variation in Code-switching for Linguists. In Proceedings of the Second Workshop on Computational Approaches to Code Switching, pages 12-20", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "EN-ES-CS: An English-Spanish Code-Switching Twitter Corpus for Multilingual Sentiment Analysis", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Vilares", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miguel", |
|
"middle": [ |
|
"Alonso" |
|
], |
|
"last": "Pardo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carlos", |
|
"middle": [], |
|
"last": "G\u00f3mez-Rodr\u00edguez", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4149--4153", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Vilares, Miguel Alonso Pardo, and Carlos G\u00f3mez-Rodr\u00edguez. 2016. EN-ES-CS: An English-Spanish Code-Switching Twitter Corpus for Multilingual Sentiment Analysis. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), pages 4149-4153.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "On Measuring the Complexity of Code-Mixing", |
|
"authors": [ |
|
{ |
|
"first": "Gamb", |
|
"middle": [], |
|
"last": "Bj Orn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amitava", |
|
"middle": [], |
|
"last": "Ack", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 11th International Conference on Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--7", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bj orn Gamb ack and Amitava Das. 2014. On Measuring the Complexity of Code-Mixing. In Proceedings of the 11th International Conference on Natural Language Processing, Goa, India, pages 1-7.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Weighted loss with different hyper-parameters \u2022 CNN Classifier: The CNN classifier is composed of three 2-D convolution layers with filter widths ranging from three to five. Each convolution layer has 100 filters. The intermediate layers use ReLu activation.", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Mean f1 score for validation set of different hyper-parameters with XLM-FC", |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"content": "<table><tr><td>Models</td><td colspan=\"4\">Parameters Valid f1-score Test f1-score</td></tr><tr><td>organizer baseline</td><td>-</td><td/><td>-</td><td>0.656</td></tr><tr><td>MUSE-FC</td><td colspan=\"2\">(\u03b3 =1.0)</td><td>0.4092</td><td>0.738</td></tr><tr><td>MUSE-FC</td><td colspan=\"2\">(\u03b3 =0.25)</td><td>0.4104</td><td>0.742</td></tr><tr><td>MUSE-CNN</td><td colspan=\"2\">(\u03b3 =1.0)</td><td>0.4761</td><td>0.739</td></tr><tr><td>MUSE-CNN</td><td colspan=\"2\">(\u03b3 =0.25)</td><td>0.4878</td><td>0.755</td></tr><tr><td>XLM-FC</td><td colspan=\"2\">(\u03b3 =1.0)</td><td>0.5211</td><td>0.776</td></tr><tr><td>XLM-FC</td><td colspan=\"2\">(\u03b3 =0.25)</td><td>0.5214</td><td>0.806</td></tr><tr><td>XLM-CNN</td><td colspan=\"2\">(\u03b3 =1.0)</td><td>0.5157</td><td>0.794</td></tr><tr><td>XLM-CNN</td><td colspan=\"2\">(\u03b3 =0.25)</td><td>0.5294</td><td>0.805</td></tr><tr><td>Table 1: 1. The trial data have 2000 tweets.</td><td/><td/><td/></tr><tr><td colspan=\"5\">2. The train data have 15004 tweets. (They include trial data as well).</td></tr><tr><td colspan=\"5\">3. The train data after split have 12002 tweets.(used for training).</td></tr><tr><td colspan=\"2\">4. The validation data have 2998 tweets.</td><td/><td/></tr><tr><td>5. The test data have 3789 tweets.</td><td/><td/><td/></tr><tr><td colspan=\"5\">Moreover, 5000 backPrecision Recall F1-score</td></tr><tr><td>Positive</td><td colspan=\"2\">0.883</td><td>0.926</td><td>0.904</td></tr><tr><td>Negative</td><td colspan=\"2\">0.599</td><td>0.395</td><td>0.476</td></tr><tr><td>Neutral</td><td colspan=\"2\">0.181</td><td>0.209</td><td>0.194</td></tr><tr><td colspan=\"2\">Weighted</td><td>-</td><td>-</td><td>0.806</td></tr></table>", |
|
"text": "Performance metrics of different models on validation and test sets. The average f1 scores of validation set are reported for ten runs using different random seeds to choose hyper-parameters, and the test scores are generated by using the trained model to predict on released labeled test data. -translated data are added into training. Validation subset is used as an unbiased accuracy evaluation in order to fine-tune hyper parameters during training. To evaluate the performance of the system, Precision, Recall, and F-measure are measured.", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF1": { |
|
"content": "<table/>", |
|
"text": "Performance metrics of Class-wise Classification 3.2 Experiment Setup and Results", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null |
|
} |
|
} |
|
} |
|
} |