{
"paper_id": "S18-1016",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:43:45.781270Z"
},
"title": "LT3 at SemEval-2018 Task 1: A classifier chain to detect emotions in tweets",
"authors": [
{
"first": "Luna",
"middle": [],
"last": "De Bruyne",
"suffix": "",
"affiliation": {
"laboratory": "Language and Translation Technology Team",
"institution": "Ghent University Groot",
"location": {
"addrLine": "Brittanni\u00eblaan 45",
"postCode": "9000",
"settlement": "Ghent",
"country": "Belgium"
}
},
"email": "[email protected]"
},
{
"first": "Orph\u00e9e",
"middle": [],
"last": "De Clercq",
"suffix": "",
"affiliation": {
"laboratory": "Language and Translation Technology Team",
"institution": "Ghent University Groot",
"location": {
"addrLine": "Brittanni\u00eblaan 45",
"postCode": "9000",
"settlement": "Ghent",
"country": "Belgium"
}
},
"email": "[email protected]"
},
{
"first": "V\u00e9ronique",
"middle": [
"Hoste"
],
"last": "Lt",
"suffix": "",
"affiliation": {
"laboratory": "Language and Translation Technology Team",
"institution": "Ghent University Groot",
"location": {
"addrLine": "Brittanni\u00eblaan 45",
"postCode": "9000",
"settlement": "Ghent",
"country": "Belgium"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents an emotion classification system for English tweets, submitted for the SemEval shared task on Affect in Tweets, subtask 5: Detecting Emotions. The system combines lexicon, n-gram, style, syntactic and semantic features. For this multi-class multilabel problem, we created a classifier chain. This is an ensemble of eleven binary classifiers, one for each possible emotion category, where each model gets the predictions of the preceding models as additional features. The predicted labels are combined to get a multilabel representation of the predictions. Our system was ranked eleventh among thirty five participating teams, with a Jaccard accuracy of 52.0% and macro-and micro-average F1scores of 49.3% and 64.0%, respectively.",
"pdf_parse": {
"paper_id": "S18-1016",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents an emotion classification system for English tweets, submitted for the SemEval shared task on Affect in Tweets, subtask 5: Detecting Emotions. The system combines lexicon, n-gram, style, syntactic and semantic features. For this multi-class multilabel problem, we created a classifier chain. This is an ensemble of eleven binary classifiers, one for each possible emotion category, where each model gets the predictions of the preceding models as additional features. The predicted labels are combined to get a multilabel representation of the predictions. Our system was ranked eleventh among thirty five participating teams, with a Jaccard accuracy of 52.0% and macro-and micro-average F1scores of 49.3% and 64.0%, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Most research in the domain of sentiment analysis focuses on the automatic prediction of polarity or valence in text, but also the detection of emotions has attracted growing interest in the last couple of years . Although emotion detection is a rather new research focus in NLP, the study of emotions has a long history in fields like psychology and neuroimaging. Many different frameworks exist, but the specific emotion approach, in which emotions are classified as specific discrete categories, predominates. In a lot of those approaches, some emotions are considered more basic than others, with Ekman's theory of six basic emotions (joy, sadness, anger, fear, disgust, and surprise) (Ekman, 1992) as the most well-known. Another popular theory is Plutchik's wheel of emotions (Plutchik, 1980) , in which joy, sadness, anger, fear, disgust, surprise, trust, and anticipation are considered most basic.",
"cite_spans": [
{
"start": 689,
"end": 702,
"text": "(Ekman, 1992)",
"ref_id": "BIBREF3"
},
{
"start": 782,
"end": 798,
"text": "(Plutchik, 1980)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Emotion analysis in NLP makes use of the frameworks developed by psychologists, mostly by employing categorical models of (basic) emotions. In traditional emotion classification tasks, a 'document' or sentence is classified under one or more emotion classes (or classified as neutral/no class when no emotions are present). Such emotion classification systems have been developed and tested on different kinds of data, including fairy tales (Alm et al., 2005) , newspaper headlines (Strapparava and Mihalcea, 2007) , chat messages (e.g. Holzman and Pottenger, 2003; Brooks et al., 2013) , and tweets (e.g. Mohammad, 2012; Wang et al., 2012) . The big advantage of using tweet datasets is the relative ease with which twitter data can be collected and the possibility of using hashtags as emotion labels (distant supervision approach).",
"cite_spans": [
{
"start": 441,
"end": 459,
"text": "(Alm et al., 2005)",
"ref_id": "BIBREF0"
},
{
"start": 482,
"end": 514,
"text": "(Strapparava and Mihalcea, 2007)",
"ref_id": "BIBREF14"
},
{
"start": 537,
"end": 565,
"text": "Holzman and Pottenger, 2003;",
"ref_id": "BIBREF4"
},
{
"start": 566,
"end": 586,
"text": "Brooks et al., 2013)",
"ref_id": "BIBREF1"
},
{
"start": 606,
"end": 621,
"text": "Mohammad, 2012;",
"ref_id": "BIBREF7"
},
{
"start": 622,
"end": 640,
"text": "Wang et al., 2012)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For this paper, we used the data that was collected for the SemEval shared task on Affect in Tweets , a collection of tweets annotated for eleven emotions: anger, anticipation, disgust, fear, joy, love, optimism, pessimism, sadness, surprise, and trust . We participated in Subtask 5: Detecting Emotions (English emotion classification).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of this paper is structured as follows: in Section 2 we describe how we first analyzed the data in order to get more insight in the task. Section 3 discusses how the data was preprocessed and which information sources were extracted. Next, in Section 4 the actual experimental setup and results are discussed and we end this paper with a conclusion in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We first analyzed the training data provided by the task organizers, which consisted of 6838 tweets. We found that disgust, anger and joy were present in the largest numbers (present in about 35 to 40% of the tweets), while surprise and trust only occur in around 5% of the tweets (Figure 1 ). Only three percent of the tweets was annotated as neutral.",
"cite_spans": [],
"ref_spans": [
{
"start": 281,
"end": 290,
"text": "(Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Data analysis",
"sec_num": "2"
},
{
"text": "As can be derived from Figure 2 , most tweets contained two or three emotions (together 70%), and in only about 1% of the tweets five or more (max six) emotions were present. We also calculated the correlations and found ten emotion pairs that were moderately or highly correlated ( phi \u2265 0.30 for moderate correlation, phi \u2265 0.50 for high correlation, according to Cohen's conventions on effect size (Cohen, 1988) ). The correlated pairs are shown in Table 1 and suggest that the classification performance can be boosted when correlations between emotion categories are implemented in the model.",
"cite_spans": [
{
"start": 401,
"end": 414,
"text": "(Cohen, 1988)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 23,
"end": 31,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 452,
"end": 459,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Data analysis",
"sec_num": "2"
},
{
"text": "In order to get more insight into the data, we reannotated a subset of 500 tweets from the training set. In Table 2 , inter-annotator agreement (IAA) scores per emotion class between the gold labels and our annotations are presented. Except for anger and joy these scores are rather low. Overall, we assigned less emotion classes to a tweet than the official annotators. We often disagreed with the gold labels and had the feeling that the anno-Pair phi anger -joy -0.44 anger -optim.",
"cite_spans": [],
"ref_spans": [
{
"start": 108,
"end": 115,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data analysis",
"sec_num": "2"
},
{
"text": "-0.37 disg. -optim.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data analysis",
"sec_num": "2"
},
{
"text": "-0.41 joy -disg.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data analysis",
"sec_num": "2"
},
{
"text": "-0.46 joy -sadn.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data analysis",
"sec_num": "2"
},
{
"text": "- tators of the official labels focused too much on lexical clues instead of keeping the context and the perspective of the writer of the tweet in mind. This leads us to presume that the threshold to assign an emotion label to a tweet when two out of seven annotators agreed might have been a bit too generous. We further noticed that some tweets appeared twice in the data set, but not completely identically: we suspect that one of them was the original tweet with emotion hashtag and the other one with the hashtag removed. An example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data analysis",
"sec_num": "2"
},
{
"text": "(1) a. Whatever you decide to do make sure it makes you #happy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data analysis",
"sec_num": "2"
},
{
"text": "b. Whatever you decide to do make sure it makes you .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data analysis",
"sec_num": "2"
},
{
"text": "Since labels differed depending on the presence or absence of the emotion hashtag, we decided to keep both variants in our training set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data analysis",
"sec_num": "2"
},
{
"text": "3 Preprocessing & Feature Extraction",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data analysis",
"sec_num": "2"
},
{
"text": "While we did not remove the 'almost identical' tweets from the data set, there were also some tweets in the training set that were completely identical but had been assigned other emotion labels. For those tweets, we took the majority class for each binary emotion category, and removed all other instances. This reduced our training set from 6838 to 6782 tweets. No duplicates were present in the development set, so the amount of 886 tweets was preserved. In the updated training set, as well as in the development and test set, all user names were replaced with the generic @ID. All tweets were processed with Weka (Witten et al., 2016) using the Affective Tweets package , in order to extract lexicon and word embedding features. We used the default preprocessing settings for each filter. For the other features, we performed word and sentence tokenization (using NLTK), stemming (using spaCy), lowercasing, and POS-tagging (simple and detailed, corresponding to spaCy's POS and Tag function).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3.1"
},
{
"text": "For our supervised classification system, we employed features that measure different aspects of the tweet. These can be subsumed under five different categories: lexicon features (see Table 3 for an overview), n-gram features (binary, n equal to 3, 4 and 5 for characters and n equal to 1 or 2 for tokens), and various style, syntactic and semantic features (see Table 4 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 185,
"end": 192,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 364,
"end": 371,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Feature extraction",
"sec_num": "3.2"
},
{
"text": "Regarding the latter category, both features from traditional and distributional semantics were integrated. We first took the synset depth (distance to root) of all content words (calculated with WordNet (Miller, 1995) ) and averaged the scores to get a mean synset depth for the tweet. Furthermore, we included two types of features from distributional semantics, namely word embeddings and word clusters. The word embeddings were extracted with Weka Affective Tweets, using pre-trained embeddings from 10 million tweets taken from the Edinburgh Twitter Corpus (Petrovic et al., 2010) . For the word clusters, we downloaded a subset of around 1.5M tweets from the SemEval 2018 AIT DISC corpus . We first created word embeddings with word2vec using both skipgram and continuous bow and afterwards applied k-means clustering on the resulting word vectors. We experimented with various cluster sizes (800 of size 100, 1000 of size 100 and 800 of size 300). These clusters were implemented as binary features.",
"cite_spans": [
{
"start": 204,
"end": 218,
"text": "(Miller, 1995)",
"ref_id": "BIBREF5"
},
{
"start": 562,
"end": 585,
"text": "(Petrovic et al., 2010)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature extraction",
"sec_num": "3.2"
},
{
"text": "We trained different models on the training set and tested them on the development set, using scikitlearn (Pedregosa et al., 2011) . For the baseline ex- periments, we used an SVM classifier with linear kernel (LinearSVC) and used the lexicon features from the Weka Affective Tweets package. The results for each binary classifier are shown in Table 5 (second column). Combining the predictions of these eleven binary classifiers resulted in a jaccard accuracy of 42.7%.",
"cite_spans": [
{
"start": 106,
"end": 130,
"text": "(Pedregosa et al., 2011)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 344,
"end": 352,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Baseline & Binary Experiments",
"sec_num": "4.1"
},
{
"text": "Before optimizing the separate classifiers, we took a more detailed look at the lexicon features and the clusters to assess whether it is beneficial to use only a part of the lexicons (e.g. only the emotion lexicons) or whether it is better to use all lexicons (even polarity lexicons). We found that the combination of all lexicons (including the valence-arousal-dominance lexicon of Warriner et al. (2013)) gave the highest performance. As regards the clusters, we tried all cluster types on each emotion category and picked the cluster that gave the highest performance on that particular category.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline & Binary Experiments",
"sec_num": "4.1"
},
{
"text": "For every emotion category, we tested different classifiers on different combinations of features. The classifiers we used, were SVM, SGD (linear SVM with stochastic gradient descent learning), Logistic Regression, and Random Forest. Table 5 shows the F1-scores (in bold) on the positive class for the best performing classifiers and feature combinations, which are significantly higher than the baseline results. We joined the predictions of these optimized binary classifiers, and achieved a jaccard accuracy of 47.7%. ",
"cite_spans": [],
"ref_spans": [
{
"start": 234,
"end": 242,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Baseline & Binary Experiments",
"sec_num": "4.1"
},
{
"text": "Because the emotion categories are highly correlated (see Section 2), we envisaged to implement these relations in the model by using a classifier chain. We combined the best performing classifier per emotion category in a chain that passes predicted labels on to the next classifiers. We ordered the classifiers by performance on the positive class F1-score on the baseline (the emotion that is easiest to predict first, the emotion that is the most difficult to predict last). On the development set, this classifier chain approach led to a jaccard accuracy of 52.37%, which is significantly higher than the score without classifier chain (47.7%, see Section 4.1). In our final model, the training and development data were joined, resulting in a combined training set of 7668 tweets. During the evaluation period, we achieved 52.0% jaccard accuracy, 64.0% micro-avg F1-score and 49.3% macro-avg F1-score on the held-out test set (see Table 6 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 937,
"end": 944,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Classifier Chain",
"sec_num": "4.2"
},
{
"text": "As can be derived from low 20% for most emotions). The model had most trouble with recognizing positive instances of surprise, pessimism, and trust, but also love and anticipation were more challenging. For these categories, the false negative rate was thus very high. We assume that these bad results are mostly due to a lack of sufficient training data for these categories. We evaluated all features by computing the ANOVA F-values, and extracted the hundred most predictive features for each emotion category. For all emotions, the top 100 features consisted exclusively of lexical information. In none of the emotion categories, style or syntactic features occurred in this top 100. However, features regarding labels of preceding classifiers belonged to the most predictive features for all emotions except for optimism and surprise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.3"
},
{
"text": "Our emotion classification system for English tweets achieved 52.0% jaccard accuracy on the held-out test set. We started from binary classifiers which we optimized for each emotion category separately, and combined them in a classifier chain. We proved that passing on labels from previously predicted emotions categories improves the performance significantly. For future work, it would be interesting to investigate the model's performance on other datasets than twitter data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Emotions from text: machine learning for text-based emotion prediction",
"authors": [
{
"first": "Cecilia",
"middle": [],
"last": "Ovesdotter Alm",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Sproat",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the conference on human language technology and empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "579--586",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cecilia Ovesdotter Alm, Dan Roth, and Richard Sproat. 2005. Emotions from text: machine learning for text-based emotion prediction. In Proceedings of the conference on human language technology and empirical methods in natural language processing, pages 579-586. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Statistical affect detection in collaborative chat",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Brooks",
"suffix": ""
},
{
"first": "Katie",
"middle": [],
"last": "Kuksenok",
"suffix": ""
},
{
"first": "Megan",
"middle": [
"K"
],
"last": "Torkildson",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Perry",
"suffix": ""
},
{
"first": "John",
"middle": [
"J"
],
"last": "Robinson",
"suffix": ""
},
{
"first": "Taylor",
"middle": [
"J"
],
"last": "Scott",
"suffix": ""
},
{
"first": "Ona",
"middle": [],
"last": "Anicello",
"suffix": ""
},
{
"first": "Ariana",
"middle": [],
"last": "Zukowski",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Harris",
"suffix": ""
},
{
"first": "Cecilia",
"middle": [
"R"
],
"last": "Aragon",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 conference on Computer supported cooperative work",
"volume": "",
"issue": "",
"pages": "317--328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Brooks, Katie Kuksenok, Megan K Torkild- son, Daniel Perry, John J Robinson, Taylor J Scott, Ona Anicello, Ariana Zukowski, Paul Harris, and Cecilia R Aragon. 2013. Statistical affect detec- tion in collaborative chat. In Proceedings of the 2013 conference on Computer supported coopera- tive work, pages 317-328. ACM.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Statistical power analysis for the behavioral sciences",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 1988,
"venue": "hilsdale. NJ: Lawrence Earlbaum Associates",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Cohen. 1988. Statistical power analysis for the behavioral sciences . hilsdale. NJ: Lawrence Earl- baum Associates, 2.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "An argument for basic emotions",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Ekman",
"suffix": ""
}
],
"year": 1992,
"venue": "Cognition & emotion",
"volume": "6",
"issue": "3-4",
"pages": "169--200",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Ekman. 1992. An argument for basic emotions. Cognition & emotion, 6(3-4):169-200.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Classification of emotions in internet chat: An application of machine learning using speech phonemes",
"authors": [
{
"first": "E",
"middle": [],
"last": "Lars",
"suffix": ""
},
{
"first": "William",
"middle": [
"M"
],
"last": "Holzman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pottenger",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lars E Holzman and William M Pottenger. 2003. Clas- sification of emotions in internet chat: An applica- tion of machine learning using speech phonemes. Technical report, Leigh University.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Wordnet: a lexical database for english",
"authors": [
{
"first": "George",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39- 41.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Emotion intensities in tweets",
"authors": [
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Felipe",
"middle": [],
"last": "Bravo-Marquez",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 6th Joint Conference on Lexical and Computational Semantics, *SEM @ACM 2017",
"volume": "",
"issue": "",
"pages": "65--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif Mohammad and Felipe Bravo-Marquez. 2017. Emotion intensities in tweets. In Proceedings of the 6th Joint Conference on Lexical and Computational Semantics, *SEM @ACM 2017, Vancouver, Canada, August 3-4, 2017, pages 65-77.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "# emotional tweets",
"authors": [
{
"first": "Saif",
"middle": [
"M"
],
"last": "Mohammad",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics",
"volume": "1",
"issue": "",
"pages": "246--255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif M Mohammad. 2012. # emotional tweets. In Pro- ceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceed- ings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, pages 246-255. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Wassa-2017 Shared Task on Emotion Intensity",
"authors": [
{
"first": "Saif",
"middle": [
"M"
],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Felipe",
"middle": [],
"last": "Bravo-Marquez",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif M. Mohammad and Felipe Bravo-Marquez. 2017. Wassa-2017 Shared Task on Emotion Intensity. In Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis (WASSA-2017), Copenhagen, Den- mark.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Semeval-2018 Task 1: Affect in tweets",
"authors": [
{
"first": "Saif",
"middle": [
"M"
],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Felipe",
"middle": [],
"last": "Bravo-Marquez",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Salameh",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of International Workshop on Semantic Evaluation (SemEval-2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif M. Mohammad, Felipe Bravo-Marquez, Mo- hammad Salameh, and Svetlana Kiritchenko. 2018. Semeval-2018 Task 1: Affect in tweets. In Proceed- ings of International Workshop on Semantic Evalu- ation (SemEval-2018), New Orleans, LA, USA.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Understanding emotions: A dataset of tweets to study interactions between affect categories",
"authors": [
{
"first": "Saif",
"middle": [
"M"
],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 11th Edition of the Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif M. Mohammad and Svetlana Kiritchenko. 2018. Understanding emotions: A dataset of tweets to study interactions between affect categories. In Proceedings of the 11th Edition of the Language Resources and Evaluation Conference, Miyazaki, Japan.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Scikit-learn: Machine learning in Python",
"authors": [
{
"first": "F",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Dubourg",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Vanderplas",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Passos",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Cournapeau",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Brucher",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Perrot",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Duchesnay",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Pretten- hofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Pas- sos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The edinburgh twitter corpus",
"authors": [
{
"first": "Sasa",
"middle": [],
"last": "Petrovic",
"suffix": ""
},
{
"first": "Miles",
"middle": [],
"last": "Osborne",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Lavrenko",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the NAACL HLT 2010 Workshop on Computational Linguistics in a World of Social Media",
"volume": "",
"issue": "",
"pages": "25--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sasa Petrovic, Miles Osborne, and Victor Lavrenko. 2010. The edinburgh twitter corpus. In Proceedings of the NAACL HLT 2010 Workshop on Computa- tional Linguistics in a World of Social Media, pages 25-26. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A general psychoevolutionary theory of emotion",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Plutchik",
"suffix": ""
}
],
"year": 1980,
"venue": "Emotion: Theory, research, and experience",
"volume": "1",
"issue": "",
"pages": "3--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Plutchik. 1980. A general psychoevolutionary theory of emotion. In Robert Plutchik and Henry Kellerman, editors, Emotion: Theory, research, and experience: Vol. 1. Theories of emotion, pages 3-33. Academic Press, New York.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Semeval-2007 task 14: Affective text",
"authors": [
{
"first": "Carlo",
"middle": [],
"last": "Strapparava",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 4th International Workshop on Semantic Evaluations",
"volume": "",
"issue": "",
"pages": "70--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carlo Strapparava and Rada Mihalcea. 2007. Semeval- 2007 task 14: Affective text. In Proceedings of the 4th International Workshop on Semantic Evalu- ations, pages 70-74. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Harnessing twitter \"big data\" for automatic emotion identification",
"authors": [
{
"first": "Wenbo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Krishnaprasad",
"middle": [],
"last": "Thirunarayan",
"suffix": ""
},
{
"first": "Amit P",
"middle": [],
"last": "Sheth",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 ASE/IEEE international conference on social computing and 2012 ASE/IEEE international conference on privacy, security, risk and trust, SOCIALCOM-PASSAT'12",
"volume": "",
"issue": "",
"pages": "587--592",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenbo Wang, Lu Chen, Krishnaprasad Thirunarayan, and Amit P Sheth. 2012. Harnessing twitter \"big data\" for automatic emotion identification. In Pro- ceedings of the 2012 ASE/IEEE international con- ference on social computing and 2012 ASE/IEEE international conference on privacy, security, risk and trust, SOCIALCOM-PASSAT'12, pages 587- 592, Washington, DC. IEEE Computer Society.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Norms of valence, arousal, and dominance for 13,915 english lemmas. Behavior Research",
"authors": [
{
"first": "Amy",
"middle": [
"Beth"
],
"last": "Warriner",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Kuperman",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Brysbaert",
"suffix": ""
}
],
"year": 2013,
"venue": "Methods",
"volume": "45",
"issue": "4",
"pages": "1191--1207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amy Beth Warriner, Victor Kuperman, and Marc Brys- baert. 2013. Norms of valence, arousal, and dom- inance for 13,915 english lemmas. Behavior Re- search Methods, 45(4):1191-1207.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Data Mining: Practical machine learning tools and techniques",
"authors": [
{
"first": "Ian",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
},
{
"first": "Eibe",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"A"
],
"last": "Hall",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"J"
],
"last": "Pal",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian H Witten, Eibe Frank, Mark A Hall, and Christo- pher J Pal. 2016. Data Mining: Practical machine learning tools and techniques. Morgan Kaufmann.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "Proportion of training tweets in which the specified emotion is present (%).",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "Proportion of training tweets in which a specific amount of emotion classes is present (%).",
"uris": null
},
"TABREF0": {
"type_str": "table",
"content": "<table><tr><td>Pair anger -disg. joy -love joy -optim. sadn. -pessim. 0.30 phi 0.68 0.40 0.52</td></tr><tr><td>Emotion Anger Anticipation 0.259 Kappa Emotion 0.678 Optimism Pessimism 0.124 Kappa 0.436 Disgust 0.132 Sadness 0.537 Fear 0.399 Surprise 0.276 Joy 0.717 Trust 0.367 Love 0.470</td></tr></table>",
"text": "0.33 surpr. -pessim. -0.40 Phi coefficients for moderate or high negative (left) and positive (right) correlations between emotion pairs.",
"num": null,
"html": null
},
"TABREF1": {
"type_str": "table",
"content": "<table/>",
"text": "",
"num": null,
"html": null
},
"TABREF3": {
"type_str": "table",
"content": "<table><tr><td>Style avg word/sent. length POS n-grams Syntax # words and sents POS freq. # capitals POS 1 st token clusters Semantics synset depth embeddings # punct. marks presence imp. # non-standard words presence fut. # connectives</td></tr></table>",
"text": "Lexicons used for feature extraction.",
"num": null,
"html": null
},
"TABREF4": {
"type_str": "table",
"content": "<table/>",
"text": "Style, syntactic and semantic features.",
"num": null,
"html": null
},
"TABREF6": {
"type_str": "table",
"content": "<table/>",
"text": "F1-scores on the positive class for the binary classifiers in the baseline (BL) setup (italics) and with the optimal classifier and feature sets (in bold).",
"num": null,
"html": null
},
"TABREF7": {
"type_str": "table",
"content": "<table><tr><td>Evaluation dev set held-out test set</td><td>jaccard micro F1 macro F1 0.524 0.644 0.478 0.520 0.640 0.493</td></tr></table>",
"text": "the number of false positives is rather low for all emotion classes (be-",
"num": null,
"html": null
},
"TABREF8": {
"type_str": "table",
"content": "<table><tr><td>P anger 0 0.72 0.28 optim. 0 0.65 G 0 1 G 0 1 0.17 0.83 1 0.16 antic. 0 0.89 0.11 pess. 0 0.98 1 0.68 0.32 1 0.86 disg. 0 0.75 0.25 sadn. 0 0.91 1 0.21 0.79 1 0.46 fear 0 0.97 0.03 surpr. 0 &gt;0.99 &lt;0.01 P 1 0.35 0.84 0.02 0.14 0.09 0.54 1 0.42 0.58 1 0.98 0.02 joy 0 0.89 0.11 trust 0 0.94 0.06 1 0.20 0.80 1 0.82 0.18 love 0 0.95 0.05 1 0.52 0.48</td></tr></table>",
"text": "Jaccard accuracy, micro averaged F1-score and macro averaged F1-score of the optimized model on the development and held-out test set.",
"num": null,
"html": null
},
"TABREF9": {
"type_str": "table",
"content": "<table/>",
"text": "Confusion matrices for the results on the held out test set. P = predicted labels; G = gold labels.",
"num": null,
"html": null
}
}
}
}