|
{ |
|
"paper_id": "S14-1002", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:33:07.916363Z" |
|
}, |
|
"title": "Generating a Word-Emotion Lexicon from #Emotional Tweets", |
|
"authors": [ |
|
{ |
|
"first": "Anil", |
|
"middle": [], |
|
"last": "Bandhakavi", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Robert Gordon University", |
|
"location": { |
|
"country": "Scotland, UK" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Nirmalie", |
|
"middle": [], |
|
"last": "Wiratunga", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Robert Gordon University", |
|
"location": { |
|
"country": "Scotland, UK" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Stewart", |
|
"middle": [], |
|
"last": "Massie", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Robert Gordon University", |
|
"location": { |
|
"country": "Scotland, UK" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Research in emotion analysis of text suggest that emotion lexicon based features are superior to corpus based n-gram features. However the static nature of the general purpose emotion lexicons make them less suited to social media analysis, where the need to adopt to changes in vocabulary usage and context is crucial. In this paper we propose a set of methods to extract a word-emotion lexicon automatically from an emotion labelled corpus of tweets. Our results confirm that the features derived from these lexicons outperform the standard Bag-of-words features when applied to an emotion classification task. Furthermore, a comparative analysis with both manually crafted lexicons and a state-of-the-art lexicon generated using Point-Wise Mutual Information, show that the lexicons generated from the proposed methods lead to significantly better classification performance.", |
|
"pdf_parse": { |
|
"paper_id": "S14-1002", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Research in emotion analysis of text suggest that emotion lexicon based features are superior to corpus based n-gram features. However the static nature of the general purpose emotion lexicons make them less suited to social media analysis, where the need to adopt to changes in vocabulary usage and context is crucial. In this paper we propose a set of methods to extract a word-emotion lexicon automatically from an emotion labelled corpus of tweets. Our results confirm that the features derived from these lexicons outperform the standard Bag-of-words features when applied to an emotion classification task. Furthermore, a comparative analysis with both manually crafted lexicons and a state-of-the-art lexicon generated using Point-Wise Mutual Information, show that the lexicons generated from the proposed methods lead to significantly better classification performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Emotion mining or affect sensing is the computational study of natural language expressions in order to quantify their associations with different emotions (e.g. anger, fear, joy, sadness and surprise). It has a number of applications for the industry, commerce and government organisations, but uptake has arguably been slow. This in part is due to the challenges involved with modelling subjectivity and complexity of the emotive content. However, use of qualitative metrics to capture emotive strength and extraction of features from these metrics has in recent years shown promise (Shaikh, 2009) . A general-purpose emotion lexicon (GPEL) is a commonly used resource that allows qualitative assessment of a piece of emotive text. Given a word and an emotion, the lexicon provides a score to quantify the strength of emotion expressed by that word. Such lexicons are carefully crafted and are utilised by both supervised and unsupervised algorithms to directly aggregate an overall emotion score or indirectly derive features for emotion classification tasks (Mohammad, 2012a) , (Mohammad, 2012b) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 585, |
|
"end": 599, |
|
"text": "(Shaikh, 2009)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 1062, |
|
"end": 1079, |
|
"text": "(Mohammad, 2012a)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1082, |
|
"end": 1099, |
|
"text": "(Mohammad, 2012b)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Socio-linguistics suggest that social media is a popular means for people to converse with individuals, groups and the world in general (Boyd et al., 2010) . These conversations often involve usage of non-standard natural language expressions which consistently evolve. Twitter and Facebook were credited for providing momentum for the 2011 Arab Spring and Occupy Wall street movements (Ray, 2011) , (Skinner, 2011) . Therefore efforts to model social conversations would provide valuable insights into how people influence each other through emotional expressions. Emotion analysis in such domains calls for automated discovery of lexicons. This is so since learnt lexicons can intuitively capture the evolving nature of vocabulary in such domains better than GPELs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 136, |
|
"end": 155, |
|
"text": "(Boyd et al., 2010)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 386, |
|
"end": 397, |
|
"text": "(Ray, 2011)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 400, |
|
"end": 415, |
|
"text": "(Skinner, 2011)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work we show how an emotion labelled corpus can be leveraged to generate a wordemotion lexicon automatically. Key to this is the availability of a labelled corpus which may be obtained using a distance-supervised approach to labelling (Wang et al., 2012) . In this paper we propose three lexicon generation methods and evaluate the quality of these by deploying them in an emotion classification task. We show through our experiments that the word-emotion lexicon generated using the proposed methods in this paper significantly outperforms GPELs such as WordnetAffect, NRC word-emotion association lexicon and a leaxicon learnt using Point-wise Mutual Information (PMI). Additionally, our lexicons also outperform the traditional Bag-of-Words representation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 243, |
|
"end": 262, |
|
"text": "(Wang et al., 2012)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The rest of the paper is organised as follows: In Section 2 we present the related work. In Section 3 we outline the problem. In Section 4 we formulate the different methods proposed to generate the word-emotion lexicons. In Section 5 we discuss experimental results followed by conclusions and future work in Section 6.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Computational emotion analysis, draws from cognitive and physiology studies to establish the key emotion categories; and NLP and text mining research to establish features designed to represent emotive content. Emotion analysis has been applied in a variety of domains: fairy tales (Francisco and Gervas, 2006; Alm et al., 2005) ; blogs (Mihalcea and Liu, 2006; Neviarouskaya et al., 2010) , novels (John et al., 2006) , chat messages (E. Holzman and William M, 2003; Ma et al., 2005; Mohammad and Yang, 2011) and emotional events on social media content (Kim et al., 2009) . Comparative studies on emotive word distributions on micro-blogs and personal content (e.g. love letters, suicide notes) have shown that emotions such as disgust are expressed well in tweets. Further, expression of emotion in tweets and love letters have been shown to have similarities(K. Roberts and Harabagiu, 2012) . Emotion classification frameworks provide insights into human emotion expressions (Ekman, 1992; Plutchik, 1980; Parrott, 2001 ). The emotions proposed by (Ekman, 1992) are popular in emotion classification tasks (Mohammad, 2012b; Aman and Szpakowicz, 2008) . Recently there has also been interest in extending this basic emotion framework to model more complex emotions (such as politeness, rudeness, deception, depression, vigour and confusion) (Pearl and Steyvers, 2010; Bollen et al., 2009) . A common theme across these approaches involves the selection of emotion-rich features and learning of relevant weights to capture emotion strength (Mohammad, 2012a; Qadir and Riloff, 2013) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 282, |
|
"end": 310, |
|
"text": "(Francisco and Gervas, 2006;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 311, |
|
"end": 328, |
|
"text": "Alm et al., 2005)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 337, |
|
"end": 361, |
|
"text": "(Mihalcea and Liu, 2006;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 362, |
|
"end": 389, |
|
"text": "Neviarouskaya et al., 2010)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 399, |
|
"end": 418, |
|
"text": "(John et al., 2006)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 439, |
|
"end": 467, |
|
"text": "Holzman and William M, 2003;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 468, |
|
"end": 484, |
|
"text": "Ma et al., 2005;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 485, |
|
"end": 509, |
|
"text": "Mohammad and Yang, 2011)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 555, |
|
"end": 573, |
|
"text": "(Kim et al., 2009)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 878, |
|
"end": 894, |
|
"text": "Harabagiu, 2012)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 979, |
|
"end": 992, |
|
"text": "(Ekman, 1992;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 993, |
|
"end": 1008, |
|
"text": "Plutchik, 1980;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 1009, |
|
"end": 1022, |
|
"text": "Parrott, 2001", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 1051, |
|
"end": 1064, |
|
"text": "(Ekman, 1992)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 1109, |
|
"end": 1126, |
|
"text": "(Mohammad, 2012b;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 1127, |
|
"end": 1153, |
|
"text": "Aman and Szpakowicz, 2008)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1343, |
|
"end": 1369, |
|
"text": "(Pearl and Steyvers, 2010;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 1370, |
|
"end": 1390, |
|
"text": "Bollen et al., 2009)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1541, |
|
"end": 1558, |
|
"text": "(Mohammad, 2012a;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1559, |
|
"end": 1582, |
|
"text": "Qadir and Riloff, 2013)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Usefulness of a lexicon: Lexicons such as Wordnet Affect (Strapparava and Valitutti, 2004) and NRC (Saif M. Mohammad, 2013) ) are very valuable resources from which emotion features can be derived for text representation. These are manually crafted and typically contain emotion-rich formal vocabulary. Hybrid approaches that combine features derived from these static lexicons with n-grams have resulted in bet-ter performance than either alone (Mohammad, 2012b) , (Aman and Szpakowicz, 2008) . However the informal and dynamic nature of social media content makes it harder to adopt these lexicons for emotion analysis. An alternative strategy is to derive features from a dynamic (i.e., learnt) lexicon. Here association metrics such as Pointwise Mutual Information (PMI) can be used to model emotion polarity between a word and emotion labelled content (Mohammad, 2012a) . Such approaches will be used as baselines to compare against our proposed lexicon generation strategies. There are other lexicon generation methods proposed by Rao .et. al (Yanghui Rao and Chen, 2013) and Yang .et. al (Yang et al., 2007) . We do not consider these in our comparative evaluation since these methods require rated emotion labels and emoticon classes respectively.", |
|
"cite_spans": [ |
|
{ |
|
"start": 57, |
|
"end": 90, |
|
"text": "(Strapparava and Valitutti, 2004)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 99, |
|
"end": 123, |
|
"text": "(Saif M. Mohammad, 2013)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 446, |
|
"end": 463, |
|
"text": "(Mohammad, 2012b)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 466, |
|
"end": 493, |
|
"text": "(Aman and Szpakowicz, 2008)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 857, |
|
"end": 874, |
|
"text": "(Mohammad, 2012a)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1037, |
|
"end": 1077, |
|
"text": "Rao .et. al (Yanghui Rao and Chen, 2013)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 1082, |
|
"end": 1114, |
|
"text": "Yang .et. al (Yang et al., 2007)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Lexicon generation, relies on the availability of a labelled corpus from which the word-emotion distributions can be discovered. For this purpose we exploit a distance-supervised approach where indirect cues are used to unearth implicit (or distant) labels that are contained in the corpus (Alec Go and Huang, 2009) . We adopt the approach as in (Wang et al., 2012) to corpus labelling where social media content, and in particular Twitter content is sampled for a predefined set of hashtag cues (P. Shaver, 1987) . Here each set of cues represent a given emotion class. Distant-supervision is particularly suited to Twitter-like platforms because people use hashtags to extensively convey or emphasis the emotion behind their tweets (e.g., That was my best weekend ever.#happy!! #satisfied!). Also given that tweets are length restricted (140 characters), modelling the emotional orientation of words in a Tweet is easier compared to longer documents that are likely to capture complex and mixed emotions. This simplicity and access to sample data has made Twitter one of the most popular domains for emotion analysis research (Wang et al., 2012; Qadir and Riloff, 2013) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 290, |
|
"end": 315, |
|
"text": "(Alec Go and Huang, 2009)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 346, |
|
"end": 365, |
|
"text": "(Wang et al., 2012)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 496, |
|
"end": 513, |
|
"text": "(P. Shaver, 1987)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 1128, |
|
"end": 1147, |
|
"text": "(Wang et al., 2012;", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 1148, |
|
"end": 1171, |
|
"text": "Qadir and Riloff, 2013)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We now outline the problem formally. We start with a set of documents D = {d 1 , d 2 , . . . , d n } where each document d i has an associated label C d i indicating the emotion class to which d i belongs. We consider the case where the documents are tweets. For example, a tweet d i nice sunday #awesome may have a label joy indicating that the tweet belongs to the joy emotion class. We also assume that the labels C d i come from a pre-defined set of six emotion classes anger, fear, joy, sad, surprise, love. Since our techniques are generic and do not depend on the number of emotion classes, we will denote the emotion classes as {C j } N j=1 . Let there be K words extracted from the training documents, denoted as {w i } K i=1 . Our task is to derive a lexicon Lex that quantifies the emotional valence of words (from the tweets in D) to emotion classes. In particular, the lexicon may be thought of as a 2d-associative array where Lex[w][c] indicates the emotional valence of the word w to the emotion class c. When there is no ambiguity, we will use Lex(i, j) to refer to the emotional valence of word w i to the emotion class C j . We will quantify the goodness of the lexicons that are generated using various methods by measuring their performance in an emotion classification task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Definition", |
|
"sec_num": "3" |
|
}, |
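To make this concrete, the 2d-associative array can be held as a nested map. A minimal Python sketch follows; the words and scores are hypothetical toy values, not entries from any actual lexicon.

```python
# The six emotion classes used throughout the paper.
EMOTIONS = ["anger", "fear", "joy", "sad", "surprise", "love"]

# Hypothetical toy entries, for illustration only.
Lex = {
    "awesome": {"anger": 0.02, "fear": 0.01, "joy": 0.80,
                "sad": 0.02, "surprise": 0.10, "love": 0.05},
}

def valence(word, emotion):
    """Lex[w][c]: the emotional valence of `word` to `emotion` (0.0 if unseen)."""
    return Lex.get(word, {}).get(emotion, 0.0)

print(valence("awesome", "joy"))  # 0.8
```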
|
{ |
|
"text": "We now outline the various methods for lexicon generation. We first start off with a simple technique for learning lexicons based on just term frequencies (which we will later use as a baseline technique), followed by more sophisticated methods that are based on conceptual models on how tweets are generated.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexicon Generation Methods", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "A simple way to measure the emotional valence of the word w i to the emotion class C j is to compute the probability of occurrence of w i in a tweet labelled as C j , normalized by its probability across all classes. This leads to:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Term Frequency based Lexicon", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "Lex(i, j) = p(w i |C j ) N k=1 p(w i |C k )", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Term Frequency based Lexicon", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "where the conditional probability is simply computed using term frequencies.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Term Frequency based Lexicon", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p(w i |C j ) = f req(w i , C j ) f req(C j )", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Term Frequency based Lexicon", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "where f req(w i , C j ) is the number of times w i occurs in documents labeled with class C j . f req(C j ) is the total number of documents in C j .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Term Frequency based Lexicon", |
|
"sec_num": "4.1" |
|
}, |
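As a concrete illustration, the following minimal Python sketch computes this term-frequency lexicon (Equations 1 and 2); the corpus representation as (token-list, label) pairs and the name tf_lexicon are assumptions made for the sketch.

```python
from collections import Counter, defaultdict

def tf_lexicon(docs):
    """Term-frequency lexicon (Equations 1 and 2).

    docs: list of (tokens, label) pairs, e.g. (["nice", "sunday"], "joy").
    Returns lex[w][C_j] = p(w|C_j) / sum_k p(w|C_k).
    """
    word_class_freq = defaultdict(Counter)  # freq(w_i, C_j)
    class_doc_count = Counter()             # freq(C_j): documents per class
    for tokens, label in docs:
        class_doc_count[label] += 1
        for w in tokens:
            word_class_freq[w][label] += 1

    lex = {}
    for w, per_class in word_class_freq.items():
        # Equation 2: p(w|C_j) = freq(w, C_j) / freq(C_j)
        p = {c: per_class[c] / class_doc_count[c] for c in class_doc_count}
        z = sum(p.values())
        # Equation 1: normalise across the classes
        lex[w] = {c: p[c] / z for c in p}
    return lex
```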
|
{ |
|
"text": "The formulation in the previous section generates a word-emotion matrix L by observing the term frequencies within a class. However term frequencies alone do not capture the term-class associations, because not all frequently occurring terms exhibit the characteristics of a class. For example, a term sunday that occurs in a tweet nice sunday #awesome labelled joy is evidently not indicative of the class joy; however, the frequency based computation increments the weight of sunday wrt the class joy by virtue of this occurrence. In the following sections, we propose generative models that seek to remedy such problems of the simple term frequency based lexicon.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Iterative methods for Lexicon Generation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "As discussed above, though a document is labelled with an emotion class, not all terms relate strongly to the labelled emotion. Some documents may have terms conveying a different emotion than what the document is labelled with, since the label is chosen based on the most prominent emotion in the tweet. Additionally, some words could be emotion-neutral (e.g., sunday in our example tweet) and could be conveying non-emotional information. We now describe two generative models that account for such considerations, and then outline methods to learn lexicons based on them.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generative models for Documents", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "Mixture of Classes Model: Let L C k be the unigram language model (Liu and Croft, 2005) that expresses the lexical character for the emotion class C k ; though microblogs are short text fragments, language modeling approaches have been shown to be effective in similarity assesment between them (Deepak and Chakraborti, 2012) . We model a document d i to be generated from across the emotion class language models:", |
|
"cite_spans": [ |
|
{ |
|
"start": 66, |
|
"end": 87, |
|
"text": "(Liu and Croft, 2005)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 295, |
|
"end": 325, |
|
"text": "(Deepak and Chakraborti, 2012)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generative models for Documents", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "1. For each word w j in document d i , (a) Lookup the unit vector [\u03bb (1) d ij , . . . , \u03bb (N ) d ij ];", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generative models for Documents", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "This unit vector defines a probability distribution over the language models. d ij is high for words in d i since it is likely that majority of the words are sampled from the L C d i language model. The posterior probability in accordance with this model can then be intuitively formulated as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generative models for Documents", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "P (d i , C d i |\u03b8) = w j \u2208d i N x=1 \u03bb (x) d ij \u00d7 L Cx (w j ) (3)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generative models for Documents", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "where \u03b8 is the parameters {L C j } N j=1 , \u03bb and C d i is the class label for document d i .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generative models for Documents", |
|
"sec_num": "4.2.1" |
|
}, |
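A minimal Python sketch of this generative story and of the Equation 3 likelihood, assuming language models are represented as word-to-probability dicts; the helper names are hypothetical.

```python
import random

def sample_doc_mixture(lambdas_per_word, language_models):
    """Generative story above: for each word slot, pick a class language model
    according to that slot's lambda unit vector, then sample a word from the
    chosen model's multinomial distribution."""
    doc = []
    for lambdas in lambdas_per_word:  # one unit vector per word position
        lm = random.choices(language_models, weights=lambdas, k=1)[0]
        words, probs = zip(*lm.items())
        doc.append(random.choices(words, weights=probs, k=1)[0])
    return doc

def mixture_likelihood(doc, lambdas_per_word, language_models):
    """Equation 3: product over words of the lambda-weighted sum of
    language-model probabilities."""
    p = 1.0
    for w, lambdas in zip(doc, lambdas_per_word):
        p *= sum(l * lm.get(w, 0.0) for l, lm in zip(lambdas, language_models))
    return p
```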
|
{ |
|
"text": "Class and Neutral Model: We now introduce another model where the words in a document are assumed to be sampled from either the language model of the corresponding (i.e., labelled) emotion class or from the neutral language model, L C . Thus, the generative model for a document d i labelled with emotion class C d i would be as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generative models for Documents", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "1. For each word w j in document d i , (a) Lookup the weight \u00b5 d ij ; this parameter determines the mix of the labelled emotion class and the neutral class, for", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generative models for Documents", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "w j in d i (b) Choose L C k with a probability of \u00b5 d ij ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generative models for Documents", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "and L C with a probability of 1.0 \u2212 \u00b5 d ij (c) Sample w j in accordance with the multinomial distribution of the chosen language model", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generative models for Documents", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "The posterior probability in accordance with this model can be intuitively formulated as :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generative models for Documents", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "P (d i , C d i |\u03b8) = w j \u2208d i \u00b5 d ij \u00d7 L C d i (w j ) + (1 \u2212 \u00b5 d ij ) \u00d7 L C (w j ) (4)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generative models for Documents", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "where \u03b8 is the parameters {L C j } N j=1 , L C , \u00b5 . Equation 3 models a document to exhibit characteristics of many classes with different levels of magnitude. Equation 4 models a document to be a composition of terms that characterise one class and other general terms; a similar formulation where a document is modeled using a mix of two models has been shown to be useful in characterizing problem-solution documents Deepak and Visweswariah, 2014) . The central idea of the expectation maximization (EM) algorithm is to maximize the probability of the data, given the language models {L C j } N j=1 and L C . The term weights are estimated from the language models (E-step) and the language models are re-estimated (M-step) using the term weights from the E-step. Thus the maximum likelihood estimation process in EM alternates between the E-step and the M-step. In the following sections we detail the EM process for the two generative models separately. We compare and contrast the two variants of the EM algorithm in Table 1 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 421, |
|
"end": 451, |
|
"text": "Deepak and Visweswariah, 2014)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1024, |
|
"end": 1031, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Generative models for Documents", |
|
"sec_num": "4.2.1" |
|
}, |
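A skeleton of this E-step/M-step alternation is sketched below; the stopping rule on total probability change is an assumption, since the paper only refers to threshold limits δ (Table 1).

```python
def run_em(docs, init_lms, e_step, m_step, max_iters=50, delta=1.0):
    """Alternate E-step and M-step until the language models stop changing.

    docs:     list of (tokens, label) pairs.
    init_lms: dict class -> {word: probability} (initial language models).
    e_step:   callable estimating term weights from the current models.
    m_step:   callable re-learning the models from those weights.
    """
    lms = init_lms
    for _ in range(max_iters):
        weights = e_step(docs, lms)      # e.g. the lambda or mu estimates
        new_lms = m_step(docs, weights)  # re-learned language models
        change = sum(abs(new_lms[c].get(w, 0.0) - lms[c].get(w, 0.0))
                     for c in new_lms for w in new_lms[c])
        lms = new_lms
        if change < delta:  # assumed convergence criterion
            break
    return lms
```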
|
{ |
|
"text": "We will use a matrix based representation for the language model and the lexicon, to simplify the illustration of the EM steps. Under the matrix notation, L (p) denotes the K \u00d7N matrix at the p th iteration where the i th column is the language model corresponding to the i th class, i.e., L C i . The p th Estep estimates the various \u03bb d ij vectors for all documents based on the language models in L (p\u22121) , whereas the M-step re-learns the language models based on the \u03bb values from the E-step. The steps are detailed as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "EM with Mixture of Classes Model", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "E-Step: The \u03bb (n)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "EM with Mixture of Classes Model", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "d ij is simply estimated to the fractional support for the j th word in the i th document (denoted as w ij ) from the n th class language model:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "EM with Mixture of Classes Model", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03bb (n) d ij = L (p\u22121) Cn (w ij ) x L (p\u22121) Cx (w ij )", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "EM with Mixture of Classes Model", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "M-Step: As mentioned before in Table 1 this step learns the language models from the \u03bb estimates of the previous step. As an example, if a word w is estimated to have come from the joy language model with a weight (i.e., \u03bb) 0.5, it would contribute 0.5 as its count to the joy language model. Thus, every occurrence of a word is split across language models using their corresponding \u03bb estimates:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 31, |
|
"end": 38, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "EM with Mixture of Classes Model", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "L (p) Cn [w] = i j I(w ij = w) \u00d7 \u03bb (n) d ij i j \u03bb (n) d ij (6)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "EM with Mixture of Classes Model", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "where the indicator function I(w ij = w) evaluates to 1 if w ij = w is satisfied and 0 otherwise.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "EM with Mixture of Classes Model", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "After any M-Step, the lexicon can be obtained by normalizing the L (p) language models so that the weights for each word adds up to 1.0. i.e.,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "EM with Mixture of Classes Model", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "Lex (p) (i, j) = L (p) C j [w i ] K x=1 L (p) Cx [w i ]", |
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "EM with Mixture of Classes Model", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "In the above equation, the suffix (i, j) refers to the i th word in the j th class, confirming to our 2darray representation of the language models. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "EM with Mixture of Classes Model", |
|
"sec_num": "4.2.2" |
|
}, |
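The E-step (Equation 5), M-step (Equation 6) and normalization (Equation 7) for this model might be sketched as follows, reusing the (tokens, label) corpus representation and dict-based language models assumed in the earlier sketches.

```python
from collections import defaultdict

def e_step_mixture(docs, lms, classes):
    """Equation 5: lambda^(n)_{d_ij} is the fractional support for word w_ij
    from the n-th class language model of the previous iteration."""
    lambdas = []
    for tokens, _label in docs:
        per_doc = []
        for w in tokens:
            support = [lms[c].get(w, 0.0) for c in classes]
            z = sum(support) or 1.0
            per_doc.append([s / z for s in support])
        lambdas.append(per_doc)
    return lambdas

def m_step_mixture(docs, lambdas, classes):
    """Equation 6: each word occurrence contributes its lambda estimate as a
    fractional count to every class language model."""
    counts = {c: defaultdict(float) for c in classes}
    totals = {c: 0.0 for c in classes}
    for (tokens, _label), per_doc in zip(docs, lambdas):
        for w, lam in zip(tokens, per_doc):
            for c, l in zip(classes, lam):
                counts[c][w] += l
                totals[c] += l
    return {c: {w: v / totals[c] for w, v in counts[c].items()} for c in classes}

def normalise_to_lexicon(lms, classes):
    """Equation 7: normalise each word's weights across classes to sum to 1."""
    lex = {}
    for w in {w for c in classes for w in lms[c]}:
        z = sum(lms[c].get(w, 0.0) for c in classes) or 1.0
        lex[w] = {c: lms[c].get(w, 0.0) / z for c in classes}
    return lex
```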
|
{ |
|
"text": "The main difference in this case, when compared to the previous is that we need to estimate a neutral language model L C in addition to the class specific models. We also have fewer parameters to learn since the \u00b5 d ij is a single value rather than a vector of N values as in the previous case. E-Step: \u00b5 d ij is estimated to the relative weight of the word w ij from across the language model of the corresponding class, and the neutral model:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "EM with Class and Neutral Model", |
|
"sec_num": "4.2.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u00b5 d ij = L (p\u22121) C d i (w ij ) L (p\u22121) C d i (w ij ) + L (p\u22121) C (w ij )", |
|
"eq_num": "(8)" |
|
} |
|
], |
|
"section": "EM with Class and Neutral Model", |
|
"sec_num": "4.2.3" |
|
}, |
|
{ |
|
"text": "Where C d i denotes the class corresponding to the label of the document d i .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "EM with Class and Neutral Model", |
|
"sec_num": "4.2.3" |
|
}, |
|
{ |
|
"text": "M-Step: In a slight contrast from the M-Step for the earlier case as shown in Table 1 , a word estimated to have a weight (i.e., \u00b5 value) of 0.2 would contribute 20% of its count to the corresponding class' language model, while the remaining would go to the neutral language model L C . Since the class-specific and neutral language models are estimated differently, we have two separate equations:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 78, |
|
"end": 85, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "EM with Class and Neutral Model", |
|
"sec_num": "4.2.3" |
|
}, |
|
{ |
|
"text": "L (p) Cn [w] = i,label(d i )=Cn j I(w ij = w) \u00d7 \u00b5 d ij i,label(d i )=Cn j \u00b5 d ij (9) L (p) C [w] = i j I(w ij = w) \u00d7 (1.0 \u2212 \u00b5 d ij ) i j (1.0 \u2212 \u00b5 d ij ) (10) where label(d i ) = C n", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "EM with Class and Neutral Model", |
|
"sec_num": "4.2.3" |
|
}, |
|
{ |
|
"text": "As is obvious, the classspecific language models are contributed to by the documents labelled with the class whereas the neutral language model has contributions from all documents. The normalization to achieve the lexicon is exactly the same as in the mixture of classes case, and hence, is omitted here.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "EM with Class and Neutral Model", |
|
"sec_num": "4.2.3" |
|
}, |
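A corresponding sketch for the class-and-neutral variant (Equations 8, 9 and 10); the 0.5 fallback for words unseen by both models is an assumption, not something the paper specifies.

```python
from collections import defaultdict

def e_step_class_neutral(docs, lms, neutral_lm):
    """Equation 8: mu_{d_ij} is the relative weight of w_ij under the labelled
    class's model versus the neutral model."""
    mus = []
    for tokens, label in docs:
        per_doc = []
        for w in tokens:
            pc, pn = lms[label].get(w, 0.0), neutral_lm.get(w, 0.0)
            # Assumed fallback when both models assign zero probability.
            per_doc.append(pc / (pc + pn) if pc + pn > 0 else 0.5)
        mus.append(per_doc)
    return mus

def m_step_class_neutral(docs, mus, classes):
    """Equations 9 and 10: a word contributes mu of its count to its document's
    class model and (1 - mu) to the shared neutral model."""
    class_counts = {c: defaultdict(float) for c in classes}
    class_totals = {c: 0.0 for c in classes}
    neutral_counts, neutral_total = defaultdict(float), 0.0
    for (tokens, label), per_doc in zip(docs, mus):
        for w, mu in zip(tokens, per_doc):
            class_counts[label][w] += mu
            class_totals[label] += mu
            neutral_counts[w] += 1.0 - mu
            neutral_total += 1.0 - mu
    lms = {c: {w: v / class_totals[c] for w, v in class_counts[c].items()}
           for c in classes if class_totals[c] > 0}
    neutral_lm = {w: v / (neutral_total or 1.0) for w, v in neutral_counts.items()}
    return lms, neutral_lm
```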
|
{ |
|
"text": "In the case of iterative approaches like EM, the initialization is often considered crucial. In our case, we initialize the unigram class language models by simply aggregating the scores of the words in tweets labelled with the respective class. Thus, the joy language model would be the initialized to be the maximum likelihood model to explain the documents labelled joy. In the case of the class and neutral generative model, we additionally build the neutral language model by aggregating counts across all the documents in the corpus (regardless of what their emotion label is).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "EM Initialization", |
|
"sec_num": "4.2.4" |
|
}, |
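A sketch of this maximum-likelihood initialization, under the same assumed (tokens, label) corpus representation as the earlier sketches:

```python
from collections import Counter

def init_language_models(docs, classes):
    """Per-class unigram models from the tweets labelled with each class, plus
    a neutral model aggregated over the whole corpus (the latter is used only
    by the class-and-neutral variant)."""
    class_counts = {c: Counter() for c in classes}
    corpus_counts = Counter()
    for tokens, label in docs:
        class_counts[label].update(tokens)
        corpus_counts.update(tokens)
    lms = {}
    for c, cnt in class_counts.items():
        total = sum(cnt.values()) or 1
        lms[c] = {w: n / total for w, n in cnt.items()}
    total = sum(corpus_counts.values()) or 1
    neutral_lm = {w: n / total for w, n in corpus_counts.items()}
    return lms, neutral_lm
```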
|
{ |
|
"text": "In this section we detail our experimental evaluation. We begin with the details about the Twitter data used in our experiments. We then discuss how we created the folds for a cross validation experiment. Thereafter we detail the classifi-cation task used to evaluate the word-emotion lexicon. Finally we discuss the performance of our proposed methods for lexicon generation in comparison with other manually crafted lexicons, PMI based method for lexicon generation and the standard BoW in an emotion classification task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The data set used in our experiments was a corpus of emotion labelled tweets harnessed by (Wang et al., 2012) . The data set was available in the form of tweet ID's and the corresponding emotion label. The emotion labels comprised namely : anger, fear, joy, sadness, surprise, love and thankfulness. We used the Twitter search API 1 to obtain the tweets by searching with the corresponding tweet ID. After that we decided to consider only tweets that belong to the primary set of emotions defined by Parrott (Parrott, 2001) . The emotion classes in our case included anger, fear, joy, sadness, surprise and love. We had a collection of 0.28 million tweets which we used to carry out a 10 fold cross-validation experiment.", |
|
"cite_spans": [ |
|
{ |
|
"start": 90, |
|
"end": 109, |
|
"text": "(Wang et al., 2012)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 508, |
|
"end": 523, |
|
"text": "(Parrott, 2001)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Twitter Dataset", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We decided to generate the folds manually,in order to compare the performance of the different algorithms used in our experiments. We split the collection of 0.28 million tweets into 10 equal size sets to generate 10 folds with different training and test sets in each fold. Also all the folds in our experiments were obtained by stratified sampling, ensuring that we had documents representing all the classes in both the training and test sets. We used the training data in each fold to generate the word-emotion lexicon and measured the performance of it on the test data in an emotion classification task. Table 2 shows the average distribution of the different classes namely: anger, fear, joy, sadness, surprise and love over the 10 folds. Observe that emotions such as joy and sadness had a very high number of representative documents . Emotions such as anger,love and fear were the next most represented emotions. The emotion surprise had very few representative documents compared to that of the other emotions.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 610, |
|
"end": 617, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Twitter Dataset", |
|
"sec_num": "5.1" |
|
}, |
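A minimal sketch of such a stratified split; the round-robin assignment and fixed seed are assumptions made for illustration.

```python
import random
from collections import defaultdict

def stratified_folds(docs, k=10, seed=0):
    """Split (tokens, label) pairs into k folds that preserve the class
    proportions, mirroring the stratified sampling described above."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for doc in docs:
        by_class[doc[1]].append(doc)
    folds = [[] for _ in range(k)]
    for items in by_class.values():
        rng.shuffle(items)
        for i, doc in enumerate(items):
            folds[i % k].append(doc)  # round-robin keeps folds equal-sized
    return folds

# Fold f uses folds[f] as the test set and the union of the rest for training.
```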
|
{ |
|
"text": "We adopted an emotion classification task in order to evaluate the quality of the word-emotion lexicon generated using the proposed methods. Also research in emotion analysis of text suggest that 1 https://dev.twitter.com/docs/using-search Table 2: Average distribution of emotions across the folds Emotion Training Test Anger 58410 6496 Fear 13692 1548 Joy 74108 8235 Sadness 63711 7069 Surprise 2533 282 Love 31127 3464 Total 243855 27095 lexicon based features were effective compared to that of n-gram features in an emotion classification of text (Aman and Szpakowicz, 2008; Mohammad, 2012a ). Therefore we decided to use the lexicon to derive features for text representation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 577, |
|
"end": 604, |
|
"text": "(Aman and Szpakowicz, 2008;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 605, |
|
"end": 620, |
|
"text": "Mohammad, 2012a", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 240, |
|
"end": 465, |
|
"text": "Table 2: Average distribution of emotions across the folds Emotion Training Test Anger 58410 6496 Fear 13692 1548 Joy 74108 8235 Sadness 63711 7069 Surprise 2533 282 Love 31127 3464 Total 243855 27095", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluating the word-emotion lexicon", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We followed a similar procedure as in (Mohammad, 2012a) to define integer valued features for text representation. We define one feature for each emotion to capture the number of words in a training/test document that are associated with the corresponding emotion. The feature vector for a training/test document was constructed using the wordemotion lexicon. Given a training/test document d we construct the corresponding feature vector d =< count(e 1 ), count(e 2 ), . . . , count(e m )) > of length m (in our case m is 6), wherein count(e i ) represents the number of words in d that exhibit emotion e i . count(e i ) is computed as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating the word-emotion lexicon", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "count(e i ) = w\u2208d I( max j=1,...,m Lex(w, j) = C i )", |
|
"eq_num": "(11)" |
|
} |
|
], |
|
"section": "Evaluating the word-emotion lexicon", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "where I(. . .) is the indicator function as used previously. For example if a document has 1 joy word, 2 love words and 1 surprise word the feature vector for the document would be (0, 0, 1, 0, 1, 2) . We used the different lexicon generation methods discussed in sections 4.1, 4.2.2 and 4.2.3 to construct the feature vectors for the documents. In the case of the lexicon generated as in section 4.2.3 the max in equation 11 is computed over m + 1 columns. We also used the lexicon generation method proposed in (Mohammad, 2012a) to construct the feature vectors. PMI was used in (Mohammad, 2012a) to generate a word-emotion lexicon which is as follows :", |
|
"cite_spans": [ |
|
{ |
|
"start": 513, |
|
"end": 530, |
|
"text": "(Mohammad, 2012a)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 181, |
|
"end": 199, |
|
"text": "(0, 0, 1, 0, 1, 2)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluating the word-emotion lexicon", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "Lex(i, j) = log f req(w i , C j ) * f req(\u00acC j ) f req(C j ) * f req(w i , \u00acC j )", |
|
"eq_num": "(12)" |
|
} |
|
], |
|
"section": "Evaluating the word-emotion lexicon", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "where f req(w i , C j ) is the number of times ngram w i occurs in a document labelled with emotion C j , f req(w i , \u00acC j ) is the number of times ngram w i occurs in a document not labelled with emotion C j . f req(C j ) and f req(\u00acC j ) are the number of documents labelled with emotion C j and \u00acC j respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating the word-emotion lexicon", |
|
"sec_num": "5.2" |
|
}, |
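The PMI baseline of Equation 12 and the count features of Equation 11 might be sketched as follows; scoring undefined log ratios (zero counts) as 0 is an assumption, and the corpus representation carries over from the earlier sketches.

```python
import math
from collections import Counter, defaultdict

def pmi_lexicon(docs, classes):
    """PMI lexicon of Equation 12 (Mohammad, 2012a), over unigrams."""
    doc_count = Counter()              # freq(C_j)
    word_class = defaultdict(Counter)  # freq(w_i, C_j)
    for tokens, label in docs:
        doc_count[label] += 1
        for w in tokens:
            word_class[w][label] += 1
    n_docs = sum(doc_count.values())
    lex = {}
    for w, per_class in word_class.items():
        w_total = sum(per_class.values())
        lex[w] = {}
        for c in classes:
            num = per_class[c] * (n_docs - doc_count[c])   # freq(w,C) * freq(notC)
            den = doc_count[c] * (w_total - per_class[c])  # freq(C) * freq(w,notC)
            # Assumption: undefined log ratios (zero counts) score 0.
            lex[w][c] = math.log(num / den) if num > 0 and den > 0 else 0.0
    return lex

def count_features(tokens, lex, classes):
    """Equation 11: count, per emotion, the words whose strongest lexicon
    association is that emotion."""
    counts = dict.fromkeys(classes, 0)
    for w in tokens:
        if w in lex:
            counts[max(classes, key=lambda c: lex[w][c])] += 1
    return [counts[c] for c in classes]
```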
|
{ |
|
"text": "Apart from the aforementioned automatically generated lexicons we also used manually crafted lexicons such as WordNet Affect (Strapparava and Valitutti, 2004) and the NRC word-emotion association lexicon (Saif M. Mohammad, 2013) to construct the feature vectors for the documents. Unlike the automatic lexicons, the general purpose lexicons do not offer numerical scores. Therefore we looked for presence/absence of words in the lexicons to obtain the feature vectors. Furthermore we also represented documents in the standard BoW representation. We performed feature selection using the metric Chisquare 2 , to select the top 500 features to represent documents. Since tweets are very short we incorporated a binary representation for BoW instead of term frequency. For classification we used a multiclass SVM classifier 3 and all the experiments were conducted using the data mining software Weka 2 . We used standard metrics such as Precision, Recall and F-measure to compare the performance of the different algorithms. In the following section we analyse the experimental results for TF-lex (Sec 4.1), EMallclass-lex (Sec 4.2.2), EMclass-corpuslex (Sec 4.2.3), PMI-lex (Mohammad, 2012a) , WNA-lex (Strapparava and Valitutti, 2004) , NRClex (Saif M. Mohammad, 2013) and BoW in an emotion classification task. Also in the case of EM based methods we experimented with different threshold limits \u03b4 shown in Table 1 . We report the results only w.r.t \u03b4 = 1 due to space limitations. Table 3 shows the F-scores obtained for different methods for each emotion. Observe that the F-score for each emotion shown in Table 3 for a method is the average F-score obtained over the 10 test sets (one per fold). We carried a two tail paired t-test 4 between the baselines and our proposed methods to measure statistical significance for performance on the test set in each fold. From the t-test we observed that our proposed methods are statistically significant over the baselines with a confidence of 95% (i.e with p value 0.05). Also note that the best results obtained for an emotion are highlighted in bold. It is evident from the results that the manually crafted lexicons Worndnet Affect and the NRC word-emotion association lexicon are significantly outperformed by all the automatically generated lexicons for all emotions. Also the BoW model significantly outperforms the manually crafted lexicons suggesting that these lexicons are not sufficiently effective for emotion mining in a domain like Twitter.", |
|
"cite_spans": [ |
|
{ |
|
"start": 125, |
|
"end": 158, |
|
"text": "(Strapparava and Valitutti, 2004)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 204, |
|
"end": 228, |
|
"text": "(Saif M. Mohammad, 2013)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 1174, |
|
"end": 1191, |
|
"text": "(Mohammad, 2012a)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1202, |
|
"end": 1235, |
|
"text": "(Strapparava and Valitutti, 2004)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 1245, |
|
"end": 1269, |
|
"text": "(Saif M. Mohammad, 2013)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1409, |
|
"end": 1416, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
}, |
|
{ |
|
"start": 1484, |
|
"end": 1491, |
|
"text": "Table 3", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 1611, |
|
"end": 1618, |
|
"text": "Table 3", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluating the word-emotion lexicon", |
|
"sec_num": "5.2" |
|
}, |
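The paper's pipeline used Weka and LIBLINEAR; an analogous sketch in scikit-learn (an assumption, not the authors' actual tooling) is shown below.

```python
# Binary bag-of-words, chi-square selection of the top 500 features, and a
# linear SVM, mirroring the setup described above.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def bow_classifier():
    return make_pipeline(
        CountVectorizer(binary=True),  # binary BoW, suited to short tweets
        SelectKBest(chi2, k=500),      # top 500 features by chi-square
        LinearSVC(),                   # multiclass linear SVM (one-vs-rest)
    )

# clf = bow_classifier(); clf.fit(train_texts, train_labels)
# predictions = clf.predict(test_texts)
```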
|
{ |
|
"text": "When compared with BoW the PMI-lex proposed by (Mohammad, 2012a) achieves a 2% gain w.r.t emotion love, a 0.6% gain w.r.t emotion joy and 1.28% gain w.r.t emotion sadness. However in the case of emotions such as fear and surprise BoW achieves significant gains of 11.17% and 20.96% respectively. The results suggest that the PMI-lex was able to leverage the availability of adequate training examples to learn the patterns about emotions such as anger, joy, sadness and love. However given that not all emotions are widely expressed a lexicon generation method that relies heavily on abundant training data could be ineffective to mine less represented emotions. Now we analyse the results obtained for the lexicons generated from our proposed methods and compare them with BoW and PMI-lex. From the results obtained for our methods in Table 3 it suggests that our methods achieve the best Fscores for 4 emotions namely anger, fear, sadness and love out of the 6 emotions. In particular the EM-class-corpus-lex method obtains the best F-score for 3 emotions namely anger, sadness and love. When compared with BoW and PMI-lex, EM-class-corpus-lex obtains a gain of 0.85% and 0.93% respectively w.r.t emotion anger, 1.85% and 0.57% respectively w.r.t emotion sadness, 18.67% and 16.88% respectively w.r.t emotion love. Our method TF-lex achieves a gain of 5.47% and 16.64% respectively over BoW and PMI-lex w.r.t emotion fear. Furthermore w.r.t emotion surprise all our proposed methods outperform PMI-lex. However BoW still obtains the best F-score for emotion surprise.", |
|
"cite_spans": [ |
|
{ |
|
"start": 47, |
|
"end": 64, |
|
"text": "(Mohammad, 2012a)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 836, |
|
"end": 843, |
|
"text": "Table 3", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Analysis", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "When we compared the results between our own methods EM-class-corpus-lex obtains the best F-scores for emotions anger, joy, sadness and love. We expected that modelling a document to exhibit more than one emotion (EM-allclasslex) would better distinguish the class boundaries. However given that tweets are very short it was observed that modelling a document as a mixture of emotion terms and general terms (EM-classcorpus-lex) yielded better results. However we expect EM-allclass-lex to be more effective in other domains such as blogs, discussion forums wherein the text size is larger compared to tweets. Table 4 summarizes the overall F-scores obtained for the different methods. Note that the F-scores shown in Table 4 are the average overall F-scores over the 10 test sets. Again we conducted a two tail paired t-test 4 between the baselines and our proposed methods to measure the performance gains. It was observed that all our proposed methods are statistically significant over the baselines with a confidence of 95% (i.e with p value 0.05). In Table 4 we italicize all our best performing methods and highlight in bold the best among them. From the results it is evident that our proposed methods obtain significantly better Fscores over all the baselines with EM-class-corpus achieving the best F-score with a gain of 3.21%, 2.9%, 39.03% and 38.7% over PMI-lex, BoW, WNA-lex and NRC-lex respectively. Our findings reconfirm previous findings in the literature that emotion lexicon based features improve over corpus based n-gram features in a emotion classification task. Also our findings suggest that domain specific automatic lexicons are significantly better over manually crafted lexicons.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 610, |
|
"end": 617, |
|
"text": "Table 4", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 718, |
|
"end": 725, |
|
"text": "Table 4", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 1057, |
|
"end": 1064, |
|
"text": "Table 4", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Analysis", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "We proposed a set of methods to automatically extract a word-emotion lexicon from an emotion labelled corpus. Thereafter we used the lexicons to derive features for text representation and showed that lexicon based features significantly outperform the standard BoW features in the emotion classification of tweets. Furthermore our lexicons achieve significant improvements over the general purpose lexicons and the PMI based automatic lexicon in the classification experiments. In future we intend to leverage the lexicons to design different text representations and also test them on emotional content from other domains. Automatically generating human-interpretable models (e.g., (Balachandran et al., 2012)) to accompany emotion classifier decisions is another interesting direction for future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "http://www.cs.waikato.ac.nz/ml/weka/ 3 http://www.csie.ntu.edu.tw/ cjlin/liblinear/ 4 http://office.microsoft.com/en-gb/excel-help/ttest-HP005209325.aspx", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Twitter sentiment classification using distant supervision", |
|
"authors": [ |
|
{ |
|
"first": "Richa", |
|
"middle": [], |
|
"last": "Bhayani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alec", |
|
"middle": [], |
|
"last": "Go", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lei", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richa Bhayani Alec Go and Lei Huang. 2009. Twit- ter sentiment classification using distant supervision. Processing.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Emotions from text: machine learning for text-based emotion prediction", |
|
"authors": [ |
|
{ |
|
"first": "Cecilia", |
|
"middle": [], |
|
"last": "Ovesdotter Alm", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Sproat", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT '05", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "579--586", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cecilia Ovesdotter Alm, Dan Roth, and Richard Sproat. 2005. Emotions from text: machine learn- ing for text-based emotion prediction. In Proceed- ings of the conference on Human Language Tech- nology and Empirical Methods in Natural Language Processing, HLT '05, pages 579-586, Stroudsburg, PA, USA. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Using roget's thesaurus for fine-grained emotion recognition", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Aman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Szpakowicz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "ternational Joint Conference on Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Aman and S. Szpakowicz. 2008. Using roget's the- saurus for fine-grained emotion recognition. In In- ternational Joint Conference on Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Interpretable and reconfigurable clustering of document datasets by deriving word-based rules", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Vipin Balachandran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Deepak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deepak", |
|
"middle": [], |
|
"last": "Khemani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Knowl. Inf. Syst", |
|
"volume": "32", |
|
"issue": "3", |
|
"pages": "475--503", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vipin Balachandran, Deepak P, and Deepak Khemani. 2012. Interpretable and reconfigurable clustering of document datasets by deriving word-based rules. Knowl. Inf. Syst., 32(3):475-503.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Modelling public mood and emotion : Twitter sentiment and socio-economic phenomena", |
|
"authors": [ |
|
{ |
|
"first": "Johan", |
|
"middle": [], |
|
"last": "Bollen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alberto", |
|
"middle": [], |
|
"last": "Pepe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Huina", |
|
"middle": [], |
|
"last": "Mao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "CoRR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Johan Bollen, Alberto Pepe, and Huina Mao. 2009. Modelling public mood and emotion : Twitter senti- ment and socio-economic phenomena. In CoRR.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Tweet, tweet, retweet: Conversational aspects of retweeting on twitter", |
|
"authors": [ |
|
{ |
|
"first": "Danah", |
|
"middle": [], |
|
"last": "Boyd", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Scott", |
|
"middle": [], |
|
"last": "Golder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gilad", |
|
"middle": [], |
|
"last": "Lotan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 2010 43rd Hawaii International Conference on System Sciences", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Danah Boyd, Scott Golder, and Gilad Lotan. 2010. Tweet, tweet, retweet: Conversational aspects of retweeting on twitter. In Proceedings of the 2010 43rd Hawaii International Conference on System Sciences, Washington, DC, USA.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Finding relevant tweets", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Deepak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sutanu", |
|
"middle": [], |
|
"last": "Chakraborti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "WAIM", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "228--240", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P. Deepak and Sutanu Chakraborti. 2012. Finding rel- evant tweets. In WAIM, pages 228-240.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Unsupervised solution post identification from discussion forums", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Deepak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karthik", |
|
"middle": [], |
|
"last": "Visweswariah", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P. Deepak and Karthik Visweswariah. 2014. Unsu- pervised solution post identification from discussion forums. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Two-part segmentation of text documents", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Deepak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karthik", |
|
"middle": [], |
|
"last": "Visweswariah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nirmalie", |
|
"middle": [], |
|
"last": "Wiratunga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sadiq", |
|
"middle": [], |
|
"last": "Sani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "CIKM", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "793--802", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P. Deepak, Karthik Visweswariah, Nirmalie Wiratunga, and Sadiq Sani. 2012. Two-part segmentation of text documents. In CIKM, pages 793-802.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Classification of emotions in internet chat : An application of machine learning using speech phonemes", |
|
"authors": [ |
|
{ |
|
"first": "Lars", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Holzman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Pottenger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lars E.Holzman and Pottenger William M. 2003. Classification of emotions in internet chat : An application of machine learning using speech phonemes. Technical report, Technical report, Leigh University.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "An argument for basic emotions", |
|
"authors": [ |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Ekman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Cognition and Emotion", |
|
"volume": "6", |
|
"issue": "3", |
|
"pages": "169--200", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paul Ekman. 1992. An argument for basic emotions. Cognition and Emotion, 6(3):169-200.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Automated mark up of affective information in english text. Text, Speech and Dialouge", |
|
"authors": [ |
|
{ |
|
"first": "Virginia", |
|
"middle": [], |
|
"last": "Francisco", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pablo", |
|
"middle": [], |
|
"last": "Gervas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Lecture Notes in Computer Science", |
|
"volume": "4188", |
|
"issue": "", |
|
"pages": "375--382", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Virginia Francisco and Pablo Gervas. 2006. Auto- mated mark up of affective information in english text. Text, Speech and Dialouge, volume 4188 of Lecture Notes in Computer Science:375-382.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Representing emotinal momentum within expressive internet communication", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "John", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Boucouvalas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhe", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 24th IASTED international conference on Internet and multimedia systems and applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "183--188", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David John, Anthony C. Boucouvalas, and Zhe Xu. 2006. Representing emotinal momentum within ex- pressive internet communication. In In Proceed- ings of the 24th IASTED international conference on Internet and multimedia systems and applications, pages 183-188, Anaheim, CA, ACTA Press.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "empatweet: Annotating and detecting emotions on twitter", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Roberts", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Roach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Guthrie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Harabagiu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3806--3813", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Johnson J. Guthrie K. Roberts, M.A. Roach and S.M. Harabagiu. 2012. \"empatweet: Annotating and de- tecting emotions on twitter\",. In in Proc. LREC, 2012, pp.3806-3813.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Detecting sadness in 140 characters: Sentiment analysis of mourning of michael jackson on twitter", |
|
"authors": [ |
|
{ |
|
"first": "Elsa", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Gilbert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Edwards", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Erhardt", |
|
"middle": [], |
|
"last": "Graeff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elsa Kim, Sam Gilbert, J.Edwards, and Erhardt Graeff. 2009. Detecting sadness in 140 characters: Senti- ment analysis of mourning of michael jackson on twitter.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Statistical language modeling for information retrieval", |
|
"authors": [ |
|
{ |
|
"first": "Xiaoyong", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bruce", |
|
"middle": [], |
|
"last": "Croft", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaoyong Liu and W Bruce Croft. 2005. Statistical language modeling for information retrieval. Tech- nical report, DTIC Document.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Emotion estimation and reasoning based on affective textual interaction", |
|
"authors": [ |
|
{ |
|
"first": "Chunling", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Helmut", |
|
"middle": [], |
|
"last": "Prendinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mitsuru", |
|
"middle": [], |
|
"last": "Ishizuka", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "First International Conference on Affective Computing and Intelligent Interaction (ACII-2005)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "622--628", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chunling Ma, Helmut Prendinger, and Mitsuru Ishizuka. 2005. Emotion estimation and reasoning based on affective textual interaction. In First In- ternational Conference on Affective Computing and Intelligent Interaction (ACII-2005), pages 622-628, Beijing, China.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "A corpus-based approach for finding happiness", |
|
"authors": [ |
|
{ |
|
"first": "Rada", |
|
"middle": [], |
|
"last": "Mihalcea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hugo", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "AAAI-2006", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rada Mihalcea and Hugo Liu. 2006. A corpus-based approach for finding happiness. In In AAAI-2006", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Spring Symposium on Computational Approaches to Analysing Weblogs", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "139--144", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Spring Symposium on Computational Approaches to Analysing Weblogs, pages 139-144. AAAI press.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Tracking seniment in mail : How genders differ on emotional axes", |
|
"authors": [ |
|
{ |
|
"first": "Saif", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Mohammad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tony", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 2nd Workshop on Computational Approaches to Subjectivity and Sentiment Analysis(WASSA 2011)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "70--79", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Saif M. Mohammad and Tony Yang. 2011. Tracking seniment in mail : How genders differ on emotional axes. In In Proceedings of the 2nd Workshop on Computational Approaches to Subjectivity and Sen- timent Analysis(WASSA 2011), pages 70-79, Port- land, Oregon. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "#emotional tweets", |
|
"authors": [ |
|
{ |
|
"first": "Saif", |
|
"middle": [], |
|
"last": "Mohammad", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Sixth International Workshop on Semantic Evaluation", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Saif Mohammad. 2012a. #emotional tweets. In The First Joint Conference on Lexical and Compu- tational Semantics -Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012).", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Portable features for classifying emotional text", |
|
"authors": [ |
|
{ |
|
"first": "Saif", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Mohammad", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "587--591", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Saif M. Mohammad. 2012b. Portable features for clas- sifying emotional text. In Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 587-591, Montreal , Canada.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Recognition of affect, judgment, and appreciation in text", |
|
"authors": [ |
|
{ |
|
"first": "Alena", |
|
"middle": [], |
|
"last": "Neviarouskaya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Helmut", |
|
"middle": [], |
|
"last": "Prendinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mitsuru", |
|
"middle": [], |
|
"last": "Ishizuka", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics, COLING '10", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "806--814", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alena Neviarouskaya, Helmut Prendinger, and Mit- suru Ishizuka. 2010. Recognition of affect, judg- ment, and appreciation in text. In Proceedings of the 23rd International Conference on Computational Linguistics, COLING '10, pages 806-814, Strouds- burg, PA, USA. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Emotion knowledge: Further exploration of a prototype approach", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Shaver", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Kirson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1987, |
|
"venue": "Journal of Personality and Social Psychology", |
|
"volume": "52", |
|
"issue": "", |
|
"pages": "1061--1086", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Kirson P. Shaver, J. Schwartz. 1987. Emotion knowledge: Further exploration of a prototype ap- proach. Journal of Personality and Social Psychol- ogy, Vol 52 no 6:1061 -1086.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Emotions in social psychology", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Parrott", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "W Parrott. 2001. Emotions in social psychology. Psy- chology Press, Philadelphia.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Identifying emotions, intentions and attitudes in text using a game with a purpose", |
|
"authors": [ |
|
{ |
|
"first": "Lisa", |
|
"middle": [], |
|
"last": "Pearl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Steyvers", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the NAACL-HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, Los Abgeles", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lisa Pearl and Mark Steyvers. 2010. Identifying emo- tions, intentions and attitudes in text using a game with a purpose. In In Proceedings of the NAACL- HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, Los Abgeles, California.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "A general psychoevolutionary theory of emotion", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Plutchik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1980, |
|
"venue": "Emotion: Theory, research, and experience", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "3--33", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Plutchik. 1980. A general psychoevolutionary the- ory of emotion. In R. Plutchik & H. Kellerman (Eds.), Emotion: Theory, research, and experience:, Vol. 1. Theories of emotion (pp. 3-33). New York: Academic:(pp. 3-33).", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Bootstrapped learning of emotion hashtahs #hashtags4you", |
|
"authors": [ |
|
{ |
|
"first": "Ashequl", |
|
"middle": [], |
|
"last": "Qadir", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ellen", |
|
"middle": [], |
|
"last": "Riloff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "the 4th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashequl Qadir and Ellen Riloff. 2013. Bootstrapped learning of emotion hashtahs #hashtags4you. In In the 4th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis (WASSA 2013).", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "The 'story' of digital excess in revolutions of the arab spring", |
|
"authors": [ |
|
{ |
|
"first": "Tapas", |
|
"middle": [], |
|
"last": "Ray", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Journal of Media Practice", |
|
"volume": "12", |
|
"issue": "2", |
|
"pages": "189--196", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tapas Ray. 2011. The 'story' of digital excess in rev- olutions of the arab spring. Journal of Media Prac- tice, 12(2):189-196.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Crowdsourcing a word-emotion association lexicon", |
|
"authors": [ |
|
{ |
|
"first": "Saif", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Mohammad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Turney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Computational Intelligence", |
|
"volume": "29", |
|
"issue": "3", |
|
"pages": "436--465", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter D. Turney Saif M. Mohammad. 2013. Crowd- sourcing a word-emotion association lexicon. Com- putational Intelligence, 29 (3), 436-465, Wiley Blackwell Publishing Ltd, 2013, 29(3):436-465.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "A Linguistic Interpretation of the OCC Emotion Model for Affect Sensing from Text", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"A M" |
|
], |
|
"last": "Shaikh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Prendinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Ishizuka", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "45--73", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Prendinger H. Ishizuka M. Shaikh, M.A.M., 2009. A Linguistic Interpretation of the OCC Emotion Model for Affect Sensing from Text, chapter 4, pages 45-73.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Social media and revolution: The arab spring and the occupy movement as seen though three information studies paradigms. Sprouts: Working papers on Information Systems", |
|
"authors": [ |
|
{ |
|
"first": "Julia", |
|
"middle": [], |
|
"last": "Skinner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Julia Skinner. 2011. Social media and revolu- tion: The arab spring and the occupy movement as seen though three information studies paradigms. Sprouts: Working papers on Information Systems, 11(169).", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Wordnet-affect: an affective extension of wordnet", |
|
"authors": [ |
|
{ |
|
"first": "Carlo", |
|
"middle": [], |
|
"last": "Strapparava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Valitutti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Istituto per la Ricerca Scienti?ca e Tecnologica I-38050", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Carlo Strapparava and Alessandro Valitutti. 2004. Wordnet-affect: an affective extension of wordnet. Technical report, ITC-irst, Istituto per la Ricerca Scienti?ca e Tecnologica I-38050 Povo Trento Italy.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Harnessing twitter \"big data\" for automatic emotion identification", |
|
"authors": [ |
|
{ |
|
"first": "Wenbo", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lu", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Krishnaprasad", |
|
"middle": [], |
|
"last": "Thirunarayan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amit", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Sheth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 2012 ASE/IEEE International Conference on Social Computing and 2012 ASE/IEEE", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wenbo Wang, Lu Chen, Krishnaprasad Thirunarayan, and Amit P. Sheth. 2012. Harnessing twitter \"big data\" for automatic emotion identification. In Pro- ceedings of the 2012 ASE/IEEE International Con- ference on Social Computing and 2012 ASE/IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Emotion classification using web blog corpora", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"H Y" |
|
], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the IEEE/WIC/ACM International Conference on Web Intelligence, WI '07", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "275--278", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. Yang, K. H. Y. Lin, and H. H. Chen. 2007. Emo- tion classification using web blog corpora. In Pro- ceedings of the IEEE/WIC/ACM International Con- ference on Web Intelligence, WI '07, pages 275-278, Washington, DC, USA. IEEE Computer Society.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Building word-emotion mapping dictionary for online news", |
|
"authors": [ |
|
{ |
|
"first": "Yanghui", |
|
"middle": [], |
|
"last": "Rao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaojun", |
|
"middle": [], |
|
"last": "Quan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wenyin", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qing", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mingliang", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 4th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Liu Wenyin Qing Li Yanghui Rao, Xiaojun Quan and Mingliang Chen. 2013. Building word-emotion mapping dictionary for online news. In In Pro- ceedings of the 4th Workshop on Computational Ap- proaches to Subjectivity, Sentiment and Social Me- dia Analysis, WASSA 2013.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "(b) Choose a language model L from among the K LMs, in accordance with the vector (c) Sample w j in accordance with the multinomial distribution L If d i is labelled with the emotion class C d i , it is likely that the value of \u03bb (n)" |
|
}, |
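The generative process excerpted in FIGREF0 can be illustrated with a minimal sampling sketch. The Python snippet below is illustrative only: it assumes K unigram language models stored as word-probability dictionaries and a document-level mixing vector; the names sample_document, lms, and lam are hypothetical and do not come from the paper.

import random

def sample_document(lms, lam, length):
    # Illustrative sketch: generate `length` words from a mixture of
    # K unigram language models `lms` (a list of {word: prob} dicts),
    # mixed according to the document-level weight vector `lam`
    # (assumed to sum to 1), following steps (b) and (c) above.
    doc = []
    for _ in range(length):
        # (b) choose one of the K language models according to lam
        k = random.choices(range(len(lms)), weights=lam, k=1)[0]
        # (c) sample a word from the chosen multinomial distribution
        words, probs = zip(*lms[k].items())
        doc.append(random.choices(words, weights=probs, k=1)[0])
    return doc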
|
"TABREF0": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "EM Algorithm variants States EM with mixture of classes model EM with class and neutral model", |
|
"content": "<table><tr><td>INPUT</td><td>Training data T</td><td>Training data T</td></tr><tr><td>OUTPUT</td><td>Word-Emotion Lexicon</td><td>Word-Emotion Lexicon</td></tr><tr><td>Initialisation</td><td>Learn the initial language models</td><td>Learn the initial language models</td></tr><tr><td/><td>{L C j } N j=1</td><td>{L C j } N j=1 and L C</td></tr><tr><td>Convergence</td><td>While not converged or #Iterations</td><td>While not converged or #Iterations</td></tr><tr><td/><td>< \u03b4, a threshold</td><td>< \u03b4, a threshold</td></tr><tr><td>E-step</td><td>Estimate the \u03bb d ij s based on the</td><td>Estimate \u00b5 d ij based on the current</td></tr><tr><td/><td>current estimate of {L C j } N j=1 (Sec</td><td>estimate of {L C j } N j=1 and L C (Sec</td></tr><tr><td/><td>4.2.2)</td><td>4.2.3)</td></tr><tr><td>M-step</td><td>Estimate the language models</td><td>Estimate the language models</td></tr><tr><td/><td>{L C j } N j=1 using \u03bb d ij s (Sec 4.2.2)</td><td>{L C j } N j=1 and L C using \u00b5 d ij (Sec</td></tr><tr><td/><td/><td>4.2.3)</td></tr><tr><td>Lexicon Induction</td><td>Induce a word-emotion lexicon</td><td>Induce a word-emotion lexicon</td></tr><tr><td/><td>from {L C j } N j=1 (Sec 4.2.2)</td><td>from {L C j } N j=1 and L C (Sec 4.2.3)</td></tr></table>" |
|
}, |
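The "EM with mixture of classes" column of TABREF0 can be read as a standard EM loop over class language models. The sketch below is a simplified, assumed rendering, not the paper's exact estimator: per-word-occurrence posteriors stand in for the paper's lambda_dij quantities, and add-one smoothing is an assumption; em_lexicon, docs, and labels are illustrative names.

from collections import Counter

def em_lexicon(docs, labels, classes, n_iters=10):
    # docs: list of token lists; labels: one emotion class per doc.
    vocab = {w for d in docs for w in d}
    # Initialisation: smoothed ML unigram model per class from labelled data
    counts = {c: Counter() for c in classes}
    for d, y in zip(docs, labels):
        counts[y].update(d)
    lm = {}
    for c in classes:
        total = sum(counts[c].values()) + len(vocab)
        lm[c] = {w: (counts[c][w] + 1) / total for w in vocab}
    for _ in range(n_iters):
        soft = {c: Counter() for c in classes}  # expected word counts
        for d in docs:
            for w in d:
                # E-step: posterior over classes for this word occurrence
                post = {c: lm[c][w] for c in classes}
                z = sum(post.values())
                for c in classes:
                    soft[c][w] += post[c] / z
        # M-step: re-estimate each class language model from soft counts
        for c in classes:
            total = sum(soft[c].values()) + len(vocab)
            lm[c] = {w: (soft[c][w] + 1) / total for w in vocab}
    # Lexicon induction: lm[c][w] acts as the word-emotion score for class c
    return lm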
|
"TABREF1": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "Emotion classification results", |
|
"content": "<table><tr><td>Method</td></tr></table>" |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "", |
|
"content": "<table><tr><td colspan=\"2\">Overall F-scores</td></tr><tr><td>Method</td><td>Avg Overall F-</td></tr><tr><td/><td>score</td></tr><tr><td>Baselines</td><td/></tr><tr><td>WNA-lex</td><td>13.17%</td></tr><tr><td>NRC-lex</td><td>13.50%</td></tr><tr><td>Bow</td><td>49.30%</td></tr><tr><td>PMI-lex</td><td>48.99%</td></tr><tr><td>Our automatic lexicons</td><td/></tr><tr><td>TF-lex</td><td>51.45%</td></tr><tr><td>EMallclass-lex</td><td>51.38%</td></tr><tr><td>EMclass-corpus-lex</td><td>52.20%</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |