{
"paper_id": "L16-1008",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:05:52.198931Z"
},
"title": "Sentiment Analysis in Social Networks through Topic Modeling",
"authors": [
{
"first": "Debashis",
"middle": [],
"last": "Naskar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universitat Politecnica de Valencia",
"location": {
"addrLine": "Camino de Vera s",
"country": "n Valencia Spain"
}
},
"email": "[email protected]"
},
{
"first": "Sidahmed",
"middle": [],
"last": "Mokaddem",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universitat Politecnica de Valencia",
"location": {
"addrLine": "Camino de Vera s",
"country": "n Valencia Spain"
}
},
"email": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Rebollo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universitat Politecnica de Valencia",
"location": {
"addrLine": "Camino de Vera s",
"country": "n Valencia Spain"
}
},
"email": "[email protected]"
},
{
"first": "Eva",
"middle": [],
"last": "Onaindia",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universitat Politecnica de Valencia",
"location": {
"addrLine": "Camino de Vera s",
"country": "n Valencia Spain"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we analyze the sentiments derived from the conversations that occur in social networks. Our goal is to identify the sentiments of the users in the social network through their conversations. We conduct a study to determine whether users of social networks (twitter in particular) tend to gather together according to the likeness of their sentiments. In our proposed framework, (1) we use ANEW, a lexical dictionary to identify affective emotional feelings associated to a message according to the Russell's model of affection; (2) we design a topic modeling mechanism called Sent LDA, based on the Latent Dirichlet Allocation (LDA) generative model, which allows us to find the topic distribution in a general conversation and we associate topics with emotions; (3) we detect communities in the network according to the density and frequency of the messages among the users; and (4) we compare the sentiments of the communities by using the Russell's model of affect versus polarity and we measure the extent to which topic distribution strengthen likeness in the sentiments of the users of a community. This works contributes with a topic modeling methodology to analyze the sentiments in conversations that take place in social networks.",
"pdf_parse": {
"paper_id": "L16-1008",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we analyze the sentiments derived from the conversations that occur in social networks. Our goal is to identify the sentiments of the users in the social network through their conversations. We conduct a study to determine whether users of social networks (twitter in particular) tend to gather together according to the likeness of their sentiments. In our proposed framework, (1) we use ANEW, a lexical dictionary to identify affective emotional feelings associated to a message according to the Russell's model of affection; (2) we design a topic modeling mechanism called Sent LDA, based on the Latent Dirichlet Allocation (LDA) generative model, which allows us to find the topic distribution in a general conversation and we associate topics with emotions; (3) we detect communities in the network according to the density and frequency of the messages among the users; and (4) we compare the sentiments of the communities by using the Russell's model of affect versus polarity and we measure the extent to which topic distribution strengthen likeness in the sentiments of the users of a community. This works contributes with a topic modeling methodology to analyze the sentiments in conversations that take place in social networks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Despite the amount of research done in sentiment analysis in social networks, the study of dissemination patterns of the emotions is limited. It is well known that social networks exhibit some kind of positive correlation in the polarity of the sentiments associated to sequential pairs of messages (Hillmann and Trier, 2012) , and that the analysis of diffusion mechanisms is clue to determine the dynamic of the evolution of a particular emotion. Ziegler and Lausen analyzed propagation of trust and distrust on social networks, what can be considered the first paper in which sentiment propagation was studied (Ziegler and Lausen, 2005) . Interesting conclusions, like that positive and negative sentiments follow a different propagation pattern (Hillmann and Trier, 2012) , have been drawn from the various investigations on sentiments in social networks. Other works studied the correlation between emotions and information diffusion, finding that those messages emotionally charged were re-tweeted more often (Stieglitz and Dang-Xuan, 2013) , or investigated if the topic and the opinion of the user's contacts affect the own user's opinion (Tang and Fong, 2013) . The ultimate objective of analyzing sentiments in social networks is to be able to predict the attitude of people and infer behaviour patterns like, for example, reactions against negative opinions. In this line, Nguyen et al. studied changes in collective sentiment and predicted the dynamics with statistical models that contemplate the complete network (Nguyen et al., 2012) , achieving a 85% of accuracy in the direction (polarity) of the sentiment. Sentiments can also be used to predict future connections in social networks by finding similar sentiments (Leskovec et al., 2010; Yuan et al., 2014) . The focus of this work is on sentiment analysis in tweets and, particularly, on the identification of the sentiments that users show when they talk about different issues within a same conversation. 
We apply the Latent-Dirichlet Alloca-tion (LDA) algorithm (Blei et al., 2003) to find the most similar words that uncover the hidden thematic structure (topics) in a single-hashtag tweet collection and then we extract meaningful emotions from each different topic based on the ANEW dictionary (Bradley and Lang, 1999) . Unlike the majority of works that only study polarity of sentiments (coarse-grained classification in positive, negative and neutral sentiments), we propose a more refined classification of sentiments based on the Russell's model (Russell, 2003) . The topic sentiment analysis provides a more precise snapshot of the sentiment distribution in a social network, thus allowing the identification of communities or sub-units of users within the network. Moreover, by combining this analysis with communities detection methods, we can determine if belonging to a determined group affects the user's sentiments. This paper is organized as follows. Section 2. presents Sent LDA, our approach for extracting and analyzing sentiments as well as a brief note on the Russell's model. Next section presents our implementation of the Latent-Dirichlet Allocation (LDA) algorithm for topic modeling in tweet messages. Section 4. explains the process for assigning sentiment scores to tweets, topics and users. In the last section, we present the experimental evaluation, analyzing the representative social graph for the network, the communities formation and the spread of sentiments across the network. The last section concludes and presents our future work.",
"cite_spans": [
{
"start": 299,
"end": 325,
"text": "(Hillmann and Trier, 2012)",
"ref_id": "BIBREF10"
},
{
"start": 613,
"end": 639,
"text": "(Ziegler and Lausen, 2005)",
"ref_id": "BIBREF31"
},
{
"start": 749,
"end": 775,
"text": "(Hillmann and Trier, 2012)",
"ref_id": "BIBREF10"
},
{
"start": 1015,
"end": 1046,
"text": "(Stieglitz and Dang-Xuan, 2013)",
"ref_id": "BIBREF27"
},
{
"start": 1147,
"end": 1168,
"text": "(Tang and Fong, 2013)",
"ref_id": "BIBREF28"
},
{
"start": 1527,
"end": 1548,
"text": "(Nguyen et al., 2012)",
"ref_id": "BIBREF18"
},
{
"start": 1732,
"end": 1755,
"text": "(Leskovec et al., 2010;",
"ref_id": "BIBREF13"
},
{
"start": 1756,
"end": 1774,
"text": "Yuan et al., 2014)",
"ref_id": "BIBREF30"
},
{
"start": 2034,
"end": 2053,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF1"
},
{
"start": 2269,
"end": 2293,
"text": "(Bradley and Lang, 1999)",
"ref_id": "BIBREF4"
},
{
"start": 2526,
"end": 2541,
"text": "(Russell, 2003)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "This section presents the overview of our approach Sent LDA but before we introduce the Russell's model of affect. Using this model enables us to define a more accurate sentiment of the messages and discriminate between cases in which the general sentiment is just positive or negative (for example, when a catastrophe occurs).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sent LDA: emotion identification in tweets and topics",
"sec_num": "2."
},
{
"text": "In the Russell's circumflex model of affect, emotions are understood as a combination of varying degrees of two main dimensions, valence (pleasure dimension) and arousal (activation dimension), which are distributed in a 2D circular space (Russell, 1980) . According to the Russell's model, every affective experience is the consequence of a linear combination of valence and arousal dimensions (the so-called core affect space), which is then interpreted as representing a particular emotion. A numerical value for valence ranges from 1 (unpleasant) to 9 (pleasant) and arousal values range from 1 (sleepy) to 9 (awake) (see Figure 1) . In the core affect map, we identify four regions (R1, R2, R3 and R4) along with 16 sentiment words that lie on the perimeter of a circle. Words are labeled with a particular name: excited, sad, unhappy, bored, etc., each having their polar coordinates on the circle (Russell, 1980) . Assuming that affect can be modified by degree of valence and arousal, it seems reasonable to assume that emotions have the potential to lie across all positions in the twodimensions rather than just on a perimeter (Russell, 2003) . The core affect map in Figure 1 will be used to identify the emotional state or sentiment label of a given entity (message, topic or user) according to its valence (X-axis) and arousal (Y-axis) values. Particularly, given an entity e with valence and arousal values (v e , a e ), we used the Euclidean distance for identifying the closest distance sentiment to e. For instance, an entity e with valence and arousal values (v e , a e ) = (7.26, 3.56) falls within region R2 and its associated sentiment label would be S e = {serene}. with sentiment-word hashtags added by tweeters corresponding to the six basic Ekman emotions (Ekman, 1992) . 
Using emotion-related hashtags to identify the topic of the message has been the predominant choice to create emotion-labeled datasets from tweet messages (Choudhury et al., 2012) , (Purver and Battersby, 2012) , (Qadir and Riloff, 2014) . Unlike these works, our aim is to find the most similar words that uncover the hidden thematic structure (topics) in a single-hashtag tweet collection and extract meaningful emotions from each different topic. The overview of Sent LDA, our sentiment analysis model, is shown in Figure 2 . It includes tweet processing mechanisms, topic modeling method through LDA and sentiment extraction tool for classification of sentiments. We consider three types of entities: tweets or messages, topics and users, all associated to an opinion orientation expressed either with an individual sentiment or a set of sentiments represented through valence and arousal values. For our purpose of analyzing real-time events, we chronologically retrieved as many tweets through the Twitter Search API. We used hashtags for collecting tweets which are posted by several users on the basis of a particular domain (conversation) and we built a corpus of D tweets. The first step is to have a word representation for each message in order to facilitate machine manipulation as well as elimination of noisy words. Processing each individual tweet message involves extracting bags of words, filtering all stop words and extracting stems by using the stemmer tool of the WordNet dictionary (Figure 2, top) . As a result, we obtain the vocabulary for our particular dataset composed of V stem words. Afterwards, the topic model LDA algo-rithm is applied over the tweets' stem sets (vocabulary) and it returns a stem set for each found topic (Figure 2 , middle; details are presented in Section 3.). 
Finally, the emotional role of the stem sets of tweets and topics is obtained through the emotional base ANEW dictionary (Figure 2 , bottom) and words are annotated by their valence and arousal values (see section 4.). Words are then classified according to the Russell's emotion model. Formally, let's consider a given tweet d \u2208 D, an entity which is tokenized and filtered by eliminating stop words to a bag of words w d . We transform w d to a stem set, s d , through the WordNet stemmer tool, which allows us to get the appropriate stem for each token. The sentiment words associated to s d are spotted based on the ANEW dictionary, where emotional words are annotated by their valence and arousal values (examples of such emotional words are agreement, love, sad, quite, disagree, etc.). Finally, the overall opinion orientation or emotional tweet status conveyed by the user is determined by combining the content of each emotional word identified in the tweets sent by the user and classified according to the Russell's model. Additionally, each topic identified by the LDA algorithm constitutes a new entity, t, and the emotional value of each t is similarly obtained. The final outcome of the sentiment analysis process is a pair of lists: oo d1 , . . . , oo d D for the D tweet messages and oo t1 , . . . , oo t K for the K topics elicited by the LDA algorithm .",
"cite_spans": [
{
"start": 239,
"end": 254,
"text": "(Russell, 1980)",
"ref_id": "BIBREF25"
},
{
"start": 904,
"end": 919,
"text": "(Russell, 1980)",
"ref_id": "BIBREF25"
},
{
"start": 1137,
"end": 1152,
"text": "(Russell, 2003)",
"ref_id": "BIBREF26"
},
{
"start": 1781,
"end": 1794,
"text": "(Ekman, 1992)",
"ref_id": "BIBREF7"
},
{
"start": 1952,
"end": 1976,
"text": "(Choudhury et al., 2012)",
"ref_id": null
},
{
"start": 1979,
"end": 2007,
"text": "(Purver and Battersby, 2012)",
"ref_id": "BIBREF21"
},
{
"start": 2010,
"end": 2034,
"text": "(Qadir and Riloff, 2014)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 626,
"end": 635,
"text": "Figure 1)",
"ref_id": "FIGREF0"
},
{
"start": 1178,
"end": 1186,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 2315,
"end": 2323,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 3302,
"end": 3317,
"text": "(Figure 2, top)",
"ref_id": "FIGREF1"
},
{
"start": 3552,
"end": 3561,
"text": "(Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 3731,
"end": 3740,
"text": "(Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Sent LDA: emotion identification in tweets and topics",
"sec_num": "2."
},
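The nearest-label lookup described in this section (Euclidean distance from an entity's (v_e, a_e) pair to the sentiment words of the core affect map) can be sketched as below. The label set and coordinates are illustrative placeholders on a circle of radius 4 centered at (5, 5); the actual 16 labels and positions come from Figure 1 (Russell, 1980):

```python
import math

# Illustrative subset of Russell's circumplex labels, placed on a circle of
# radius 4 centered at (5, 5) in the 1..9 valence/arousal space.
# Placeholder coordinates -- the real ones come from Russell (1980).
LABELS = {
    "excited":    (5 + 4 * math.cos(math.radians(45)),  5 + 4 * math.sin(math.radians(45))),
    "serene":     (5 + 4 * math.cos(math.radians(-45)), 5 + 4 * math.sin(math.radians(-45))),
    "sad":        (5 + 4 * math.cos(math.radians(225)), 5 + 4 * math.sin(math.radians(225))),
    "distressed": (5 + 4 * math.cos(math.radians(135)), 5 + 4 * math.sin(math.radians(135))),
}

def sentiment_label(valence, arousal):
    """Return the label whose (v, a) point is closest in Euclidean distance."""
    return min(LABELS, key=lambda w: math.dist((valence, arousal), LABELS[w]))

# The entity from the running example, (v_e, a_e) = (7.26, 3.56),
# lies in the positive-valence / low-arousal region.
print(sentiment_label(7.26, 3.56))
```

With these placeholder coordinates the example entity maps to the low-arousal, pleasant label, matching the S_e = {serene} outcome described in the text.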
{
"text": "Our assumption is that multiple scattered sentiments are typically found in a single-hashtag general conversation and that other more useful and suitable sentiment groupings can be extracted by identifying the topics involved in the conversation. The aim of this section is to elicit a topic distribution over the corpus of D tweets and a sentiment distribution of topics. The Latent Dirichlet Allocation (LDA) algorithm (Blei et al., 2003) is one of the most successful topic models to infer the topics discussed in a collection of documents. The probabilistic generative model LDA models the D documents in the corpus as mixtures of K latent topics where each topic is a discrete distribution over the V words of the collection's vocabulary. The LDA generative process results in the joint distribution:",
"cite_spans": [
{
"start": 421,
"end": 440,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topic modeling using LDA",
"sec_num": "3."
},
{
"text": "p(w, z, \u03b8, \u03c6|\u03b1, \u03b2) = p(\u03c6|\u03b2)p(\u03b8|\u03b1)p(z|\u03b8)p(w|\u03c6 z ) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic modeling using LDA",
"sec_num": "3."
},
{
"text": "where w is the observed words, the unobserved latent variables are \u03c6 (the K \u00d7 V per-topic word distribution matrix), \u03b8 (the D \u00d7 K per-document topic distribution matrix) and z (the D \u00d7 V matrix that represents the topic index assignment for each word w i in document j \u2208 D). Given \u03b8 and \u03c6, drawn from the hyperparameters \u03b1 and \u03b2 for the symmetric Dirichlet distribution, the aim is to learn the latent variables; that is, the words associated with each topic (expressed through the topic distributions \u03c6). LDA therefore sees each document as a set of topic occurrences that appear in an arbitrary order, which is similar to the 'bag-of-words' model. Choosing a topic is performed independently for each word of the document under the constraint of overall compliance of the fixed distribution over topics \u03b8 d .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic modeling using LDA",
"sec_num": "3."
},
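As a minimal illustration of the generative process behind Equation (1) (draw \u03b8_d from Dir(\u03b1) and \u03c6_k from Dir(\u03b2), then for each word position draw a topic z from Cat(\u03b8_d) and a word w from Cat(\u03c6_z)), a toy sketch with hypothetical corpus dimensions and hyperparameter values:

```python
import numpy as np

rng = np.random.default_rng(0)

D, K, V = 4, 3, 10        # toy sizes: documents, topics, vocabulary (assumptions)
alpha, beta = 0.1, 0.01   # illustrative symmetric Dirichlet hyperparameters
doc_len = 8

phi = rng.dirichlet([beta] * V, size=K)     # K x V per-topic word distributions
theta = rng.dirichlet([alpha] * K, size=D)  # D x K per-document topic distributions

docs = []
for d in range(D):
    words = []
    for _ in range(doc_len):
        z = rng.choice(K, p=theta[d])       # pick a topic for this word position
        w = rng.choice(V, p=phi[z])         # pick a word from that topic
        words.append(int(w))
    docs.append(words)

print(docs)
```

Small \u03b1 and \u03b2 concentrate each document on few topics and each topic on few words, which is exactly the regime the paper argues for with single-hashtag conversations.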
{
"text": "Rather than learning \u03b8 and \u03c6 directly, we applied the reverse generative process and learnt the posterior distributions of the latent variables by means of collapsed Gibbs sampling, a Markov Chain Monte Carlo algorithm that samples one random variable at a time (Porteous et al., 2008; Griffiths and Steyvers, 2004) . This process learns the probability of a topic z ij in document j being assigned a word w i , given all other topic assignments to all other words. That is, learning the assignment of words to topics z given the observed words w by repeatedly sampling the latent variables z ij regarding the other values of z. As a result of using the collapsed Gibbs sampler for LDA, we can draw the values of \u03b8 and \u03c6. Two key aspects affect the performance of this process: the number of iterations for Gibbs sampling to converge and the selection of the optimal number of topics. Regarding the first issue, we know that convergence is theoretically guarantee with LDA Gibbs sampling but there is no way of knowing how many iterations are required to reach the stationary distribution. In practice, a visual inspection of the log-likelihood can give us an acceptable estimation of convergence.",
"cite_spans": [
{
"start": 262,
"end": 285,
"text": "(Porteous et al., 2008;",
"ref_id": "BIBREF20"
},
{
"start": 286,
"end": 315,
"text": "Griffiths and Steyvers, 2004)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topic modeling using LDA",
"sec_num": "3."
},
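A compact sketch of the collapsed Gibbs sampler described above: each z_ij is resampled from its conditional given all other assignments, and \u03b8 and \u03c6 are recovered from the final counts. The toy corpus and hyperparameter values are illustrative, not the paper's settings:

```python
import numpy as np

def lda_gibbs(docs, K, V, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Collapsed Gibbs sampling for LDA; docs are lists of word ids."""
    rng = np.random.default_rng(seed)
    D = len(docs)
    ndk = np.zeros((D, K))   # topic counts per document
    nkw = np.zeros((K, V))   # word counts per topic
    nk = np.zeros(K)         # total words per topic
    z = [[int(rng.integers(K)) for _ in doc] for doc in docs]  # random init
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            ndk[d, z[d][i]] += 1; nkw[z[d][i], w] += 1; nk[z[d][i]] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]  # remove the current assignment from the counts
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # conditional p(z_ij = k | z_-ij, w), up to normalization
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                k = int(rng.choice(K, p=p / p.sum()))
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    # posterior-mean estimates of the latent distributions
    theta = (ndk + alpha) / (ndk + alpha).sum(axis=1, keepdims=True)
    phi = (nkw + beta) / (nkw + beta).sum(axis=1, keepdims=True)
    return theta, phi

# Two clearly separated toy "conversations": words 0-2 vs words 3-5.
docs = [[0, 1, 2, 0, 1], [1, 2, 0, 2], [3, 4, 5, 3], [4, 5, 3, 4, 5]]
theta, phi = lda_gibbs(docs, K=2, V=6)
print(theta.round(2))
```

On such a corpus the sampler typically assigns the two word groups to distinct topics, which is the behaviour the log-likelihood inspection mentioned above is meant to confirm at convergence.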
{
"text": "As for the number of topics, some approaches like Hierarchical Dirichlet Process can be used to estimate the best topic number and this is line we intend to explore in the near future. Nevertheless, a relatively simple way to find the optimal number of topics is by iterating through models with different numbers of topics and select the model with the maximum log-likelihood, given the data. In general, since we aim for discovering topics in tweet messages which are all labeled under the same hashtag, the number of topics in the general conversation will be relatively low. Thus, the parameters of the Dirichlet distribution used in LDA can be assigned small values in order to render only a few topics allocated to each document. Additionally, LDA presents another weakness related to the overlapping topics composition. It is desirable to have independent topics and thus avoiding the appearance of the same word in multiple topics. Consequently, a better fit will be a model with fewer overlapping words. In practice, the best fitting model can also be discovered through analysis of the log-likelihood. Topic Modeling in Twitter. Based on other works on topic modeling in twitter, we justify here the use of LDA for topic modeling and sentiment identification. First we must note that, unlike supervised statistical approaches that require manual annotation of tweets with emotions (Mohammad and Kiritchenko, 2015) or constrain the topic model to use only those topics that correspond to a document's (observed) label set (Ramage et al., 2009) , we focus exclusively on unsupervised learning algorithms. The short and sparse texts of tweets messages pose a serious challenge to the efficacy of topic modeling. Common techniques to overcome this limitation rely upon aggregation strategies as a data preprocessing step for LDA. In (Hong and Davidson, 2010) authors train two models, LDA and an author-topic model (Rosen-Zvi et al., 2010) , in short text environments (Twitter). 
They introduce several aggregation techniques to obtain topics associated with messages and their authors, and they conclude that a standard LDA model on user aggregated profiles yields better results than the author-topic model. In (Mehrotra et al., 2013) , authors apply various tweet pooling schemes in a standard LDA to finally conclude that hashtag-based pooling, creation of pooled documents for each hashtag, outperforms all other pooling strategies and the unpooled scheme. In general, LDA has proven to work well on tweets (Weng et al., 2010) , particularly when messages are very focused and very few topics are discussed in the composition of the entire tweet (Naveed et al., 2011) . This is precisely the case of our corpus where tweets are all labeled under the same single hashtag.",
"cite_spans": [
{
"start": 1531,
"end": 1552,
"text": "(Ramage et al., 2009)",
"ref_id": "BIBREF23"
},
{
"start": 1839,
"end": 1864,
"text": "(Hong and Davidson, 2010)",
"ref_id": "BIBREF11"
},
{
"start": 1921,
"end": 1945,
"text": "(Rosen-Zvi et al., 2010)",
"ref_id": "BIBREF24"
},
{
"start": 2219,
"end": 2242,
"text": "(Mehrotra et al., 2013)",
"ref_id": "BIBREF14"
},
{
"start": 2518,
"end": 2537,
"text": "(Weng et al., 2010)",
"ref_id": "BIBREF29"
},
{
"start": 2657,
"end": 2678,
"text": "(Naveed et al., 2011)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topic modeling using LDA",
"sec_num": "3."
},
{
"text": "In this section, we explain the extraction of sentiments from the stem sets of tweets and topics and how to associate sentiment scores to these entities based on the ANEW dictionary. First, we justify the use of Affective Norms for English Words (ANEW) dictionary versus other dictionaries like SentiWordnet and then we explain the sentiment extraction and score assignation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment extraction with ANEW dictionary",
"sec_num": "4."
},
{
"text": "The aim of sentiment extraction is to compile sentiment words. One of the most efficient approaches for this purpose is the dictionary-based approach. Dictionary-based approaches use dictionaries of emotional words which are associated to a sentiment score. There exist several affect dictionaries in the literature like ANEW dictionary of affect (Bradley and Lang, 1999; Nielsen, 2011) or SentiWordNet (Esuli and Sebastiani, 2006; Baccianella et al., 2010) . The performance of dictionary-based approach can be evaluated according to two aspects: 1) the number of emotional words covered by the dictionary and 2) the nature of sentiment score provided by the dictionary. ANEW, for instance, computes this score with the valence and arousal values of the word, which range from 1 to 9; SentiWord-Net, instead, uses polarity. Hence, ANEW allows us to calculate a more accurate sentiment value which fits better our aim of having a bi-dimensional representation of sentiments as well as to measure the intensity of expressed sentiments. Additionally, the new version of the ANEW dictionary (Nielsen, 2011) provides the mean and standard deviation of normative emotional ratings (valence v and arousal a) for 2477 unique words in English (see next section). Tweet 1 377 471 1138 Tweet 2 1117 1262 2419 Review 1097 1012 7202 Table 1 : #words covered by different dictionaries of affect",
"cite_spans": [
{
"start": 347,
"end": 371,
"text": "(Bradley and Lang, 1999;",
"ref_id": "BIBREF4"
},
{
"start": 372,
"end": 386,
"text": "Nielsen, 2011)",
"ref_id": "BIBREF19"
},
{
"start": 403,
"end": 431,
"text": "(Esuli and Sebastiani, 2006;",
"ref_id": "BIBREF8"
},
{
"start": 432,
"end": 457,
"text": "Baccianella et al., 2010)",
"ref_id": "BIBREF0"
},
{
"start": 1088,
"end": 1103,
"text": "(Nielsen, 2011)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 1255,
"end": 1341,
"text": "Tweet 1 377 471 1138 Tweet 2 1117 1262 2419 Review 1097 1012 7202 Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "ANEW dictionary",
"sec_num": "4.1."
},
{
"text": "We compared the number of emotional words covered by ANEW and SentiWordNet in three of our experiments (results are shown in Table 1 ). The first two corpus shown in Table 1 correspond to microblog tweets (tweets do not contain more than \u223c 12 words) and the third corpus corresponds to a set of beer reviews 1 , where reviews contain more than 30 words in average. As we can observe in Table 1 , the much larger size of SentiWordNet (155287 words) versus ANEW (2477 words) only implies a very insignificant higher coverage or even slightly lower as in the case of the beer reviews.",
"cite_spans": [],
"ref_spans": [
{
"start": 125,
"end": 132,
"text": "Table 1",
"ref_id": null
},
{
"start": 166,
"end": 173,
"text": "Table 1",
"ref_id": null
},
{
"start": 386,
"end": 394,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Corpus ANEW SentiWordnet Vocabulary",
"sec_num": null
},
{
"text": "Following, we show an example of a tweet message that comprises three emotional words that exist in the ANEW dictionary along with their valences and arousal values. Feb 24, 8:02pm: @Suvi 90 @Bloody Mary0812 yeah who hate him? It's so strange and sad but we love him so much :) #SPNFamily #GetJensenToOneMillion The aim of this phase is to associate each entity e with a tuple (v e , a e ). The average sentiment score of a tweet message d is calculated with the valence and arousal of the stem words of d that appear in the ANEW dictionary (emotional words of d). Then, the sentiment score of a user u is calculated with the score of his/her tweet messages and we associate the corresponding sentiment label S u as explained in Section 2. In order to combine the mean values of the valence and arousal of the emotional words, we have to assume that individual mean values reported for each stem form a normal distribution. Supposedly, if a stem has a high \u03c3 of valence (equivalently for arousal) then the valence ratings of the word are distributed over a wider range of values; and lower values of \u03c3 imply that ratings are closer to \u00b5. Thus, we used a probability weight based on the probability density function of each word in ANEW to estimate that the stem's valence (arousal) falls exactly at the mean.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculating Sentiment scores",
"sec_num": "4.2."
},
{
"text": "\u2022 hate, v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculating Sentiment scores",
"sec_num": "4.2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "X = N i=1 \u03c6i,t\u00b5i \u03c3i N i=1 \u03c6i,t \u03c3i (X, \u03c6, \u00b5, \u03c3)",
"eq_num": "(2)"
}
],
"section": "Calculating Sentiment scores",
"sec_num": "4.2."
},
{
"text": "X mean value of valence (Y , mean value of arousal) N total number of emotional words within the message \u03c6i,t topic distribution for word i in the t th topic \u00b5 word's mean value of valence (equivalently for arousal) \u03c3 word's standard deviation of valence (equivalently for arousal) (2) calculates the sentiment score of a message by estimating the overall mean value of all emotional words within the message (see Table 2 ). Particularly, X (respectively, Y ) is the overall mean value of the valence (respectively, arousal) considering the N emotional words within the message. When calculating the sentiment score of a message without considering the topic distribution \u03c6 returned by LDA, we set \u03c6 i,t to 1. In this case, X and Y (valence and arousal) denote the primary sentiment of the conversation; that is, the sentiment score of the general conversation without considering the word distribution per topic. Then, the sentiment score of a user is calculated as the average emotional value of all the tweets sent by the user. For example, if we amalgamate the three words hate, sad and love of the above message d, the result of the weighted average formula (2) for the valence and arousal is X d = 4.78 and Y d = 5.69, respectively. On the other hand, if topics are taken into account, the sentiment scores of the messages and users will be subject to such particular topic distribution \u03c6. The same formula (2) shows how to calculate the mean value of valence and arousal of a message when the word w i of the message appears in topic t with a probability \u03c6 i,t . ",
"cite_spans": [],
"ref_spans": [
{
"start": 414,
"end": 421,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Calculating Sentiment scores",
"sec_num": "4.2."
},
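Formula (2) translates directly into code: each word's ANEW mean is weighted by \u03c6_{i,t}/\u03c3_i, so uncertain ratings (high \u03c3) and off-topic words (low \u03c6) contribute less. The (\u03c6, \u03bc, \u03c3) triples below are hypothetical placeholders, since the real values come from the ANEW ratings and the LDA topic distribution:

```python
def weighted_mean(words):
    """Formula (2): precision-weighted mean of per-word ANEW means.

    words is a list of (phi, mu, sigma) tuples, where phi is the word's
    probability in topic t (set phi = 1.0 to score the general
    conversation), mu the ANEW mean valence (or arousal) and sigma its
    standard deviation.
    """
    num = sum(phi * mu / sigma for phi, mu, sigma in words)
    den = sum(phi / sigma for phi, mu, sigma in words)
    return num / den

# Hypothetical (phi, mu, sigma) triples for three emotional words of one
# message; not the actual ANEW entries for hate, sad and love.
message = [(1.0, 2.1, 1.6), (1.0, 1.6, 1.2), (1.0, 8.7, 0.7)]
print(round(weighted_mean(message), 2))
```

Running the same function once with the valence triples and once with the arousal triples gives the (X, Y) pair for the message.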
{
"text": "The purpose of the experimental evaluation is to identify the behaviour of users' sentiments over a conversation. Particularly, we are interested in analyzing the impact of topic distribution of a general conversation in the users' sentiments. Our hypothesis is that the sentiments of users talking about a particular topic are more alike than the sentiments of the same users in the general conversation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "5."
},
{
"text": "To lead our study over the proposed framework, we collected over 10,600 tweets with the Search-API related to one of the most important and recent sport events, based on the hashtag (#elclasico). Once the tweets were collected, we kept those involved in a direct conversation (replies, mentions and re-tweets). We will refer to this tweet collection as the primary conversation, where a total of 3,600 users were identified. Network representation. A network representation is required to study the emotional behaviour of users through a primary conversation and to evaluate the users sentiment tendencies on specific topic discussion. A multilayer network is formally defined by (Boccaletti et al., 2014) as a pair M = (G, C) where G = {G 1 , . . . , G K } is a family of graphs, G \u03b1 = (X \u03b1 , E \u03b1 ) is a layer, and C = {E \u03b1\u03b2 \u2286 X \u03b1 \u00d7 X \u03b2 } is the set of connections between two different layers G \u03b1 and G \u03b2 . Elements of E \u03b1 are the intralayer connections and the elements of C are the interlayer or crossed layers. The characteristic of the multiplex network is that all the layers have the same set of nodes X 1 = . . . = X K = X and the cross layers are defined between equivalent nodes E \u03b1\u03b2 = {(x, x), x \u2208 X}. In our case, a topic corresponds to a layer of the multiplex network and the nodes represent the users (Boccaletti et al., 2014) . Topic distribution with LDA. To evaluate the optimal number of topics, LDA was run with different values of K, from K = 3 to K = 7 and we selected the model with the maximum log-likelihood. Figure 3 shows the loglikelihood for each model with respect to the number of iterations represented in the X-axis. As we can observe, the five models tend to convergence at about 500 iterations and K = 3 shows the best log-likelihood. This was also confirmed by checking that K = 3 was the model with fewer overlapping words across topics. 
Therefore, we selected the 3-topic model identified by LDA, so the multiplex graph is composed of K = 3 layers, one per topic. Table3 shows the top-10 words for the resulting model K = 3. For instance, Topic 3 contains different words with high probability (Bale (0.10), Car (0.06) and Attack (0.04)) which means that the subject of this topic is about Bale's car attack after the 'el clasico' game. Interestingly, Topic 1 reveals a conversation about the Most Valuable Player (MVP (0.04), Vote (0.04) and Player (0.03)). Community detection. For evaluating the effectiveness of our topic-sentiment approach, we propose a community detection method to extract the community structure in large networks 2 . A community is defined as a group or cluster of users who are densely connected with intra-community edges and scattered by inter-community connection (Kauffman et al., 2014) . The modularity measure of a partition is a scalar value [\u22121, 1] that measures the density of links inside communities as compared to links between communities (Newman, 2006) . We used the detection algorithm based on modularity maximization proposed in (Blondel et al., 2008) to identify the existing communities in the conversation through the density and the frequency of the messages among the members of each group. A total of 19 communities were identified in the primary conversation (Table 4) . Our final objective is to analyze the sentiments of the communities identified in the primary conversation and then check if the sentiments of users who talk about the same topic within each community have a higher degree of similarity.",
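The modularity measure used to evaluate a partition can be computed directly from its definition. The following is a minimal pure-Python sketch for an undirected, unweighted graph; the toy graph and partition are illustrative, not taken from the paper's dataset:

```python
from collections import defaultdict

def modularity(edges, community):
    """Newman modularity Q of a partition of an undirected graph.

    edges: list of (u, v) pairs; community: dict mapping node -> community id.
    Q = (fraction of intra-community edges) - (expected fraction under
    the configuration model); Q lies in [-1, 1].
    """
    m = len(edges)                                  # total number of edges
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    # Observed fraction of edges falling inside communities.
    intra = sum(1 for u, v in edges if community[u] == community[v]) / m
    # Expected intra-community fraction given the degree sequence.
    totals = defaultdict(int)
    for node, k in degree.items():
        totals[community[node]] += k
    expected = sum((k_c / (2 * m)) ** 2 for k_c in totals.values())
    return intra - expected

# Two triangles joined by a single bridge edge: a clearly modular graph.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
partition = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
print(round(modularity(edges, partition), 3))  # -> 0.357
```

The Blondel et al. (2008) algorithm greedily maximizes this quantity over candidate partitions; the sketch above only scores a given partition.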
"cite_spans": [
{
"start": 680,
"end": 705,
"text": "(Boccaletti et al., 2014)",
"ref_id": "BIBREF3"
},
{
"start": 1317,
"end": 1342,
"text": "(Boccaletti et al., 2014)",
"ref_id": "BIBREF3"
},
{
"start": 2733,
"end": 2756,
"text": "(Kauffman et al., 2014)",
"ref_id": "BIBREF12"
},
{
"start": 2918,
"end": 2932,
"text": "(Newman, 2006)",
"ref_id": "BIBREF17"
},
{
"start": 3012,
"end": 3034,
"text": "(Blondel et al., 2008)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 1535,
"end": 1543,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 3249,
"end": 3258,
"text": "(Table 4)",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "5."
},
{
"text": "Users are depicted in the extracted graph M , which models the conversation about the analyzed hashtag. We calculate the sentiment score of every user u in the primary conversation; that is, we calculate the mean value of valence (X u ) and arousal (Y u ) on the basis of the tweets sent by u as explained in section 4.2., and assess the sentiment of users according to two different measures: \u2022 Polarity. A sentiment is evaluated as positive if X u > 5 and negative otherwise. In general, polarity gives more general emotional overview which is explained by its 2-dimensional discriminants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "User's Sentiment Identification and Analysis",
"sec_num": "5.1."
},
{
"text": "\u2022 Primary Sentiment. We associate one of the 16 primary sentiments of the Russell's model to each user. We identify the appropriate region of the model through X u and Y u and then we use Euclidean distance to identify the closest sentiment(s).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "User's Sentiment Identification and Analysis",
"sec_num": "5.1."
},
{
"text": "In order to evaluate the similarity of users' sentiments within each community according to both measures, we calculated the Shannon entropy, a measure of the uncertainty in the probability distribution of sentiments (mixture ratio of sentiments present in each community). The first two rows of Table 4 show the entropy results for Polarity and Primary Sentiments, respectively. Below the entropy value in each community, the number of users found in such community is shown between parenthesis. When the community has a single user, it makes no sense to calculate the entropy value (this is indicated with a (x) sign).",
"cite_spans": [],
"ref_spans": [
{
"start": 296,
"end": 303,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "User's Sentiment Identification and Analysis",
"sec_num": "5.1."
},
{
"text": "An entropy value close to zero means the community has a unique predominant sentiment. Higher entropy values are associated to communities in which more than one sentiment appears. Unsurprisingly, the best values were obtained from the polarity level, where only two sentiments -positive and negative -are handled, and a predominant positive sentiment was detected in the network. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "User's Sentiment Identification and Analysis",
"sec_num": "5.1."
},
{
"text": "We now study the impact of topic distribution in the sentiments of the communities. The three topic conversations are modeled in three different layers G 1 , G 2 , G 3 of the multiplex network, each associated to the corresponding topic. A layer G i was obtained by retrieving the tweet messages sent by the users about topic i. Thus, each layer contains the set of users participating in the topic conversation, where the same user can participate in more than one topic distribution; that is, the same user can be found in different layers of the network (inter-layer connections). Table 5 shows the structural properties of the whole network and each layer, where the first two columns show the number of tweets and users, respectively. The last column is the value of the sentiment assortativity, that is, the tend of users to be connected with users of similar sentiments. Assortativity is a correlation coefficient between +1 and -1; negative values indicate a negative relationship that connects people with different sentiments; a positive value, however, shows a positive correlation connecting users with similar sentiments. The interesting thing here is that the assortativity in the primary conversation has a lower positive value than the assortativity in the topic conversations, thus indicating that users in the layers have more connections with sentimentally alike users than they have in the primary conversa-tion. This can be interpreted as a first indication that topics strengthen likeness in sentiments.",
"cite_spans": [],
"ref_spans": [
{
"start": 584,
"end": 591,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Topics impact on communities",
"sec_num": "5.2."
},
{
"text": "Subsequently, we analyzed the entropy values in the topic conversations per community according to the primary sentiments of the Russell's model. We calculated X i u and Y i u for every user u and topic i in each community and we obtained the entropy values shown in the last three rows of table 4 along with the number of users. The (-) sign means there are no users associated to that community. In our analysis, we will compare the entropy values of the topic conversations with the results of the Primary Sentiment level (the best value is shown in bold).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topics impact on communities",
"sec_num": "5.2."
},
{
"text": "The first observation is that 11 communities show an entropy reduction when the topic is considered, meaning that topic distribution strengthens the likeness of sentiments in these communities. For instance, in community 2, the biggest community among all (519 users), topic distribution gathers users together in three topic sub-communities that show a much higher sentiment similarity, notably in topic 2 and topic 3. Another interesting observation is community 8, where the same five users happen to be have the same sentiment, what was discovered when we detected the five users talking about the same topic (topic 1). Notice that the entropy value of these five users in the Primary Sentiment is 0.72, an indication that the users show different sentiments when the primary conversation is considered. In community 13, for instance, we observe the existence of 25 users perfectly aligned around the same sentiment in topic 3; and, almost all of the users in community 16 show the same sentiment when conversation about topic 3 is analyzed (note the drop of the entropy from 0.60 in the Primary Sentiment level to 0.09 in topic 3). A similar happening takes place in community 19 between the Primary Sentiment level and topic 3 level although in this case it was found 4 users with exactly the same sentiment in topic 1. On the other hand, there are communities that are only present in some topics, such as communities 3, 7, 8, 11, 12 or 19 . This also shows that the topic layer tends to give more optimized communities based on sentiments.",
"cite_spans": [
{
"start": 1425,
"end": 1446,
"text": "3, 7, 8, 11, 12 or 19",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topics impact on communities",
"sec_num": "5.2."
},
{
"text": "In this paper, we have presented a sentiment detection approach to extract sentiments from single-hashtag twitter conversations to demonstrate that user-level sentiment analysis can be significantly improved when incorporating topic modeling. Following the topic distribution, our Sent LDA approach generates several layers (topics) from the primary conversation, each representing a network of sentiments associated to different messages and users. Experimentation showed that topic modeling is very helpful for sentiment classification of twitter messages since different contextual views of sentiments are obtained. We used various levels of sentiments and we observed that the primary sentiment identification is an appropriate level to analyze users' emotional tendencies. Finally, the detection of communities through the topic distribution analysis highlights a more precise picture of users' sentiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "The corpus is :https://snap.stanford.edu/data/web-BeerAdvocate.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work has been partially supported by the Spanish MINECO project TIN2014-55637-C2-2-R which is cofounded by FEDER.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "7."
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Sentiwordnet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining",
"authors": [
{
"first": "S",
"middle": [],
"last": "Baccianella",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Esuli",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Sebastiani",
"suffix": ""
}
],
"year": 2010,
"venue": "International Conference on Language Resources and Evaluation,LREC",
"volume": "10",
"issue": "",
"pages": "2200--2204",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baccianella, S., Esuli, A., and Sebastiani, F. (2010). Sentiwordnet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining. In Interna- tional Conference on Language Resources and Evalua- tion,LREC,2010, volume 10, pages 2200-2204.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Latent dirichlet allocation",
"authors": [
{
"first": "D",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "A",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "M",
"middle": [
"I"
],
"last": "",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of machine Learning research",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Blei, D. M., Ng, A. Y., and Jordan, M. I. (2003). La- tent dirichlet allocation. Journal of machine Learning research, 3:993-1022.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Fast unfolding of communities in large networks",
"authors": [
{
"first": "V",
"middle": [
"D"
],
"last": "Blondel",
"suffix": ""
},
{
"first": "J.-L",
"middle": [],
"last": "Guillaume",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Lambiotte",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Lefebvre",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of Statistical Mechanics: Theory and Experiment",
"volume": "",
"issue": "10",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Blondel, V. D., Guillaume, J.-L., Lambiotte, R., and Lefeb- vre, E. (2008). Fast unfolding of communities in large networks. Journal of Statistical Mechanics: Theory and Experiment, 2008(10):P10008.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The structure and dynamics of multilayer networks",
"authors": [
{
"first": "S",
"middle": [],
"last": "Boccaletti",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Bianconi",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Criado",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Del Genio",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "G\u00f3mez-Garde\u00f1es",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Romance",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Sendina-Nadal",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zanin",
"suffix": ""
}
],
"year": 2014,
"venue": "Physics Reports",
"volume": "544",
"issue": "1",
"pages": "1--122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Boccaletti, S., Bianconi, G., Criado, R., Del Genio, C., G\u00f3mez-Garde\u00f1es, J., Romance, M., Sendina-Nadal, I., Wang, Z., and Zanin, M. (2014). The structure and dynamics of multilayer networks. Physics Reports, 544(1):1-122.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Affective norms for english words (anew): Technical manual and affective ratings",
"authors": [
{
"first": "M",
"middle": [
"M"
],
"last": "Bradley",
"suffix": ""
},
{
"first": "P",
"middle": [
"J"
],
"last": "Lang",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bradley, M. M. and Lang, P. J. (1999). Affective norms for english words (anew): Technical manual and affec- tive ratings. Gainesville, FL: The Center for Research in Psychophysiology, University of Florida.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "nervous or surprised? classification of human affective states in social media",
"authors": [
{
"first": "",
"middle": [],
"last": "Happy",
"suffix": ""
}
],
"year": 2012,
"venue": "AAAI International Conference on Weblogs and Social Media, ICWSM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Happy, nervous or surprised? classification of human affective states in social media. In AAAI International Conference on Weblogs and Social Media, ICWSM, 2012.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "An argument for basic emotions. Cognition and Emotion",
"authors": [
{
"first": "P",
"middle": [],
"last": "Ekman",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "6",
"issue": "",
"pages": "169--200",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ekman, P. (1992). An argument for basic emotions. Cog- nition and Emotion, 6(3-4):169-200.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Sentiwordnet: A publicly available lexical resource for opinion mining",
"authors": [
{
"first": "A",
"middle": [],
"last": "Esuli",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Sebastiani",
"suffix": ""
}
],
"year": 2006,
"venue": "International Conference on Language Resources and Evaluation,LREC",
"volume": "6",
"issue": "",
"pages": "417--422",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Esuli, A. and Sebastiani, F. (2006). Sentiwordnet: A pub- licly available lexical resource for opinion mining. In International Conference on Language Resources and Evaluation,LREC,2006, volume 6, pages 417-422.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Finding scientific topics. National academy of Sciences of the United States of America",
"authors": [
{
"first": "T",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Steyvers",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "101",
"issue": "",
"pages": "5228--5235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Griffiths, T. L. and Steyvers, M. (2004). Finding scien- tific topics. National academy of Sciences of the United States of America, 101(suppl 1):5228-5235.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Dissemination patterns and associated network effects of sentiments in social networks",
"authors": [
{
"first": "R",
"middle": [],
"last": "Hillmann",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Trier",
"suffix": ""
}
],
"year": 2012,
"venue": "IEEE International Conference on Advances in Social Networks Analysis and Mining",
"volume": "",
"issue": "",
"pages": "511--516",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hillmann, R. and Trier, M. (2012). Dissemination pat- terns and associated network effects of sentiments in social networks. In IEEE International Conference on Advances in Social Networks Analysis and Mining, ASONAM, 2012, pages 511-516.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Empirical study of topic modeling in twitter",
"authors": [
{
"first": "L",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "B",
"middle": [
"D"
],
"last": "Davidson",
"suffix": ""
}
],
"year": 2010,
"venue": "In ACM Workshop on Social Media Analytics",
"volume": "",
"issue": "",
"pages": "80--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hong, L. and Davidson, B. D. (2010). Empirical study of topic modeling in twitter. In ACM Workshop on Social Media Analytics,SOMA 2010, pages 80-88.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Dyconet: a gephi plugin for community detection in dynamic complex networks",
"authors": [
{
"first": "J",
"middle": [],
"last": "Kauffman",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Kittas",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Bennett",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Tsoka",
"suffix": ""
}
],
"year": 2014,
"venue": "PloS one",
"volume": "9",
"issue": "7",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kauffman, J., Kittas, A., Bennett, L., and Tsoka, S. (2014). Dyconet: a gephi plugin for community detection in dy- namic complex networks. PloS one, 9(7):0101357.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Predicting positive and negative links in online social networks",
"authors": [
{
"first": "J",
"middle": [],
"last": "Leskovec",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Huttenlocher",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Kleinberg",
"suffix": ""
}
],
"year": 2010,
"venue": "ACM International conference on World wide web",
"volume": "",
"issue": "",
"pages": "641--650",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leskovec, J., Huttenlocher, D., and Kleinberg, J. (2010). Predicting positive and negative links in online social networks. In ACM International conference on World wide web, WWW, 2010, pages 641-650.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Improving LDA topic models for microblogs via tweet pooling and automatic labeling",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mehrotra",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Sanner",
"suffix": ""
},
{
"first": "W",
"middle": [
"L"
],
"last": "Buntine",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Xie",
"suffix": ""
}
],
"year": 2013,
"venue": "ACM Conference on research and development in Information Retrieval, SIGIR 2013",
"volume": "",
"issue": "",
"pages": "889--892",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mehrotra, R., Sanner, S., Buntine, W. L., and Xie, L. (2013). Improving LDA topic models for microblogs via tweet pooling and automatic labeling. In ACM Confer- ence on research and development in Information Re- trieval, SIGIR 2013, pages 889-892.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Using hashtags to capture fine emotion categories from tweets",
"authors": [
{
"first": "S",
"middle": [
"M"
],
"last": "Mohammad",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
}
],
"year": 2015,
"venue": "Computational Intelligence",
"volume": "",
"issue": "",
"pages": "301--326",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad, S. M. and Kiritchenko, S. (2015). Using hashtags to capture fine emotion categories from tweets. Computational Intelligence, pages 301-326.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Searching microblogs: coping with sparsity and document quality",
"authors": [
{
"first": "N",
"middle": [],
"last": "Naveed",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Gottron",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Kunegis",
"suffix": ""
},
{
"first": "Alhadi",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "A",
"middle": [
"C"
],
"last": "",
"suffix": ""
}
],
"year": 2011,
"venue": "ACM Conference on Information and Knowledge Management, CIKM 2011",
"volume": "",
"issue": "",
"pages": "183--188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naveed, N., Gottron, T., Kunegis, J., and Alhadi, A. C. (2011). Searching microblogs: coping with sparsity and document quality. In ACM Conference on Information and Knowledge Management, CIKM 2011, pages 183- 188.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Modularity and community structure in networks",
"authors": [
{
"first": "M",
"middle": [
"E"
],
"last": "Newman",
"suffix": ""
}
],
"year": 2006,
"venue": "National Academy of Sciences",
"volume": "103",
"issue": "23",
"pages": "8577--8582",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Newman, M. E. (2006). Modularity and community structure in networks. National Academy of Sciences, 103(23):8577-8582.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Predicting collective sentiment dynamics from time-series social media",
"authors": [
{
"first": "L",
"middle": [
"T"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Chan",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2012,
"venue": "ACM International workshop on issues of sentiment discovery and opinion mining",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nguyen, L. T., Wu, P., Chan, W., Peng, W., and Zhang, Y. (2012). Predicting collective sentiment dynamics from time-series social media. In ACM International work- shop on issues of sentiment discovery and opinion min- ing, WISDOM, 2012, page 6.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A new anew: Evaluation of a word list for sentiment analysis in microblogs",
"authors": [
{
"first": "F",
"middle": [
"\u00c5"
],
"last": "Nielsen",
"suffix": ""
}
],
"year": 2011,
"venue": "ESWC Workshop on 'Making Sense of Microposts': Big things come in small packages, MSM",
"volume": "",
"issue": "",
"pages": "93--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nielsen, F.\u00c5. (2011). A new anew: Evaluation of a word list for sentiment analysis in microblogs. In ESWC Workshop on 'Making Sense of Microposts': Big things come in small packages, MSM, 2011, pages 93-98.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Fast collapsed gibbs sampling for latent dirichlet allocation",
"authors": [
{
"first": "I",
"middle": [],
"last": "Porteous",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Asuncion",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Newman",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Smyth",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ihler",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2008,
"venue": "ACM International Conference on Knowledge Discovery and Data Mining, SIGKDD",
"volume": "",
"issue": "",
"pages": "569--577",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Porteous, I., Asuncion, A., Newman, D., Smyth, P., Ih- ler, A., and Welling, M. (2008). Fast collapsed gibbs sampling for latent dirichlet allocation. In ACM Inter- national Conference on Knowledge Discovery and Data Mining, SIGKDD, 2008, pages 569-577.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Experimenting with distant supervision for emotion classification",
"authors": [
{
"first": "M",
"middle": [],
"last": "Purver",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Battersby",
"suffix": ""
}
],
"year": 2012,
"venue": "ACL Conference of the European Chapter",
"volume": "",
"issue": "",
"pages": "482--491",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Purver, M. and Battersby, S. (2012). Experimenting with distant supervision for emotion classification. In ACL Conference of the European Chapter of the Association for Computational Linguistics,EACL, 2012, pages 482- 491.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Learning emotion indicators from tweets: Hashtags, hashtag patterns, and phrases",
"authors": [
{
"first": "A",
"middle": [],
"last": "Qadir",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Riloff",
"suffix": ""
}
],
"year": 2014,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1203--1209",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qadir, A. and Riloff, E. (2014). Learning emotion in- dicators from tweets: Hashtags, hashtag patterns, and phrases. In Conference on Empirical Methods in Natural Language Processing, EMNLP, 2014, pages 1203-1209.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Labeled LDA: A supervised topic model for credit attribution in multi-labeled corpora",
"authors": [
{
"first": "D",
"middle": [],
"last": "Ramage",
"suffix": ""
},
{
"first": "D",
"middle": [
"L W"
],
"last": "Hall",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2009,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "248--256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramage, D., Hall, D. L. W., Nallapati, R., and Manning, C. D. (2009). Labeled LDA: A supervised topic model for credit attribution in multi-labeled corpora. In Con- ference on Empirical Methods in Natural Language Pro- cessing, EMNLP 2009, volume 1, pages 248-256.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Learning author-topic models from text corpora",
"authors": [
{
"first": "M",
"middle": [],
"last": "Rosen-Zvi",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Chemudugunta",
"suffix": ""
},
{
"first": "T",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Smyth",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Steyvers",
"suffix": ""
}
],
"year": 2010,
"venue": "ACM Transactions on Information Systems, TOIS",
"volume": "28",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rosen-Zvi, M., Chemudugunta, C., Griffiths, T. L., Smyth, P., and Steyvers, M. (2010). Learning author-topic mod- els from text corpora. ACM Transactions on Information Systems, TOIS, 2010, 28(1):4:1-4:38.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Journal of personality and social psychology",
"authors": [
{
"first": "J",
"middle": [
"A"
],
"last": "Russell",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "39",
"issue": "",
"pages": "1161--1178",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Russell, J. A. (1980). A circumplex model of affect. Jour- nal of personality and social psychology, 39(6):1161- 1178.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Core affect and the psychological construction of emotion",
"authors": [
{
"first": "J",
"middle": [
"A"
],
"last": "Russell",
"suffix": ""
}
],
"year": 2003,
"venue": "Psychological review",
"volume": "110",
"issue": "1",
"pages": "145--172",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Russell, J. A. (2003). Core affect and the psycholog- ical construction of emotion. Psychological review, 110(1):145-172.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Emotions and information diffusion in social media -sentiment of microblogs and sharing behavior",
"authors": [
{
"first": "S",
"middle": [],
"last": "Stieglitz",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Dang-Xuan",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of Management Information Systems",
"volume": "29",
"issue": "4",
"pages": "217--248",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stieglitz, S. and Dang-Xuan, L. (2013). Emotions and in- formation diffusion in social media -sentiment of mi- croblogs and sharing behavior. Journal of Management Information Systems, 29(4):217-248.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Sentiment diffusion in large scale social networks",
"authors": [
{
"first": "J",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Fong",
"suffix": ""
}
],
"year": 2013,
"venue": "IEEE International Conference on Consumer Electronics, ICCE",
"volume": "",
"issue": "",
"pages": "244--245",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tang, J. and Fong, A. (2013). Sentiment diffusion in large scale social networks. In IEEE International Conference on Consumer Electronics, ICCE, 2013, pages 244-245.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Twitterrank: finding topic-sensitive influential twitterers",
"authors": [
{
"first": "J",
"middle": [],
"last": "Weng",
"suffix": ""
},
{
"first": "E.-P",
"middle": [],
"last": "Lim",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2010,
"venue": "ACM International conference on Web search and data mining, WSDM 2010",
"volume": "",
"issue": "",
"pages": "261--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weng, J., Lim, E.-P., Jiang, J., and He, Q. (2010). Twit- terrank: finding topic-sensitive influential twitterers. In ACM International conference on Web search and data mining, WSDM 2010, pages 261-270.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Exploiting sentiment homophily for link prediction",
"authors": [
{
"first": "G",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "P",
"middle": [
"K"
],
"last": "Murukannaiah",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "M",
"middle": [
"P"
],
"last": "Singh",
"suffix": ""
}
],
"year": 2014,
"venue": "ACM Conference on Recommender systems",
"volume": "",
"issue": "",
"pages": "17--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuan, G., Murukannaiah, P. K., Zhang, Z., and Singh, M. P. (2014). Exploiting sentiment homophily for link pre- diction. In ACM Conference on Recommender systems, RecSys, 2014, pages 17-24.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Propagation models for trust and distrust in social networks",
"authors": [
{
"first": "C.-N",
"middle": [],
"last": "Ziegler",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Lausen",
"suffix": ""
}
],
"year": 2005,
"venue": "Information Systems Frontiers",
"volume": "7",
"issue": "4-5",
"pages": "337--358",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ziegler, C.-N. and Lausen, G. (2005). Propagation models for trust and distrust in social networks. Information Systems Frontiers, 7(4-5):337-358.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Representation of Russell's model (R 1 = AL: alert, EX: excited, EL: elated, HA: happy; R 2 = CO: contented, SE: serene, RE: relaxed, CA: calm; R 3 = BO: bored, DE: depressed, UN: unhappy, SA: sad; R 4 = UP: upset, ST: stressed, NE: nervous, TE: tense) 2.1. Overview of Sent LDA Several approaches for extracting emotion from tweet messages exist in the literature. In (Mohammad and Kiritchenko, 2015), authors create a large lexicon from tweets"
},
"FIGREF1": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Global Overview of Sent LDA"
},
"FIGREF2": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Log-likelihood representation for all Topics"
},
"TABREF1": {
"html": null,
"num": null,
"type_str": "table",
"text": "Notation of different attributesFormula",
"content": "<table/>"
},
"TABREF3": {
"html": null,
"num": null,
"type_str": "table",
"text": "Word frequencies per topic for each community",
"content": "<table><tr><td>Comm.</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td><td>7</td><td>8</td><td>9</td><td>10</td><td>11</td><td>12</td><td>13</td><td>14</td><td>15</td><td>16</td><td>17</td><td>18</td><td>19</td></tr><tr><td>Polarity</td><td>0.28</td><td>0.00</td><td>0.00</td><td>0.37</td><td>0.00</td><td>0.00</td><td>0.00</td><td>0.00</td><td>x</td><td>0.92</td><td>0.81</td><td>0.00</td><td>0.77</td><td>0.22</td><td>0.00</td><td>0.09</td><td>0.00</td><td>0.00</td><td>0.00</td></tr><tr><td/><td>(83)</td><td>(519)</td><td>(29)</td><td>(57)</td><td>(74)</td><td>(159)</td><td>(69)</td><td>(5)</td><td>(1)</td><td>(3)</td><td>(4)</td><td>(11)</td><td>(40)</td><td>(83)</td><td>(49)</td><td>(92)</td><td>(132)</td><td>(4)</td><td>(27)</td></tr><tr><td>Primary</td><td>0.50</td><td>1.07</td><td>0.36</td><td>1.18</td><td>1.43</td><td>1.42</td><td>0.19</td><td>0.72</td><td>x</td><td>0.92</td><td>1.50</td><td>0.00</td><td>1.30</td><td>1.85</td><td>0.25</td><td>0.60</td><td>0.80</td><td>0.81</td><td>0.82</td></tr><tr><td>Sentiment</td><td>(83)</td><td>(519)</td><td>(29)</td><td>(57)</td><td>(74)</td><td>(159)</td><td>(69)</td><td>(5)</td><td>(1)</td><td>(3)</td><td>(4)</td><td>(11)</td><td>(40)</td><td>(83)</td><td>(49)</td><td>(92)</td><td>(132)</td><td>(4)</td><td>(27)</td></tr><tr><td>Topic 1</td><td>1.78</td><td>0.99</td><td>0.36</td><td>1.06</td><td>1.45</td><td>1.80</td><td>0.30</td><td>0.00</td><td>x</td><td>-</td><td>x</td><td>0.00</td><td>0.95</td><td>1.69</td><td>0.57</td><td>1.00</td><td>0.00</td><td>0.00</td><td>0.00</td></tr><tr><td/><td>(130)</td><td>(147)</td><td>(29)</td><td>(24)</td><td>(61)</td><td>(140)</td><td>(69)</td><td>(5)</td><td>(1)</td><td/><td>(1)</td><td>(11)</td><td>(14)</td><td>(58)</td><td>(20)</td><td>(2)</td><td>(18)</td><td>(4)</td><td>(4)</td></tr><tr><td>Topic 2</td><td>1.70</td><td>0.71</td><td>x</td><td>1.04</td><td>0.00</td><td>2.07</td><td>x</td><td>-</td><td>-</td><td>-</td><td>0.92</td><td>-</td><td>1.50</td><td>1.84</td><td>0.23</td><td>1.58</td><td>0.00</td><td>x</td><td>-</td></tr><tr><td/><td>(108)</td><td>(272)</td><td>(1)</td><td>(29)</td><td>(5)</td><td>(14)</td><td>(1)</td><td/><td/><td/><td>(3)</td><td/><td>(4)</td><td>(18)</td><td>(27)</td><td>(3)</td><td>(5)</td><td>(1)</td><td/></tr><tr><td>Topic 3</td><td>1.55</td><td>0.63</td><td>-</td><td>1.15</td><td>1.21</td><td>2.20</td><td>-</td><td>x</td><td>-</td><td>0.92</td><td>1.00</td><td>-</td><td>0.00</td><td>1.92</td><td>0.81</td><td>0.09</td><td>0.07</td><td>-</td><td>0.26</td></tr><tr><td/><td>(96)</td><td>(145)</td><td/><td>(7)</td><td>(18)</td><td>(9)</td><td/><td>(1)</td><td/><td>(3)</td><td>(2)</td><td/><td>(25)</td><td>(5)</td><td>(4)</td><td>(89)</td><td>(112)</td><td/><td>(23)</td></tr></table>"
},
"TABREF4": {
"html": null,
"num": null,
"type_str": "table",
"text": "Shannon Entropy associated with the different communities identified and its size",
"content": "<table/>"
},
"TABREF5": {
"html": null,
"num": null,
"type_str": "table",
"text": "The Primary Sentiment level shows less uniformity among the users since 16 different sentiments are considered at this level. Nevertheless, if we consider that the max entropy value is 4 (log 2 16) and that no value in Primary Sentiment exceeds 1.85, we can say there is a fairly low uncertainty in the predominant Russell's sentiment in each community.",
"content": "<table><tr><td/><td colspan=\"2\">#tweets #users</td><td>Sentiment</td></tr><tr><td/><td/><td/><td>assortativity</td></tr><tr><td>Primary conversation</td><td>10,600</td><td>3,600</td><td>0.2047</td></tr><tr><td>Topic 1</td><td>4,152</td><td>1,482</td><td>0.446</td></tr><tr><td>Topic 2</td><td>2,500</td><td>1,008</td><td>0.341</td></tr><tr><td>Topic 3</td><td>3,979</td><td>1,235</td><td>0.391</td></tr></table>"
},
"TABREF6": {
"html": null,
"num": null,
"type_str": "table",
"text": "Structural properties of the complete network and each one of the layers",
"content": "<table/>"
}
}
}
}