|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T11:50:30.531775Z" |
|
}, |
|
"title": "Affection Driven Neural Networks for Sentiment Analysis", |
|
"authors": [ |
|
{ |
|
"first": "Rong", |
|
"middle": [], |
|
"last": "Xiang", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Yunfei", |
|
"middle": [], |
|
"last": "Long", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Mingyu", |
|
"middle": [], |
|
"last": "Wan", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Jinghang", |
|
"middle": [], |
|
"last": "Gu", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Qin", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Chu-Ren", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Deep neural network models have played a critical role in sentiment analysis with promising results in the recent decade. One of the essential challenges, however, is how external sentiment knowledge can be effectively utilized. In this work, we propose a novel affection-driven approach to incorporating affective knowledge into neural network models. The affective knowledge is obtained in the form of a lexicon under the Affect Control Theory (ACT), which is represented by vectors of three-dimensional attributes in Evaluation, Potency, and Activity (EPA). The EPA vectors are mapped to an affective influence value and then integrated into Long Short-term Memory (LSTM) models to highlight affective terms. Experimental results show a consistent improvement of our approach over conventional LSTM models by 1.0% to 1.5% in accuracy on three large benchmark datasets. Evaluations across a variety of algorithms have also proven the effectiveness of leveraging affective terms for deep model enhancement.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Deep neural network models have played a critical role in sentiment analysis with promising results in the recent decade. One of the essential challenges, however, is how external sentiment knowledge can be effectively utilized. In this work, we propose a novel affection-driven approach to incorporating affective knowledge into neural network models. The affective knowledge is obtained in the form of a lexicon under the Affect Control Theory (ACT), which is represented by vectors of three-dimensional attributes in Evaluation, Potency, and Activity (EPA). The EPA vectors are mapped to an affective influence value and then integrated into Long Short-term Memory (LSTM) models to highlight affective terms. Experimental results show a consistent improvement of our approach over conventional LSTM models by 1.0% to 1.5% in accuracy on three large benchmark datasets. Evaluations across a variety of algorithms have also proven the effectiveness of leveraging affective terms for deep model enhancement.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Texts are collections of information that may encode emotions and deliver impacts to information receivers. Recognizing the underlying emotions encoded in a text is essential to understanding the key information it conveys. On the other hand, emotion-embedded text can also provide rich sentiment resources for relevant natural language processing tasks. As such, sentiment analysis (SA) has gained increasing interest among researchers who are keen on the investigation of natural language processing techniques as well as emotion theories to identify sentiment expressions in a natural language context. Typical SA studies analyze subjective documents from the author's perspective using high-frequency word representations and mapping the text (e.g., sentence or document) to categorical labels, e.g., sentiment polarity, with either a discrete label or a real number in a continuum. Recently, the rising use of neural network models has further elevated the performance of SA without involving laborious feature engineering. Typical neural network models such as Convolutional Neural Network (CNN) (Kim, 2014) , Recursive auto-encoders (Socher et al., 2013) , Long-Short Term Memory (LSTM) (Tang et al., 2015a) have shown promising results in a variety of sentiment analysis tasks. In spite of this, neural network models still face two main problems. First, neural network approaches lack direct mechanisms to highlight important components in a text. Second, external resources such as linguistic knowledge, cognition grounded data, and affective lexicons, are not fully employed in neural models. To tackle the first problem, cognition-based attention models have been adopted for sentiment classification using text-embedded information such as users, products, and local context (Tang et al., 2015b; Yang et al., 2016; Chen et al., 2016; Long et al., 2019) . For the second problem, Qian et al. (2016) proposed to add linguistic resources to deep learning models for further improvement. Yet, recent method of integration of additional lexical information are limited to matrix manipulation in attention layer due to the incompatibility of such representations with embedding ones, making it quite inefficient. In this paper, we attempt to address this problem by incorporating an affective lexicon as numerical influence values into affective neural network models through the framework of the Affect Control Theory (ACT). ACT is a social psychological theory pertaining to social interactions (Smith-Lovin and Heise, 1988) . It is based on the assumption that people tend to maintain culturally shared perceptions of identities and behaviors in transient impressions during observation and participation of social events (Joseph, 2016) . In other words, social perceptions, actions, and emotional experiences are governed by a psychological intention to minimize deflections between fundamental sentiments and transient impressions that are inherited from the dynamic behaviors of such interactions. To capture such information, an event in ACT is modeled as a triplet: {actor, behavior, object}. In other words, culturally shared \"fundamental\" sentiments about each of these elements are measured in three dimensions: Evaluation, Potency, and Activity, commonly denoted as (EPA). In the ACT theory, emotions are functions of the differences between fundamental sentiments and transient impressions. 
The core idea is that each of the entities {actor, behavior, object} in an event has a fundamental emotion (EPA value) that is shared among members of the same culture or community. All the entities in the event as a group generate a transient impression or feeling that might be different from the fundamental sentiment. Previous research (Osgood et al., 1964) used EPA profiles of concepts to measure semantic differential, a survey technique to obtain respondent rates in terms of affective scales (e.g., {good, nice}, {bad, awful} for E; {weak, little}, {strong, big} for P; {calm, passive}, {exciting, active} for A). Existing datasets with average EPA ratings are usually small-sized, such as the dataset provided by Heise (2010) which compiled a few thousands of words from participants of sufficient cultural knowledge. As an illustration of the data form, the culturally shared EPA for the word \"mother\" is [2.74, 2.04, 0.67], which corresponds to {quite good}, {quite powerful}, and {slightly active}.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1102, |
|
"end": 1113, |
|
"text": "(Kim, 2014)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 1140, |
|
"end": 1161, |
|
"text": "(Socher et al., 2013)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
|
{ |
|
"start": 1194, |
|
"end": 1214, |
|
"text": "(Tang et al., 2015a)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 1788, |
|
"end": 1808, |
|
"text": "(Tang et al., 2015b;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 1809, |
|
"end": 1827, |
|
"text": "Yang et al., 2016;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 1828, |
|
"end": 1846, |
|
"text": "Chen et al., 2016;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 1847, |
|
"end": 1865, |
|
"text": "Long et al., 2019)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 1892, |
|
"end": 1910, |
|
"text": "Qian et al. (2016)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 2504, |
|
"end": 2533, |
|
"text": "(Smith-Lovin and Heise, 1988)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 2732, |
|
"end": 2746, |
|
"text": "(Joseph, 2016)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 3751, |
|
"end": 3772, |
|
"text": "(Osgood et al., 1964)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 4134, |
|
"end": 4146, |
|
"text": "Heise (2010)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "ACT has demonstrated obvious advantages for sentiment analysis. This is because the model is more cognitively sound and the representation is more comprehensive. The use of ACT is especially effective in long documents, such as review texts composed by a lot of descriptive events or factual reports (Heise, 2010) . Being empirically driven, EPA space enables the universal representation of individuals' sentiment, which can reflect a real evaluation of human participants. More importantly, the interaction between terms in ACT complies with the linguistic principle of the compositional semantic model that the meaning of a sentence is a function of its words (Frege, 1948) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 300, |
|
"end": 313, |
|
"text": "(Heise, 2010)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 663, |
|
"end": 676, |
|
"text": "(Frege, 1948)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "In line with the above framework, we propose an affection driven neural network method for sentiment classification by incorporating an ACT lexicon as additional affective knowledge into deep learning models. Instead of transforming sentiment into a feature matrix, we apply EPA values into numeric weights directly so that it is more efficient. Our approach can be generally applied to a wide range of current deep learning algorithms. Among different deep learning algorithms, we choose to demonstrate our method using LSTM as it is quite suited for NLP with proven performance. A series of LSTM models are implemented without using dependency parsing or phrase-level annotation. We identify affective terms with EPA values and transform their corresponding EPA vectors into a feature matrix. Single EPA values are then computed and integrated into deep learning models by a linear concatenation. Evaluations are conducted in three sentiment analysis datasets to verify the effectiveness of the affection-driven network.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "The rest of this paper is organized as follows. Section 2 introduces related works in Affect Control Theory, deep learning classifiers and some use of affective knowledge. Section 3 describes detailed design of our proposed method. Performance evaluation and analysis are presented in Section 4. Section 5 concludes the paper with some possible directions for future works.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Related work is reviewed through three sub-sections: affective lexicons under the framework of affective control theory, general studies in neural network sentiment analysis, and specific research of using lexicon knowledge for sentiment analysis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Affective lexicons with EPA values are available for many languages, yet all are small-sized. Previous works suggest that the inter-cultural agreement on EPA meanings of social concepts is generally high even across different subgroups of society. Cultural-average EPA ratings from a few dozen survey participants have proven to be extremely stable over extended periods (Heise, 2010) . These findings shed some light on the societal conflicts by competing for political ideologies. It is also proved that the number of contested concepts is small relative to the stable and consensual semantic structures that form the basis of our social interactions and shared cultural understanding (Heise, 2010) . To date, the most reliable EPA based affective lexicons are obtained by manual annotation. For example, the EPA lexicon provided by Heise (1987) are manually rated in the evaluation-potency-activity (EPA) dimensions. Although, there is no size indication, this EPA based lexicon is commonly used as a three dimensional affective resource. This professional annotated lexicon are regarded as a highquality lexicon (Bainbridge et al., 1994) and it the main resource used in this work as the external affective resource. In 2010, a new release of this resource includes a collection of five thousand lexical items 1 (Heise, 2010).", |
|
"cite_spans": [ |
|
{ |
|
"start": 371, |
|
"end": 384, |
|
"text": "(Heise, 2010)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 687, |
|
"end": 700, |
|
"text": "(Heise, 2010)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 835, |
|
"end": 847, |
|
"text": "Heise (1987)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 1116, |
|
"end": 1141, |
|
"text": "(Bainbridge et al., 1994)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Affective Lexicons under ACT", |
|
"sec_num": "2.1." |
|
}, |
|
{ |
|
"text": "In recent years, neural network methods have greatly improved the performance of sentiment analysis. Commonly used models include Convolutional Neural Networks (CNN) (Socher et al., 2011) , Recursive Neural Network ReNN (Socher et al., 2013) , and Recurrent Neural Networks (RNN) (Irsoy and Cardie, 2014). Long-Short Term Memory model (LSTM), well known for text understanding, is introduced by Tang et al. (2015a) who added a gated mechanism to keep long-term memory. Attentionbased neural networks, mostly built from local context, are proposed to highlight semantically important words and sentences (Yang et al., 2016) . Other methods build attention models using external knowledge, such as user/product information (Chen et al., 2016) and cognition grounded data (Long et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 166, |
|
"end": 187, |
|
"text": "(Socher et al., 2011)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 220, |
|
"end": 241, |
|
"text": "(Socher et al., 2013)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 603, |
|
"end": 622, |
|
"text": "(Yang et al., 2016)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 721, |
|
"end": 740, |
|
"text": "(Chen et al., 2016)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 769, |
|
"end": 788, |
|
"text": "(Long et al., 2019)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Deep Neural Networks", |
|
"sec_num": "2.2." |
|
}, |
|
{ |
|
"text": "Previous studies in combining lexicon-based methods and machine learning approach generally diverge into two ways. The first approach uses two weighted classifiers and linearly integrates them into one system. Andreevskaia and Bergler (2008) , for instance, present an ensemble system of two classifiers with precision-based weighting. This method obtained significant gains in both accuracy and recall over corpus-based classifiers and lexicon-based systems. The second approach incorporates lexicon knowledge into learning algorithms. To name a few, Hutto and Gilbert (2014) design a rule-based approach to indicate sentiment scores. Wilson et al. (2005) and Melville et al. (2009) use a general-purpose sentiment dictionary to improve linear classifier. Jovanoski et al. (2016) also prove that sentiment lexicon can contribute to logistic regression models. In neural network models, a remarkable work on utilizing sentiment lexicons is done by Teng et al. (2016) . They treat the sentiment score of a sentence as a weighted sum of prior sentiment scores of negation words and sentiment words. Qian et al. (2016) propose to apply linguistic regularization to sentiment classification with three linguistically motivated structured regularizers based on parse trees, topics, and hierarchical word clusters. Zou et al. 2018adopt a mixed attention mechanism to further highlight the role of sentiment lexicon in the attention layer. Using sentiment polarity in a loss function is one way to employ attention mechanism. However, attention weights are normally obtained using local context information. The computational complexity of reweighing each word by attention requires matrix and softmax manipulation, which slows down the time for training and inference especially with long sentences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 210, |
|
"end": 241, |
|
"text": "Andreevskaia and Bergler (2008)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 636, |
|
"end": 656, |
|
"text": "Wilson et al. (2005)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 661, |
|
"end": 683, |
|
"text": "Melville et al. (2009)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 757, |
|
"end": 780, |
|
"text": "Jovanoski et al. (2016)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 948, |
|
"end": 966, |
|
"text": "Teng et al. (2016)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 1097, |
|
"end": 1115, |
|
"text": "Qian et al. (2016)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Use of Affective Knowledge", |
|
"sec_num": "2.3." |
|
}, |
|
{ |
|
"text": "We proposes a novel affection driven method for neural sentiment classification. The affective lexicon with EPA values is used as the external affective knowledge which is integrated into neural networks for performance enhancement. The use of external knowledge reduces computation time and the cognition grounded three dimensional affective information using EPA is more comprehensive. The method works as follows. The affective terms in a task dataset are first identified in order to collect their EPA vectors through a pre-processing step. Each identified affective term is then given a weight based on a linear transformation mapping the three dimensional EPA values into a single value with a corresponding affective polarity. The affective weight will grant the prior affective knowledge to the identified affective terms to enhance word representation as a coefficient. This set of affective coefficients are used to adjust the weights in neural network models. This work applies the affective coefficients to a number of LSTM models including the basic LSTM, LSTM with attention layer (LSTM-AT), Bi-direction LSTM (BiLSTM) and BiLSTM with attention layer (BiLSTM-AT) with the EPA weights. This mechanism is generally applicable to many neural network models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "In this work, we use the affective lexicon with EPA values provided by Heise (2010) 2 For each term, the EPA values are measured separately in three separate dimensions as numerical values in the continuous space ranging from -4.50 to 4.50. The signum indicates the correlation, while the value implies the degree of relation. When using EPA to identify the polarity of sentiment, the E, P, and A weights need to be projected to one value before integrating it into a deep learning model. As a result, the EPA values are transformed into one single weight W EP A which is regarded as an affective influence value. Affine combination is used to constrain this value to stay in the range of [-4.50, 4 .50], as formulated below:", |
|
"cite_spans": [ |
|
{ |
|
"start": 689, |
|
"end": 698, |
|
"text": "[-4.50, 4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "EPA Weight Transformation", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "W EP Acomb = \u03b1W E + \u03b2W P + \u03b3W A ,", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "EPA Weight Transformation", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "EPA Weight Transformation", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03b1 + \u03b2 + \u03b3 = 1,", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "EPA Weight Transformation", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "and \u03b1, \u03b2, \u03b3 are hyper parameters to indicate the significance of each component. For instance, we use [1, 0, 0] to indicate the exclusive use of Evaluation. To avoid the over-weighting problem for affective terms and at the same time to highlight the intensity information of EPA values, another linear transformation is defined below:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "EPA Weight Transformation", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "W EP A = (1 + a|W EP Acomb |).", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "EPA Weight Transformation", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "W EP A is a weight value, referred as the affective influence value. Equation 3 ensures that all terms in the EPA lexicon will have value over one. Terms in the target dataset which do not appear in the EPA lexicon will have the weight value of one. a is a non-negative parameter that can be tuned as the amplification of EPA values.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "EPA Weight Transformation", |
|
"sec_num": "3.1." |
|
}, |
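As a minimal illustrative sketch (not the authors' released code), the transformation in Equations 1-3 can be implemented as follows; the `lexicon` dictionary is a hypothetical stand-in, and the default parameter values mirror the settings reported in Section 4:

```python
# Hedged sketch of the EPA weight transformation (Eqs. 1-3).
# `lexicon` is a hypothetical dict {term: (E, P, A)}, values in [-4.50, 4.50].

def affective_influence(term, lexicon, alpha=0.33, beta=0.33, gamma=0.33, a=1.15):
    """Map a term's EPA vector to a single affective influence value W_EPA."""
    if term not in lexicon:
        return 1.0  # out-of-lexicon terms keep a neutral weight of one
    e, p, act = lexicon[term]
    w_comb = alpha * e + beta * p + gamma * act  # Eq. 1, with alpha+beta+gamma = 1 (Eq. 2)
    return 1.0 + a * abs(w_comb)                 # Eq. 3: lexicon terms always weigh > 1

# Example with the EPA profile of "mother" quoted in the introduction:
lexicon = {"mother": (2.74, 2.04, 0.67)}
print(affective_influence("mother", lexicon))    # ~3.07
```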
|
{ |
|
"text": "This section elaborates on the mechanism of our proposed affective neural networks. In other words, how we incorporate affective influence values into affective deep neural networks. Although neural network models such as LSTM with added attention layer has powerful learning capacity for sentences, they demonstrate no explicit ability in identifying lexical sentiment affiliation. Serving as prior affective knowledge, these affective influence values can be used in any deep learning model. Figure 2 shows the general framework of our proposed method to incorporate affective influence values into any neural network model that involves the learning of word representation. Simply put in a deep learning model, the learning of word representation is carried out in the word representation layer to obtain their representations\u0125 i . The affective influence values as representation of affective information W EP Ai is then incorporated with\u0125 i before it goes into the pooling layer. Let D be a collection of n documents for sentiment classification. Each document d i is an instance in D, (i \u2208 1, 2, ..., n). In sentiment analysis, the label can either be a binary value to simply indicate polarity or a numerical value to indicate both polarity and strength. Each document d i is first tokenized into a word sequence. The representation vector of words, denoted as \u2212 \u2192 w i , is then obtained from a word embedding layer. For LSTM-based algorithms, the word representation vectors h i are updated in the recurrent layer. To incorporate affective knowledge, we use the product of W EP Ai with its corresponding word representation", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 494, |
|
"end": 502, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Affective Neural Network Framework", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u2212 \u2192 h i . \u2212 \u2192 h i = W EP Ai * \u2212 \u2192 h i", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Affective Neural Network Framework", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "This computation will be repeated during the entire process. As a result, all recognized affective terms are highlighted with certain intensity. The updated word represen-tation \u2212 \u2192 h i can then be fed to the pooling layer or attention layer as usual, generating document representation \u2212\u2212\u2192 R doc . Thus, \u2212\u2212\u2192 R doc accommodates both semantic information and affective prior knowledge for the classifier layer. Using W EP A as attention weight can significantly accelerate the training and inference speed compared to methods of using local context to get attention weights. This is because getting W EP Ai as a linear transformation only takes constant time so that it is not related to document size. Incorporating W EP Ai also takes a fixed time. However, for getting attention weights for n length documents, it requires matrix operation whose calculation required O(n).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Affective Neural Network Framework", |
|
"sec_num": "3.2." |
|
}, |
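The following PyTorch-style sketch (an assumed implementation, not taken from the paper) shows how Equation 4 re-weights LSTM hidden states before pooling:

```python
# Hedged sketch of Eq. 4: scale each LSTM output h_i by the token's
# affective influence value W_EPA_i before pooling into R_doc.
import torch

def apply_affective_weights(hidden, w_epa):
    # hidden: (batch, seq_len, dim) LSTM outputs h_i
    # w_epa:  (batch, seq_len) influence values W_EPA_i (1.0 for out-of-lexicon terms)
    return hidden * w_epa.unsqueeze(-1)  # elementwise h'_i = W_EPA_i * h_i

# Usage: mean-pool the re-weighted states into a document representation R_doc.
hidden = torch.randn(2, 5, 300)  # toy batch of 2 documents, 5 tokens each
w_epa = torch.ones(2, 5)
w_epa[:, 1] = 3.07               # pretend token 1 is an affective term
r_doc = apply_affective_weights(hidden, w_epa).mean(dim=1)
```

Because the re-weighting is a single elementwise product per token, it adds only constant work per position, in line with the complexity argument above.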
|
{ |
|
"text": "Performance evaluation is conducted on three benchmark datasets 3 , including a Twitter collection, an airline dataset, and an IMDB review. The baseline classifiers include Support Vector Machine (SVM), CNN, LSTM, and BiLSTM. Attention-based LSTM and BiLSTM are also implemented for further comparisons.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Performance Evaluation", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "The first benchmark dataset (Twitter) is collected from twitter and is publicly available for sentiment analysis 4 . The content is mainly about personal opinions on events, gossips, etc. The affective labels are defined as binary values to indicate positive and negative polarities. The second benchmark dataset (AirRecord) consists of customer twitted messages from six major U.S. airlines 5 . It includes 14,640 messages collected in February of 2015 which were manually labeled with positive, negative, and neutral classes. The third dataset (IMDB) is collected and provided by Maas et.al (2011) , which contains user comments of paragraphs extracted from online IMDB film database 6 . Affective labels are binary values for positive and negative. To utilize the affective lexicon, all the three datasets are pre-processed to identify affective terms in the affective lexicon. Table 1 shows some statistical data of the three datasets including the proportions of affective terms over the total number of words in the datasets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 582, |
|
"end": 599, |
|
"text": "Maas et.al (2011)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 881, |
|
"end": 888, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Datasets and Settings", |
|
"sec_num": "4.1." |
|
}, |
|
{ |
|
"text": "Instance about one-fifth of all words used in the dataset. Among the three datasets, affective terms take the largest proportion in IMDB while the percentage in AirRecord is 3% lower. Both Twitter and IMDB are binary classification tasks and their performance is measured by accuracy and RMSE (rooted mean square error). Since AirRecord is labeled by positive, negative, or neutral, the F1 score is also provided. We take pre-trained Glove vectors (Pennington et al., 2014) as word embedding for deep learning models. As variants of LSTM are widely used in text classification tasks, we evaluate our methods on both the basic LSTM model and the BiLSTM model. Two LSTM variants with attention mechanism are also included in the evaluation, denoted as LSTM-AT and BiLSTM-AT respectively. All models are tuned with the three datasets. For a fair comparison, the parameters are set as follows: Embedding size=300 dim, Optimizer=Adam, Learning Rate=5e-4, Dropout=0.1, Batch Size=32 and Epoch=3. Convolutional Kernel Size for CNN is set to 3.", |
|
"cite_spans": [ |
|
{ |
|
"start": 448, |
|
"end": 473, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Partitioned by stratified sampling, 90% instances of each dataset are used as the training data and the remaining 10% serves as the testing data. Experiments are conducted to evaluate the prediction performance of LSTM algorithms using our proposed method compared with baseline models listed below. The average result of three runs for each setting are reported. By default, the hyper-parameters \u03b1, \u03b2 and \u03b3 in Formula 1 and 2 are set equivalently to 0.33, and a is experimentally set to 1.15 as the optimized setting. The name of a model augmented with EPA values is expanded with (EPA).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentiment Analysis", |
|
"sec_num": "4.2." |
|
}, |
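As a small illustrative sketch (the paper does not specify tooling), the stratified 90/10 split described above could be produced with scikit-learn; `docs` and `labels` here are toy stand-ins:

```python
# Hedged sketch of the stratified 90/10 train/test partition.
from sklearn.model_selection import train_test_split

docs = [f"doc {i}" for i in range(100)]  # placeholder documents
labels = [i % 2 for i in range(100)]     # placeholder binary sentiment labels
train_docs, test_docs, train_y, test_y = train_test_split(
    docs, labels, test_size=0.10, stratify=labels, random_state=0)
```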
|
{ |
|
"text": "\u2022 SVM is the basic model that uses a sentence feature vector. We use the mean of word embedding to generate the sentence representation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentiment Analysis", |
|
"sec_num": "4.2." |
|
}, |
|
{ |
|
"text": "\u2022 CNN uses a convolution layer to capture features of adjacent words. The final sentiment label is classified with a perceptron.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentiment Analysis", |
|
"sec_num": "4.2." |
|
}, |
|
{ |
|
"text": "\u2022 LSTM is a typical RNN architecture with a gated mechanism. LSTMs were developed to handle exploding and vanishing gradient problems when training traditional RNNs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentiment Analysis", |
|
"sec_num": "4.2." |
|
}, |
|
{ |
|
"text": "\u2022 LSTM-AT uses LSTM with attention mechanism to re-weight important words before the fully connected layer.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentiment Analysis", |
|
"sec_num": "4.2." |
|
}, |
|
{ |
|
"text": "\u2022 BiLSTM learns bidirectional long-term dependencies between time steps of sequential data. These dependencies can be useful for learning from the complete time series.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentiment Analysis", |
|
"sec_num": "4.2." |
|
}, |
|
{ |
|
"text": "\u2022 BiLSTM-AT is BiLSTM with attention mechanism included. It is designed to combine strengths of both BiLSTM and attention mechanism. Table 2 shows the overall performance for all the models for all the three datasets. SVM performs the worst among all approaches. This is because using mean word embedding vectors cannot fully capture contextual information in sentences. As a deep learning model, CNN shows some improvement compared to SVM. Although introducing a convolution window can include some adjacent lexical information in context, it lacks complete sentential representations. Also, the fixed length of the convolution window may work incorrectly on semantic segments. Thus the model can be hampered by additional noise intrinsic in the model. LSTM-based models, on the other hand, can manage to track long-term dependency and partially solve the vanishing gradient problem. As shown in Table 2 , LSTM-based models significantly outperform CNN, and BiLSTM outperforms LSTM. This can be accounted for by the richer reverse information in BiLSTM. The attention mechanism also outperforms the baseline because it can put more emphasis on semantically salient terms. Considering these two variants of LSTM with bi-direction and attention, the improvement of attention mechanism is generally larger than bidirectional learning. Even though the results of bidirectional approaches are slightly inferior to unidirectional models, BiLSTM-AT with EPA generally achieves the best performance.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 133, |
|
"end": 140, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 897, |
|
"end": 904, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sentiment Analysis", |
|
"sec_num": "4.2." |
|
}, |
|
{ |
|
"text": "Let us take a closer look at how the performance varies on three benchmark datasets across different LSTM variants. In Twitter and AirRecord, the increased accuracy by using EPA values attains to 1% on average, and the improvement is even larger in IMDB (1.5%), which contains longer paragraphs of movie reviews. In particular, the BiL-STM model achieves a top performance with accuracy increased by 2.5%. As for the attention mechanism, it can potentially identify implicit semantic information from the local context, which can be used to adjust the coefficient of the word representation. Results of the four LSTM-based methods, however, suggest that non-attention models with EPA show more significant improvement than attentionbased models. It indicates that using affective lexicon can be more effective than using attention. All the results congruently suggest that affective terms with informative sentiment representation can effectively and consistently contribute to model enhancement across different datasets, including both short sentences and long paragraphs, of which the later is more significant. Therefore, highlighting the affective terms relevant to sentiment could further improve the attention mechanism.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentiment Analysis", |
|
"sec_num": "4.2." |
|
}, |
|
{ |
|
"text": "Note that in Formula 1, there are three hyper parameters \u03b1, \u03b2, and \u03b3. To see how different values of these hyper parameters affect the performance, we conducted the second set of experiments with different settings as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ablation Analysis on EPA", |
|
"sec_num": "4.3." |
|
}, |
|
{ |
|
"text": "\u2022 EPA follows the setting [0.33, 0.33, 0.33] \u2022 E [1, 0, 0] is used for Evaluation only.", |
|
"cite_spans": [ |
|
{ |
|
"start": 26, |
|
"end": 44, |
|
"text": "[0.33, 0.33, 0.33]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ablation Analysis on EPA", |
|
"sec_num": "4.3." |
|
}, |
|
{ |
|
"text": "\u2022 P [0, 1, 0] is used for Potency only.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ablation Analysis on EPA", |
|
"sec_num": "4.3." |
|
}, |
|
{ |
|
"text": "\u2022 A [0, 0, 1] is used for Activity only. Table 3 : Performance of EPA components; overall best result is bolded; group best is marked bold with underline; secondbest of each group is underlined", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 41, |
|
"end": 48, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Ablation Analysis on EPA", |
|
"sec_num": "4.3." |
|
}, |
|
{ |
|
"text": "In ACT, the three components {E, P, A} are used as a group of information to characterize an affective related event. This section probes into the individual role of each attribute to the performance improvement. For each LSTM variant, Table 3 shows the evaluation of above four settings given in Formula 1 and 2 of the hyper-parameters \u03b1, \u03b2 and \u03b3.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 236, |
|
"end": 243, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Ablation Analysis on EPA", |
|
"sec_num": "4.3." |
|
}, |
|
{ |
|
"text": "As shown in 3, all affection-driven methods outperform their baseline counterparts. However, the performance discrepancy among different EPA components varies for different datasets. In Twitter, the accuracy of every component is very close (e.g., minor performance gaps in the range of 0.0%-0.2%) and EPA as a whole is the best (around 76%).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ablation Analysis on EPA", |
|
"sec_num": "4.3." |
|
}, |
|
{ |
|
"text": "In AirRecord, all four settings show similar performance (around 82%) although LSTM-AT with Evaluation gives the best performance (82.8%). For IMDB, the performance discrepancy is larger (in the range of 0.2%-0.8%). Among the three components, Evaluation shows slightly stronger effectiveness and Activity contributes the most on average. The overall performance suggests no superior component,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ablation Analysis on EPA", |
|
"sec_num": "4.3." |
|
}, |
|
{ |
|
"text": "whereas, for different datasets, the proportion of E, P, and A can be fine-tuned to achieve finer improvement. One significant observation is EPA together serves as the best representation which indicates the orthogonality of each component could be supplementary to each other. Thus, we use EPA with equal weights for our model comparison in Section 4.2..", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ablation Analysis on EPA", |
|
"sec_num": "4.3." |
|
}, |
|
{ |
|
"text": "This section studies the impact of affective lexicon to sentiment classification in comparison to the attention mechanism, which is known as a fine-tuned mechanism for word representation in the learning process. Affective lexicon contains external knowledge with words of commonly agreed affective values that without contextual information. In contrast, the attention mechanism aims to capture contextual information to optimize the weight of specific words. To make a clear comparison, we randomly select two sentences of distinct sentiment polarity, one positive and the other negative, and show their weight distribution as heat graphs in Figures 3 and 4 , respectively. As Figure 3 clearly shows, affective terms account for a very small proportion. The words of red color in the EPA lexicon, i.e., 'glad', 'again', 'food' and 'good' are emphasized with higher weights. As the influence values of these terms are increased, the remaining words are less weighted by proportion. The attention weights are largely consistent with affective influence values, yet their intensity values are not as significant as the affective ones. Note that the attention mechanism puts more weight on the word 'so', and less but still considerable weight on 'im', 'china town' and 'i'. These words may be semantically more important in the sentence. But, they are not necessarily related to sentiment expression. Figure 4 showcase the sample of a negative polarity: \"Hey, Paris? Ushud totally just stick wid sayin that's hot! Cuz HUGE just isn't the same. Its's really lame.\". Two adverbs 'just' and 'really' and one adjective 'hot' are recognized in this sentence. However, the evidence 'lame' is not in affective lexicon. Failing to identify this negative adjective made the affective knowledge base fail to update affective polarity. On the contrast, the attention mechanism does not require prior knowledge, and it can still identify 'lame' as strong evidence. In summary, affective knowledge in the form of a lexicon provides salient and reliable lexicon-level evidence for sentiment analysis. On the other hand, limited lexical coverage can lead to negative impact on updating word representation. Attention mechanism can be used as a selfadaptive method to highlight some important words if external knowledge is not available. Contextual patterns can serve as supplementary information to word representation. In spite of the fact that words with high attention weight may be semantically more meaningful, such words may not directly related to sentiment. To aggregate the strengths of both methods, future attempts can be targeted at models with the incorporation of both mechanisms.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 644, |
|
"end": 659, |
|
"text": "Figures 3 and 4", |
|
"ref_id": "FIGREF3" |
|
}, |
|
{ |
|
"start": 679, |
|
"end": 687, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF3" |
|
}, |
|
{ |
|
"start": 1400, |
|
"end": 1408, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Case Study", |
|
"sec_num": "4.4." |
|
}, |
|
{ |
|
"text": "This paper presents an affection-driven neural network by loaning an external lexicon that contains explicit sentiment prior knowledge framed under the Affect Control Theory. The external knowledge is cognition grounded and comprehensive. The method used can be easily integrated into deep learning models with only minimal computational cost. Performance evaluations on various LSTM based methods have congruently validated the hypothesis that Affective words with attributes of Evaluation, Potency, and Activity are more effective for sentiment analysis than other deep learning models, including attention-based LSTMs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "The ablation introspection to the respective role of the three components in the EPA model suggests an equal contribution of them to model enhancement. To this end, Affine transformation of E, P, A is set with equal weight which in the end achieves the best performance in general. Given the limitation of the size and coverage of the lexicon in this work, future efforts can be done in three directions. The first one is to further evaluate the performance of affection-driven neural networks on corpora of richer, larger and general text sources. The second one is to develop automatic annotation tools to scale up the EPA knowledge lexicon with a wider lexical coverage so as to further attest its effectiveness for sentiment analysis. The third one is to improve the methods by considering different EPA transformation functions for mapping into affect influence values.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "http://www.indiana.edu/\u223csocpsy/public files/EnglishWords EPAs.xlsx", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The three datasets are all publicly available in Kaggle: https://www.kaggle.com/ 4 https://www.kaggle.com/c/twitter-sentiment-analysis2 5 https://www.kaggle.com/crowdflower/twitter-airlinesentiment/home/ 6 https://www.imdb.com/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The work is partially supported by the research grants from Hong Kong Polytechnic University (PolyU RTVU) and GRF grant (CERG PolyU 15211/14E, PolyU 152006/16E). Yunfei Long acknowledges the financial support of the NIHR Nottingham Biomedical Research Centre and NIHR MindTech Healthcare Technology Co-operative.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": "6." |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "When specialists and generalists work together: Overcoming domain dependence in sentiment tagging", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Andreevskaia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Bergler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of ACL-08: HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "290--298", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andreevskaia, A. and Bergler, S. (2008). When specialists and generalists work together: Overcoming domain de- pendence in sentiment tagging. Proceedings of ACL-08: HLT, pages 290-298.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Neural sentiment classification with user and product attention", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Tu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1650--1659", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chen, H., Sun, M., Tu, C., Lin, Y., and Liu, Z. (2016). Neural sentiment classification with user and product at- tention. In Proceedings of the 2016 Conference on Em- pirical Methods in Natural Language Processing, pages 1650-1659, Austin, Texas, November. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Sense and reference. The philosophical review", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Frege", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1948, |
|
"venue": "", |
|
"volume": "57", |
|
"issue": "", |
|
"pages": "209--230", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Frege, G. (1948). Sense and reference. The philosophical review, 57(3):209-230.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Affect control theory: Concepts and model", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Heise", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1987, |
|
"venue": "Journal of Mathematical Sociology", |
|
"volume": "13", |
|
"issue": "1-2", |
|
"pages": "1--33", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Heise, D. R. (1987). Affect control theory: Concepts and model. Journal of Mathematical Sociology, 13(1-2):1- 33.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Surveying cultures: Discovering shared conceptions and sentiments", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Heise", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Heise, D. R. (2010). Surveying cultures: Discovering shared conceptions and sentiments. John Wiley & Sons.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Vader: A parsimonious rule-based model for sentiment analysis of social media text", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Hutto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Gilbert", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Eighth international AAAI conference on weblogs and social media", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hutto, C. J. and Gilbert, E. (2014). Vader: A parsimo- nious rule-based model for sentiment analysis of social media text. In Eighth international AAAI conference on weblogs and social media.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Opinion mining with deep recurrent neural networks", |
|
"authors": [ |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Irsoy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "720--728", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Irsoy, O. and Cardie, C. (2014). Opinion mining with deep recurrent neural networks. In EMNLP, pages 720-728.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "New methods for large-scale analyses of social identities and stereotypes", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Joseph", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joseph, K. (2016). New methods for large-scale analyses of social identities and stereotypes.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "On the impact of seed words on sentiment polarity lexicon induction", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Jovanoski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Pachovski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Nakov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1557--1567", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jovanoski, D., Pachovski, V., and Nakov, P. (2016). On the impact of seed words on sentiment polarity lexicon induction. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1557-1567.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Convolutional neural networks for sentence classification", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1408.5882" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kim, Y. (2014). Convolutional neural networks for sen- tence classification. arXiv preprint arXiv:1408.5882.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Improving attention model based on cognition grounded data for sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Long", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Xiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Q", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C.-R", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "IEEE Transactions on Affective Computing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Long, Y., Xiang, R., Lu, Q., Huang, C.-R., and Li, M. (2019). Improving attention model based on cognition grounded data for sentiment analysis. IEEE Transac- tions on Affective Computing.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Learning word vectors for sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Maas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Daly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Pham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "142--150", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maas, A. L., Daly, R. E., Pham, P. T., Huang, D., Ng, A. Y., and Potts, C. (2011). Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguis- tics: Human Language Technologies-Volume 1, pages 142-150. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Sentiment analysis of blogs by combining lexical knowledge with text classification", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Melville", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Gryc", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Lawrence", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1275--1284", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Melville, P., Gryc, W., and Lawrence, R. D. (2009). Senti- ment analysis of blogs by combining lexical knowledge with text classification. In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discov- ery and data mining, pages 1275-1284. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "The measurement of meaning", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Osgood", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Suci", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Tannenbaum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1964, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Osgood, C. E., Suci, G. J., and Tannenbaum, P. H. (1964). The measurement of meaning. University of Illinois Press.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "GloVe: Global Vectors for Word Representation", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pennington, J., Socher, R., and Manning, C. D. (2014). GloVe: Global Vectors for Word Representation. In Pro- ceedings of EMNLP, pages 1532-1543.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Linguistically regularized lstms for sentiment classification", |
|
"authors": [ |
|
{ |
|
"first": "Q", |
|
"middle": [], |
|
"last": "Qian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Lei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1611.03949" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Qian, Q., Huang, M., Lei, J., and Zhu, X. (2016). Lin- guistically regularized lstms for sentiment classification. arXiv preprint arXiv:1611.03949.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Analyzing Social Interaction", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Smith-Lovin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Heise", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "Advances in Affect Control Theory", |
|
"volume": "13", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Smith-Lovin, L. and Heise, D. R. (1988). Analyzing So- cial Interaction: Advances in Affect Control Theory, vol- ume 13. Taylor & Francis.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Semi-supervised recursive autoencoders for predicting sentiment distributions", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "151--161", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Socher, R., Pennington, J., Huang, E. H., Ng, A. Y., and Manning, C. D. (2011). Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 151-161. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Recursive deep models for semantic compositionality over a sentiment treebank", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Perelygin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Chuang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the conference on empirical methods in natural language processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Socher, R., Perelygin, A., Wu, J. Y., Chuang, J., Manning, C. D., Ng, A. Y., and Potts, C. (2013). Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the conference on empirical methods in natural language processing (EMNLP), vol- ume 1631, page 1642. Citeseer.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Document modeling with gated recurrent neural network for sentiment classification", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Tang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1422--1432", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tang, D., Qin, B., and Liu, T. (2015a). Document mod- eling with gated recurrent neural network for sentiment classification. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1422-1432.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Learning semantic representations of users and products for document level sentiment classification", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Tang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proc. ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tang, D., Qin, B., and Liu, T. (2015b). Learning semantic representations of users and products for document level sentiment classification. In Proc. ACL.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Contextsensitive lexicon features for neural sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Teng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D.-T", |
|
"middle": [], |
|
"last": "Vo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1629--1638", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Teng, Z., Vo, D.-T., and Zhang, Y. (2016). Context- sensitive lexicon features for neural sentiment analysis. In EMNLP, pages 1629-1638.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Recognizing contextual polarity in phrase-level sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Wilson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Wiebe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Hoffmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the conference on human language technology and empirical methods in natural language processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "347--354", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wilson, T., Wiebe, J., and Hoffmann, P. (2005). Recogniz- ing contextual polarity in phrase-level sentiment analy- sis. In Proceedings of the conference on human language technology and empirical methods in natural language processing, pages 347-354. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Hierarchical attention networks for document classification", |
|
"authors": [ |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Smola", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yang, Z., Yang, D., Dyer, C., He, X., Smola, A., and Hovy, E. (2016). Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technologies.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "A lexicon-based supervised attention model for neural sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Zou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Gui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Q", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 27th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "868--877", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zou, Y., Gui, T., Zhang, Q., and Huang, X. (2018). A lexicon-based supervised attention model for neural sen- timent analysis. In Proceedings of the 27th International Conference on Computational Linguistics, pages 868- 877.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Figure 1below shows the histograms of the three affective measures in Heise's work. As the histogram suggests, none of the three measures E, P, and A shows a balanced distribution, and they are overall right-skewed. The evaluation component is the most evenly distributed amongst all. Notably, the Evaluation distribution has two peaks scattered at both the positive axis and 2 http://www.indiana.edu/\u223csocpsy/public files/EnglishWords EPAs.xlsx which covers the most commonly-used five thousand manually annotated English sentiment words." |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Histogram of EPA values the negative axis, which is apparently different from Potency and Activity. Potency and Activity generally follow Gaussian distribution and the means are around 0.60. The majority of the affective values fall in the range of -2.20 to 3.20, yet their variances are largely different. Most Activity values are distributed near the mean, providing less significant evidence for affective expression." |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Framework of affective deep learning schema" |
|
}, |
|
"FIGREF3": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Sample Case 1. Red refers to a larger weight. White refers to a smaller weightFigure 3is the heat graph of sample case 1 with a positive polarity: \"I am so glad you went to China town again.I am actually think that Biryani Place's food looks really good\". The upper bar under the sentence indicates the recognized terms in an affective influence value sequence. Words not in the affective lexicon are displayed in white. The lower bar shows the heat map of the weighted word by attention." |
|
}, |
|
"FIGREF4": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Sample Case 2. Red refers to a larger weight. White refers to a smaller weight" |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table><tr><td/><td colspan=\"2\">Twitter</td><td/><td>AirRecord</td><td colspan=\"2\">IMDB</td></tr><tr><td/><td colspan=\"4\">ACC RMSE ACC RMSE</td><td colspan=\"2\">F1 ACC RMSE</td></tr><tr><td>LSTM</td><td>0.747</td><td colspan=\"2\">0.327 0.805</td><td colspan=\"2\">0.428 0.721 0.803</td><td>0.405</td></tr><tr><td>LSTM(E)</td><td>0.756</td><td colspan=\"2\">0.322 0.822</td><td colspan=\"2\">0.405 0.735 0.819</td><td>0.394</td></tr><tr><td>LSTM(P)</td><td>0.761</td><td colspan=\"2\">0.317 0.812</td><td colspan=\"2\">0.412 0.734 0.824</td><td>0.387</td></tr><tr><td>LSTM(A)</td><td>0.758</td><td colspan=\"2\">0.319 0.825</td><td colspan=\"2\">0.413 0.739 0.816</td><td>0.399</td></tr><tr><td>LSTM(EPA)</td><td>0.759</td><td colspan=\"2\">0.318 0.823</td><td colspan=\"2\">0.410 0.747 0.818</td><td>0.398</td></tr><tr><td>LSTM-AT</td><td>0.754</td><td colspan=\"2\">0.321 0.811</td><td colspan=\"2\">0.425 0.733 0.807</td><td>0.395</td></tr><tr><td>LSTM-AT(E)</td><td>0.758</td><td colspan=\"2\">0.317 0.828</td><td colspan=\"2\">0.402 0.745 0.820</td><td>0.383</td></tr><tr><td>LSTM-AT(P)</td><td>0.759</td><td colspan=\"2\">0.307 0.816</td><td colspan=\"2\">0.409 0.739 0.815</td><td>0.387</td></tr><tr><td>LSTM-AT(A)</td><td>0.759</td><td colspan=\"2\">0.314 0.823</td><td colspan=\"2\">0.405 0.743 0.817</td><td>0.389</td></tr><tr><td>LSTM-AT(EPA)</td><td>0.761</td><td>0.309</td><td>0.82</td><td colspan=\"2\">0.413 0.739 0.819</td><td>0.392</td></tr><tr><td>BiLSTM</td><td>0.756</td><td colspan=\"2\">0.314 0.807</td><td colspan=\"2\">0.418 0.739 0.797</td><td>0.404</td></tr><tr><td>BiLSTM(E)</td><td>0.760</td><td colspan=\"2\">0.313 0.817</td><td colspan=\"2\">0.408 0.742 0.817</td><td>0.399</td></tr><tr><td>BiLSTM(P)</td><td>0.765</td><td colspan=\"2\">0.302 0.822</td><td colspan=\"2\">0.409 0.741 0.815</td><td>0.398</td></tr><tr><td>BiLSTM(A)</td><td>0.764</td><td colspan=\"2\">0.303 0.814</td><td colspan=\"2\">0.413 0.733 0.819</td><td>0.393</td></tr><tr><td>BiLSTM(EPA)</td><td>0.766</td><td colspan=\"2\">0.302 0.817</td><td colspan=\"2\">0.411 0.736 0.822</td><td>0.392</td></tr><tr><td>BiLSTM-AT</td><td>0.759</td><td colspan=\"2\">0.317 0.813</td><td colspan=\"2\">0.403 0.743 0.805</td><td>0.392</td></tr><tr><td>BiLSTM-AT(E)</td><td>0.761</td><td colspan=\"2\">0.311 0.813</td><td colspan=\"2\">0.398 0.739 0.822</td><td>0.381</td></tr><tr><td>BiLSTM-AT(P)</td><td>0.762</td><td colspan=\"2\">0.302 0.816</td><td colspan=\"2\">0.405 0.742 0.826</td><td>0.383</td></tr><tr><td>BiLSTM-AT(A)</td><td>0.764</td><td colspan=\"2\">0.305 0.814</td><td colspan=\"2\">0.396 0.737 0.821</td><td>0.389</td></tr><tr><td colspan=\"2\">BiLSTM-AT(EPA) 0.766</td><td colspan=\"2\">0.301 0.820</td><td colspan=\"2\">0.409 0.745 0.822</td><td>0.387</td></tr></table>", |
|
"html": null, |
|
"text": "Performance of Sentiment Analysis; global best is bolded; second-best is underlined" |
|
} |
|
} |
|
} |
|
} |