{
"paper_id": "S18-1015",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:43:07.463225Z"
},
"title": "NLPZZX at SemEval-2018 Task 1: Using Ensemble Method for Emotion and Sentiment Intensity Determination",
"authors": [
{
"first": "Zhengxin",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Yunnan University Chenggong Campus",
"location": {
"settlement": "Kunming",
"country": "P.R. China"
}
},
"email": ""
},
{
"first": "Qimin",
"middle": [],
"last": "Zhou",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Yunnan University Chenggong Campus",
"location": {
"settlement": "Kunming",
"country": "P.R. China"
}
},
"email": ""
},
{
"first": "Hao",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Yunnan University Chenggong Campus",
"location": {
"settlement": "Kunming",
"country": "P.R. China"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we put forward a system that competed at SemEval-2018 Task 1: \"Affect in Tweets\". Our system uses a simple yet effective ensemble method which combines several neural network components. We participate in two subtasks for English tweets: EI-reg and V-reg. For two subtasks, different combinations of neural components are examined. For EI-reg, our system achieves an accuracy of 0.727 in Pearson Correlation Coefficient (all instances) and an accuracy of 0.555 in Pearson Correlation Coefficient (0.5-1). For V-reg, the achieved accuracy scores are respectively 0.835 and 0.670.",
"pdf_parse": {
"paper_id": "S18-1015",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we put forward a system that competed at SemEval-2018 Task 1: \"Affect in Tweets\". Our system uses a simple yet effective ensemble method which combines several neural network components. We participate in two subtasks for English tweets: EI-reg and V-reg. For two subtasks, different combinations of neural components are examined. For EI-reg, our system achieves an accuracy of 0.727 in Pearson Correlation Coefficient (all instances) and an accuracy of 0.555 in Pearson Correlation Coefficient (0.5-1). For V-reg, the achieved accuracy scores are respectively 0.835 and 0.670.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Sentiment analysis is a research area in the field of natural language processing. It aims to detect the sentiment expressed by the author of some form of textual data and many deep learning approaches have been successfully exploited (Cambria, 2016) . The goal of SemEval-2018 Task 1 \"Affect in Tweets\" is to automatically determine the intensity of emotions and intensity of sentiment of the tweeters from their tweets (Mohammad et al., 2018) . All tweets fall into three languages: English, Arabic and Spanish. We participate in two subtasks for English tweets: EIreg and V-reg. For EI-reg, all English tweets are separated into four emotions, anger, fear, joy and sadness. Every emotion has train, dev and test datasets. This subtask determines the intensity which is a real-valued score between 0 and 1 of emotion that represents the mental state of the tweeter. The instances with higher scores correspond to a greater degree of emotion than instances with lower scores. For V-reg, all English tweets are divided into three datasets: train, dev and test datasets. It determines the intensity of sentiment or valence that best represents the mental state of the tweeter a real-valued score between 0 and 1. The instances with higher scores correspond to a greater degree of positive sentiment than instances with lower scores. Both the two subtasks are regression tasks.",
"cite_spans": [
{
"start": 235,
"end": 250,
"text": "(Cambria, 2016)",
"ref_id": "BIBREF4"
},
{
"start": 421,
"end": 444,
"text": "(Mohammad et al., 2018)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For these two subtasks, we have adopted separate ensemble method with existing neural network components (Brueckner and Schulter, 2014; Kim, 2014; Li and Qian, 2016; Yang et al., 2017) (see Figure 1 ). We use BiLSTM-CNN component, BiLSTM-Attention component and Deep BiLSTM-Attention component with different embeddings for simple ensemble. In these subtasks, our final model is just an average of scores provided by what we select from these single neural network components. Every emotion or valence employs different ensemble method, so there are several distinct ensemble methods in the two subtasks. Experimental results show that our proposed ensemble methods are simple yet effective.",
"cite_spans": [
{
"start": 105,
"end": 135,
"text": "(Brueckner and Schulter, 2014;",
"ref_id": "BIBREF3"
},
{
"start": 136,
"end": 146,
"text": "Kim, 2014;",
"ref_id": "BIBREF6"
},
{
"start": 147,
"end": 165,
"text": "Li and Qian, 2016;",
"ref_id": "BIBREF10"
},
{
"start": 166,
"end": 184,
"text": "Yang et al., 2017)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 190,
"end": 198,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of the paper is structured as follows. We provide details of the proposed ensemble method in Section 2. We present the experimental result of proposed methods in Section 3. Finally, a conclusion is drawn in section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose an simple ensemble method of different neural network components. We mainly introduce the implementation details of these components, including raw tweets preprocessing, lexicon features and embedding resources we use in these components, the architecture of these components and the best parameters of different single components. The parameters that can maximize the Pearson Correlation Coefficient between the predicted values and real values are chosen to be the best parameters. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "In general, tweet are not always syntactically wellstructured and the language used does not always strictly adhere to grammatical rules (Barbosa and Feng, 2010 ). So we need to preprocess raw tweets before feature extraction. Firstly, we perform a few preprocessing steps, such as remove # and retain the word itself, remove stop words with nltk.corpus. Then the tweets are transformed into lowercase. Finally, we utilize TweetTokenizer 1 to process the tweets.",
"cite_spans": [
{
"start": 137,
"end": 160,
"text": "(Barbosa and Feng, 2010",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "2.1"
},
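{
"text": "The following is a minimal Python sketch of the preprocessing pipeline described above. It assumes NLTK is installed and its stopwords corpus has been downloaded (nltk.download('stopwords')); the exact ordering of the steps and the stop-word handling are our assumptions, not the authors' released code.\n\nfrom nltk.corpus import stopwords\nfrom nltk.tokenize import TweetTokenizer\n\nSTOP_WORDS = set(stopwords.words('english'))\ntokenizer = TweetTokenizer()\n\ndef preprocess(tweet):\n    # Remove '#' but retain the hashtag word itself.\n    tweet = tweet.replace('#', '')\n    # Lowercase the tweet, then tokenize with nltk's TweetTokenizer.\n    tokens = tokenizer.tokenize(tweet.lower())\n    # Drop stop words using the nltk.corpus English word list.\n    return [t for t in tokens if t not in STOP_WORDS]\n\ntokens = preprocess(\"Feeling great today! #joy\")",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "2.1"
},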
{
"text": "Each tweet is represented as a concatenation of two different feature vectors, one is lexicon features and another is word embedding. In our system, each tweet is divided into words, every word is represented as a d + m dimension vector and thus each tweet is represented as l(d + m) matrix, where d is the dimension of word embedding and m is the dimension of lexicon features. Suppose each tweet has the same length, so l is the length 1 http://www.nltk.org/ of tweet. We utilize a variety of resources for feature extraction as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "2.2"
},
{
"text": "1. AFINN: Calculating positive and negative sentiment scores from the lexicon (Nielsen, 2011) .",
"cite_spans": [
{
"start": 78,
"end": 93,
"text": "(Nielsen, 2011)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "2.2"
},
{
"text": "2. NRC Affect Intensity Lexicon: The NRC Affect Intensity Lexicon is a list of English words and their associations with four basic emotions (anger, fear, sadness, joy) (Mohammad, 2017).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "2.2"
},
{
"text": "The NRC Emotion Lexicon is a list of English words and their associations with eight basic emotions (anger, fear, anticipation, trust, surprise, sadness, joy, and disgust) and two sentiments (negative and positive) (Mohammad and Turney, 2010) .",
"cite_spans": [
{
"start": 215,
"end": 242,
"text": "(Mohammad and Turney, 2010)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NRC Emotion Lexicon:",
"sec_num": "3."
},
{
"text": "Lexicon: Association of words with eight emotions (anger, fear, anticipation, trust, surprise, sadness, joy, and disgust) generated automatically from tweets with emotion-word hashtags (Mohammad, 2012).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NRC Hashtag Emotion",
"sec_num": "4."
},
{
"text": "5. NRC Emoticon Lexicon: Association of words with positive (negative) sentiment generated automatically from tweets with emoticons Mohammad et al., 2013; .",
"cite_spans": [
{
"start": 132,
"end": 154,
"text": "Mohammad et al., 2013;",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NRC Hashtag Emotion",
"sec_num": "4."
},
{
"text": "6. NRC Emoticon Affirmative Context Lexicon and NRC Emoticon Negated Context Lexicon: Association of words with positive (negative) sentiment in affirmative or negated contexts generated automatically from tweets with emoticons Mohammad et al., 2013; .",
"cite_spans": [
{
"start": 228,
"end": 250,
"text": "Mohammad et al., 2013;",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NRC Hashtag Emotion",
"sec_num": "4."
},
{
"text": "Lexicon and NRC Hashtag Negated Context Sentiment Lexicon: Association of words with positive (negative) sentiment in affirmative or negated contexts generated automatically from tweets with sentiment-word hashtags Mohammad et al., 2013; .",
"cite_spans": [
{
"start": 215,
"end": 237,
"text": "Mohammad et al., 2013;",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NRC Hashtag Affirmative Context Sentiment",
"sec_num": "7."
},
{
"text": "8. NRC Hashtag Sentiment Lexicon: Association of words with positive (negative) sentiment generated automatically from tweets with sentiment-word hashtags Mohammad et al., 2013; . 9. Emoji: This is a manual classification of the dictionary, in which each emoji has a corresponding polarity value.",
"cite_spans": [
{
"start": 155,
"end": 177,
"text": "Mohammad et al., 2013;",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NRC Hashtag Affirmative Context Sentiment",
"sec_num": "7."
},
{
"text": "10. Sentiwordnet: Sentiwordnet is a lexical resource explicitly devised for supporting sentiment classification and opinion mining applications (Baccianella et al., 2010) , through the wordnet entry in the emotional classification, and marked each entry belongs to the positive and negative categories weight size. ",
"cite_spans": [
{
"start": 144,
"end": 170,
"text": "(Baccianella et al., 2010)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NRC Hashtag Affirmative Context Sentiment",
"sec_num": "7."
},
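{
"text": "The following is a minimal numpy sketch of the tweet representation described in Section 2.2: each word's embedding (dimension d) is concatenated with its lexicon feature vector (dimension m), and tweets are zero-padded to length l. The lookup dictionaries and the concrete dimensions here are hypothetical placeholders, not the authors' actual resources.\n\nimport numpy as np\n\nd, m, l = 200, 10, 30  # embedding dim, lexicon-feature dim, padded tweet length\n\ndef tweet_matrix(tokens, embeddings, lexicon_features):\n    # embeddings: dict word -> (d,) vector; lexicon_features: dict word -> (m,) vector.\n    rows = []\n    for word in tokens[:l]:\n        emb = embeddings.get(word, np.zeros(d))        # word embedding part\n        lex = lexicon_features.get(word, np.zeros(m))  # lexicon feature part\n        rows.append(np.concatenate([emb, lex]))        # one (d + m,) row per word\n    while len(rows) < l:                               # zero-pad short tweets\n        rows.append(np.zeros(d + m))\n    return np.stack(rows)                              # shape: (l, d + m)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "2.2"
},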
{
"text": "The BiLSTM with CNN first transform tweets into text matrices, the BiLSTM is applied to these matrices to build new text matrices, CNN is applied to the output of the BiLSTM to obtain text vectors for the prediction of emotional intensity. The BiL-STM with CNN achieves a rather good result on the task of emotional analysis (He et al., 2017 ). so we choose it for our task.",
"cite_spans": [
{
"start": 325,
"end": 341,
"text": "(He et al., 2017",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional LSTM with CNN",
"sec_num": "2.3.2"
},
{
"text": "Model Architecture: Embedding vectors are fed into a BiLSTM network followed by a CNN layer. The CNN layer consists of one dimensional convolutional layer and pooling layer where the number of filters is 256, the window size of the filter is 3, and the activation function is Relu. The input and output shape of convolutional layer are both 3D tensor. The output of the CNN layer is flattened after max-pooling operation. After the Flatten layer, two dense layers are stacked and the activation functions are respectively configured as Relu and Sigmoid. Also dropout (Srivastava et al., 2014) is utilized to avoid potential overfitting, it is used between two dense layers. The reason why we select Relu is to prevent the vanishing gradient problem and accelerate the calculation. Since the task is a regression problem, we put a dense projection with sigmoid activation to obtain an intensity value between 0 and 1.",
"cite_spans": [
{
"start": 567,
"end": 592,
"text": "(Srivastava et al., 2014)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional LSTM with CNN",
"sec_num": "2.3.2"
},
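{
"text": "The following is a minimal Keras sketch of the BiLSTM-CNN component described above. The hyperparameters stated in the text are used (256 filters, window size 3, ReLU, dropout between the dense layers); the LSTM units, dense width, pooling size and input dimensions are our assumptions.\n\nfrom tensorflow.keras import layers, models\n\nl, feat_dim = 30, 210  # padded tweet length and d + m feature dimension (assumed)\n\nmodel = models.Sequential([\n    layers.Input(shape=(l, feat_dim)),\n    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),\n    layers.Conv1D(filters=256, kernel_size=3, activation='relu'),\n    layers.MaxPooling1D(pool_size=2),\n    layers.Flatten(),\n    layers.Dense(64, activation='relu'),\n    layers.Dropout(0.5),                    # dropout between the two dense layers\n    layers.Dense(1, activation='sigmoid'),  # intensity value in [0, 1]\n])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional LSTM with CNN",
"sec_num": "2.3.2"
},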
{
"text": "Model Training: The network parameters are learned by minimizing the mean squared error (MSE) between the real and predicted values of emotion intensity or valence intensity. We optimize this loss function via Adam that is an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments (Kingma and Ba, 2014). Batch size and training epochs may be different for different emotions and valence. To avoid overfitting issues, we use dropout in this model. Finally, we apply these three parameters for system tuning. In addition, we try various optimization algorithms with the same param- eters, such as SGD, RMSprop, Adagrad, Adam and Adamax, and find that Adam works best. So we fix the optimization algorithm with Adam (Kingma and Ba, 2014) and tune the parameters, the best configurations for EI-reg and V-reg are respectively given in Tables 2.3.1 and 2.3.1, where BS is batch size, Dp is dropout.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional LSTM with CNN",
"sec_num": "2.3.2"
},
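{
"text": "Continuing the Keras sketch above, the training setup described in the text is MSE loss with the Adam optimizer. The batch size and number of epochs below are illustrative only (the best per-emotion settings are in Tables 1 and 2), and X_train, y_train, X_dev and y_dev are hypothetical arrays holding the tweet matrices and gold intensities.\n\n# MSE loss with the Adam optimizer, as described in the text.\nmodel.compile(optimizer='adam', loss='mean_squared_error')\nmodel.fit(X_train, y_train,\n          batch_size=32, epochs=5,\n          validation_data=(X_dev, y_dev))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional LSTM with CNN",
"sec_num": "2.3.2"
},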
{
"text": "Bidirectional LSTM with Attention achieves a good result on the SemEval-2017 Task 4 \"Sentiment Analysis in Twitter\" (Baziotis et al., 2017 ), so we exploit Bidirectional LSTM with Attention model and Deep Bidirectional LSTM with Attention model for our tasks. Model Architecture: For Bidirectional LSTM with attention model, embedding vectors are fed into a BiLSTM network followed by an attention layer (Yang et al., 2017) . Not all words contribute equally to the expression of sentiment in a tweet, so we use an attention layer to find the importance of each word in tweet. After the attention layer, it is consistent with Bidirectional LSTM with CNN model. The difference between the Bidirectional LSTM with attention model and its deep version is that, we use two BiLSTM layers followed by an attention layer in the deep version.",
"cite_spans": [
{
"start": 116,
"end": 138,
"text": "(Baziotis et al., 2017",
"ref_id": "BIBREF2"
},
{
"start": 404,
"end": 423,
"text": "(Yang et al., 2017)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional LSTM with Attention",
"sec_num": "2.3.3"
},
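{
"text": "The following is a minimal TensorFlow sketch of a word-level attention layer over BiLSTM outputs, in the spirit of Yang et al. (2017); the paper does not give the exact formulation used, so this particular parameterization is our assumption. It could replace the Conv1D/pooling block in the earlier BiLSTM-CNN sketch.\n\nimport tensorflow as tf\nfrom tensorflow.keras import layers\n\nclass WordAttention(layers.Layer):\n    def build(self, input_shape):\n        dim = int(input_shape[-1])\n        self.W = self.add_weight(name='W', shape=(dim, dim))\n        self.b = self.add_weight(name='b', shape=(dim,))\n        self.u = self.add_weight(name='u', shape=(dim, 1))\n\n    def call(self, h):\n        # h: (batch, l, dim) BiLSTM hidden states.\n        u_it = tf.tanh(tf.matmul(h, self.W) + self.b)            # word representations\n        alpha = tf.nn.softmax(tf.matmul(u_it, self.u), axis=1)   # per-word importance\n        return tf.reduce_sum(alpha * h, axis=1)                  # weighted sum: (batch, dim)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional LSTM with Attention",
"sec_num": "2.3.3"
},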
{
"text": "Model Training: We use the same method to learn the network parameters. In EI-reg, we use the same batch size, training epochs and dropout to train the Deep BiLSTM Attention model with different pre-training word embeddings in every emotion, but in V-reg, batch size, training epochs and dropout are different in Deep BiLSTM Attention model with different pre-training word embeddings. In these models, we also use dropout.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional LSTM with Attention",
"sec_num": "2.3.3"
},
{
"text": "The best parameters of EI-reg for these models are given in Table 2 .3.1 and V-reg's best parameters are given in Table 2 .3.1.",
"cite_spans": [],
"ref_spans": [
{
"start": 60,
"end": 67,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 114,
"end": 121,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Bidirectional LSTM with Attention",
"sec_num": "2.3.3"
},
{
"text": "Currently, ensembling is a widely used strategy which combines multiple single components to improve overall performance, there are many ensemble methods that have been proposed, such as, Voting, Blending, Bagging, Boosting, etc 5 . In this system, due to time constraint, we choose a simple average of the scores provided by different components, as each single component can predict emotional intensity or valence intensity. It can be defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Methods",
"sec_num": "2.4"
},
{
"text": "P rediction intensity = n i=1 model i n (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Methods",
"sec_num": "2.4"
},
{
"text": "where n is the number of neural components. Model i represents the prediction results of i-th component. Suppose three components are exploited to predict the intensity of anger, and three prediction values of a same tweet 0.76, 0.72 and 0.7 are suggested, then the final result of this tweet will be (0.76 + 0.72 + 0.74)/3 = 0.74. For experiments, we use five datasets from two different subtasks, These datasets, \"EI-reg-Enanger (anger)\", \"EI-reg-En-joy (joy)\", \"EI-reg-Enfear (fear)\", \"EI-reg-En-sadness (sadness)\" and \"2018-Valence-reg-En (valence)\" are downloaded from SemEval-2018 Task 1 \"Affect in Tweets\" 6 . As for the EI-reg task dataset format, each tweet consists of the id, the tweet, the emotion of the tweet, the emotion intensity and for the V-reg task, each tweet consists of the id, the tweet, the sentiment of the tweet and the sentiment intensity. All datasets have been divided into train set, dev set and test set. Test set's gold labels are given only after the evaluation period. Statistics of the datasets are shown in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 1044,
"end": 1051,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Ensemble Methods",
"sec_num": "2.4"
},
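{
"text": "The following is a minimal numpy sketch of Equation (1): the final intensity is the plain average of the selected components' predictions, reproducing the worked example from the text.\n\nimport numpy as np\n\ndef ensemble_predict(component_preds):\n    # component_preds: list of n prediction arrays, one per component.\n    return np.mean(np.stack(component_preds), axis=0)\n\n# Worked example from the text: three components predicting anger intensity.\nfinal = ensemble_predict([np.array([0.76]), np.array([0.72]), np.array([0.74])])\nprint(final)  # [0.74]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Methods",
"sec_num": "2.4"
},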
{
"text": "To measure the performance of selected methods, two submetrics of Pearson Correlation Coefficient (PCC) are used. PCC (all instances) is Pearson correlation for a subset of test data that includes all tweets. The value varies between -1 and 1. PCC (0.5-1) is the Pearson correlation for a subset of test data that includes only those tweets with intensity score greater or equal to 0.5. For both metrics, a larger value indicate a better prediction accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Methods",
"sec_num": "2.4"
},
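{
"text": "The following is a minimal sketch of the two evaluation metrics using scipy's pearsonr; the thresholding on the gold score for PCC (0.5-1) follows the description above.\n\nimport numpy as np\nfrom scipy.stats import pearsonr\n\ndef pcc_metrics(gold, pred):\n    gold, pred = np.asarray(gold), np.asarray(pred)\n    pcc_all = pearsonr(gold, pred)[0]                # PCC (all instances)\n    mask = gold >= 0.5                               # tweets with gold score >= 0.5\n    pcc_high = pearsonr(gold[mask], pred[mask])[0]   # PCC (0.5-1)\n    return pcc_all, pcc_high",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Methods",
"sec_num": "2.4"
},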
{
"text": "For each dataset, we use dev set to select our ensemble methods. Firstly we run these six components on all dev datasets. Then, combine these results of different components, different combinations of components lead to different results on dev set. Finally, we select the combination with a higher score for testing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Methods",
"sec_num": "2.4"
},
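{
"text": "The following is a minimal sketch of the dev-set selection procedure described above, reusing ensemble_predict and pearsonr from the earlier sketches: every non-empty subset of the six components is averaged, and the subset with the highest dev-set PCC is kept. Exhaustive search is cheap here, since there are only 2^6 - 1 = 63 subsets.\n\nfrom itertools import combinations\n\ndef select_ensemble(dev_preds, dev_gold):\n    # dev_preds: dict mapping component name -> dev-set prediction array.\n    best_subset, best_score = None, -1.0\n    names = list(dev_preds)\n    for r in range(1, len(names) + 1):\n        for subset in combinations(names, r):\n            avg = ensemble_predict([dev_preds[n] for n in subset])\n            score = pearsonr(dev_gold, avg)[0]\n            if score > best_score:\n                best_subset, best_score = subset, score\n    return best_subset, best_score",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Methods",
"sec_num": "2.4"
},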
{
"text": "Our system is implemented on Keras with a Tensorflow backend 7 . We present the result of PCC (all instances) and PCC (0.5-1) for each emotion and valence on the test data, shown in Tables 3 and 3. For simplicity, we denote WT, GN, GL and GT for the word vectors of word2vectwitter-model, GoogleNews-vectors-negative300, glove.840B.300d and glove.twitter.27B.200d. We compare the results of our single components, official baseline and our ensemble system. Every emotion and valence adopts different ensemble methods, the symbol '-' means that the component is not used in the ensemble method in this emotion or valence. For example, we only use BiL-STM Attention+GT, Deep BiLSTM Attention+WT and Deep BiLSTM Attention+GN these three components for ensemble on anger dataset. The reason why we don't use all the six components for ensemble is that ensemble does not always have a good effect, a same component can have different effects on different datasets, either good or bad. The official result for EI-reg, our average PCC reaches 0.727 in all instances and 0.555 in 0.5-1 (both ranked 10 out of 48 participants). For V-reg, the result is 0.835 in all instances (ranked 7 out of 38) and 0.670 in 0.5-1 (ranked 6 out of 38). The average result of baseline for EI-reg is 0.520 and 0.396, for V-reg, the result is 0.585 and 0.449. These results demonstrate that the ensemble approach achieves important improvement in performance across all the emotions and valence, and gains the best performance for Anger.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Methods",
"sec_num": "2.4"
},
{
"text": "We have proposed a simple yet effective ensemble method which integrates various neural components to perform the sentiment or emotion analysis for the tweet. Experimental results reflect that our method is effective in the prediction tasks of emotional intensity and sentimental intensity. Some other useful findings can be drawn from the experimental results: a) The model of integration for each emotion is different; b) As for lexicon features and word embedding, it is important for emotion or sentiment analysis; c) ensemble is not al-ways valid. Also, we have tried data augmentation considering insufficient training data, however the effect is not a good.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Works",
"sec_num": "4"
},
{
"text": "As for future works, although our ensemble method has achieved good results, we would want to examine the multi-task deep learning approach on these tasks, by which it would predict the different emotional intensity at the same time, and improve the generalization effect of the prediction model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Works",
"sec_num": "4"
},
{
"text": "http://www.spark.tc/building-a-word2vec-model-withtwitter-data/ 3 https://github.com/mmihaltz/word2vec-GoogleNewsvectors 4 https://nlp.stanford.edu/projects/glove/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://mlwave.com/kaggle-ensembling-guide/ 6 https://competitions.codalab.org/competitions/17751",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work is partially supported by the National Natural Science Foundation of China (61562090) and the Graduate Research Innovation Fund Project of Yunnan University (YDY17113).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgment",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Sentiwordnet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining",
"authors": [
{
"first": "Stefano",
"middle": [],
"last": "Baccianella",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Esuli",
"suffix": ""
},
{
"first": "Fabrizio",
"middle": [],
"last": "Sebastiani",
"suffix": ""
}
],
"year": 2010,
"venue": "International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "83--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefano Baccianella, Andrea Esuli, and Fabrizio Sebas- tiani. 2010. Sentiwordnet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining. In International Conference on Language Resources and Evaluation, Lrec 2010, 17-23 May 2010, Val- letta, Malta, pages 83-90.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Robust sentiment detection on twitter from biased and noisy data",
"authors": [
{
"first": "Luciano",
"middle": [],
"last": "Barbosa",
"suffix": ""
},
{
"first": "Junlan",
"middle": [],
"last": "Feng",
"suffix": ""
}
],
"year": 2010,
"venue": "International Conference on Computational Linguistics: Posters",
"volume": "",
"issue": "",
"pages": "36--44",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luciano Barbosa and Junlan Feng. 2010. Robust sen- timent detection on twitter from biased and noisy data. In International Conference on Computational Linguistics: Posters, pages 36-44.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Datastories at semeval-2017 task 4: Deep LSTM with attention for message-level and topic-based sentiment analysis",
"authors": [
{
"first": "Christos",
"middle": [],
"last": "Baziotis",
"suffix": ""
},
{
"first": "Nikos",
"middle": [],
"last": "Pelekis",
"suffix": ""
},
{
"first": "Christos",
"middle": [],
"last": "Doulkeridis",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2016",
"volume": "",
"issue": "",
"pages": "747--754",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christos Baziotis, Nikos Pelekis, and Christos Doulk- eridis. 2017. Datastories at semeval-2017 task 4: Deep LSTM with attention for message-level and topic-based sentiment analysis. In Proceedings of the 10th International Workshop on Semantic Eval- uation, SemEval@NAACL-HLT 2016, San Diego, CA, USA, June 16-17, 2016, pages 747-754.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Social signal classification using deep blstm recurrent neural networks",
"authors": [
{
"first": "Raymond",
"middle": [],
"last": "Brueckner",
"suffix": ""
},
{
"first": "Bjorn",
"middle": [],
"last": "Schulter",
"suffix": ""
}
],
"year": 2014,
"venue": "IEEE International Conference on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "4823--4827",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raymond Brueckner and Bjorn Schulter. 2014. So- cial signal classification using deep blstm recurrent neural networks. In IEEE International Conference on Acoustics, Speech and Signal Processing, pages 4823-4827.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Affective computing and sentiment analysis",
"authors": [
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
}
],
"year": 2016,
"venue": "IEEE Intelligent Systems",
"volume": "31",
"issue": "2",
"pages": "102--107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik Cambria. 2016. Affective computing and senti- ment analysis. IEEE Intelligent Systems, 31(2):102- 107.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "YZU-NLP at emoint-2017: Determining emotion intensity using a bi-directional LSTM-CNN model",
"authors": [
{
"first": "Yuanye",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Liang-Chih",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "K",
"middle": [
"Robert"
],
"last": "Lai",
"suffix": ""
},
{
"first": "Weiyi",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, WASSA@EMNLP 2017",
"volume": "",
"issue": "",
"pages": "238--242",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuanye He, Liang-Chih Yu, K. Robert Lai, and Weiyi Liu. 2017. YZU-NLP at emoint-2017: Determin- ing emotion intensity using a bi-directional LSTM- CNN model. In Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Senti- ment and Social Media Analysis, WASSA@EMNLP 2017, Copenhagen, Denmark, September 8, 2017, pages 238-242.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Conference on Empirical Methods in Natural Language Processing",
"authors": [],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "1746--1751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Conference on Empirical Methods in Natural Lan- guage Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1746-1751.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P."
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Sentiment analysis of short informal texts",
"authors": [
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Saif M",
"middle": [],
"last": "Mohammad",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Artificial Intelligence Research",
"volume": "50",
"issue": "",
"pages": "723--762",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Svetlana Kiritchenko, Xiaodan Zhu, and Saif M Mo- hammad. 2014. Sentiment analysis of short in- formal texts. Journal of Artificial Intelligence Re- search, 50:723-762.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Text sentiment analysis based on long short-term memory",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jiang",
"middle": [],
"last": "Qian",
"suffix": ""
}
],
"year": 2016,
"venue": "IEEE International Conference on Computer Communication and the Internet",
"volume": "",
"issue": "",
"pages": "471--475",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Li and Jiang Qian. 2016. Text sentiment analysis based on long short-term memory. In IEEE Interna- tional Conference on Computer Communication and the Internet, pages 471-475.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Nrc-canada: Building the state-of-theart in sentiment analysis of tweets",
"authors": [
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventh International Workshop on Semantic Evaluation",
"volume": "2",
"issue": "",
"pages": "321--327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif Mohammad, Svetlana Kiritchenko, and Xiaodan Zhu. 2013. Nrc-canada: Building the state-of-the- art in sentiment analysis of tweets. In Second Joint Conference on Lexical and Computational Seman- tics (* SEM), Volume 2: Proceedings of the Sev- enth International Workshop on Semantic Evalua- tion (SemEval 2013), volume 2, pages 321-327.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "# emotional tweets",
"authors": [
{
"first": "Saif",
"middle": [
"M."
],
"last": "Mohammad",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics",
"volume": "1",
"issue": "",
"pages": "246--255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif M Mohammad. 2012. # emotional tweets. In Pro- ceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceed- ings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, pages 246-255. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Semeval-2018 Task 1: Affect in tweets",
"authors": [
{
"first": "Saif",
"middle": [
"M."
],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Felipe",
"middle": [],
"last": "Bravo-Marquez",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Salameh",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of International Workshop on Semantic Evaluation (SemEval-2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif M. Mohammad, Felipe Bravo-Marquez, Mo- hammad Salameh, and Svetlana Kiritchenko. 2018. Semeval-2018 Task 1: Affect in tweets. In Proceed- ings of International Workshop on Semantic Evalu- ation (SemEval-2018), New Orleans, LA, USA.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Emotions evoked by common words and phrases: using mechanical turk to create an emotion lexicon",
"authors": [
{
"first": "Saif",
"middle": [
"M."
],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"D."
],
"last": "Turney",
"suffix": ""
}
],
"year": 2010,
"venue": "NAACL Hlt 2010 Workshop on Computational Approaches To Analysis and Generation of Emotion in Text",
"volume": "",
"issue": "",
"pages": "26--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif M. Mohammad and Peter D. Turney. 2010. Emo- tions evoked by common words and phrases: using mechanical turk to create an emotion lexicon. In NAACL Hlt 2010 Workshop on Computational Ap- proaches To Analysis and Generation of Emotion in Text, pages 26-34.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A new anew: Evaluation of a word list for sentiment analysis in microblogs",
"authors": [
{
"first": "Finn\u00e5rup",
"middle": [],
"last": "Nielsen",
"suffix": ""
}
],
"year": 2011,
"venue": "Workshop on'Making Sense of Microposts: Big things come in small packages",
"volume": "",
"issue": "",
"pages": "93--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Finn\u00c5rup Nielsen. 2011. A new anew: Evaluation of a word list for sentiment analysis in microblogs. In Workshop on'Making Sense of Microposts: Big things come in small packages, pages 93-98.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Conference on Empirical Meth- ods in Natural Language Processing, pages 1532- 1543.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Dropout: a simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Machine Learning Research",
"volume": "15",
"issue": "1",
"pages": "1929--1958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Re- search, 15(1):1929-1958.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Hierarchical attention networks for document classification",
"authors": [
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Diyi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Smola",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2017,
"venue": "Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1480--1489",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2017. Hierarchical attention networks for document classification. In Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480-1489.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Nrc-canada-2014: Recent improvements in the sentiment analysis of tweets",
"authors": [
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
}
],
"year": 2014,
"venue": "International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "443--447",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaodan Zhu, Svetlana Kiritchenko, and Saif Moham- mad. 2014. Nrc-canada-2014: Recent improve- ments in the sentiment analysis of tweets. In Inter- national Workshop on Semantic Evaluation, pages 443-447.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "The architecture of our system.",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF0": {
"text": ", and the resulting representations showcase interesting linear substructures of the word vector space. The embedding dimension used in our system is 300.4. glove.twitter.27B.200d 4 : This word embed-ding is trained on 2 billion tweets from twitter. It is similar to glove.840B.300d, but the embedding dimension is 200.",
"type_str": "table",
"content": "<table><tr><td>a global corpus</td></tr><tr><td>2.3 Neural Networks</td></tr><tr><td>2.3.1 Embeddings</td></tr><tr><td>The final model combines three neural net-</td></tr><tr><td>work components as BiLSTM-CNN, BiLSTM-</td></tr><tr><td>Attention, and Deep BiLSTM-Attention. Towards</td></tr><tr><td>BiLSTM-CNN and BiLSTM-Attention, we use</td></tr><tr><td>glove.twitter.27B.200d which contains pre-trained</td></tr><tr><td>word vectors with Glove algorithm (Penning-</td></tr><tr><td>ton et al., 2014). For Deep BiLSTM-Attention,</td></tr><tr><td>different pre-trained word vectors are used,</td></tr><tr><td>such as word2vec-twitter-model, GoogleNews-</td></tr><tr><td>vectors-negative300, glove.twitter.27B.200d and</td></tr><tr><td>glove.840B.300d.</td></tr><tr><td>1. word2vec-twitter-model 2 : word2vec model</td></tr><tr><td>(Mikolov et al., 2013) is a NLP tool launched</td></tr><tr><td>by Google in 2013. It features the quantifica-</td></tr><tr><td>tion of all words so that words can be quan-</td></tr><tr><td>tified to measure the relationship between</td></tr><tr><td>them. word2vec-twitter-model is trained on</td></tr><tr><td>tweets and the embedding dimension used in</td></tr><tr><td>our system is 400.</td></tr><tr><td>2. GoogleNews-vectors-negative300 3 : Google-</td></tr><tr><td>News vectors is trained on Google News cor-</td></tr><tr><td>pus. It resembles word2vec-twitter-model</td></tr><tr><td>and the embedding dimension is 300.</td></tr><tr><td>3. glove.840B.300d 4 : Glove is an unsupervised</td></tr><tr><td>learning algorithm for obtaining vector rep-</td></tr><tr><td>resentations for words. Training is conducted</td></tr><tr><td>on aggregated co-occurrences of words from</td></tr></table>",
"html": null,
"num": null
},
"TABREF1": {
"text": "The best parameters of EI-reg.",
"type_str": "table",
"content": "<table><tr><td>EI-reg BiLSTM CNN+GT BiLSTM Attention+GT Deep BiLSTM Attention 32 BS Epochs Dp BS Epochs Dp BS Epochs Dp BS Epochs Dp Anger Fear Joy Sadness 16 6 0.5 32 2 0.5 8 4 0.5 32 5 0.5 32 3 0.5 32 3 0.5 8 7 0.5 32 4 0.5 2 0.3 8 7 0.3 16 9 0.1 16 5 0.6</td></tr><tr><td>V-reg BiLSTM CNN+GT BiLSTM Attention+GT Deep BiLSTM Attention+WT 16 BS Epochs Dp Valence 8 5 0.5 8 10 0.6 8 0.5 Deep BiLSTM Attention+GN 32 10 0.5 Deep BiLSTM Attention+GL 8 5 0.2 Deep BiLSTM Attention+GT 16 8 0.2</td></tr></table>",
"html": null,
"num": null
},
"TABREF2": {
"text": "The best parameters of V-reg.",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF4": {
"text": "Statistics of the datasets.",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF6": {
"text": "Performance comparisons of models in different emotions, where the best values are marked in bold.",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF7": {
"text": "Performance comparisons of models in valence, where the best values are marked in bold.",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
}
}
}
}