{
"paper_id": "S18-1002",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:43:39.477059Z"
},
"title": "SeerNet at SemEval-2018 Task 1: Domain Adaptation for Affect in Tweets",
"authors": [
{
"first": "Venkatesh",
"middle": [],
"last": "Duppada",
"suffix": "",
"affiliation": {},
"email": "venkatesh.duppada@seernet.io"
},
{
"first": "Royal",
"middle": [],
"last": "Jain",
"suffix": "",
"affiliation": {},
"email": "royal.jain@seernet.io"
},
{
"first": "Sushant",
"middle": [],
"last": "Hiray",
"suffix": "",
"affiliation": {},
"email": "sushant.hiray@seernet.io"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The paper describes the best performing system for the SemEval-2018 Affect in Tweets (English) sub-tasks. The system focuses on the ordinal classification and regression sub-tasks for valence and emotion. For ordinal classification valence is classified into 7 different classes ranging from-3 to 3 whereas emotion is classified into 4 different classes 0 to 3 separately for each emotion namely anger, fear, joy and sadness. The regression sub-tasks estimate the intensity of valence and each emotion. The system performs domain adaptation of 4 different models and creates an ensemble to give the final prediction. The proposed system achieved 1 st position out of 75 teams which participated in the fore-mentioned subtasks. We outperform the baseline model by margins ranging from 49.2% to 76.4%, thus, pushing the state-of-the-art significantly.",
"pdf_parse": {
"paper_id": "S18-1002",
"_pdf_hash": "",
"abstract": [
{
"text": "The paper describes the best performing system for the SemEval-2018 Affect in Tweets (English) sub-tasks. The system focuses on the ordinal classification and regression sub-tasks for valence and emotion. For ordinal classification valence is classified into 7 different classes ranging from-3 to 3 whereas emotion is classified into 4 different classes 0 to 3 separately for each emotion namely anger, fear, joy and sadness. The regression sub-tasks estimate the intensity of valence and each emotion. The system performs domain adaptation of 4 different models and creates an ensemble to give the final prediction. The proposed system achieved 1 st position out of 75 teams which participated in the fore-mentioned subtasks. We outperform the baseline model by margins ranging from 49.2% to 76.4%, thus, pushing the state-of-the-art significantly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Twitter is one of the most popular micro-blogging platforms that has attracted over 300M daily users 1 with over 500M 2 tweets sent every day. Tweet data has attracted NLP researchers because of the ease of access to large data-source of people expressing themselves online. Tweets are micro-texts comprising of emoticons, hashtags as well as location data, making them feature rich for performing various kinds of analysis. Tweets provide an interesting challenge as users tend to write grammatically incorrect and use informal and slang words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In domain of natural language processing, emotion recognition is the task of associating words, phrases or documents with emotions from predefined using psychological models. The classification of emotions has mainly been researched from two fundamental viewpoints. (Ekman, 1992) and (Plutchik, 2001) proposed that emotions are discrete with each emotion being a distinct entity. On the contrary, (Mehrabian, 1980) and (Russell, 1980) propose that emotions can be categorized into dimensional groupings.",
"cite_spans": [
{
"start": 266,
"end": 279,
"text": "(Ekman, 1992)",
"ref_id": "BIBREF2"
},
{
"start": 284,
"end": 300,
"text": "(Plutchik, 2001)",
"ref_id": "BIBREF16"
},
{
"start": 397,
"end": 414,
"text": "(Mehrabian, 1980)",
"ref_id": "BIBREF7"
},
{
"start": 419,
"end": 434,
"text": "(Russell, 1980)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Affect in Tweets (Mohammad et al., 2018)shared task in SemEval-2018 focuses on extracting affect from tweets confirming to both variants of the emotion models, extracting valence (dimensional) and emotion (discrete). Previous version of the task (Mohammad and Bravo-Marquez, 2017) focused on estimating the emotion intensity in tweets. We participated in 4 sub-tasks of Affect in Tweets, all dealing with English tweets. The sub-tasks were: EI-oc: Ordinal classification of emotion intensity of 4 different emotions (anger, joy, sadness, fear), EI-reg: to determine the intensity of emotions (anger, joy, sadness, fear) into a real-valued scale of 0-1, V-oc: Ordinal classification of valence into one of 7 ordinal classes [-3, 3 ], V-reg: determine the intensity of valence on the scale of 0-1.",
"cite_spans": [
{
"start": 723,
"end": 729,
"text": "[-3, 3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Prior work in extracting Valence, Arousal, Dominance (VAD) from text primarily relied on using and extending lexicons (Bestgen and Vincze, 2012) (Turney et al., 2011) . Recent advancements in deep learning have been applied in detecting sentiments from tweets (Tang et al., 2014) , (Liu et al., 2012) , (Mohammad et al., 2013) .",
"cite_spans": [
{
"start": 118,
"end": 130,
"text": "(Bestgen and",
"ref_id": "BIBREF0"
},
{
"start": 131,
"end": 166,
"text": "Vincze, 2012) (Turney et al., 2011)",
"ref_id": null
},
{
"start": 260,
"end": 279,
"text": "(Tang et al., 2014)",
"ref_id": "BIBREF21"
},
{
"start": 282,
"end": 300,
"text": "(Liu et al., 2012)",
"ref_id": "BIBREF6"
},
{
"start": 303,
"end": 326,
"text": "(Mohammad et al., 2013)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we use various state-of-the-art machine learning models and perform domain adaptation (Pan and Yang, 2010) from their source task to the target task. We use multi-view ensemble learning technique (Kumar and Minz, 2016) to produce the optimal feature-set partitioning for the classifier. Finally, results from multiple such classifiers are stacked together to create an ensemble (Polikar, 2012) .",
"cite_spans": [
{
"start": 210,
"end": 232,
"text": "(Kumar and Minz, 2016)",
"ref_id": "BIBREF5"
},
{
"start": 392,
"end": 407,
"text": "(Polikar, 2012)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we describe our approach and experiments to solve this problem. The rest of the paper is laid out as follows: Section 2 describes the system architecture, Section 3 reports results and inference from different experiments. Finally we conclude in Section 4 along with a discussion about future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 System Description 2.1 Pipeline Figure 1 details the System Architecture. We now describe how all the different modules are tied together. The input raw tweet is pre-processed as described in Section 2.2. The processed tweet is passed through all the feature extractors described in Section 2.3. At the end of this step, we extract 5 different feature vectors corresponding to each tweet. Each feature vector is passed through the model zoo where classifiers with different hyper parameters are tuned. The models are described in Section 2.4. For each vector, the results of top-2 performing models (based on cross-validation) are retained. At the end of this step, we've 10 different results corresponding to each tweet. All these results are ensembled together via stacking as described in Section 2.4.3. Finally, the output from the ensembler is the output returned by the system.",
"cite_spans": [],
"ref_spans": [
{
"start": 34,
"end": 42,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The pre-processing step modifies the raw tweets to prepare for feature extraction. Tweets are pre-processed using tweettokenize 3 tool. Twitter specific keywords are replaced with tokens, namely, USERNAME, PHONENUMBER, URLs, timestamps. All characters are converted to lowercase. A contiguous sequence of emojis is first split into individual emojis. We then replace an emoji with its description. The descriptions were scraped from EmojiPedia 4 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing",
"sec_num": "2.2"
},
{
"text": "As mentioned in Section 1, we perform transfer learning from various state-of-the-art deep learning techniques. We will go through the following sub-sections to understand these models in detail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "2.3"
},
{
"text": "DeepMoji (Felbo et al., 2017) performs distant supervision on a very large dataset (1246 million tweets) comprising of noisy labels (emojis). Deep-Moji was able to obtain state-of-the-art results in various downstream tasks using transfer learning. This makes it an ideal candidate for domain adaptation into related target tasks. We extract 2 different feature sets by extracting the embeddings from the softmax and the attention layer from the pretrained DeepMoji model. The vector from softmax layer is of dimension 64 and the vector from attention layer is of dimension 2304.",
"cite_spans": [
{
"start": 9,
"end": 29,
"text": "(Felbo et al., 2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DeepMoji",
"sec_num": "2.3.1"
},
{
"text": "Skip-Thought vectors (Kiros et al., 2015) is an offthe-shelf encoder that can produce highly generic sentence representations. Since tweets are restricted by character limit, skip-thought vectors can create a good semantic representation. This representation is then passed to the classifier. The representation is of dimension 4800.",
"cite_spans": [
{
"start": 21,
"end": 41,
"text": "(Kiros et al., 2015)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Thought Vectors",
"sec_num": "2.3.2"
},
{
"text": "(Radford et al., 2017) developed an unsupervised system which learned an excellent representation of sentiment. The original model was trained to generate amazon reviews, this makes the sentiment neuron an ideal candidate for transfer learning. The representation extracted from Sentiment Neuron is of size 4096.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Sentiment Neuron",
"sec_num": "2.3.3"
},
{
"text": "Apart from all the pre-trained embeddings, we choose to also include various lexical features bundled through the EmoInt package 5 (Duppada and Hiray, 2017) The lexical features include AFINN (Nielsen, 2011) , NRC Affect Intensities (Mohammad, 2017), NRC-Word-Affect Emotion Lexicon (Mohammad and Turney, 2010), NRC Hashtag Sentiment Lexicon and Sentiment140 Lexicon (Mohammad et al., 2013) . The final feature vector is the concatenation of all the individual features. This feature vector is of size (141, 1).",
"cite_spans": [
{
"start": 192,
"end": 207,
"text": "(Nielsen, 2011)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 367,
"end": 390,
"text": "(Mohammad et al., 2013)",
"ref_id": null
}
],
"eq_spans": [],
"section": "EmoInt",
"sec_num": "2.3.4"
},
{
"text": "This gives us five different feature vector variants. All of these feature vectors are passed individually to the underlying models. The pipeline is explained in detail in Section 2.1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EmoInt",
"sec_num": "2.3.4"
},
{
"text": "We participated in 4 sub-tasks, namely, EI-oc, EIreg, V-oc, V-reg. Two of the sub-tasks are ordinal classification and the remaining two are regressions. We describe our approach for building ML Figure 1 : System Architecture. models for both the variants in the upcoming sections.",
"cite_spans": [],
"ref_spans": [
{
"start": 195,
"end": 203,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Machine Learning Models",
"sec_num": "2.4"
},
{
"text": "We participated in the emotion intensity ordinal classification where the task was to predict the intensity of emotions from the categories anger, fear, joy, and, sadness. Separate datasets were provided for each emotion class. The goal of the subtask of valence ordinal classification was to classify the tweet into one of 7 ordinal classes [-3, 3] . We experimented with XG Boost Classifier, Random Forest Classifier of sklearn (Pedregosa et al., 2011) .",
"cite_spans": [
{
"start": 342,
"end": 349,
"text": "[-3, 3]",
"ref_id": null
},
{
"start": 430,
"end": 454,
"text": "(Pedregosa et al., 2011)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ordinal Classification",
"sec_num": "2.4.1"
},
{
"text": "For the regression tasks (E-reg, V-reg), the goal was to predict the intensity on a scale of 0-1. We experimented with XG Boost Regressor, Random Forest Regressor of sklearn (Pedregosa et al., 2011) .",
"cite_spans": [
{
"start": 174,
"end": 198,
"text": "(Pedregosa et al., 2011)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Regression",
"sec_num": "2.4.2"
},
{
"text": "The hyper-parameters of each model were tuned separately for each sub-task. The top-2 best models corresponding to each feature vector type were chosen after performing 7-fold cross-validation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regression",
"sec_num": "2.4.2"
},
{
"text": "Once we get the results from all the classifiers/regressors for a given tweet, we use stacking ensemble technique to combine the results. In this case, we pass the results from the models to a meta classifier/regressor as input. The output of this meta model is treated as the final output of the system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stacking",
"sec_num": "2.4.3"
},
{
"text": "We observed that using ordinal regressors gave us better performance than using classifiers which treat each output class as disjoint. Ordinal Regression is a family of statistical learning meth- ods where the output variable is discrete and ordered. We use the ordinal logistic classification with squared error (Rennie and Srebro, 2005 ) from the python library Mord. 6 (Rennie and Srebro, 2005) In case of regression sub-tasks we observed the best cross validation results with Ridge Regression. Hence, we chose Ridge Regression as the meta regressor.",
"cite_spans": [
{
"start": 313,
"end": 337,
"text": "(Rennie and Srebro, 2005",
"ref_id": "BIBREF19"
},
{
"start": 384,
"end": 397,
"text": "Srebro, 2005)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stacking",
"sec_num": "2.4.3"
},
{
"text": "The metrics used for ranking various systems are discussed in this section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Results",
"sec_num": "3.1"
},
{
"text": "Pearson correlation with gold labels was used as a primary metric for ranking the systems. For EIreg and EI-oc tasks Pearson correlation is macroaveraged (MA Pearson) over the four emotion categories. Table 1 describes the results based on primary metrics for various sub-tasks in English language. Our system achieved the best performance in each of the four sub-tasks. We have also included the results of the baseline and second best performing system for comparison. As we can observe, ",
"cite_spans": [],
"ref_spans": [
{
"start": 201,
"end": 208,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Primary Metrics",
"sec_num": "3.1.1"
},
{
"text": "Pearson (gold in 0.5-1) V-reg 0.697 (1) EI-reg 0.638 (1) our system vastly outperforms the baseline and is a significant improvement over the second best system, especially, in the emotion sub-tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task",
"sec_num": null
},
{
"text": "The competition also uses some secondary metrics to provide a different perspective on the results. Pearson correlation for a subset of the test set that includes only those tweets with intensity score greater or equal to 0.5 is used as the secondary metric for the regression tasks. For ordinal classification tasks following secondary metrics were used:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Secondary Metrics",
"sec_num": "3.1.2"
},
{
"text": "\u2022 Pearson correlation for a subset of the test set that includes only those tweets with intensity classes low X, moderate X, or high X (where X is an emotion). The organizers refer to this set of tweets as the some-emotion subset (SE).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Secondary Metrics",
"sec_num": "3.1.2"
},
{
"text": "\u2022 Weighted quadratic kappa on the full test set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Secondary Metrics",
"sec_num": "3.1.2"
},
{
"text": "\u2022 Weighted quadratic kappa on the someemotion subset of the test set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Secondary Metrics",
"sec_num": "3.1.2"
},
{
"text": "The results for secondary metrics are listed in Table 2 and 3. We have also included the ranking in brackets along with the score. We see that our system achieves the top rank according to all the secondary metrics, thus, proving its robustness.",
"cite_spans": [],
"ref_spans": [
{
"start": 48,
"end": 55,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Secondary Metrics",
"sec_num": "3.1.2"
},
{
"text": "The performance of the system is highly dependent on the discriminative ability of the tweet representation generated by the featurizers. We measure the predictive power for each of the featurizer used by calculating the pearson correlation of the system using only that featurizer. We describe the results for each sub task separately in tables 4-7. We observe that deepmoji featurizer is the most powerful featurizer of all the ones that we've used. Also, we can see that stacking ensembles of models trained on the outputs of multiple featurizers gives a significant improvement in performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Importance",
"sec_num": "3.2"
},
{
"text": "We analyze the data points where our model's prediction is far from the ground truth. We observed some limitations of the system, such as, sometimes understanding a tweet's requires contextual knowledge about the world. Such examples can be very confusing for the model. We use deepmoji pre-trained model which uses emojis as proxy for labels, however partly due to the nature of twitter conversations same emojis can be used for multiple emotions, for example, joy emojis can be sometimes used to express joy, sometimes for sarcasm or for insulting someone. One such example is 'Your club is a laughing stock'. Such cases are sometimes incorrectly predicted by our system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Limitations",
"sec_num": "3.3"
},
{
"text": "The paper studies the effectiveness of various representations of tweets and proposes ways to combine them to obtain state-of-the-art results. We also show that stacking ensemble of various classifiers learnt using different representations can vastly improve the robustness of the system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work & Conclusion",
"sec_num": "4"
},
{
"text": "Further improvements can be made in the preprocessing stage. Instead of discarding various tokens such as punctuation's, incorrectly spelled words, etc, we can utilize the information by learning their semantic representations. Also, we can improve the system performance by employing multi-task learning techniques as various emotions are not independent of each other and information about one emotion can aid in predicting the other. Furthermore, more robust techniques can be employed for distant supervision which are less prone to noisy labels to get better quality training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work & Conclusion",
"sec_num": "4"
},
{
"text": "https://www.statista.com/statistics/282087/number-ofmonthly-active-twitter-users/ 2 http://www.internetlivestats.com/twitter-statistics/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/jaredks/tweetokenize 4 https://emojipedia.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/SEERNET/EmoInt",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/fabianp/mord",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Checking and bootstrapping lexical norms by means of word similarity indexes",
"authors": [
{
"first": "Yves",
"middle": [],
"last": "Bestgen",
"suffix": ""
},
{
"first": "Nadja",
"middle": [],
"last": "Vincze",
"suffix": ""
}
],
"year": 2012,
"venue": "Behavior research methods",
"volume": "44",
"issue": "4",
"pages": "998--1006",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yves Bestgen and Nadja Vincze. 2012. Checking and bootstrapping lexical norms by means of word similarity indexes. Behavior research methods, 44(4):998-1006.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Seernet at emoint-2017: Tweet emotion intensity estimator",
"authors": [
{
"first": "Venkatesh",
"middle": [],
"last": "Duppada",
"suffix": ""
},
{
"first": "Sushant",
"middle": [],
"last": "Hiray",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "205--211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Venkatesh Duppada and Sushant Hiray. 2017. Seernet at emoint-2017: Tweet emotion intensity estimator. In Proceedings of the 8th Workshop on Computa- tional Approaches to Subjectivity, Sentiment and So- cial Media Analysis, pages 205-211.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "An argument for basic emotions",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Ekman",
"suffix": ""
}
],
"year": 1992,
"venue": "Cognition & emotion",
"volume": "6",
"issue": "3-4",
"pages": "169--200",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Ekman. 1992. An argument for basic emotions. Cognition & emotion, 6(3-4):169-200.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm",
"authors": [
{
"first": "Bjarke",
"middle": [],
"last": "Felbo",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Mislove",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "Iyad",
"middle": [],
"last": "Rahwan",
"suffix": ""
},
{
"first": "Sune",
"middle": [],
"last": "Lehmann",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1615--1625",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bjarke Felbo, Alan Mislove, Anders S\u00f8gaard, Iyad Rahwan, and Sune Lehmann. 2017. Using millions of emoji occurrences to learn any-domain represen- tations for detecting sentiment, emotion and sar- casm. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Process- ing, pages 1615-1625.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Skip-thought vectors",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Yukun",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Ruslan",
"middle": ["R"],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zemel",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Urtasun",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Torralba",
"suffix": ""
},
{
"first": "Sanja",
"middle": [],
"last": "Fidler",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3294--3302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in neural information processing systems, pages 3294-3302.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Multi-view ensemble learning: an optimal feature set partitioning for high-dimensional data classification. Knowledge and Information Systems",
"authors": [
{
"first": "Vipin",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Sonajharia",
"middle": [],
"last": "Minz",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "49",
"issue": "",
"pages": "1--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vipin Kumar and Sonajharia Minz. 2016. Multi-view ensemble learning: an optimal feature set partition- ing for high-dimensional data classification. Knowl- edge and Information Systems, 49(1):1-59.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Emoticon smoothed language models for twitter sentiment analysis",
"authors": [
{
"first": "Kun-Lin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Wu-Jun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Minyi",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2012,
"venue": "Aaai",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kun-Lin Liu, Wu-Jun Li, and Minyi Guo. 2012. Emoticon smoothed language models for twitter sentiment analysis. In Aaai.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Basic dimensions for a general psychological theory implications for personality, social, environmental, and developmental studies",
"authors": [
{
"first": "Albert",
"middle": [],
"last": "Mehrabian",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Albert Mehrabian. 1980. Basic dimensions for a gen- eral psychological theory implications for personal- ity, social, environmental, and developmental stud- ies.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Wassa-2017 shared task on emotion intensity",
"authors": [
{
"first": "Saif",
"middle": ["M"],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Felipe",
"middle": [],
"last": "Bravo-Marquez",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1708.03700"
]
},
"num": null,
"urls": [],
"raw_text": "Saif M Mohammad and Felipe Bravo-Marquez. 2017. Wassa-2017 shared task on emotion intensity. arXiv preprint arXiv:1708.03700.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Semeval-2018 Task 1: Affect in tweets",
"authors": [
{
"first": "Saif",
"middle": ["M."],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Felipe",
"middle": [],
"last": "Bravo-Marquez",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Salameh",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of International Workshop on Semantic Evaluation (SemEval-2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif M. Mohammad, Felipe Bravo-Marquez, Mo- hammad Salameh, and Svetlana Kiritchenko. 2018. Semeval-2018 Task 1: Affect in tweets. In Proceed- ings of International Workshop on Semantic Evalu- ation (SemEval-2018), New Orleans, LA, USA.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Nrc-canada: Building the stateof-the-art in sentiment analysis of tweets",
"authors": [
{
"first": "Saif",
"middle": ["M"],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1308.6242"
]
},
"num": null,
"urls": [],
"raw_text": "Saif M Mohammad, Svetlana Kiritchenko, and Xiao- dan Zhu. 2013. Nrc-canada: Building the state- of-the-art in sentiment analysis of tweets. arXiv preprint arXiv:1308.6242.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Emotions evoked by common words and phrases: Using mechanical turk to create an emotion lexicon",
"authors": [
{
"first": "Saif",
"middle": ["M"],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Peter",
"middle": ["D"],
"last": "Turney",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the NAACL HLT 2010 workshop on computational approaches to analysis and generation of emotion in text",
"volume": "",
"issue": "",
"pages": "26--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif M Mohammad and Peter D Turney. 2010. Emo- tions evoked by common words and phrases: Us- ing mechanical turk to create an emotion lexicon. In Proceedings of the NAACL HLT 2010 workshop on computational approaches to analysis and genera- tion of emotion in text, pages 26-34. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A new anew: Evaluation of a word list for sentiment analysis in microblogs",
"authors": [
{
"first": "Finn\u00e5rup",
"middle": [],
"last": "Nielsen",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1103.2903"
]
},
"num": null,
"urls": [],
"raw_text": "Finn\u00c5rup Nielsen. 2011. A new anew: Evaluation of a word list for sentiment analysis in microblogs. arXiv preprint arXiv:1103.2903.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A survey on transfer learning",
"authors": [
{
"first": "Sinno Jialin",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2010,
"venue": "IEEE Transactions on knowledge and data engineering",
"volume": "22",
"issue": "10",
"pages": "1345--1359",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sinno Jialin Pan and Qiang Yang. 2010. A survey on transfer learning. IEEE Transactions on knowledge and data engineering, 22(10):1345-1359.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Scikit-learn: Machine learning in python",
"authors": [
{
"first": "Fabian",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "Ga\u00ebl",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "Bertrand",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "Mathieu",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "Ron",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Dubourg",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of machine learning research",
"volume": "12",
"issue": "",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gram- fort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in python. Journal of machine learning research, 12(Oct):2825-2830.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The nature of emotions: Human emotions have deep evolutionary roots, a fact that may explain their complexity and provide tools for clinical practice",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Plutchik",
"suffix": ""
}
],
"year": 2001,
"venue": "American scientist",
"volume": "89",
"issue": "4",
"pages": "344--350",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Plutchik. 2001. The nature of emotions: Hu- man emotions have deep evolutionary roots, a fact that may explain their complexity and provide tools for clinical practice. American scientist, 89(4):344- 350.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Ensemble learning",
"authors": [
{
"first": "Robi",
"middle": [],
"last": "Polikar",
"suffix": ""
}
],
"year": 2012,
"venue": "Ensemble machine learning",
"volume": "",
"issue": "",
"pages": "1--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robi Polikar. 2012. Ensemble learning. In Ensemble machine learning, pages 1-34. Springer.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Learning to generate reviews and discovering sentiment",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Rafal",
"middle": [],
"last": "Jozefowicz",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.01444"
]
},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Rafal Jozefowicz, and Ilya Sutskever. 2017. Learning to generate reviews and discovering sentiment. arXiv preprint arXiv:1704.01444.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Loss functions for preference levels: Regression with discrete ordered labels",
"authors": [
{
"first": "Jason",
"middle": [
"D",
"M"
],
"last": "Rennie",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Srebro",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the IJCAI multidisciplinary workshop on advances in preference handling",
"volume": "",
"issue": "",
"pages": "180--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason DM Rennie and Nathan Srebro. 2005. Loss func- tions for preference levels: Regression with discrete ordered labels. In Proceedings of the IJCAI mul- tidisciplinary workshop on advances in preference handling, pages 180-186. Kluwer Norwell, MA.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A circumplex model of affect",
"authors": [
{
"first": "James",
"middle": [
"A"
],
"last": "Russell",
"suffix": ""
}
],
"year": 1980,
"venue": "Journal of personality and social psychology",
"volume": "39",
"issue": "6",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James A Russell. 1980. A circumplex model of af- fect. Journal of personality and social psychology, 39(6):1161.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Learning sentimentspecific word embedding for twitter sentiment classification",
"authors": [
{
"first": "Duyu",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1555--1565",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Duyu Tang, Furu Wei, Nan Yang, Ming Zhou, Ting Liu, and Bing Qin. 2014. Learning sentiment- specific word embedding for twitter sentiment clas- sification. In Proceedings of the 52nd Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1555- 1565.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Literal and metaphorical sense identification through concrete and abstract context",
"authors": [
{
"first": "Peter",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
},
{
"first": "Yair",
"middle": [],
"last": "Neuman",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Assaf",
"suffix": ""
},
{
"first": "Yohai",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "680--690",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D Turney, Yair Neuman, Dan Assaf, and Yohai Cohen. 2011. Literal and metaphorical sense iden- tification through concrete and abstract context. In Proceedings of the Conference on Empirical Meth- ods in Natural Language Processing, pages 680- 690. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "Primary metrics across various sub-tasks."
},
"TABREF3": {
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "Secondary metrics for ordinal classification sub-tasks. System rank is mentioned in the brackets."
},
"TABREF4": {
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "Secondary metrics for regression sub-tasks. System rank is mentioned in brackets."
},
"TABREF6": {
"html": null,
"type_str": "table",
"content": "<table><tr><td>Feature Set</td><td>Pearson</td></tr><tr><td>Deepmoji (softmax layer)</td><td>0.780</td></tr><tr><td>Deepmoji (attention layer)</td><td>0.813</td></tr><tr><td>EmoInt</td><td>0.785</td></tr><tr><td colspan=\"2\">Unsupervised sentiment Neuron 0.685</td></tr><tr><td>Skip-Thought Vectors</td><td>0.748</td></tr><tr><td>Combined</td><td>0.836</td></tr></table>",
"num": null,
"text": "Pearson Correlation for V-reg task. Best results are highlighted in bold."
},
"TABREF7": {
"html": null,
"type_str": "table",
"content": "<table><tr><td>Feature Set</td><td>Pearson</td></tr><tr><td>Deepmoji (softmax layer)</td><td>0.703</td></tr><tr><td>Deepmoji (attention layer)</td><td>0.756</td></tr><tr><td>EmoInt</td><td>0.694</td></tr><tr><td colspan=\"2\">Unsupervised sentiment Neuron 0.548</td></tr><tr><td>Skip-Thought Vectors</td><td>0.656</td></tr><tr><td>Combined</td><td>0.799</td></tr></table>",
"num": null,
"text": "Pearson Correlation for V-oc task. Best results are highlighted in bold."
},
"TABREF8": {
"html": null,
"type_str": "table",
"content": "<table><tr><td>Feature Set</td><td>Pearson</td></tr><tr><td>Deepmoji softmax layer</td><td>0.611</td></tr><tr><td>Deepmoji attention layer</td><td>0.664</td></tr><tr><td>EmoInt</td><td>0.596</td></tr><tr><td colspan=\"2\">Unsupervised sentiment Neuron 0.445</td></tr><tr><td>Skip-Thought Vectors</td><td>0.557</td></tr><tr><td>Combined</td><td>0.695</td></tr></table>",
"num": null,
"text": "Macro-Averaged Pearson Correlation for EIreg task. Best results are highlighted in bold."
},
"TABREF9": {
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "Macro-Averaged Pearson Correlation for EIoc task. Best results are highlighted in bold."
}
}
}
}