{
"paper_id": "S16-1020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:25:20.324391Z"
},
"title": "MDSENT at SemEval-2016 Task 4: A Supervised System for Message Polarity Classification",
"authors": [
{
"first": "Hang",
"middle": [],
"last": "Gao",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Tim",
"middle": [],
"last": "Oates",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes our system submitted for the Sentiment Analysis in Twitter task of SemEval-2016, and specifically for the Message Polarity Classification subtask. We used a system that combines Convolutional Neural Networks and Logistic Regression for sentiment prediction, where the former makes use of embedding features while the later utilizes various features like lexicons and dictionaries.",
"pdf_parse": {
"paper_id": "S16-1020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes our system submitted for the Sentiment Analysis in Twitter task of SemEval-2016, and specifically for the Message Polarity Classification subtask. We used a system that combines Convolutional Neural Networks and Logistic Regression for sentiment prediction, where the former makes use of embedding features while the later utilizes various features like lexicons and dictionaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recently, rapid growth of the amount of usergenerated content on the web prompts increasing interest in research on sentiment analysis and opinion mining. A typical example is Twitter, where lots of users express feelings and opinions about various subjects. However, unlike traditional media, language used in social network services like Twitter is often informal, leading to new challenges to corresponding text analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The SemEval-2016 Sentiment Analysis in Twitter task (SESA-16) is a task that focuses on the sentiment analysis of tweets. As a continuation of SemEval-2015 Task 10, SESA-16 introduces several new challenges, including the replacement of classification with quantification, movement from two/three-point scale to five-point scale, etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We participated in Subtask A of SESA-16, namely message polarity classification, a task that seeks to predict a sentiment label for some given text. We model the problem as a multi-class classification problem that combines the predictions given by two different classifiers: one is a Convolutional Neural Network (CNN) and the other is Logistic Regression (LR). The former takes embedding-based features while the latter utilizes various features such as lexicons, dictionaries, etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of this paper is structured as follows. In Section 2, we describe our system in detail, including feature description and approaches. In Section 3, we list the details of datasets for the experiments, along with hyperparameter settings and training techniques. In Section 4, we report the experiment results and present the corresponding discussion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our system aims at predicting the sentiment of a given message, i.e., whether the message expresses positive, negative or neutral emotion. To achieve that, we adopt two separate classifiers, CNN and LR, designed to utilize different types of features. The final prediction for sentiment is a combination of predictions given by both classifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "2"
},
{
"text": "Tweets often include informal text, making it essential to preprocess tweets before they are fed to the system. However, we keep the preprocessing to a minimum by only removing URLs and @User tags. We then further tokenize and tag tweets with arktweetnlp (Gimpel et al., 2011) . In addition, all tweets are lower-cased.",
"cite_spans": [
{
"start": 255,
"end": 276,
"text": "(Gimpel et al., 2011)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "2.1"
},
{
"text": "We use the LR classifier for features from sentiment lexicons and token clusters. We have used the fol- lowing:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic Regression",
"sec_num": "2.2"
},
{
"text": "\u2022 clusters: 1000 token clusters provided by the CMU tweet NLP tool. These clusters are produced with the Brown clustering algorithm on 56 million English-language tweets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic Regression",
"sec_num": "2.2"
},
{
"text": "\u2022 manually-constructed sentiment lexicons: NRC Emotion Lexicon (Mohammad and Turney, 2010), MPQA (Wilson et al., 2005) , Bing Liu Lexicon (Hu and Liu, 2004) and Lexicon (Nielsen, 2011) .",
"cite_spans": [
{
"start": 97,
"end": 118,
"text": "(Wilson et al., 2005)",
"ref_id": "BIBREF12"
},
{
"start": 138,
"end": 156,
"text": "(Hu and Liu, 2004)",
"ref_id": "BIBREF3"
},
{
"start": 169,
"end": 184,
"text": "(Nielsen, 2011)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic Regression",
"sec_num": "2.2"
},
{
"text": "\u2022 automatically-constructed sentiment lexicons: Hashtag Sentiment Lexicon and Senti-ment140 Lexicon (Mohammad et al., 2013) .",
"cite_spans": [
{
"start": 92,
"end": 123,
"text": "Lexicon (Mohammad et al., 2013)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic Regression",
"sec_num": "2.2"
},
{
"text": "For the Sentiment140 Lexicon and Hashtag Sentiment Lexicon, we compute separate lexicon features for uni-grams and bi-grams, while for other Lexicons, only uni-gram lexicon features are produced. For each lexicon, let t be the token(uni-gram or bigram), p be the polarity and s be the score provided by the lexicon. We use the same features that are also adopted by the NRC-Canada system (Mohammad et al., 2013):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic Regression",
"sec_num": "2.2"
},
{
"text": "\u2022 the total count of tokens in a tweet with s(t, p) > 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic Regression",
"sec_num": "2.2"
},
{
"text": "\u2022 the total score of tokens in a tweet w s(t, p).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic Regression",
"sec_num": "2.2"
},
{
"text": "\u2022 the maximum score of tokens in a tweet max w s(t, p).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic Regression",
"sec_num": "2.2"
},
{
"text": "\u2022 the score of the last token in the tweet with s(t, p) > 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic Regression",
"sec_num": "2.2"
},
{
"text": "For each token, we also use features to describe whether it is present or absent in each of the 1000 token clusters. There are in total 1051 features for a tweet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic Regression",
"sec_num": "2.2"
},
{
"text": "Deep learning models have achieved remarkable results for various NLP tasks, with most of them based on embeddings that represent words, characters, etc. with vectors of real values. Some work on embeddings suggests that word vectors generated by some embedding algorithms preserve many linguistic regularities (Mikolov et al., 2013a) .",
"cite_spans": [
{
"start": 311,
"end": 334,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional Neural Network",
"sec_num": "2.3"
},
{
"text": "Among the various deep learning models, we use Convolutional Neural Networks, which have already been used for sentiment classification with promising results (Kim, 2014) . We show the network architecture in Figure 1 .",
"cite_spans": [
{
"start": 159,
"end": 170,
"text": "(Kim, 2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 209,
"end": 217,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Convolutional Neural Network",
"sec_num": "2.3"
},
{
"text": "In general, the architecture contains two separate CNNs: one is for word-based input maps while the other is for character-based input maps. In our system, an input map for a tweet is a stack of the embeddings of its words/characters w.r.t. their order in the tweet. We initialize word embeddings with the publicly available 300 dimension Google News embeddings trained with Word2Vec, but randomly initialize character embeddings with the same dimension. We fine tune both kinds of embeddings during the training procedure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional Neural Network",
"sec_num": "2.3"
},
{
"text": "Each of the two separate CNNs has its own set of convolutional filters. We fix the width of all filters to be the same as the corresponding embedding dimension, but set their height according to predefined types of n-grams. For example, a filter for bi-grams on an input map constructed with 300 dimensional word embeddings will have shape (2, 300), where 2 is the height and 300 is the width. In other words, we use each filter to capture and extract features w.r.t. a specific type of n-gram from an input map.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional Neural Network",
"sec_num": "2.3"
},
{
"text": "The feature maps generated by a particular filter may have different shapes for different input maps, due to variable tweet lengths. Thus we adopt a pooling scheme called max-over-time pooling (Collobert et al., 2011) , which captures the most important feature, i.e., the one with highest value, for each feature map. This pooling scheme naturally deals with the variable tweet length problem.",
"cite_spans": [
{
"start": 193,
"end": 217,
"text": "(Collobert et al., 2011)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional Neural Network",
"sec_num": "2.3"
},
{
"text": "After pooling, we first generate a representation for each CNN by concatenating its own pooled features, and then form a final representation by concatenating the two separate representations. The final representation is then fed into a multi-layer perceptron (MLP) classifier for predictions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional Neural Network",
"sec_num": "2.3"
},
{
"text": "For regularization we employ dropout with a constraint on l 2 -norms of the weight vectors (Hinton et al., 2012). The key idea of dropout is to prevent coadaptation of feature detectors (hidden units) by ran-domly dropping out a portion of hidden units in the training procedure. At test time, the learned weight vectors are scaled according to the portion while no dropout is needed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization",
"sec_num": "2.3.1"
},
{
"text": "In addition to dropout, we constrain weight vectors by introducing an upper limit on their l 2 -norms. That is, for a weight vector w, we rescale it to have ||w|| 2 = l, whenever it has ||w|| 2 > l, after gradient descent step.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization",
"sec_num": "2.3.1"
},
{
"text": "We combine the predictions of the two classifiers in the form of a weighted summation. Given the prediction P LR by Logistic Regression and the prediction P CN N by the CNN, we introduce a scalar w, such that the final prediction is given as,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combination",
"sec_num": "2.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P f inal = (1 \u2212 w)P LR + wP CN N",
"eq_num": "(1)"
}
],
"section": "Combination",
"sec_num": "2.4"
},
{
"text": "In other words, let x be the input instance,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combination",
"sec_num": "2.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P f inal (Y = y|x) = wP CN N (Y = y|x) + (1 \u2212 w)P LR (Y = y|x)",
"eq_num": "(2)"
}
],
"section": "Combination",
"sec_num": "2.4"
},
{
"text": "We do not simply feed the features of LR along with the features generated by the CNN into a single classifier because they are naturally different. The features from LR are highly relevant with manually-created or automatically-generated dictionaries, scores, clusters, etc. They are a mixture of binary and real-value features with high variance. While for the CNN, the features are generated by convolutional kernels on distributed representations (embeddings), leading to strong correlation and relatively smaller variance. Our preliminary experiments show that by simply adding LR features to CNN features, the performance of our system does not increase, but drops.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combination",
"sec_num": "2.4"
},
{
"text": "We test our model on the SemEval-2016 benchmark dataset with two different settings. Setting 1 uses only the 2016 datasets while Setting 2 uses a combination of 2016 and 2013 datasets. We list the details of the two settings in Table 1. For setting 2, the merge of two datasets is conducted w.r.t. the train/dev splits. Although we did Test Setting 1 5975 1997 32009 Setting 2 12964 3100 32009 Table 1 : Statistics of our two settings of datasets for experiments. Setting 1: a dataset with only the SemEval-2016 dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 228,
"end": 236,
"text": "Table 1.",
"ref_id": null
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "Setting 2: a dataset that is a combination of the SemEval-2016 and SemEval-2013 datasets. In Setting 2, the merge is conducted w.r.t. train/dev splits, with \"Not Available\" tweets removed. not remove any \"Not Available\" tweets for setting 1, we found a relatively high amount of such tweets in the combined dataset, which may significantly influence the system performance, thus we removed all the \"Not Available\" tweets for setting 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Settings Train Dev",
"sec_num": null
},
{
"text": "For both settings, we use rectified linear units. For the word-based CNN, we use filters of height 1,2,3,4, while for the character-based CNN, we use filters of height 3,4,5. And 100 feature maps are used for each filter. We also use a dropout rate of 0.5, l 2 -norm constraint of 3, and mini-batch size of 50. These values were picked on the Dev dataset of Setting 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CNN",
"sec_num": "3.2.1"
},
{
"text": "We perform early stop on dev datasets during training. We use Adadelta as the optimization algorithm (Zeiler, 2012).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CNN",
"sec_num": "3.2.1"
},
{
"text": "We use the publicly available tool LibLinear for LR training. The cost is set to be 0.5 with all other parameters assigned with default settings. The cost is chosen based on the Dev dataset of Setting 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LR",
"sec_num": "3.2.2"
},
{
"text": "The scalar w is picked via grid search on the Dev dataset for both settings. Because of the random initialization of weights and random shuffling of batches for the CNN during the training procedure, w is different for different runs. Thus we consider it as a weight to be trained with other weights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combination",
"sec_num": "3.2.3"
},
{
"text": "It is popular to initialize word vectors with pretrained embeddings obtained by some unsupervised algorithms trained over a large corpus to improve for \"the number of tweets that were labeled as X and should have been labeled as Y\", where P U N stand for Positive Neutral Negative, respectively. system performance (Kim, 2014) (Socher et al., 2011) . We use the publicly available Word2Vec vectors trained on 100 billion words from Google News using the continuous bag-of-words architecture (Mikolov et al., 2013b) to initialize word embeddings, but randomly initialize character embeddings. All embeddings have dimensionality of 300. We also randomly initialize word embeddings that are not present in the vocabulary of those pre-trained word vectors.",
"cite_spans": [
{
"start": 315,
"end": 326,
"text": "(Kim, 2014)",
"ref_id": "BIBREF4"
},
{
"start": 327,
"end": 348,
"text": "(Socher et al., 2011)",
"ref_id": "BIBREF11"
},
{
"start": 491,
"end": 514,
"text": "(Mikolov et al., 2013b)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Embeddings",
"sec_num": "3.3"
},
{
"text": "The same evaluation measure as the one used in previous years is adopted, i.e.,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "F P N 1 = F P os 1 + F N eg 1 2 (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "where F P os 1 is defined as, F P os 1 = 2\u03c0 P os \u03c1 P os \u03c0 P os + \u03c1 P os (4) with \u03c1 P os defined as the precision of predicted positive tweets, i.e., the fraction of tweets predicted to be positive that are indeed positive, \u03c1 P os = P P P P + P U + P N",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "and \u03c0 P os defined as the recall of predicted positive tweets, i.e., the fraction of positive tweets that are predicted to be such, \u03c0 P os = P P P P + U P + N P",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "where PP, PU, PN, UP, NP are defined in Table 2 , a confusion matrix for Subtask A provided by (Nakov et al., ) We show the evaluation results of our system in Table 3 , along with the top 15 systems reported. Originally we tested the system with only setting 1 and it ranks 12th among 34 systems. However, we find the system with setting 1 perform poorly on older datasets, which may due to the lack of training data. Thus we then test our model with setting 2 and report ranks generated from the same list of evaluation results reported by the 34 systems. It is apparent that our system can benefit from more training data and shows significant performance improvement (rank 6th).",
"cite_spans": [
{
"start": 96,
"end": 112,
"text": "(Nakov et al., )",
"ref_id": null
}
],
"ref_spans": [
{
"start": 40,
"end": 48,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 161,
"end": 168,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "Another interesting observation is that when provided with large amounts of training data, the CNN itself can perform very well, with LR assigned a very small weight during the combination proce-dure. We further test this finding by making 5 individual runs for both settings and checking the combination scalar weight w and final evaluation score F P N 1 . We list corresponding results in Table 4 . With more training data, w increased from an average of 0.654 to an average of 0.98, which is very close to 1, while the performance improved from an average of 0.587 to an average of 0.604. This suggests the possibility to use only deep learning techniques along with embeddings to achieve similar or even better performance than traditional systems that require a lot of human engineered features and knowledge bases.",
"cite_spans": [],
"ref_spans": [
{
"start": 391,
"end": 398,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "Our future work includes finer-design of the CNN, e.g., performing two stages of classification: first for subjectivity detection and then for polarity classification. We will also seek the possibility of conducting unsupervised learning with the CNN, which allows us to make use of the large amount of tweets on the Internet. With such increased amount of training data, our system may further improve its performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "is defined similarly as F P os 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "The Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493- 2537.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Part-of-speech tagging for twitter: Annotation, features, and experiments",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "O'Connor",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Mills",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "Dani",
"middle": [],
"last": "Yogatama",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Flanigan",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers",
"volume": "2",
"issue": "",
"pages": "42--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Gimpel, Nathan Schneider, Brendan O'Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A Smith. 2011. Part-of-speech tagging for twit- ter: Annotation, features, and experiments. In Pro- ceedings of the 49th Annual Meeting of the Associa- tion for Computational Linguistics: Human Language Technologies: short papers-Volume 2, pages 42-47. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Improving neural networks by preventing coadaptation of feature detectors",
"authors": [
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
},
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [
"R"
],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1207.0580"
]
},
"num": null,
"urls": [],
"raw_text": "Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. 2012. Improving neural networks by preventing co- adaptation of feature detectors. arXiv preprint arXiv:1207.0580.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Mining and summarizing customer reviews",
"authors": [
{
"first": "Minqing",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "168--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minqing Hu and Bing Liu. 2004. Mining and summa- rizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowl- edge discovery and data mining, pages 168-177. ACM.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1408.5882"
]
},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sen- tence classification. arXiv preprint arXiv:1408.5882.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representa- tions in vector space. arXiv preprint arXiv:1301.3781.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013b. Distributed representa- tions of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111-3119.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Emotions evoked by common words and phrases: Using mechanical turk to create an emotion lexicon",
"authors": [
{
"first": "Saif",
"middle": [
"M"
],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the NAACL HLT 2010 workshop on computational approaches to analysis and generation of emotion in text",
"volume": "",
"issue": "",
"pages": "26--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif M Mohammad and Peter D Turney. 2010. Emo- tions evoked by common words and phrases: Using mechanical turk to create an emotion lexicon. In Pro- ceedings of the NAACL HLT 2010 workshop on com- putational approaches to analysis and generation of emotion in text, pages 26-34. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Nrc-canada: Building the state-of-theart in sentiment analysis of tweets",
"authors": [
{
"first": "Saif",
"middle": [
"M"
],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1308.6242"
]
},
"num": null,
"urls": [],
"raw_text": "Saif M Mohammad, Svetlana Kiritchenko, and Xiaodan Zhu. 2013. Nrc-canada: Building the state-of-the- art in sentiment analysis of tweets. arXiv preprint arXiv:1308.6242.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Evaluation measures for the semeval-2016 task 4 'sentiment analysis in twitter",
"authors": [
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Fabrizio",
"middle": [],
"last": "Sebastiani",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Preslav Nakov, Alan Ritter, Sara Rosenthal, Fabrizio Se- bastiani, and Veselin Stoyanov. Evaluation measures for the semeval-2016 task 4 'sentiment analysis in twit- ter'(draft: Version 1.1).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A new anew: Evaluation of a word list for sentiment analysis in microblogs",
"authors": [
{
"first": "Finn\u00e5rup",
"middle": [],
"last": "Nielsen",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1103.2903"
]
},
"num": null,
"urls": [],
"raw_text": "Finn\u00c5rup Nielsen. 2011. A new anew: Evaluation of a word list for sentiment analysis in microblogs. arXiv preprint arXiv:1103.2903.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Dynamic pooling and unfolding recursive autoencoders for paraphrase detection",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"H"
],
"last": "Huang",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Pennin",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2011,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "801--809",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Eric H Huang, Jeffrey Pennin, Christo- pher D Manning, and Andrew Y Ng. 2011. Dynamic pooling and unfolding recursive autoencoders for para- phrase detection. In Advances in Neural Information Processing Systems, pages 801-809.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Recognizing contextual polarity in phrase-level sentiment analysis",
"authors": [
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Hoffmann",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the conference on human language technology and empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "347--354",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phrase-level sentiment analysis. In Proceedings of the conference on human language technology and empirical methods in natural language processing, pages 347-354. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Adadelta: an adaptive learning rate method",
"authors": [
{
"first": "Matthew",
"middle": [
"D"
],
"last": "Zeiler",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1212.5701"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew D Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "CNN architecture for an example with both word-based and character-based input maps.",
"type_str": "figure",
"num": null
},
"TABREF1": {
"html": null,
"content": "<table/>",
"num": null,
"text": "The confusion matrix for Subtask A. Cell XY stands",
"type_str": "table"
},
"TABREF2": {
"html": null,
"content": "<table><tr><td>Rank</td><td>System</td><td>2013 Tweet</td><td>SMS</td><td>Tweet</td><td colspan=\"2\">2014 Tweet sacasm Live-Journal</td><td>2015 Tweet</td><td>2016 Tweet</td></tr><tr><td>1 2 3 4 5 6 7 8 9 10 11 12 12 14 14 -</td><td colspan=\"4\">SwissCheese SENSEI-LIF unimelb INESC-ID aueb* SentiSys I2RNTU INSIGHT-1 twise ECNU NTNUSentEval 0.623 12 0.641 1 0.651 11 0.700 5 0.637 2 0.716 5 0.706 4 0.634 3 0.744 2 0.687 7 0.593 9 0.706 7 0.723 2 0.609 6 0.727 3 0.666 8 0.618 5 0.708 6 0.714 3 0.633 4 0.723 4 0.693 6 0.597 7 0.680 8 0.602 16 0.582 12 0.644 16 0.610 15 0.540 17 0.645 14 0.643 10 0.593 9 0.662 9 MDSENT 0.589 19 0.509 21 0.587 20 CUFE 0.642 11 0.596 8 0.662 9 THUIR 0.616 13 0.575 14 0.648 12 PUT 0.565 21 0.511 20 0.614 19 MDSENT* 0.664 9 0.610 6 0.676 9 baseline 0.292 0.190 0.346</td><td>0.566 1 0.467 8 0.449 11 0.554 3 0.410 17 0.515 5 0.469 6 0.391 23 0.450 10 0.425 14 0.427 13 0.386 24 0.466 9 0.399 20 0.360 27 0.410 17 0.277</td><td>0.695 7 0.741 1 0.683 9 0.702 4 0.695 7 0.726 2 0.696 6 0.559 23 0.649 13 0.663 10 0.719 3 0.606 19 0.697 5 0.640 15 0.648 14 0.689 9 0.272</td><td colspan=\"2\">0.671 1 0.662 2 0.651 4 0.657 3 0.623 7 0.644 5 0.638 6 0.595 16 0.621 8 0.606 11 0.585 10 0.633 1 0.630 2 0.617 3 0.610 4 0.605 5 0.598 6 0.596 7 0.593 8 0.586 9 0.599 13 0.583 11 0.593 18 0.580 12 0.598 14 0.580 12 0.617 10 0.576 14 0.597 15 0.576 14 0.628 7 0.601 6 0.303 0.255</td></tr></table>",
"num": null,
"text": ". F N eg",
"type_str": "table"
},
"TABREF3": {
"html": null,
"content": "<table><tr><td colspan=\"2\">Our model with setting 1 ranks 12th among 34 systems. We also show the evaluation results and our reported ranks of MDSENT</td></tr><tr><td colspan=\"2\">with setting 2 among the 34 systems in MDSENT*.</td></tr><tr><td>Runs Run 1</td><td>Setting 1 w F P N 1 0.66 0.582 1.00 0.603 Setting 2 w F P N 1</td></tr><tr><td>Run 2</td><td>0.81 0.583 1.00 0.604</td></tr><tr><td>Run 3</td><td>0.60 0.587 0.98 0.607</td></tr><tr><td>Run 4</td><td>0.60 0.591 0.97 0.603</td></tr><tr><td>Run 5</td><td>0.60 0.592 0.95 0.601</td></tr><tr><td colspan=\"2\">Average 0.654 0.587 0.98 0.604</td></tr></table>",
"num": null,
"text": "Evaluation Results of the top 15 systems with ranks provided as subscripts. aueb* stands for \"aueb.twitter.sentiment\".",
"type_str": "table"
},
"TABREF4": {
"html": null,
"content": "<table/>",
"num": null,
"text": "Statistics of 5 individual runs for both settings.",
"type_str": "table"
}
}
}
}