{
"paper_id": "S16-1022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:26:42.615213Z"
},
"title": "Finki at SemEval-2016 Task 4: Deep Learning Architecture for Twitter Sentiment Analysis",
"authors": [
{
"first": "Dario",
"middle": [],
"last": "Stojanovski",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Gjorgji",
"middle": [],
"last": "Strezoski",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Gjorgji",
"middle": [],
"last": "Madjarov",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Ivica",
"middle": [],
"last": "Dimitrovski",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we present a novel deep learning architecture for sentiment analysis in Twitter messages. Our system finki, employs both convolutional and gated recurrent neural networks to obtain a more diverse tweet representation. The network is trained on top of GloVe word embeddings pre-trained on the Common Crawl dataset. Both neural networks are used to obtain a fixed length representation of variable sized tweets, and the concatenation of these vectors is supplied to a fully connected softmax layer with dropout regularization. The system is evaluated on benchmark datasets from the Sentiment Analysis in Twitter task of the SemEval 2016 challenge where our model achieves best and second highest results on the 2-point and 5-point quantification subtasks respectively. Despite not relying on any hand-crafted features, our system manages the second highest average rank on the considered subtasks.",
"pdf_parse": {
"paper_id": "S16-1022",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we present a novel deep learning architecture for sentiment analysis in Twitter messages. Our system finki, employs both convolutional and gated recurrent neural networks to obtain a more diverse tweet representation. The network is trained on top of GloVe word embeddings pre-trained on the Common Crawl dataset. Both neural networks are used to obtain a fixed length representation of variable sized tweets, and the concatenation of these vectors is supplied to a fully connected softmax layer with dropout regularization. The system is evaluated on benchmark datasets from the Sentiment Analysis in Twitter task of the SemEval 2016 challenge where our model achieves best and second highest results on the 2-point and 5-point quantification subtasks respectively. Despite not relying on any hand-crafted features, our system manages the second highest average rank on the considered subtasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Twitter sentiment analysis is an area of Natural Language Processing (NLP) dealing with the classification of sentiment polarity in Twitter messages. Most of the approaches to this problem are generally based on hand crafted features and sentiment lexicons (Mohammad et al., 2013; Pak and Paroubek, 2010) . These features are then used as input to classifying algorithms such as, Support Vector Machines (SVM) and naive Bayes classifier. However, such approaches require extensive domain knowledge, are laborious to define, and can lead to incomplete or over-specific features.",
"cite_spans": [
{
"start": 257,
"end": 280,
"text": "(Mohammad et al., 2013;",
"ref_id": "BIBREF5"
},
{
"start": 281,
"end": 304,
"text": "Pak and Paroubek, 2010)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Deep learning methods for sentiment analysis, on the other hand, handle the feature extraction automatically which provides for robustness and adaptability. Notably, most popular deep learning methods are convolutional neural networks (CNN), which have been shown to achieve state-of-the-art results (Kim, 2014; dos Santos and Gatti, 2014) , though some works propose different models such as Recursive Neural Tensor Network (Socher et al., 2013) .",
"cite_spans": [
{
"start": 300,
"end": 311,
"text": "(Kim, 2014;",
"ref_id": "BIBREF3"
},
{
"start": 312,
"end": 339,
"text": "dos Santos and Gatti, 2014)",
"ref_id": "BIBREF1"
},
{
"start": 425,
"end": 446,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recurrent neural networks (RNN) are intuitive architectures for NLP as they inherently take into account the ordering of words in the text as opposed to CNNs which take only a small limited context window. However, to our knowledge, these networks have not been applied to sentiment analysis in Twitter messages. Le et al. (Le and Zuidema, 2015) report state-of-the-art results with Long Short Term Memory (LSTM) networks on binary and fine-grained classification on the Stanford Sentiment Treebank dataset.",
"cite_spans": [
{
"start": 313,
"end": 345,
"text": "Le et al. (Le and Zuidema, 2015)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present a novel deep learning architecture for sentiment classification and quantification in Twitter messages. The model consists of a convolutional and a gated recurrent neural network (GRNN). Both neural networks are used to model a suitable representation of a tweet. The feature representations output from the networks are fused and fed to a standard softmax regression classifier. The system leverages unsupervised pre-training of word embeddings. For this, we utilize the publicly available GloVe 1 word embeddings (Pennington et al., 2014) , specifically ones trained on the Common Crawl dataset. In previous work (Stojanovski et al., 2015) , we have experimented with multiple filters with additional window sizes of 4 and 5 and we leave such system implementation for future work.",
"cite_spans": [
{
"start": 541,
"end": 566,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF8"
},
{
"start": 641,
"end": 667,
"text": "(Stojanovski et al., 2015)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We evaluate our deep learning system on four out of five subtasks of the Sentiment Analysis in Twitter task (Task 4) (Nakov et al., 2016) as part of the SemEval 2016 challenge. We competed in the 2point and 5-point classification and quantification. Our model achieves high results on the quantification subtasks, getting second place on Subtask E and attaining the best score on Subtask D.",
"cite_spans": [
{
"start": 117,
"end": 137,
"text": "(Nakov et al., 2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The proposed model for sentiment analysis in this paper, consists of two neural networks. The first is a convolutional neural network with a single filter with windows size of 3. The second part of the architecture is a gated recurrent neural network. The system architecture is presented in Figure 1 . The model is implemented using the Keras 2 library for deep learning on a Theano backend. ",
"cite_spans": [],
"ref_spans": [
{
"start": 292,
"end": 300,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Deep learning architecture",
"sec_num": "2"
},
{
"text": "Twitter constraints tweet length to a maximum of 140 characters. Consequently, users are forced to find new and unpredictable ways of expressing themselves. Determining sentiment in these circumstances is very challenging and, as a result, we apply some preprocessing steps in order to clean tweets from unnecessary information. All URLs and HTML entities are removed from the tweets 2 http://keras.io along with punctuation with the exception of question and exclamation marks. Emoticons and Twitter specifics such as hashtags are kept in their original form, unlike user mentions, which are completely removed. We also lowercase all words. Additionally, each appearance of an elongated word is shortened to a maximum of three character repetitions. Since all tweets are in relation to some topic, and the model has to determine the overall sentiment for the quantification tasks, we decided to replace words matching the tweet topic with generic tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "2.1"
},
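The following is a minimal Python sketch of the preprocessing described in Section 2.1. The regular expressions and the <topic> placeholder token are illustrative assumptions rather than the authors' exact implementation, and explicit emoticon preservation is omitted for brevity.

```python
import re

def preprocess(tweet, topic=None):
    """Illustrative cleanup following Section 2.1 (exact rules are assumptions)."""
    t = tweet.lower()                                       # lowercase all words
    t = re.sub(r"https?://\S+|www\.\S+", "", t)             # remove URLs
    t = re.sub(r"&\w+;", "", t)                             # remove HTML entities
    t = re.sub(r"@\w+", "", t)                              # remove user mentions
    t = re.sub(r"(.)\1{2,}", r"\1\1\1", t)                  # cap elongated words at 3 repetitions
    t = re.sub(r"[^\w\s#?!]", " ", t)                       # strip punctuation except ? and ! (emoticon handling omitted)
    if topic is not None:
        t = re.sub(re.escape(topic.lower()), "<topic>", t)  # generic token for the tweet topic
    return re.sub(r"\s+", " ", t).strip()
```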
{
"text": "Each word or token that is a part of a tweet is first mapped to an appropriate distributional feature representation, also known as word embedding. Before training, we define the so called lookup table, where each word is associated with the corresponding feature representation. For the purposes of this work, we utilize the publicly available GloVe embeddings, pre-trained on the Common Crawl dataset with a dimensionality of 300. We choose these over the GloVe embeddings trained on Twitter data because of the higher dimensionality, considerably larger training corpus and vocabulary of unique words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-trained word embeddings",
"sec_num": "2.2"
},
{
"text": "For words in the dataset not present in the lookup table, we use random initialization of word embeddings. However, despite their effectiveness in encoding syntactic and semantic regularities of words, they are oblivious to the words' sentiment characteristics. To counteract this, word embeddings are continuously updated during network training by back-propagating the classification errors. Therefore, sentiment regularities are being encoded in the feature representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-trained word embeddings",
"sec_num": "2.2"
},
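As a sketch of the lookup-table construction in Section 2.2: pre-trained 300-dimensional GloVe vectors fill the rows for known words, out-of-vocabulary words get random vectors, and the layer is left trainable so back-propagation can encode sentiment regularities. The file name, vocabulary mapping, and tweet length below are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np
from keras.layers import Embedding

EMB_DIM = 300  # dimensionality of the GloVe Common Crawl vectors

def build_embedding_layer(vocab, glove_path="glove.common-crawl.300d.txt", max_len=40):
    """vocab: dict mapping word -> integer index (index 0 reserved for padding)."""
    glove = {}
    with open(glove_path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            glove[parts[0]] = np.asarray(parts[1:], dtype="float32")
    # random initialization for words missing from the lookup table
    weights = np.random.uniform(-0.25, 0.25, (len(vocab) + 1, EMB_DIM)).astype("float32")
    for word, idx in vocab.items():
        if word in glove:
            weights[idx] = glove[word]
    # trainable=True: embeddings are updated by back-propagating classification errors
    return Embedding(len(vocab) + 1, EMB_DIM, weights=[weights],
                     input_length=max_len, trainable=True)
```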
{
"text": "One component of our architecture is a convolutional neural network for feature extraction of Twitter messages. Dealing with variable sized text is inherently built into CNNs. Additionally, these networks, to some extent, take into account the ordering of the words and the context each word appears in. Unlike applications of CNNs in image processing, we only employ one convolutional and max pooling layer. The convolutional layer is used to extract local features around each word window, while the max pooling layer is used to extract the most important features in the feature map.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional neural network",
"sec_num": "2.3"
},
{
"text": "Let's consider a tweet t with length of n tokens. Because of the sliding window manner in which the filters are applied, we apply appropriate padding at the beginning and at the end of the tweet. Padding length is defined as h/2 where h is the window size of the filter. Before we apply the convolutional operation, each word is mapped to its corresponding word embedding. A tweet is represented as a concatenation of these word embeddings, t = [w 1 , w 2 , . . . , w n ], where w i is the word embedding for the i-th word in the tweet and w i \u2208 R 300 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional neural network",
"sec_num": "2.3"
},
{
"text": "In this work, we only use a single filter with window size of 3. As tweets are limited in length, smaller window sizes are more favorable in contrast to larger ones. The network learns a filter W c and a bias term for the filter. The convolutional operation is applied to every possible window of words and as a result a feature x i is produced. We can formally express the operation as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional neural network",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x i = f (W c \u2022 t i:i+h\u22121 + b c ),",
"eq_num": "(1)"
}
],
"section": "Convolutional neural network",
"sec_num": "2.3"
},
{
"text": "where t i:i+h\u22121 is the concatenation of word vectors from position i to position i + h \u2212 1, while f (\u2022) is an activation function. In this work, we choose the hard rectified linear activation function. Each of the produced features are used to generate a feature map",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional neural network",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x = [x 1 , x 2 . . . x n\u2212h+1 ].",
"eq_num": "(2)"
}
],
"section": "Convolutional neural network",
"sec_num": "2.3"
},
{
"text": "Then, the max-over-time pooling operation is applied over the feature map, which takes the maximum valuex = max{x}. The max pooling layer outputs a fixed sized vector with a predefined dimensionality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional neural network",
"sec_num": "2.3"
},
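A small NumPy sketch of Equations (1)-(2) and the max-over-time pooling step may make the shapes concrete. The number of feature maps and the use of the standard rectifier max(0, x) in place of the paper's hard rectified linear unit are assumptions.

```python
import numpy as np

def conv_max_pool(word_vecs, W_c, b_c, h=3):
    """word_vecs: (n, d) word embeddings; W_c: (k, h*d) filter; b_c: (k,) bias."""
    n, d = word_vecs.shape
    pad = np.zeros((h // 2, d), dtype=word_vecs.dtype)
    t = np.vstack([pad, word_vecs, pad])                   # pad the beginning and end of the tweet
    feats = []
    for i in range(t.shape[0] - h + 1):                    # every possible window of h words
        window = t[i:i + h].reshape(-1)                    # concatenation t_{i:i+h-1}
        feats.append(np.maximum(0.0, W_c @ window + b_c))  # Eq. (1), rectified linear activation
    x = np.stack(feats)                                    # feature map, Eq. (2)
    return x.max(axis=0)                                   # max-over-time pooling -> fixed-size vector
```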
{
"text": "The accompanying part of the CNN in our deep learning architecture is a gated recurrent neural network. RNNs make use of sequential data. They perform the same task for every element in a sequence with the output being dependent on previous computations. These networks compute hidden states and each hidden state depends on its predecessor. They can also be seen as having a memory compo-nent, enabling them to look back arbitrarily in the sequence of words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gated recurrent neural network",
"sec_num": "2.4"
},
{
"text": "RNNs suffer from the exploding and vanishing gradient problem. There are two proposed methods for overcoming this issue: the LSTM networks (Hochreiter and Schmidhuber, 1997) and the Gated Recurrent Unit (Chung et al., 2014) . We decided to use GRU because of the fewer model parameters, potentially needing less data to generalize and enabling faster training. GRU has gating units that modulate the flow of information inside the unit. The activation s j t of the GRU at time t is a linear interpolation between the previous activation s j t\u22121 and the candidate activation\u015d j t :",
"cite_spans": [
{
"start": 139,
"end": 173,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF2"
},
{
"start": 203,
"end": 223,
"text": "(Chung et al., 2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gated recurrent neural network",
"sec_num": "2.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s j t = (1 \u2212 z j t )s j t\u22121 + z j t\u015d j t ,",
"eq_num": "(3)"
}
],
"section": "Gated recurrent neural network",
"sec_num": "2.4"
},
{
"text": "where an update gate z j t decides how much the unit updates its activation or content. The update gate is computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gated recurrent neural network",
"sec_num": "2.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "z j t = \u03c3(W z x t + U z s t\u22121 ) j .",
"eq_num": "(4)"
}
],
"section": "Gated recurrent neural network",
"sec_num": "2.4"
},
{
"text": "where \u03c3 is a logistic sigmoid function. The GRU unlike LSTM has no mechanism to control the degree to which it exposes its state and exposes the whole state each time. The candidate activation is computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gated recurrent neural network",
"sec_num": "2.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s j t = tanh(W x t + U (r t s t\u22121 )) j ,",
"eq_num": "(5)"
}
],
"section": "Gated recurrent neural network",
"sec_num": "2.4"
},
{
"text": "where r t is a set of reset gates and is an elementwise multiplication. The reset gate is computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gated recurrent neural network",
"sec_num": "2.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "r j t = \u03c3(W r x t + U r s t\u22121 ) j .",
"eq_num": "(6)"
}
],
"section": "Gated recurrent neural network",
"sec_num": "2.4"
},
{
"text": "This network also produces a fixed vector which is necessary in our model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gated recurrent neural network",
"sec_num": "2.4"
},
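To make Equations (3)-(6) concrete, here is a minimal NumPy sketch of a single GRU step; the weight names and shapes are illustrative, and biases are omitted as in the equations above.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x_t, s_prev, W_z, U_z, W_r, U_r, W, U):
    """One GRU update: x_t is the current word embedding, s_prev the previous activation."""
    z = sigmoid(W_z @ x_t + U_z @ s_prev)             # update gate, Eq. (4)
    r = sigmoid(W_r @ x_t + U_r @ s_prev)             # reset gate, Eq. (6)
    s_cand = np.tanh(W @ x_t + U @ (r * s_prev))      # candidate activation, Eq. (5)
    return (1.0 - z) * s_prev + z * s_cand            # linear interpolation, Eq. (3)

# Running gru_step over all word embeddings of a tweet and keeping the final
# activation yields the fixed-size vector used by the rest of the model.
```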
{
"text": "The outputs from both networks are concatenated to form a single feature vector. This vector is then fed to a fully connected softmax layer. The softmax regression classifier gives probability distribution over the labels in the output space. The label having the highest probability is chosen as the final prediction. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Network fusion",
"sec_num": "2.5"
},
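A minimal Keras-style sketch of the fusion step in Section 2.5; the 100-dimensional branch outputs and the 5-way label space are stand-ins, and the layer names follow the current Keras functional API rather than the exact version used by the authors.

```python
from keras.layers import Input, Dense, concatenate
from keras.models import Model

# stand-ins for the fixed-length outputs of the CNN and GRU branches
cnn_vec = Input(shape=(100,))
gru_vec = Input(shape=(100,))

merged = concatenate([cnn_vec, gru_vec])         # single fused feature vector
probs = Dense(5, activation="softmax")(merged)   # probability distribution over the labels
head = Model(inputs=[cnn_vec, gru_vec], outputs=probs)
# the label with the highest probability is taken as the final prediction (argmax over probs)
```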
{
"text": "Due to the high number of parameters being learned, deep learning methods suffer from overfitting. To counteract this issue, we utilize dropout regularization (Srivastava et al., 2014) , which randomly drops a proportion of hidden units in each iteration of network training. The dropout parameter is set to 0.25. The output size of the convolutional network and the GRU network is set to 100. The network is trained using stochastic gradient descent over shuffled mini-batches using the RMSprop (Tieleman and Hinton, 2012) update rule.",
"cite_spans": [
{
"start": 159,
"end": 184,
"text": "(Srivastava et al., 2014)",
"ref_id": "BIBREF10"
},
{
"start": 496,
"end": 523,
"text": "(Tieleman and Hinton, 2012)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization and model parameters",
"sec_num": "2.6"
},
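Putting the pieces together, a hedged end-to-end sketch of the architecture with the stated hyper-parameters (dropout 0.25, branch outputs of size 100, RMSprop over shuffled mini-batches) could look as follows. The sequence length, vocabulary size, number of classes, filter count, and batch size are assumptions, and Keras 2 layer names are used in place of the authors' original Keras/Theano code.

```python
from keras.layers import (Input, Embedding, Conv1D, GlobalMaxPooling1D,
                          GRU, Dropout, Dense, concatenate)
from keras.models import Model

MAX_LEN, VOCAB_SIZE, EMB_DIM, NUM_CLASSES = 40, 20000, 300, 5  # illustrative sizes

tweet = Input(shape=(MAX_LEN,), dtype="int32")
emb = Embedding(VOCAB_SIZE, EMB_DIM, input_length=MAX_LEN, trainable=True)(tweet)

# convolutional branch: window size 3, 100 feature maps, max-over-time pooling (Section 2.3)
cnn = GlobalMaxPooling1D()(Conv1D(100, 3, padding="same", activation="relu")(emb))
# gated recurrent branch: 100-dimensional final activation (Section 2.4)
gru = GRU(100)(emb)

merged = Dropout(0.25)(concatenate([cnn, gru]))   # dropout regularization, p = 0.25
out = Dense(NUM_CLASSES, activation="softmax")(merged)

model = Model(inputs=tweet, outputs=out)
model.compile(optimizer="rmsprop", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, batch_size=32, epochs=10, shuffle=True)  # shuffled mini-batches
```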
{
"text": "3 Experiments and results",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regularization and model parameters",
"sec_num": "2.6"
},
{
"text": "We train our model on the benchmark datasets provided by the SemEval challenge. However, due to deletion or changed privacy settings, we were not able to retrieve all tweets. For the 2-point classification and quantification we used the datasets from SemEval 2016 and we apply the topic preprocessing step previously mentioned. Moreover, we use positive and negative tweets from previous editions of the challenge to additionally refine our model in spite of the fact that these tweets are not labeled with the related topic. For the 5-point classification and quantification, we only used the dataset from this year's edition of SemEval. The model is trained on the provided training and development sets while as validation set we use the provided devtest set. The testing sets are also provided by the SemEval challenge without the need to download the specific tweets. The distribution of the sentiment labels in both datasets are provided in Table 1 and Table 2. VN N Neu P VP Total Train 107 871 2083 3654 419 7134 Dev 28 200 520 835 191 1774 Test 138 2201 10081 7830 382 20632 Table 2 : Dataset label distribution for Subtasks C and E. (N -negative, VN -very negative, Neu -neutral, VP -very positive, P -positive)",
"cite_spans": [],
"ref_spans": [
{
"start": 947,
"end": 1107,
"text": "Table 1 and Table 2. VN N Neu P VP Total Train 107 871 2083 3654 419 7134 Dev 28 200 520 835 191 1774 Test 138 2201 10081 7830 382 20632 Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1"
},
{
"text": "The performance our model achieves and the official ranking are provided in Table 3 . The systems are ranked by the macroaveraged recall for the Subtask B where higher scores are better. On the other subtasks, systems are ranked by the error functions where lower scores are better. From the obtained results, we can see that our system notably performs best on the quantification subtasks. The merging of the networks provides better performance over their distinctive versions for the quantification tasks. Separately, the CNN and GRNN achieve KLD scores of 0.045 and 0.035 on Subtask D respectively, while only managing 0.761 and 0.632 for the EMD score on Subtask E. On Subtask C, the model surpasses the CNN, which attains a MAE M score of 0.92, but fails in comparison to the GRNN which gets 0.812. On Subtask B, both networks achieve comparable accuracy and F1 score in comparison to our proposed model, but gain better results on the recall measure, improving the performance by \u223c 5 points.",
"cite_spans": [],
"ref_spans": [
{
"start": 76,
"end": 83,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.2"
},
{
"text": "For Subtask B, our model performs best according to the accuracy measure, being ranked 4th. According to the average recall and F1 score, the model does not achieve notable performance although it produces significant improvement over baseline scores, especially for the AvgF1 measure. For the 5-point classification, our model again obtains average performances when compared against other teams.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.2"
},
{
"text": "Concerning Subtask D, our deep learning system produces best KLD score and also a considerable improvement over baseline scores on all three measures. Furthermore, the system gains high results on the 5-point quantification subtask as well, being ranked second. Our model averages a score of 4.3 on the all scores for each subtask, while averaging 4.5 on the main scores. The proposed method of our team is one of the most robust out of all other teams, as it manages second highest average rank on the considered subtasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.2"
},
{
"text": "In this paper, we presented a novel deep learning model for sentiment classification of Twitter messages. We proposed a fusion of CNN and GRNN for extracting features from Twitter messages and a softmax layer for generating class predictions. The deep neural network is trained on top of GloVe word embeddings pre-trained on the Common Crawl dataset. The model effectiveness is evaluated on the Sentiment Analysis in Twitter task from SemEval 2016 where our system achieved second best average rank on the 2-point and 5-point classification and quantification subtasks, testifying for its robustness. Although our model achieved high results, there is room for improvement. For future work, we would like to pre-train word embeddings on a large set of distantly labeled tweets. Additionally, it would be interesting to see the effects of using bi-directional GRNN.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "http://nlp.stanford.edu/projects/glove",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to acknowledge the support of the European Commission through the project MAES-TRA Learning from Massive, Incompletely annotated, and Structured Data (Grant number ICT-2013-612944).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Empirical evaluation of gated recurrent neural networks on sequence modeling",
"authors": [
{
"first": "Junyoung",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.3555"
]
},
"num": null,
"urls": [],
"raw_text": "Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence model- ing. arXiv preprint arXiv:1412.3555.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Deep convolutional neural networks for sentiment analysis of short texts",
"authors": [
{
"first": "C\u00edcero",
"middle": [],
"last": "Nogueira",
"suffix": ""
},
{
"first": "Maira",
"middle": [],
"last": "Santos",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gatti",
"suffix": ""
}
],
"year": 2014,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "69--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C\u00edcero Nogueira dos Santos and Maira Gatti. 2014. Deep convolutional neural networks for sentiment analysis of short texts. In COLING, pages 69-78.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735- 1780.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1408.5882"
]
},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural net- works for sentence classification. arXiv preprint arXiv:1408.5882.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Compositional distributional semantics with long short term memory",
"authors": [
{
"first": "Phong",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Willem",
"middle": [],
"last": "Zuidema",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1503.02510"
]
},
"num": null,
"urls": [],
"raw_text": "Phong Le and Willem Zuidema. 2015. Compositional distributional semantics with long short term memory. arXiv preprint arXiv:1503.02510.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Nrc-canada: Building the state-of-theart in sentiment analysis of tweets",
"authors": [
{
"first": "M",
"middle": [],
"last": "Saif",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1308.6242"
]
},
"num": null,
"urls": [],
"raw_text": "Saif M Mohammad, Svetlana Kiritchenko, and Xiaodan Zhu. 2013. Nrc-canada: Building the state-of-the- art in sentiment analysis of tweets. arXiv preprint arXiv:1308.6242.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "SemEval-2016 task 4: Sentiment analysis in Twitter",
"authors": [
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Fabrizio",
"middle": [],
"last": "Sebastiani",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval '16",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Preslav Nakov, Alan Ritter, Sara Rosenthal, Veselin Stoyanov, and Fabrizio Sebastiani. 2016. SemEval- 2016 task 4: Sentiment analysis in Twitter. In Pro- ceedings of the 10th International Workshop on Se- mantic Evaluation, SemEval '16, San Diego, Califor- nia, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Twitter as a corpus for sentiment analysis and opinion mining",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Pak",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Paroubek",
"suffix": ""
}
],
"year": 2010,
"venue": "LREc",
"volume": "10",
"issue": "",
"pages": "1320--1326",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Pak and Patrick Paroubek. 2010. Twitter as a corpus for sentiment analysis and opinion mining. In LREc, volume 10, pages 1320-1326.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word rep- resentation. In Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Recursive deep models for semantic compositionality over a sentiment treebank",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Perelygin",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Jean",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the conference on empirical methods in natural language processing",
"volume": "1631",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the conference on empirical meth- ods in natural language processing (EMNLP), volume 1631, page 1642. Citeseer.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Dropout: A simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "The Journal of Machine Learning Research",
"volume": "15",
"issue": "1",
"pages": "1929--1958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Twitter sentiment analysis using deep convolutional neural network",
"authors": [
{
"first": "Dario",
"middle": [],
"last": "Stojanovski",
"suffix": ""
},
{
"first": "Gjorgji",
"middle": [],
"last": "Strezoski",
"suffix": ""
},
{
"first": "Gjorgji",
"middle": [],
"last": "Madjarov",
"suffix": ""
},
{
"first": "Ivica",
"middle": [],
"last": "Dimitrovski",
"suffix": ""
}
],
"year": 2015,
"venue": "Hybrid Artificial Intelligent Systems",
"volume": "",
"issue": "",
"pages": "726--737",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dario Stojanovski, Gjorgji Strezoski, Gjorgji Madjarov, and Ivica Dimitrovski. 2015. Twitter sentiment anal- ysis using deep convolutional neural network. In Hybrid Artificial Intelligent Systems, pages 726-737. Springer.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude",
"authors": [
{
"first": "Tijmen",
"middle": [],
"last": "Tieleman",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2012,
"venue": "COURSERA: Neural Networks for Machine Learning",
"volume": "4",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tijmen Tieleman and Geoffrey Hinton. 2012. Lecture 6.5-rmsprop: Divide the gradient by a running aver- age of its recent magnitude. COURSERA: Neural Net- works for Machine Learning, 4:2.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Deep neural network architecture."
},
"TABREF2": {
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "Results and ranks for Subtask B, C, D and E respectively"
}
}
}
}