{
"paper_id": "S16-1005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:24:51.784861Z"
},
"title": "CUFE at SemEval-2016 Task 4: A Gated Recurrent Model for Sentiment Classification",
"authors": [
{
"first": "Mahmoud",
"middle": [],
"last": "Nabil",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Mohamed",
"middle": [],
"last": "Aly",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "KAUST",
"location": {
"region": "KSA"
}
},
"email": ""
},
{
"first": "Amir",
"middle": [
"F"
],
"last": "Atiya",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cairo University",
"location": {
"country": "Egypt"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper we describe a deep learning system that has been built for SemEval 2016 Task4 (Subtask A and B). In this work we trained a Gated Recurrent Unit (GRU) neural network model on top of two sets of word embeddings: (a) general word embeddings generated from unsupervised neural language model; and (b) task specific word embeddings generated from supervised neural language model that was trained to classify tweets into positive and negative categories. We also added a method for analyzing and splitting multi-words hashtags and appending them to the tweet body before feeding it to our model. Our models achieved 0.58 F1-measure for Subtask A (ranked 12/34) and 0.679 Recall for Subtask B (ranked 12/19).",
"pdf_parse": {
"paper_id": "S16-1005",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper we describe a deep learning system that has been built for SemEval 2016 Task4 (Subtask A and B). In this work we trained a Gated Recurrent Unit (GRU) neural network model on top of two sets of word embeddings: (a) general word embeddings generated from unsupervised neural language model; and (b) task specific word embeddings generated from supervised neural language model that was trained to classify tweets into positive and negative categories. We also added a method for analyzing and splitting multi-words hashtags and appending them to the tweet body before feeding it to our model. Our models achieved 0.58 F1-measure for Subtask A (ranked 12/34) and 0.679 Recall for Subtask B (ranked 12/19).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Twitter is a huge microbloging service with more than 500 million tweets per day 1 from different locations in the world and in different languages. This large, continuous, and dynamically updated content is considered a valuable resource for researchers. However many issues should be taken into account while dealing with tweets, namely: (1) informal language used by the users; (2) spelling errors; (3) text in the tweet may be referring to images, videos, or external URLs; (4) emoticons; (5) hashtags used (combining more than one word as a single word); (6) usernames used to call or notify other users; (7) spam or irrelevant tweets; and (8) character limit for a tweet to 140 characters. This poses many challenges when analyzing tweets for natural language processing tasks. In this paper we describe our system used for SemEval 2016 (Nakov et al., 2016b) Subtasks A and B. Subtask A (Message Polarity Classification) requires classifying a tweet's sentiment as positive; negative; or neutral,. Subtask B (Tweet classification according to a two-point scale) requires classifying a tweet's sentiment given a topic as positive or negative. Our system uses a GRU neural network model (Bahdanau et al., 2014) with one hidden layer on top of two sets of word embeddings that are slightly fine-tuned on each training set (see Fig. 1 ). The first set of word embeddings is considered as general purpose embeddings and was obtained by training word2vec (Mikolov et al., 2013) on 20.5 million tweets that we crawled for this purpose. The second set of word embeddings is considered as task specific set, and was obtained by training on a supervised sentiment analysis dataset using another GRU model. We also added a method for analyzing multi-words hashtags by splitting them and appending them to the body of the tweet before feeding it to the GRU model. 
In our experiments we tried both keeping the word embeddings static during the training or fine-tuning them and reported the result for each experiment. We achieved 0.58 F1-measure for Subtask A (ranked 12/34) and 0.679 Recall for Subtask B (ranked 12/19).",
"cite_spans": [
{
"start": 843,
"end": 864,
"text": "(Nakov et al., 2016b)",
"ref_id": "BIBREF10"
},
{
"start": 1191,
"end": 1214,
"text": "(Bahdanau et al., 2014)",
"ref_id": "BIBREF2"
},
{
"start": 1455,
"end": 1477,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 1330,
"end": 1336,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A considerable amount of research has been done to address the problem of sentiment analysis for social content. Nevertheless, most of the stateof-the-art systems still extensively depends on feature engineering, hand coded features, and linguistic resources. Recently, deep learning model gained much attention in sentence text classification inspired from computer vision and speech recognition tasks. Indeed, two of the top four performing systems from SemEval 2015 used deep learning models. (Severyn and Moschitti, 2015 ) used a Convolution Neural Network (CNN) on top of skip-gram model word embeddings trained on 50 million unsupervised tweets. In (Astudillo et al., 2015) the author built a model that uses skip-gram word embeddings trained on 52 million unsupervised tweets then they project these embeddings into a small subspace, finally they used a non-linear model that maps the embedding subspace to the classification space. In (Kim, 2014) the author presented a series of CNN experiments for sentence classification where static and fine-tuned word embeddings were used. Also the author proposed an architecture modification that allow the use of both task-specific and static vectors. In (Lai et al., 2015) the author proposed a recurrent convolutional neural network for text classification. Finally regarding feature engineering methods, (B\u00fcchner and Stein, 2015) the top performing team in SemEval 2015, used an ensemble learning approach that averages the confidence scores of four classifiers. The model uses a large set of linguistic resources and hand coded features.",
"cite_spans": [
{
"start": 496,
"end": 524,
"text": "(Severyn and Moschitti, 2015",
"ref_id": "BIBREF11"
},
{
"start": 655,
"end": 679,
"text": "(Astudillo et al., 2015)",
"ref_id": "BIBREF0"
},
{
"start": 943,
"end": 954,
"text": "(Kim, 2014)",
"ref_id": "BIBREF6"
},
{
"start": 1205,
"end": 1223,
"text": "(Lai et al., 2015)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Fig 1 shows the architecture of our deep learning model. The core of our network is a GRU layer, which we chose because (1) it is more computational efficient than Convolutional Neural Network (CNN) models (Lai et al., 2015) that we experimented with but were much slower; (2) it can capture long semantic patterns without tuning the model parameter, unlike CNN models where the model depends on the length of the convolutional feature maps for capturing long patterns; (3) it achieved superior performance to CNNs in our experiments.",
"cite_spans": [
{
"start": 206,
"end": 224,
"text": "(Lai et al., 2015)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "3"
},
{
"text": "Our network architecture is composed of a word embeddings layer, a merge layer, dropout layers, a GRU layer, a hyperbolic tangent tanh layer, and a soft-max classification layer. In the following we give a brief description of the main components of the architecture.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "3"
},
{
"text": "This is the first layer in the network where each tweet is treated as a sequence of words w 1 , w 2 ...w S of length S, where S is the maximum tweet length. We set S to 40 as the length of any tweet is limited to 140 character. We used zero padding while dealing with short tweets. Each word w i is represented by two embedding vectors w i 1 , w i 2 \u2208R d where d is the embedding dimension, and according to (Astudillo et al., 2015) setting d to 200 is a good choice with respect to the performance and the computation efficiency. w i 1 is considered a general-purpose embedding vector while w i 2 is considered a task-specific embedding vector. We performed the following steps to initialize both types of word embeddings:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding Layer",
"sec_num": "3.1"
},
{
"text": "1. For the general word embeddings we collected about 40M tweets using twitter streaming API over a period of two month (Dec. 2015 and Jan. 2016). We used three criteria while collecting the tweets: (a) they contain at least one emoticon in a set of happy and sad emoticons like ':)' ,':(', ':D' ... etc. (Go et al., 2009) ; (b) hash tags collected from SemEval 2016 data set; (c) hash tags collected from SemEval 2013 data set. After preparing the tweets as described in Section 4 and removing retweets we ended up with about 19 million tweet. We also appended 1.5 million tweets from Sentiment140 (Go et al., 2009) corpus after preparation so we end up with about 20.5 million tweet. To train the general embeddings we used word2vec (Mikolov et al., 2013) neural language model skipgram model with window size 5, negative sampling and filtered out words with frequency less than 5.",
"cite_spans": [
{
"start": 305,
"end": 322,
"text": "(Go et al., 2009)",
"ref_id": "BIBREF5"
},
{
"start": 599,
"end": 616,
"text": "(Go et al., 2009)",
"ref_id": "BIBREF5"
},
{
"start": 735,
"end": 757,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding Layer",
"sec_num": "3.1"
},
{
"text": "2. For the task specific word embeddings we used semi-supervised 1.5 million tweets from sen-timent140 corpus, where each tweet is tagged either positive or negative according to the tweet's sentiment . Then we applied another GRU model similar to Fig 1 with a modification to the soft-max layer for the purpose of the two classes classification and with random initialized embeddings that are fine-tuned during the training. We used the resulting fine-tuned embeddings as task-specific since they contain contextual semantic meaning from the training process.",
"cite_spans": [],
"ref_spans": [
{
"start": 248,
"end": 258,
"text": "Fig 1 with",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Embedding Layer",
"sec_num": "3.1"
},
{
"text": "The purpose of this layer is to concatenate the two types of word embeddings used in the previous layer in order to form a sequence of length 2S that can be used in the following GRU layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Merge Layer",
"sec_num": "3.2"
},
{
"text": "The purpose of this layer is to prevent the previous layer from overfitting (Srivastava et al., 2014) where some units are randomly dropped during training so the regularization of these units is improved.",
"cite_spans": [
{
"start": 76,
"end": 101,
"text": "(Srivastava et al., 2014)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dropout Layers",
"sec_num": "3.3"
},
{
"text": "This is the core layer in our model which takes an input sequence of length 2S words each having dimension d (i.e. input dimension is 2Sd) . The gated recurrent network proposed in (Bahdanau et al., 2014) is a recurrent neural network (a neural network with feedback connection, see (Atiya and Parlos, 2000)) where the activation h j t of the neural unit j at time t is a linear interpolation between the previous activation h j t\u22121 at time t \u2212 1 and the candidate activatio\u00f1 h j t (Chung et al., 2014) :",
"cite_spans": [
{
"start": 181,
"end": 204,
"text": "(Bahdanau et al., 2014)",
"ref_id": "BIBREF2"
},
{
"start": 482,
"end": 502,
"text": "(Chung et al., 2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "GRU Layer",
"sec_num": "3.4"
},
{
"text": "h j t = (1\u2212z j t )h j t\u22121 + z j th j t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GRU Layer",
"sec_num": "3.4"
},
{
"text": "where z j t is the update gate that determines how much the unit updates its content, andh j t is the newly computed candidate state.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GRU Layer",
"sec_num": "3.4"
},
{
"text": "The purpose of this layer is to allow the neural network to make complex decisions by learning nonlinear classification boundaries. Although the tanh function takes more training time than the Rectified Linear Units (ReLU), tanh gives more accurate results in our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tanh Layer",
"sec_num": "3.5"
},
{
"text": "This is last layer in our network where the output of the tanh layer is fed to a fully connected soft-max layer. This layer calculates the classes probability distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soft-Max Layer",
"sec_num": "3.6"
},
{
"text": "P (y = c | x, b) = exp w T c x + b c K k=1 exp w T k x + b k",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soft-Max Layer",
"sec_num": "3.6"
},
{
"text": "where c is the target class, x is the output from the previous layer, w k and b k are the weight and the bias of class k, and K is the total number of classes. The difference between the architecture used for Subtask A and Subtask B is in this layer, where for Subtask A three neurons were used (i.e. K = 3) while for Subtask B only two neurons were used (i.e. K = 2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soft-Max Layer",
"sec_num": "3.6"
},
{
"text": "All the data used either for training the word embeddings or for training the sentiment classification model undergoes the following preprocessing steps: 2. Using hand-coded tokenization regex to split the following suffixes: 's, 've, 't , 're, 'd, 'll. 3. Using the patterns described in Table 1 to normalize each tweet.",
"cite_spans": [
{
"start": 230,
"end": 253,
"text": "'ve, 't , 're, 'd, 'll.",
"ref_id": null
}
],
"ref_spans": [
{
"start": 289,
"end": 296,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "4"
},
{
"text": "4. Adding StartToken and EndToken at the beginning and the ending of each tweet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "4"
},
{
"text": "5. Splitting multi-word hashtags as explained below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "4"
},
{
"text": "Consider the following tweet \"Thinking of reverting back to 8.1 or 7. #Windows10Fail\". The sentiment of the tweet is clearly negative and the simplest way to give the correct tag is by looking at the word \"Fail\" in the hashtag \"#Windows10Fail\". For this reason we added a depth first search dictionary method in order to infer the location of spaces inside each hashtag in the tweet and append the result tokens to the tweet's end. We used 125k words dictionary 3 collected from Wikipedia. In the given example, we first lower the hashtag case, remove numbers and underscores from the hashtag then we apply our method to split the hashtag this results in two tokens \"windows\" and \"fail\". Hence, we append these two tokens to the end of the tweet and the normal preparation steps continue. After the preparation the tweet will look like \" StartToken Thinking of reverting back to NUM or NUM . #Windows10Fail. windows fail EndToken \". ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "4"
},
{
"text": "In order to train and test our model for Subtask A, we used the dataset provided for SemEval-2016 Task 4 and SemEval-2013 Task 2. We obtained 8,978 from the first dataset and 7,130 from the second, the remaining tweets were not available. So, we ended up with a dataset of 16,108 tweets. Regarding Subtask B we obtained 6,324 from SemEval-2016 provided dataset. We partitioned both datasets into train and development portions of ratio 8:2. Table 2 shows the distribution of tweets for both Subtasks. For optimizing our network weights we used Adam (Kingma and Ba, 2014), a new and computationally efficient stochastic optimization method. All the experiments have been developed using Keras 4 deep learning library with Theano 5 backend and with CUDA enabled. The model was trained using the default parameters for Adam optimizer, and we tried either to keep the weights of embedding layer static or slightly fine-tune them by using a dropout probability equal to 0.9. Table 3 shows our results on the development part of the data set for Subtask A and B where we report the official performance measure for both subtasks (Nakov et al., 2016a) . From 3 the results it is shown that fine-tuning word embeddings with hashtags splitting gives the best results on the development set. All our experiments were performed on a machine with Intel Core i7-4770 CPU @ 3.40GHz (8 cores), 16GB of RAM and GeForce GT 640 GPU. Table 4 shows our individual results on different SemEval datasets. Table 5 shows our results for Subtask B. From the results and our rank in both Subtasks, we noticed that our system was not satisfactory compared to other teams this was due to the following reasons:",
"cite_spans": [
{
"start": 1123,
"end": 1144,
"text": "(Nakov et al., 2016a)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 441,
"end": 448,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 970,
"end": 977,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 1415,
"end": 1422,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 1483,
"end": 1490,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "1. We used the development set to validate our model in order to find the best learning parameters, However we mistakenly used the learning accuracy to find the optimal learning parameters especially the number of the training epochs. This significantly affected our rank based on the official performance measure. Table 4 and Table 5 show the old and the new results after fixing this bug.",
"cite_spans": [],
"ref_spans": [
{
"start": 315,
"end": 334,
"text": "Table 4 and Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "2. Most of the participating teams in this year competition used deep learning models and they used huge datasets (more than 50M tweets) to train and refine word embeddings according to the emotions of the tweet. However, we only used 1.5M from sentiment140 corpus to generate task-specific embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "3. The model used for generating the task-specific embeddings for Subtask A should be trained on three classes not only two (positive, negative, and neutral) where if the tweet contains positive emotions like \":)\" should be positive, if it contains negative emotions like \":(\" should be negative, and if it contains both or none it should be neutral.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "In this paper, we presented our deep learning model used for SemEval2016 Task4 (Subtasks A and B).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The model uses a gated recurrent layer as a core layer on top of two types of word embeddings (general-purpose and task-specific). Also we described our steps in generating both types word embeddings and how we prepared the dataset used especially when dealing with multi-words hashtags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The system ranked 12th on Subtask A and 12th for Subtask B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "http://internetlivestats.com/ twitter-statistics/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://nltk.org/api/nltk.tokenize.html 3 http://pasted.co/c1666a6b",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://keras.io/ 5 http://deeplearning.net/software/ theano/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work has been funded by ITIDA's ITAC project number CFP65.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Inesc-id: Sentiment analysis without hand-coded features or liguistic resources using embedding subspaces",
"authors": [
{
"first": "Silvio",
"middle": [],
"last": "Ramon F Astudillo",
"suffix": ""
},
{
"first": "Wang",
"middle": [],
"last": "Amir",
"suffix": ""
},
{
"first": "Bruno",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "M\u00e1rio",
"middle": [],
"last": "Martins",
"suffix": ""
},
{
"first": "Isabel",
"middle": [],
"last": "Silva",
"suffix": ""
},
{
"first": "Rua Alves",
"middle": [],
"last": "Trancoso",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Redol",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramon F Astudillo, Silvio Amir, Wang Ling, Bruno Mar- tins, M\u00e1rio Silva, Isabel Trancoso, and Rua Alves Redol. 2015. Inesc-id: Sentiment analysis without hand-coded features or liguistic resources using em- bedding subspaces. SemEval-2015, page 652.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "New results on recurrent network training: unifying the algorithms and accelerating convergence",
"authors": [
{
"first": "F",
"middle": [],
"last": "Amir",
"suffix": ""
},
{
"first": "Alexander G Parlos",
"middle": [],
"last": "Atiya",
"suffix": ""
}
],
"year": 2000,
"venue": "IEEE Transactions on",
"volume": "11",
"issue": "3",
"pages": "697--709",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amir F Atiya and Alexander G Parlos. 2000. New results on recurrent network training: unifying the algorithms and accelerating convergence. Neural Networks, IEEE Transactions on, 11(3):697-709.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.0473"
]
},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Webis: An ensemble for twitter sentiment detection",
"authors": [],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthias Hagen Martin Potthast Michel B\u00fcchner and Benno Stein. 2015. Webis: An ensemble for twitter sentiment detection. SemEval-2015, page 582.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Empirical evaluation of gated recurrent neural networks on sequence modeling",
"authors": [
{
"first": "Junyoung",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.3555"
]
},
"num": null,
"urls": [],
"raw_text": "Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence model- ing. arXiv preprint arXiv:1412.3555.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Twitter sentiment classification using distant supervision",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Go",
"suffix": ""
},
{
"first": "Richa",
"middle": [],
"last": "Bhayani",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Go, Richa Bhayani, and Lei Huang. 2009. Twit- ter sentiment classification using distant supervision. CS224N Project Report, Stanford, 1:12.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Diederik Kingma and Jimmy Ba",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "Convolutional neural networks for sentence classification",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1408.5882"
]
},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sen- tence classification. arXiv preprint arXiv:1408.5882. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Recurrent convolutional neural networks for text classification",
"authors": [
{
"first": "Siwei",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Liheng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2015,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "2267--2273",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siwei Lai, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Recurrent convolutional neural networks for text clas- sification. In AAAI, pages 2267-2273.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representa- tions in vector space. arXiv preprint arXiv:1301.3781.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Evaluation measures for the semeval-2016 task 4 sentiment analysis in twitter",
"authors": [
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Fabrizio",
"middle": [],
"last": "Sebastiani",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Preslav Nakov, Alan Ritter, Sara Rosenthal, Fabrizio Se- bastiani, and Veselin Stoyanov. 2016a. Evaluation measures for the semeval-2016 task 4 sentiment anal- ysis in twitter (draft: Version 1.1).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "SemEval-2016 task 4: Sentiment analysis in Twitter",
"authors": [
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Fabrizio",
"middle": [],
"last": "Sebastiani",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Preslav Nakov, Alan Ritter, Sara Rosenthal, Veselin Stoy- anov, and Fabrizio Sebastiani. 2016b. SemEval-2016 task 4: Sentiment analysis in Twitter. In Proceedings of the 10th International Workshop on Semantic Eval- uation (SemEval 2016), San Diego, California, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Unitn: Training deep convolutional neural network for twitter sentiment classification",
"authors": [
{
"first": "Aliaksei",
"middle": [],
"last": "Severyn",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 9th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "464--469",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aliaksei Severyn and Alessandro Moschitti. 2015. Unitn: Training deep convolutional neural network for twitter sentiment classification. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), Association for Computational Lin- guistics, Denver, Colorado, pages 464-469.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Dropout: A simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "The Journal of Machine Learning Research",
"volume": "15",
"issue": "1",
"pages": "1929--1958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "The architecture of the GRU deep Learning model",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"html": null,
"text": "",
"num": null,
"type_str": "table",
"content": "<table><tr><td>: Normalization Patterns</td></tr><tr><td>1. Using NLTK twitter tokenizer 2 to tokenize</td></tr><tr><td>each tweet.</td></tr></table>"
},
"TABREF2": {
"html": null,
"text": "Tweets distribution for Subtask A and B",
"num": null,
"type_str": "table",
"content": "<table><tr><td>Dataset</td><td colspan=\"2\">Subtask A Subtask B</td></tr><tr><td>GRU-static</td><td>0.635</td><td>0.826</td></tr><tr><td>GRU-fine-tuned</td><td>0.639</td><td>0.829</td></tr><tr><td>GRU-fine-tuned + Split Hashtag</td><td>0.642</td><td>0.830</td></tr></table>"
},
"TABREF3": {
"html": null,
"text": "Development results for Subtask A and B. Note: average F1-mesure for positive and negative classes is used for Subtask A, while the average recall is used for Subtask B.",
"num": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF5": {
"html": null,
"text": "Results for Subtask A on different SemEval datasets.",
"num": null,
"type_str": "table",
"content": "<table><tr><td>Dataset</td><td colspan=\"3\">Baseline Recall (Old) Recall (New)</td></tr><tr><td>Tweet-2016</td><td>0.389</td><td>0.679</td><td>0.767</td></tr></table>"
},
"TABREF6": {
"html": null,
"text": "Result for Subtask B on SemEval 2016 dataset.",
"num": null,
"type_str": "table",
"content": "<table/>"
}
}
}
}