|
{ |
|
"paper_id": "S16-1016", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:26:13.684913Z" |
|
}, |
|
"title": "GTI at SemEval-2016 Task 4: Training a Naive Bayes Classifier using Features of an Unsupervised System", |
|
"authors": [ |
|
{

"first": "Jonathan",

"middle": [],

"last": "Juncal-Mart\u00ednez",

"suffix": "",

"affiliation": {},

"email": ""

},

{

"first": "Tamara",

"middle": [],

"last": "\u00c1lvarez-L\u00f3pez",

"suffix": "",

"affiliation": {},

"email": ""

},
|
{ |
|
"first": "Milagros", |
|
"middle": [], |
|
"last": "Fern\u00e1ndez-Gavilanes", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Enrique", |
|
"middle": [], |
|
"last": "Costa-Montenegro", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Francisco", |
|
"middle": [ |
|
"Javier" |
|
], |
|
"last": "Gonz\u00e1lez-Casta\u00f1o", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper presents the approach of the GTI Research Group to SemEval-2016 task 4 on Sentiment Analysis in Twitter, or more specifically, subtasks A (Message Polarity Classification), B (Tweet classification according to a two-point scale) and D (Tweet quantification according to a two-point scale). We followed a supervised approach based on the extraction of features by a dependency parsing-based approach using a sentiment lexicon and Natural Language Processing techniques.", |
|
"pdf_parse": { |
|
"paper_id": "S16-1016", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper presents the approach of the GTI Research Group to SemEval-2016 task 4 on Sentiment Analysis in Twitter, or more specifically, subtasks A (Message Polarity Classification), B (Tweet classification according to a two-point scale) and D (Tweet quantification according to a two-point scale). We followed a supervised approach based on the extraction of features by a dependency parsing-based approach using a sentiment lexicon and Natural Language Processing techniques.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "In recent years, research on the field of Sentiment Analysis (SA) has increased considerably, due to the growth of user content generated in social networks, blogs and other platforms on the Internet. These are considered valuable information for companies, which seek to know or even predict the acceptance of their products, to design their marketing campaigns more efficiently. One of these sources of information is Twitter, where users can write about any topic, using colloquial and compact language. As a consecuence, SA in Twitter is specially challenging, as opinions are expressed in one or two short sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Many approaches have been proposed for SA, and can be roughly divided into two categories. The first one tries to capture and model linguistic knowledge through the use of dictionaries (Taboada et al., 2011) containing words that are tagged with their semantic orientation. These methods detect the words present in a text using different strategies involving lexics, syntax or semantics (Quinn et al., 2010) . The other one is machine learning-based, which is currently the most predominant approach including supervised learning and deep learning. They widely use classifiers including Support Vector Machines (SVM), Maximum Entropy Models (MAXENT), and Naive Bayes classifiers. Most of the time, they are built from features of a \"bag of words\" representation (Pak and Paroubek, 2010) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 185, |
|
"end": 207, |
|
"text": "(Taboada et al., 2011)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 388, |
|
"end": 408, |
|
"text": "(Quinn et al., 2010)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 763, |
|
"end": 787, |
|
"text": "(Pak and Paroubek, 2010)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our group has participated in SemEval-2016 task 4 on Sentiment Analysis in Twitter, subtasks A (Message Polarity Classification), B (Tweet classification according to a two-point scale) and D (Tweet quantification according to a two-point scale) (Nakov et al., 2016b) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 246, |
|
"end": 267, |
|
"text": "(Nakov et al., 2016b)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The remainder of this article is structured as follows: Section 2 presents in detail the system proposed for the performance of these subtasks, and Section 3 shows the results obtained and discusses them. Finally, Section 4 summarizes the main findings and conclusions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our main objective was to create a supervised system using extracted features from an unsupervised system described in (Fern\u00e1ndez-Gavilanes et al., 2015) . This last approach comprises different processing stages, including the generation of sentiment lexicons, test preprocessing and the application of different methods for determining contextual polarity based on syntactical structure. This makes our approach robust in diverse contexts without the need for previous manual tagging of datasets. As we can decide independently which modules of the unsupervised system to use or not, it was easy to extract different features from each one individually or together. Once extracted, classification was applied using Weka tool (Hall et al., 2009) . This environment contains a collection of machine learning-based algorithms for data mining tasks, such as, classification, regression, clustering, association rules, and visualization. The new supervised system was built with a Naive Bayes classifier.", |
|
"cite_spans": [ |
|
{ |
|
"start": 119, |
|
"end": 153, |
|
"text": "(Fern\u00e1ndez-Gavilanes et al., 2015)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 727, |
|
"end": 746, |
|
"text": "(Hall et al., 2009)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Overview", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The first extracted features of the unsupervised system were the different sentiment outputs of the modules combination. As mentioned before, modules can be enabled and disabled independently. With this feature, multiple sentiment outputs were obtained from these combinations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Modules combination features", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The unsupervised system has four different modules (\"intensification treatment\" (I), \"negation treatment\" (N), \"polarity conflict treatment\" (C) and \"adversative/concessive clause treatment\" (A/CO)). In total, there were 14 possible combinations: one by one, combining pairs or groups of three of them, and all of them at once (the latter is the default output of the unsupervised system). In subtask A, each output obtained is defined by a sentiment value contained between three possible ones: negative, neutral or positive. However, in subtask B, the sentiment value obtained for each combination only can be contained between two possible ones: negative or positive. So, the result of each one of these 14 combinations was considered as a feature. All of them are defined in Table 1 . ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 779, |
|
"end": 786, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Modules combination features", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Combination Subtask A Subtask B I POSITIVE NEGATIVE NEUTRAL POSITIVE NEGATIVE N C A/CO I + N I + C I + A/CO N + C N + A/CO C + A/CO I + N + C I + N + A/CO N + C + A/CO ALL", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Modules combination features", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "In addition to the previous modules combination results extracted, other features were also extracted from each module independently. Each tweet was represented as a vector of generic and relational features. Generic features are those that are not related to a scope in a given tweet, and relational features represent the corresponding scope needed for each module. For example, in the negation module, the scope would begin in the unigram that caused the negation (the negator term itself), and would cover all affected unigrams in a branch of the dependencies tree, detected by its syntactic function. For this reason, both types of features can be distinguished. The option chosen to mark the scope was to use relational attributes. With them, unigram to unigram can be stored with all its associated features: such as it is an intensifier, a negator, a part of the scope of negation, etc.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Individual modules features", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Generic features: The first features introduced are not related to a scope, and involve:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Individual modules features", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "\u2022 Phrases: the number of phrases of a particular tweet.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Individual modules features", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "\u2022 Adjectives: the number of existing adjectives in a given tweet.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Individual modules features", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "\u2022 Common names: the number of existing common names in a given tweet.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Individual modules features", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "\u2022 Verbs: the number of existing verbs in a given tweet (except auxiliary verbs).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Individual modules features", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "\u2022 Positive/negative polarity unigrams: the number of unigrams with positive/negative polarity in a given tweet.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Individual modules features", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "\u2022 Positive/negative emoticons: the number of positive/negative emoticons (with positive/negative polarity) in a given tweet.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Individual modules features", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "\u2022 Positive/negative intensifications: the number of positive/negative intensifications in a given tweet.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Individual modules features", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "\u2022 Unigrams: all lemmas were considered (except hashtag, mention, URL, unigrams with numbers, unigrams with length 1 and punctuation marks).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Individual modules features", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Relational features: They can be defined as an array of features. Each unigram of a given tweet has assigned all the features defined in the relational, so it is easy to mark the scope of treatment of each of the separate modules. Then, all features introduced for each unigram in the relational are detailed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Individual modules features", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "\u2022 Part of speech: it can take one of the next five values: adjective, common name, verb, adverb or other.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Individual modules features", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "\u2022 Polarity value: it can take one of the next seven values: negative +, negative, negative -, none, positive -, positive and positive +.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Individual modules features", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "\u2022 Is intensifier: it indicates if an unigram is an intensifier. It can take one of the next five values: intensity --, intensity -, none, intensity +, intensity + +.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Individual modules features", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "\u2022 Was intensified: it indicates if an unigram was intensified. It can take one of the next seven values: negative +, negative, negative -, none, positive -, positive and positive +.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Individual modules features", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "\u2022 Conflict unigram: it indicates if an unigram causes a polarity conflict, with its polarity converted to intensity. It can take one of the next five values: intensity --, intensity -, none, intensity +, intensity + +.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Individual modules features", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "\u2022 Affected unigram: it indicates when an unigram is affected by a conflict unigram, modifying its polarity value. It can take one of the next seven values: negative +, negative, negative -, none, positive -, positive and positive +.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Individual modules features", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "\u2022 Negator unigram: it indicates when an unigram is a negator, modifying the polarity value of the subsequent unigrams. It can take one of the next two values: 0 if it isn't a negator or 1 if it is.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Individual modules features", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "\u2022 Negated unigram: it indicates when an unigram is affected by a negator, modifying its polarity value. It can take one of the next seven values: negative +, negative, negative -, none, positive -, positive and positive +. This is the value contributed by that unigram in a negated branch of the dependencies tree (the scope).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Individual modules features", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Once features were extracted, the next step was to create a model to predict sentiment in testing datasets. Previously, it was said that Weka contains a collection of machine learning algorithms for data mining tasks. Several algorithms were tested, such as Support Vector Machines (SVM) (Mullen and Collier, 2004) , Large-Scale Linear (LIBLIN-EAR) (Fan et al., 2008) or Hidden Markov Model (HMM) (Soni and Sharaff, 2015) , but the best results obtained were with Naive Bayes (Tan et al., 2009) . Also, 10-fold cross-validation was used to obtain the best classification model with the training dataset. Once all classification models were obtained, in subtask A the model with the best F-measure was selected, while in subtask B the selected model was the one with the best recall (R), as the organization proposed. For the subtask D, the subtask B results were taken into account. A previous step before the selection of the best classification model is needed. Most algorithms do not accept as input relational attributes, so it was necessary to apply an unsupervised filter by attribute, RELAGG, both in training and test files. It processes all relational attributes that fall into the user defined range, making them nominal attributes. In Naive Bayes algorithm, the default settings were used, for both the training and the testing datasets, as they are defined in Weka. Finally, applying the best model for each subtask on the corresponding testing dataset, the final sentiment prediction for all tweets was obtained. performance of the system is measured by means of the normalized cross-entropy, better known as Kullback-Leibler Divergence (KLD). In this last case, there is a minor modification in the formula, with a smoothed version of the originals p(c j ) and p(c j ), and a smoothing factor . All of these measurements are described in (Nakov et al., 2016a) . 
Table 2 presents the overall scores for subtasks A, B and D, in their respective test sets: F-measure, recall and KLD, respectively. The third column shows the unsupervised approach results (UAR) and the fourth shows the supervised approach results (SAR) obtained this year. After performing several experiments on the training, development and development-test datasets provided by organizers, the neutral sentiment intervals were set to [-0.5, 0.5] for subtask A and [-0.05, 0 .05] for subtask B (subtask D depends on subtask B). More specifically, in subtask A, our supervised approach was tested with SemEval-2014 development-test, SemEval-2015 development-test and 2016 development-test datasets provided; in subtask B, it was tested with 2016 development-test dataset; and for subtask D, the 2016 developmenttest dataset results in subtask B were taken into account. In development time, the improvement of our supervised system was between 1 and 3 % compared to our unsupervised system for subtasks A and B, and for subtask D a difference of -0.02 KLD.", |
|
"cite_spans": [ |
|
{ |
|
"start": 288, |
|
"end": 314, |
|
"text": "(Mullen and Collier, 2004)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 349, |
|
"end": 367, |
|
"text": "(Fan et al., 2008)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 397, |
|
"end": 421, |
|
"text": "(Soni and Sharaff, 2015)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 476, |
|
"end": 494, |
|
"text": "(Tan et al., 2009)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 1852, |
|
"end": 1873, |
|
"text": "(Nakov et al., 2016a)", |
|
"ref_id": "BIBREF3" |
|
}
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1876, |
|
"end": 1883, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sentiment prediction", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "In order to assess the improvement of our supervised system regarding our unsupervised system, a comparison is performed in the test sets of this year, as it can be seen in Table 2 . With these results, we can say that the new approach, in most cases, improves the unsupervised system, between 0.19 and 1.73 % for subtask A and B (except in Twitter Sarcasm 2014), and a difference of -0.012 in subtask D.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 173, |
|
"end": 180, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sentiment prediction", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "This paper describes the participation of the GTI Research Group, AtlantTIC Centre, University of Vigo, in SemEval-2016 task 4: Sentiment Analysis in Twitter. The results were achieved using a supervised system with extracted features from an unsupervised system, described in (Fern\u00e1ndez-Gavilanes et al., 2015) . Table 3 shows the position of this approach in the ranking published for subtasks A, B and D for the datasets evaluated. The unsupervised approach consists of sentiment propagation rules on dependencies where features were selected (as the different sentiment outputs of the modules combination), and a vector of generic (features not related to a scope in a given tweet) and relational (features extracted from the scope in each treatment performed in each module) features. The results denote a low/medium improvement in subtask A regarding the unsupervised system, and a low improvement in the subtask B (also reflected in the subtask D). Although the new approach is supervised, the fact of using only features of an unsupervised system makes it totally different from other approaches, and still has margin of improvement adding new external features.", |
|
"cite_spans": [ |
|
{ |
|
"start": 277, |
|
"end": 311, |
|
"text": "(Fern\u00e1ndez-Gavilanes et al., 2015)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 314, |
|
"end": 321, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Experimental resultsIn this section, the conducted experiments for subtasks A, B and D are described. The experiments were carried out using the datasets provided by SemEval-2016 task organizers. These datasets are composed of texts extracted from Twitter, and in the case of the subtasks B and D, with a given topic. In subtask A, the number of tweets is 32009 and the performance of the system is measured by means of the F-score. In subtask B, the number of tweets is 10551 and the performance of the system is measured by means of the macroaveraged recall. Finally, in subtask D, as in subtask B, the number of tweets is 10551 (same dataset) but this time, the", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work was supported by the Spanish Government, co-financed by the European Regional Development Fund (ERDF) under project TACTICA.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "LIBLINEAR: A Library for Large Linear Classification", |
|
"authors": [ |
|
{

"first": "Rong-En",

"middle": [],

"last": "Fan",

"suffix": ""

},

{

"first": "Kai-Wei",

"middle": [],

"last": "Chang",

"suffix": ""

},

{

"first": "Cho-Jui",

"middle": [],

"last": "Hsieh",

"suffix": ""

},

{

"first": "Xiang-Rui",

"middle": [],

"last": "Wang",

"suffix": ""

},

{

"first": "Chih-Jen",

"middle": [],

"last": "Lin",

"suffix": ""

}
|
], |
|
"year": 2008, |
|
"venue": "J. Mach. Learn. Res", |
|
"volume": "9", |
|
"issue": "", |
|
"pages": "1871--1874", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A Li- brary for Large Linear Classification. J. Mach. Learn. Res., 9:1871-1874, June.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "GTI: An Unsupervised Approach for Sentiment Analysis in Twitter", |
|
"authors": [ |
|
{ |
|
"first": "Milagros", |
|
"middle": [], |
|
"last": "Fern\u00e1ndez-Gavilanes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tamara\u00e1lvarez", |
|
"middle": [], |
|
"last": "L\u00f3pez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Juncal-Mart\u00ednez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Enrique", |
|
"middle": [], |
|
"last": "Costa-Montenegro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francisco", |
|
"middle": [ |
|
"Javier" |
|
], |
|
"last": "Gonz\u00e1lez-Casta\u00f1o", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 9th International Workshop on Semantic Evaluation", |
|
"volume": "11", |
|
"issue": "", |
|
"pages": "10--18", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Milagros Fern\u00e1ndez-Gavilanes, Tamara\u00c1lvarez L\u00f3pez, Jonathan Juncal-Mart\u00ednez, Enrique Costa- Montenegro, and Francisco Javier Gonz\u00e1lez-Casta\u00f1o. 2015. GTI: An Unsupervised Approach for Sen- timent Analysis in Twitter. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 533-538, Denver, Colorado, June. Association for Computational Linguistics. Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The WEKA Data Mining Software: An Update. SIGKDD Explor. Newsl., 11(1):10-18, November.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Sentiment Analysis using Support Vector Machines with Diverse Information Sources", |
|
"authors": [ |
|
{ |
|
"first": "Tony", |
|
"middle": [], |
|
"last": "Mullen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nigel", |
|
"middle": [], |
|
"last": "Collier", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of EMNLP 2004", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "412--418", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tony Mullen and Nigel Collier. 2004. Sentiment Anal- ysis using Support Vector Machines with Diverse In- formation Sources. In Dekang Lin and Dekai Wu, ed- itors, Proceedings of EMNLP 2004, pages 412-418, Barcelona, Spain, July. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Evaluation Measures for the Semeval-2016 task 4: Sentiment Analysis in Twitter (Draft: Version 1.12)", |
|
"authors": [ |
|
{ |
|
"first": "Preslav", |
|
"middle": [], |
|
"last": "Nakov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Ritter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sara", |
|
"middle": [], |
|
"last": "Rosenthal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fabrizio", |
|
"middle": [], |
|
"last": "Sebastiani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Preslav Nakov, Alan Ritter, Sara Rosenthal, Fabrizio Se- bastiani, and Veselin Stoyanov. 2016a. Evaluation Measures for the Semeval-2016 task 4: Sentiment Analysis in Twitter (Draft: Version 1.12). In Proceed- ings of the 10th International Workshop on Seman- tic Evaluation (SemEval 2016), San Diego, California, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "SemEval-2016 task 4: Sentiment Analysis in Twitter", |
|
"authors": [ |
|
{ |
|
"first": "Preslav", |
|
"middle": [], |
|
"last": "Nakov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Ritter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sara", |
|
"middle": [], |
|
"last": "Rosenthal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fabrizio", |
|
"middle": [], |
|
"last": "Sebastiani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Preslav Nakov, Alan Ritter, Sara Rosenthal, Veselin Stoy- anov, and Fabrizio Sebastiani. 2016b. SemEval-2016 task 4: Sentiment Analysis in Twitter. In Proceedings of the 10th International Workshop on Semantic Eval- uation (SemEval 2016), San Diego, California, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Twitter as a corpus for sentiment analysis and opinion mining", |
|
"authors": [ |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Pak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Paroubek", |
|
"suffix": "" |
|
}
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexander Pak and Patrick Paroubek. 2010. Twit- ter as a corpus for sentiment analysis and opinion mining. In Nicoletta Calzolari (Conference Chair), Khalid Choukri, Bente Maegaard, Joseph Mariani, Jan Odijk, Stelios Piperidis, Mike Rosner, and Daniel Tapias, editors, Proceedings of the Seventh Interna- tional Conference on Language Resources and Evalu- ation (LREC'10), Valletta, Malta, may. European Lan- guage Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "How to analyze political attention with minimal assumptions and costs", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Quinn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Burt", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Monroe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Colaresi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Crespin", |
|
"suffix": "" |
|
}, |
|
{

"first": "Dragomir",

"middle": [

"R"

],

"last": "Radev",

"suffix": ""

}
|
], |
|
"year": 2010, |
|
"venue": "American Journal of Political Science", |
|
"volume": "54", |
|
"issue": "1", |
|
"pages": "209--228", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kevin M. Quinn, Burt L. Monroe, Michael Colaresi, Michael H. Crespin, and Dragomir R. Radev. 2010. How to analyze political attention with minimal as- sumptions and costs. American Journal of Political Science, 54(1):209-228.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Sentiment Analysis of Customer Reviews Based on Hidden Markov Model", |
|
"authors": [ |
|
{ |
|
"first": "Swati", |
|
"middle": [], |
|
"last": "Soni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aakanksha", |
|
"middle": [], |
|
"last": "Sharaff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 International Conference on Advanced Research in Computer Science Engineering & Technology (ICARCSET 2015), ICARCSET '15", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "1--12", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Swati Soni and Aakanksha Sharaff. 2015. Senti- ment Analysis of Customer Reviews Based on Hid- den Markov Model. In Proceedings of the 2015 Inter- national Conference on Advanced Research in Com- puter Science Engineering & Technology (ICARCSET 2015), ICARCSET '15, pages 12:1-12:5, New York, NY, USA. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Lexicon-based methods for sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Maite", |
|
"middle": [], |
|
"last": "Taboada", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julian", |
|
"middle": [], |
|
"last": "Brooke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Milan", |
|
"middle": [], |
|
"last": "Tofiloski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kimberly", |
|
"middle": [], |
|
"last": "Voll", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manfred", |
|
"middle": [], |
|
"last": "Stede", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Comput. Linguist", |
|
"volume": "37", |
|
"issue": "2", |
|
"pages": "267--307", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maite Taboada, Julian Brooke, Milan Tofiloski, Kim- berly Voll, and Manfred Stede. 2011. Lexicon-based methods for sentiment analysis. Comput. Linguist., 37(2):267-307, June.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Adapting Naive Bayes to Domain Adaptation for Sentiment Analysis", |
|
"authors": [ |
|
{ |
|
"first": "Songbo", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xueqi", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuefen", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hongbo", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 31th European Conference on IR Research on Advances in Information Retrieval, ECIR '09", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "337--349", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Songbo Tan, Xueqi Cheng, Yuefen Wang, and Hongbo Xu. 2009. Adapting Naive Bayes to Domain Adap- tation for Sentiment Analysis. In Proceedings of the 31th European Conference on IR Research on Ad- vances in Information Retrieval, ECIR '09, pages 337- 349, Berlin, Heidelberg. Springer-Verlag.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"html": null, |
|
"text": "", |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"text": "Results of the approach for subtasks A, B and D. Tw refers to Twitter, TwS to Twitter Sarcasm and LJ to LiveJournal.", |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"text": "Positions of the approach for subtasks A, B and D.", |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |