{
"paper_id": "S16-1019",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:25:53.286301Z"
},
"title": "MIB at SemEval-2016 Task 4a: Exploiting lexicon-based features for sentiment analysis in Twitter",
"authors": [
{
"first": "Vittoria",
"middle": [],
"last": "Cozza",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Marinella",
"middle": [],
"last": "Petrocchi",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This work presents our team solution for task 4a (Message Polarity Classification) at the Se-mEval 2016 challenge. Our experiments have been carried out over the Twitter dataset provided by the challenge. We follow a supervised approach, exploiting a SVM polynomial kernel classifier trained with the challenge data. The classifier takes as input advanced NLP features. This paper details the features and discusses the achieved results.",
"pdf_parse": {
"paper_id": "S16-1019",
"_pdf_hash": "",
"abstract": [
{
"text": "This work presents our team solution for task 4a (Message Polarity Classification) at the Se-mEval 2016 challenge. Our experiments have been carried out over the Twitter dataset provided by the challenge. We follow a supervised approach, exploiting a SVM polynomial kernel classifier trained with the challenge data. The classifier takes as input advanced NLP features. This paper details the features and discusses the achieved results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Revealing the sentiment behind a text is motivated by several reasons, e.g., to figure out how many opinions on a certain topic are positive or negative. Also, it could be interesting to span positivity and negativity across a n-point scale. As an example, a five-point scale is now widespread in digital scenarios where human ratings are involved: Amazon, TripAdvisor, Yelp, and many others, adopt the scale for letting their users rating products and services.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Under the big hat of sentiment analysis (Liu, 2012) , polarity recognition attempts to classify texts into positive or negative, while the rating inference task tries to identify different shades of positivity and negativity, e.g., from strongly-negative, to strongly-positive. There currently exists a number of popular challenges on the matter, as those included in the SemEval series on evaluations of computational semantic analysis systems 1 . Both polarity recognition and rating inference have been applied 1 https://en.wikipedia.org/wiki/SemEval to recommendation systems. Recently, Academia has been focusing on the feasibility to apply sentiment analysis tasks to very short and informal texts, such as tweets (see, e.g. (Rosenthal et al., 2015) ).",
"cite_spans": [
{
"start": 40,
"end": 51,
"text": "(Liu, 2012)",
"ref_id": "BIBREF9"
},
{
"start": 731,
"end": 755,
"text": "(Rosenthal et al., 2015)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper shows the description of the system that we have set up for participating into the Semeval 2016 challenge in (Nakov et al., 2016b) , task 4a (Message Polarity Classification). We have adopted a supervised approach, a SVM polynomial kernel classifier trained with the data provided by the challenge, after extracting lexical and lexicon features from such data.",
"cite_spans": [
{
"start": 120,
"end": 141,
"text": "(Nakov et al., 2016b)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is organised as follows. Next section briefly addresses related work in the area. Section 3 describes the features extracted from the training data. In Section 4, we present the results of our attempt to answer to the challenge. Finally, we give concluding remarks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the last recent years, the Semeval tasks series challenges the polarity evaluation of tweets. This represents a detachment from the traditional polarity detection task. Tweets usually features the use of an informal language, with mispellings, new words, urls, abbreviations and specific symbols (like RT for \"re-tweet\" and # for hashtags, which are a type of tagging for Twitter messages). Existing approaches and open issues on how to handle such new challenges are in related work like (Kouloumpis et al., 2011; Barbosa and Feng, 2010) .",
"cite_spans": [
{
"start": 492,
"end": 517,
"text": "(Kouloumpis et al., 2011;",
"ref_id": "BIBREF8"
},
{
"start": 518,
"end": 541,
"text": "Barbosa and Feng, 2010)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "At the 2015 challenge (Rosenthal et al., 2015) , the top scored systems were those using deep learning, i.e., semantic vector spaces for single words, used as features in (Turney and Pantel, 2010) . Other approaches, as (Basile and Novielli, 2015) , exploited lexical and sentiment lexicon features to classify the sentiment of the tweet through machine learning.",
"cite_spans": [
{
"start": 22,
"end": 46,
"text": "(Rosenthal et al., 2015)",
"ref_id": "BIBREF14"
},
{
"start": 171,
"end": 196,
"text": "(Turney and Pantel, 2010)",
"ref_id": "BIBREF15"
},
{
"start": 220,
"end": 247,
"text": "(Basile and Novielli, 2015)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "In (Priyanka and Gupta, 2013) , the authors also exploited different lexical and lexicon features for evaluating the sentiment of a review corpus. The current work inherits most of such features. While they used the lexicon SentiWordNet (Esuli and Sebastiani, 2006) , we rely instead on two different ones, LabMT in (Dodds et al., 2011) and Sentic-Net3.0 presented in (Cambria et al., 2014) .",
"cite_spans": [
{
"start": 3,
"end": 29,
"text": "(Priyanka and Gupta, 2013)",
"ref_id": "BIBREF13"
},
{
"start": 237,
"end": 265,
"text": "(Esuli and Sebastiani, 2006)",
"ref_id": "BIBREF7"
},
{
"start": 316,
"end": 336,
"text": "(Dodds et al., 2011)",
"ref_id": "BIBREF6"
},
{
"start": 368,
"end": 390,
"text": "(Cambria et al., 2014)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "All the above cited lexicons (Cambria et al., 2014; Esuli and Sebastiani, 2006; Dodds et al., 2011) are popular and extensively adopted lexicons for sentiment analysis tasks.",
"cite_spans": [
{
"start": 29,
"end": 51,
"text": "(Cambria et al., 2014;",
"ref_id": "BIBREF3"
},
{
"start": 52,
"end": 79,
"text": "Esuli and Sebastiani, 2006;",
"ref_id": "BIBREF7"
},
{
"start": 80,
"end": 99,
"text": "Dodds et al., 2011)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "The SemEval 2016 Sentiment Analysis challenge (Nakov et al., 2016b) requires the labelling of a test set of 28,481 tweets. In order to facilitate the application of supervised machine learning approaches, the challenge organisers provide the access to a gold dataset: a set of labeled tweets, where the labels -positive, negative or neutral -were manually assigned. In detail, the labeled dataset is divided in a training set of 4,000 tweets and a development set of 2,000 tweets. 340 tweets in the training data and 169 ones in the development data could have not be accessed, since such tweets were\"Not Available\" at crawling time. We rely on the provided labeled dataset (train + devel) in order to respectively train and evaluate a Support Vector Machine (SVM) classifier (Chang and Lin, 2011) to learn a model for automatic Sentiment Analysis on Twitter. We investigate four groups of features based on: keyword and micro-blogging characteristics, n-grams, negation, and sentiment lexicon.",
"cite_spans": [
{
"start": 46,
"end": 67,
"text": "(Nakov et al., 2016b)",
"ref_id": "BIBREF12"
},
{
"start": 776,
"end": 797,
"text": "(Chang and Lin, 2011)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment analysis",
"sec_num": "3"
},
{
"text": "After evaluating the feature set, we built a new classifier model for annotating the unlabeled test set provided by the challenge (prediction phase, see Figure 1 ). In this phase, we used as features the best combination of the features previously extracted (actually, all of them) and as the training corpus the overall labeled tweet data (devel+test). The results were not satisfactory, being our team ranked 30 (over 34 teams). The challenge results are reported and discussed in (Nakov et al., 2016b) .",
"cite_spans": [
{
"start": 483,
"end": 504,
"text": "(Nakov et al., 2016b)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 153,
"end": 161,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Sentiment analysis",
"sec_num": "3"
},
{
"text": "In the following, we will detail the process of test cleaning and feature extractions. Then, we present our evaluation, which has been designed to test the efficacy of our feature set for sentiment analysis. Finally, we provide the results obtained at the challenge. In the follwoing, we will use the following tweet as a running example.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment analysis",
"sec_num": "3"
},
{
"text": "Happy hour at @Microsoft #msapc2015 with @sarahvaughan and friends. Good luck for tomorrow's keynote http://t.co/emvqoeRS6j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment analysis",
"sec_num": "3"
},
{
"text": "The microblogging features have been extracted from the tweet original text, without pre-processing it. We have defined such features with the aim of capturing some typical aspects of micro-blogging. These have been extracted by simply matching regular expressions. First of all, we have cleaned the text from the symbols of mentions \"@\" and hashtags \"#\", from urls, from emoticons. Indeed, their presence makes challenging to analyze the text with a traditional linguistic pipeline. Before deleting symbols, emoticons and urls, we have counted them, having as features:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Micro-blogging features",
"sec_num": "3.1"
},
{
"text": "\u2022 the number of hashtags;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Micro-blogging features",
"sec_num": "3.1"
},
{
"text": "\u2022 the number of mentions;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Micro-blogging features",
"sec_num": "3.1"
},
{
"text": "\u2022 the number of urls;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Micro-blogging features",
"sec_num": "3.1"
},
{
"text": "\u2022 EmoPos, i.e., the number of positive emoticons;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Micro-blogging features",
"sec_num": "3.1"
},
{
"text": "\u2022 EmoNeg, i.e., the number of negative emoticons;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Micro-blogging features",
"sec_num": "3.1"
},
{
"text": "Also, we have also focused on vowels repetitions, exclamations and question marks, introducing the following features:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Micro-blogging features",
"sec_num": "3.1"
},
{
"text": "\u2022 the number of vowels repetitions;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Micro-blogging features",
"sec_num": "3.1"
},
{
"text": "\u2022 the number of question marks and exclamation marks repetitions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Micro-blogging features",
"sec_num": "3.1"
},
{
"text": "Concerning the marks, we consider a repetition when they are repeated more than once, as in \"!!\". Instead, we have considered a vowel as repeated when it occurs more than twice, as in \"baaad\". The positive and negative emoticons we considered are those on the Wikipedia's page 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Micro-blogging features",
"sec_num": "3.1"
},
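{
"text": "As an illustration only, the following is a minimal Python sketch of how such counts can be computed; the regular expressions and the emoticon lists are simplified assumptions, not the exact patterns we used.\n\nimport re\n\n# simplified subsets; we actually used the emoticons listed on the Wikipedia page\nPOS_EMOTICONS = {':)', ':-)', ':D', ';)'}\nNEG_EMOTICONS = {':(', ':-(', \":'(\"}\n\ndef microblog_features(text):\n    return {\n        'n_hashtags': len(re.findall(r'#\\w+', text)),\n        'n_mentions': len(re.findall(r'@\\w+', text)),\n        'n_urls': len(re.findall(r'https?://\\S+', text)),\n        'EmoPos': sum(text.count(e) for e in POS_EMOTICONS),\n        'EmoNeg': sum(text.count(e) for e in NEG_EMOTICONS),\n        # a vowel repeated more than twice in a row, as in 'baaad'\n        'n_vowel_reps': len(re.findall(r'([aeiou])\\1\\1+', text, re.IGNORECASE)),\n        # '!' or '?' repeated more than once, as in '!!'\n        'n_mark_reps': len(re.findall(r'[!?][!?]+', text)),\n    }\n\nprint(microblog_features(\"Happy hour at @Microsoft #msapc2015 with @sarahvaughan and friends. Good luck for tomorrow's keynote http://t.co/emvqoeRS6j\"))\n# {'n_hashtags': 1, 'n_mentions': 2, 'n_urls': 1, 'EmoPos': 0, 'EmoNeg': 0, 'n_vowel_reps': 0, 'n_mark_reps': 0}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Micro-blogging features",
"sec_num": "3.1"
},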
{
"text": "In order to extract syntactic and semantic features from the text, we pre-processed it with the Tanl pipeline (Attardi et al., 2010) , a suite of modules for text analytics and natural language processing, based on machine learning. Pre-processing has consisted in first dividing the text in sentences and then into the single word forms composing the sentence. Then, for each form, we have identified the lemma (when available) and the part of speech (POS). As an example, starting from the sentence Happy hour at Microsoft msapc2015 with sarahvaughan and friends in the above sample tweet, we obtain the annotation shown in Figure 2 . The last column gives the part of speech that a word form yields in a sentence, according to the Penn Treebank Project 3 . The last phase of pre-processing is data cleaning. For each sentence, we removed conjunctions, number, determiners, pronouns, and punctuation (still relying on the Penn Treebank POS tags). For the remaining terms, we keep the lemma. Thus, the example sentence results in the following list of lemmas: (happy, hour, microsoft, msapc2015, sarahvaughan, friend).",
"cite_spans": [
{
"start": 110,
"end": 132,
"text": "(Attardi et al., 2010)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 626,
"end": 634,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Text pre-processing",
"sec_num": "3.2"
},
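{
"text": "Since the Tanl pipeline is an external suite, we illustrate only the cleaning step. The sketch below is a hedged approximation: it assumes the pipeline has already produced (lemma, POS) pairs, and the set of Penn Treebank tags to drop is our assumption (prepositions are included so that the output matches the running example).\n\n# Tags dropped during cleaning: conjunctions (CC), numbers (CD), determiners (DT),\n# pronouns (PRP, PRP$, WP, WP$) and, per the running example, prepositions (IN);\n# punctuation tags start with a non-alphabetic character.\nDROP_TAGS = {'CC', 'CD', 'DT', 'PRP', 'PRP$', 'WP', 'WP$', 'IN'}\n\ndef clean(tagged):\n    return [lemma for lemma, pos in tagged\n            if pos not in DROP_TAGS and pos[0].isalpha()]\n\nsentence = [('happy', 'JJ'), ('hour', 'NN'), ('at', 'IN'), ('microsoft', 'NNP'),\n            ('msapc2015', 'NN'), ('with', 'IN'), ('sarahvaughan', 'NNP'),\n            ('and', 'CC'), ('friend', 'NNS'), ('.', '.')]\nprint(clean(sentence))  # ['happy', 'hour', 'microsoft', 'msapc2015', 'sarahvaughan', 'friend']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text pre-processing",
"sec_num": "3.2"
},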
{
"text": "In the following, we describe the features we have extracted from the pre-processed text. Among others, we inherit some of the features in (Priyanka and Gupta, 2013) and (Basile and Novielli, 2015) , which face with sentiment analysis on Twitter.",
"cite_spans": [
{
"start": 139,
"end": 165,
"text": "(Priyanka and Gupta, 2013)",
"ref_id": "BIBREF13"
},
{
"start": 170,
"end": 197,
"text": "(Basile and Novielli, 2015)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text pre-processing",
"sec_num": "3.2"
},
{
"text": "Upon pre-processing, we have obtained a words vector representation of each tweet. Then, we have extracted n-grams, i.e., all the pairs of sequencing lemmas in the vector. As an over simplification, we have considered only the case of n=2. We thought this was reasonable, since tweets are short portions of text bounded to 140 characters. In the example sentence, some are (happy-hour, hour-microsoft).The 2grams have been discretised into binary attributes representing their presence or not in the text. There are 1,237 unique 2-grams.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "n-grams features",
"sec_num": "3.3"
},
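{
"text": "A minimal sketch of the 2-gram extraction and binarisation; the lemma list is the running example's output, and the feature names are illustrative.\n\ndef bigrams(lemmas):\n    # all pairs of consecutive lemmas, joined for use as feature names\n    return ['-'.join(p) for p in zip(lemmas, lemmas[1:])]\n\nlemmas = ['happy', 'hour', 'microsoft', 'msapc2015', 'sarahvaughan', 'friend']\nprint(bigrams(lemmas)[:2])  # ['happy-hour', 'hour-microsoft']\n\n# binary presence features: a 2-gram maps to 1 when it occurs in the tweet,\n# and is implicitly 0 otherwise (1,237 unique 2-grams over the training corpus)\nfeatures = {bg: 1 for bg in bigrams(lemmas)}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "n-grams features",
"sec_num": "3.3"
},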
{
"text": "Handling negations is an important step in sentiment analysis, as they can reverse the meaning of a sentence. Also, negations can often occur with sarcastic and ironic goals, which are quite difficult to detect. We consider 1-grams and we prefix them with P (N) when they are asserted (negated). To identify if the unigram appears in a negated scope, we have applied a rule-based approach 4 . The approach works as follows. Considering a negative sentiment tweet, like, e.g., \"It might be not nice but it's the reality., the \"nice\" unigram is in the scope of negation, and, thus, it will be labeled as N_nice. The \"but\" unigram changes again the scope, thus \"reality\" will be labeled as P_reality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Negation-based features",
"sec_num": "3.4"
},
{
"text": "We have identified the following features as suitable for handling negations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Negation-based features",
"sec_num": "3.4"
},
{
"text": "\u2022 Unigrams with scope;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Negation-based features",
"sec_num": "3.4"
},
{
"text": "\u2022 Positiveterms: Number of lemmas with positive scope;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Negation-based features",
"sec_num": "3.4"
},
{
"text": "\u2022 Negativeterms: Number of lemmas with negative scope;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Negation-based features",
"sec_num": "3.4"
},
{
"text": "The first feature has been discretised into binary attributes representing the presence (or not) of the 1gram. The number of unique unigrams (with scope) are 5,110.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Negation-based features",
"sec_num": "3.4"
},
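{
"text": "A hedged sketch of the scope-tagging rule, following our reading of the approach referenced in footnote 4; the negation and scope-breaking word lists are illustrative assumptions.\n\nNEGATIONS = {'not', 'no', 'never', \"n't\"}          # a negation word opens a negated scope\nSCOPE_BREAKERS = {'but', '.', ',', ';', '!', '?'}  # a contrast word or punctuation closes it\n\ndef tag_scope(tokens):\n    tagged, negated = [], False\n    for tok in tokens:\n        if tok.lower() in SCOPE_BREAKERS:\n            negated = False\n        elif tok.lower() in NEGATIONS:\n            negated = True\n        else:\n            tagged.append(('N_' if negated else 'P_') + tok)\n    return tagged\n\ntags = tag_scope(\"It might be not nice but it 's the reality .\".split())\nprint(tags)  # ['P_It', 'P_might', 'P_be', 'N_nice', 'P_it', \"P_'s\", 'P_the', 'P_reality']\npos_terms = sum(t.startswith('P_') for t in tags)  # Positiveterms\nneg_terms = sum(t.startswith('N_') for t in tags)  # Negativeterms",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Negation-based features",
"sec_num": "3.4"
},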
{
"text": "Several lexicons are available for sentiment analysis. In this work, we consider SenticNet 3.0 (Cambria et al., 2014) and the LabMT (Dodds et al., 2011) .",
"cite_spans": [
{
"start": 95,
"end": 117,
"text": "(Cambria et al., 2014)",
"ref_id": "BIBREF3"
},
{
"start": 126,
"end": 152,
"text": "LabMT (Dodds et al., 2011)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment lexicon-based features",
"sec_num": "3.5"
},
{
"text": "SenticNet 3.0 is a large concept-level base of knowledge, assigning semantics, sentics, and polarity to 30,000 natural language concepts. In particular, polarity is a floating number between -1 (extreme negativity) and +1 (extreme positivity) 5 . We rely on SenticNet 3.0 to compute features based on polarity, according to the SenticNet lexicon:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment lexicon-based features",
"sec_num": "3.5"
},
{
"text": "\u2022 Min, max, average and standard deviation polarity of lemmas;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment lexicon-based features",
"sec_num": "3.5"
},
{
"text": "\u2022 PA Positive Asserted: number of lemmas with a positive polarity, e.g., \"good\";",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment lexicon-based features",
"sec_num": "3.5"
},
{
"text": "\u2022 PN Positive Negated: number of lemmas with a positive polarity, but negated, e.g., \"not good\";",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment lexicon-based features",
"sec_num": "3.5"
},
{
"text": "\u2022 NA Negative Asserted: n. lemmas with a negative polarity (e.g., \"bad\");",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment lexicon-based features",
"sec_num": "3.5"
},
{
"text": "\u2022 NN Negative Negated: n. lemmas and with a negative polarity, and negated (e.g., \"not bad\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment lexicon-based features",
"sec_num": "3.5"
},
{
"text": "To assign the polarity to \"not good\", we consider the polarity of \"good\" in the SenticNet lexicon (0.667) and we revert it, assigning -0.667. Also, SenticNet provides the polarity score to complex expressions. As an example, the popular idiomatic expression \"32 teeth\" obtains a polarity score of 0.903. Thus, beside unigrams polarity scores, we have also considered 2-grams polarity scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment lexicon-based features",
"sec_num": "3.5"
},
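{
"text": "A minimal sketch of the lexicon-based aggregates, assuming a lookup dictionary from lemmas to SenticNet polarities (the entry for 'good' uses the 0.667 score quoted above; 'bad' is an illustrative value) and the scope tags of Section 3.4.\n\nfrom statistics import mean, pstdev\n\nPOLARITY = {'good': 0.667, 'bad': -0.7}  # illustrative subset of the lexicon\n\ndef lexicon_features(scoped):\n    scores, counts = [], {'PA': 0, 'PN': 0, 'NA': 0, 'NN': 0}\n    for tok in scoped:                  # e.g. 'P_good', 'N_good'\n        scope, lemma = tok[0], tok[2:]\n        if lemma not in POLARITY:\n            continue\n        s = POLARITY[lemma]\n        key = ('P' if s > 0 else 'N') + ('N' if scope == 'N' else 'A')\n        counts[key] += 1\n        scores.append(-s if scope == 'N' else s)  # 'not good' -> -0.667\n    if not scores:\n        scores = [0.0]\n    return dict(counts, min=min(scores), max=max(scores),\n                avg=mean(scores), std=pstdev(scores))\n\nprint(lexicon_features(['P_good', 'N_good', 'P_bad']))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment lexicon-based features",
"sec_num": "3.5"
},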
{
"text": "Since not all the lemmas in the dataset were covered by the SenticNet lexicon, we have enlarged the covering by relying on LabMT (Dodds et al., 2011) .",
"cite_spans": [
{
"start": 123,
"end": 149,
"text": "LabMT (Dodds et al., 2011)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment lexicon-based features",
"sec_num": "3.5"
},
{
"text": "LabMT is a list of words, manually labeled with a sentiment score through crowdsourcing. In particular, we considered the happiness score. This value ranges over 1 and 9 (1 is very unhappy, while 9 absolutely happy). We have normalised such values to range over -1 and 1, using the linear function y=(x-5)/4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment lexicon-based features",
"sec_num": "3.5"
},
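{
"text": "As a worked instance of the normalisation (the sample score is illustrative):\n\ndef normalise(h):\n    # linear map from the LabMT happiness range [1, 9] onto [-1, 1]\n    return (h - 5.0) / 4.0\n\nprint(normalise(1.0), normalise(9.0))  # -1.0 1.0\nprint(normalise(8.22))                 # approx. 0.805",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment lexicon-based features",
"sec_num": "3.5"
},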
{
"text": "We have preliminarily built a prediction model trained with the 2016 challenge data, in details we have used the train and the devel data, respectively for training and evaluation. The prediction model is based on an SVM linear kernel classifier. For the experiments, the classifier has been implemented through sklearn 6 in Python. We have used a linear classifier suitable for handling unbalanced data: SGDClassifier with default parameters 7 . The model exploits the four groups of features presented in Section 3. Upon extracting the features from the training dataset, we obtained 6,547 features. In the following, we will show some feature ablation experiments, each of them corresponds to remove one category of features from the full set. Results are in terms of Precision and Recall, see Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 797,
"end": 804,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Evaluation and results",
"sec_num": "4"
},
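{
"text": "A hedged sketch of the training and evaluation loop. The feature dictionaries below are toy stand-ins for the real 6,547-dimensional feature set, and the vectorisation via DictVectorizer is our assumption about one reasonable way to feed such dicts to sklearn.\n\nfrom sklearn.feature_extraction import DictVectorizer\nfrom sklearn.linear_model import SGDClassifier\nfrom sklearn.metrics import classification_report\n\n# toy stand-ins for the feature dicts of Section 3 and the gold labels\ntrain_feats = [{'EmoPos': 1, 'PA': 2}, {'EmoNeg': 1, 'NA': 2}, {'n_urls': 1}]\ntrain_labels = ['positive', 'negative', 'neutral']\ndev_feats, dev_labels = [{'EmoPos': 2, 'PA': 1}], ['positive']\n\nvec = DictVectorizer()\nX_train = vec.fit_transform(train_feats)\nX_dev = vec.transform(dev_feats)\n\nclf = SGDClassifier()  # linear classifier with default parameters, as in our runs\nclf.fit(X_train, train_labels)\nprint(classification_report(dev_labels, clf.predict(X_dev), zero_division=0))  # per-class Precision/Recall",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and results",
"sec_num": "4"
},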
{
"text": "The features evaluation shows that we do not have a set of dominant features group, leading to a not satisfying discrimination among positive, negative, and neutral tweets. In (Cozza et al., 2016) , the authors have proposed a similar approach to the one here presented. The aim was to evaluate the sentiment of a large set of online reviews. In online reviews, the textual opinion is usually accompanied by a numerical score, and sentiment analysis could be a valid alley for identifying misalignment between the score and the satisfaction expressed in the text. Work in (Cozza et al., 2016) shows that the features' set was discriminant for evaluating the sentiment of the reviews. In part, this would support the thesis that standard sentiment analysis approaches are more suitable for \"literary\" texts than for short, informal texts featured by tweets.",
"cite_spans": [
{
"start": 176,
"end": 196,
"text": "(Cozza et al., 2016)",
"ref_id": "BIBREF5"
},
{
"start": 572,
"end": 592,
"text": "(Cozza et al., 2016)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and results",
"sec_num": "4"
},
{
"text": "It is worth noting that the lexicons we rely on are based on lemmas, while there exist other lexicons that consider also the part of speech, see, e.g., SentiWordNet (Esuli and Sebastiani, 2006) . Let the reader consider the following tweet, from the SemEval 2016 training set: #OnThisDay1987 CBS records shipped out the largest preorder in the company's history for Michael Jackson's album Bad http://t.co/v4fkyOx2eW",
"cite_spans": [
{
"start": 165,
"end": 193,
"text": "(Esuli and Sebastiani, 2006)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and results",
"sec_num": "4"
},
{
"text": "In this example, the word \"Bad\" should not be considered as a negative adjective, since it is an album name. However, in the current work, we have not discriminated between nouns and adjectives with same spelling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and results",
"sec_num": "4"
},
{
"text": "The results over the challenge test set are available on the SemEval website 8 , according to the challenge score system described in (Nakov et al., 2016a) . Table 3 shows the comparison of our results with the ones of the winning team. In the submitted result, the classifier has been trained over the training + development dataset, annotated with the best combination of features, as analyzed before.",
"cite_spans": [
{
"start": 134,
"end": 155,
"text": "(Nakov et al., 2016a)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Challenge results",
"sec_num": "4.1"
},
{
"text": "8 http://alt.qcri.org/semeval2016/task4/ data/uploads/semeval2016_task4_results.pdf team score SwissCheese 63.301 MIB 40.10 Table 3 : SemEval 2016 task 4a results (Tweet 2016)",
"cite_spans": [],
"ref_spans": [
{
"start": 124,
"end": 131,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Challenge results",
"sec_num": "4.1"
},
{
"text": "The approach proposed in this work achieved unsatisfactory results. This was in part due to a data preprocessing phase and a feature extraction phase that do not consider characteristics intrinsic to microblogging. Indeed, we mostly dealt with tweets handling them as regular text. The challenge data have been preprocessed by supervised approached, where features have been extracted through a NLP pipeline, trained on newswire domain. Within our proposed features, the sentiment lexicon-based features has proved to work well. However, we believe their extraction could take advantage of the adoption of other lexicons, different to those we have relied on. Specifically, there exist lexicons trained over tweets, such as the NCR emotion lexicon (Mohammad and Turney, 2013). Finally, we expect that a better solution could be achieved by extending the approach to include features extracted by unsupervised approaches (word embeddings), or by adopting a deep learning classifier, instead of a linear one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "https://it.wikipedia.org/wiki/Emoticon 3 https://www.ling.upenn.edu/courses/ Fall_2003/ling001/penn_treebank_pos.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://sentiment.christopherpotts.net/ lingstruc.html#negation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://scikit-learn.org/stable/ 7 http://scikit-learn.org/stable/ modules/generated/sklearn.linear_model. SGDClassifier.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The TANL pipeline. Web Services and Processing Pipelines in HLT: Tool Evaluation, LR Production and Validation (LREC:WSSP)",
"authors": [
{
"first": "Giuseppe",
"middle": [],
"last": "Attardi",
"suffix": ""
},
{
"first": "Stefano",
"middle": [
"Dei"
],
"last": "Rossi",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Simi",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Giuseppe Attardi, Stefano Dei Rossi, and Maria Simi. 2010. The TANL pipeline. Web Services and Process- ing Pipelines in HLT: Tool Evaluation, LR Production and Validation (LREC:WSSP).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Robust sentiment detection on Twitter from biased and noisy data",
"authors": [
{
"first": "Luciano",
"middle": [],
"last": "Barbosa",
"suffix": ""
},
{
"first": "Junlan",
"middle": [],
"last": "Feng",
"suffix": ""
}
],
"year": 2010,
"venue": "23rd International Conference on Computational Linguistics: Posters",
"volume": "",
"issue": "",
"pages": "36--44",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luciano Barbosa and Junlan Feng. 2010. Robust sen- timent detection on Twitter from biased and noisy data. In 23rd International Conference on Computa- tional Linguistics: Posters, pages 36-44. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "UNIBA: Sentiment analysis of English tweets combining micro-blogging, lexicon and semantic features",
"authors": [
{
"first": "P",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Novielli",
"suffix": ""
}
],
"year": 2015,
"venue": "9th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "595--600",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Basile and N. Novielli. 2015. UNIBA: Sentiment anal- ysis of English tweets combining micro-blogging, lex- icon and semantic features. In 9th International Work- shop on Semantic Evaluation (SemEval 2015), pages 595-600. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "SenticNet 3: a common and common-sense knowledge base for cognition-driven sentiment analysis",
"authors": [
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Olsher",
"suffix": ""
},
{
"first": "Dheeraj",
"middle": [],
"last": "Rajagopal",
"suffix": ""
}
],
"year": 2014,
"venue": "28th AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1515--1521",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik Cambria, Daniel Olsher, and Dheeraj Rajagopal. 2014. SenticNet 3: a common and common-sense knowledge base for cognition-driven sentiment anal- ysis. In 28th AAAI Conference on Artificial Intelli- gence, pages 1515-1521.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "LIBSVM: a library for support vector machines",
"authors": [
{
"first": "Chih-Chung",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Chih-Jen",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2011,
"venue": "Transactions on Intelligent Systems and Technology (TIST)",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chih-Chung Chang and Chih-Jen Lin. 2011. LIBSVM: a library for support vector machines. Transactions on Intelligent Systems and Technology (TIST), 2(3):27.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Write a number in letters: A study on text-score disagreement in online reviews",
"authors": [
{
"first": "Vittoria",
"middle": [],
"last": "Cozza",
"suffix": ""
},
{
"first": "Marinella",
"middle": [],
"last": "Petrocchi",
"suffix": ""
},
{
"first": "Angelo",
"middle": [],
"last": "Spognardi",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vittoria Cozza, Marinella Petrocchi, and Angelo Spog- nardi. 2016. Write a number in letters: A study on text-score disagreement in online reviews.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Temporal patterns of happiness and information in a global social network: Hedonometrics and twitter",
"authors": [
{
"first": "Peter",
"middle": [
"Sheridan"
],
"last": "Dodds",
"suffix": ""
},
{
"first": "Kameron",
"middle": [
"Decker"
],
"last": "Harris",
"suffix": ""
},
{
"first": "Isabel",
"middle": [
"M"
],
"last": "Kloumann",
"suffix": ""
},
{
"first": "Catherine",
"middle": [
"A"
],
"last": "Bliss",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"M"
],
"last": "Danforth",
"suffix": ""
}
],
"year": 2011,
"venue": "PLoS ONE",
"volume": "6",
"issue": "12",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Sheridan Dodds, Kameron Decker Harris, Isabel M. Kloumann, Catherine A. Bliss, and Christopher M. Danforth. 2011. Temporal patterns of happiness and information in a global social network: Hedonomet- rics and twitter. PLoS ONE, 6(12):e26752, 12.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "SENTI-WORDNET: A publicly available lexical resource for opinion mining",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Esuli",
"suffix": ""
},
{
"first": "Fabrizio",
"middle": [],
"last": "Sebastiani",
"suffix": ""
}
],
"year": 2006,
"venue": "5th Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "417--422",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrea Esuli and Fabrizio Sebastiani. 2006. SENTI- WORDNET: A publicly available lexical resource for opinion mining. In 5th Conference on Language Re- sources and Evaluation, pages 417-422.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Twitter sentiment analysis: The good the bad and the omg",
"authors": [
{
"first": "Efthymios",
"middle": [],
"last": "Kouloumpis",
"suffix": ""
},
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Johanna",
"middle": [],
"last": "Moore",
"suffix": ""
}
],
"year": 2011,
"venue": "AAAI Conference on Weblogs and Social",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Efthymios Kouloumpis, Theresa Wilson, and Johanna Moore. 2011. Twitter sentiment analysis: The good the bad and the omg. In AAAI Conference on Weblogs and Social.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Sentiment Analysis and Opinion Mining",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bing Liu. 2012. Sentiment Analysis and Opinion Min- ing. Morgan & Claypool Publishers, May.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Crowdsourcing a word-emotion association lexicon",
"authors": [
{
"first": "Saif",
"middle": [
"M"
],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
}
],
"year": 2013,
"venue": "Computational Intelligence",
"volume": "29",
"issue": "3",
"pages": "436--465",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif M. Mohammad and Peter D. Turney. 2013. Crowd- sourcing a word-emotion association lexicon. Compu- tational Intelligence, 29(3):436-465.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Evaluation measures for the semeval-2016 task 4 sentiment analysis in Twitter",
"authors": [
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Fabrizio",
"middle": [],
"last": "Sebastiani",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Preslav Nakov, Alan Ritter, Sara Rosenthal, Fabrizio Se- bastiani, and Veselin Stoyanov. 2016a. Evaluation measures for the semeval-2016 task 4 sentiment anal- ysis in Twitter.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "SemEval-2016 task 4: Sentiment analysis in Twitter",
"authors": [
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Fabrizio",
"middle": [],
"last": "Sebastiani",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Preslav Nakov, Alan Ritter, Sara Rosenthal, Veselin Stoy- anov, and Fabrizio Sebastiani. 2016b. SemEval-2016 task 4: Sentiment analysis in Twitter. In Proceedings of the 10th International Workshop on Semantic Eval- uation (SemEval 2016), San Diego, California, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Identifying the best feature combination for sentiment analysis of customer reviews",
"authors": [
{
"first": "C",
"middle": [],
"last": "Priyanka",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Gupta",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Computing, Communications and Informatics (ICACCI)",
"volume": "",
"issue": "",
"pages": "102--108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Priyanka and D. Gupta. 2013. Identifying the best fea- ture combination for sentiment analysis of customer reviews. In Advances in Computing, Communications and Informatics (ICACCI), pages 102-108.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "SemEval-2015 task 10: Sentiment analysis in Twitter",
"authors": [
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "451--463",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sara Rosenthal, Preslav Nakov, Svetlana Kiritchenko, Saif Mohammad, Alan Ritter, and Veselin Stoyanov. 2015. SemEval-2015 task 10: Sentiment analysis in Twitter. In 9th International Workshop on Semantic Evaluation, pages 451-463. Association for Computa- tional Linguistics, June.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "From frequency to meaning: Vector space models of semantics",
"authors": [
{
"first": "Peter",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D. Turney and Patrick Pantel. 2010. From fre- quency to meaning: Vector space models of semantics. CoRR, abs/1003.1141.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Sentiment analysis: Prediction phase",
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"text": "Annotation with Tanl English Pipeline (http://tanl.di.unipi.it/en/)",
"num": null,
"type_str": "figure"
},
"TABREF1": {
"html": null,
"text": "MIB results (Tweets 2016 -dev, all feats)",
"type_str": "table",
"num": null,
"content": "<table/>"
},
"TABREF2": {
"html": null,
"text": ".24 0.28 0.45 0.22 0.30 0.41 0.72 0.53 0.38 All but negation-based feats 0.42 0.08 0.13 0.52 0.06 0.10 0.39 0.97 0.56 0.28 All but polarity lexicon feats 0.34 0.10 0.15 0.44 0.42 0.43 0.42 0.58 0.48 0.40 Ablation tests tive class, but, overall, less than we have expected.",
"type_str": "table",
"num": null,
"content": "<table><tr><td>Ablation tests show that negation-</td></tr><tr><td>based features are the most relevant ones. Polarity</td></tr><tr><td>lexicon features are influential to identify the nega-</td></tr></table>"
}
}
}
}