{
"paper_id": "L16-1009",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:08:08.121684Z"
},
"title": "A Comparison of Domain-based Word Polarity Estimation using different Word Embeddings",
"authors": [
{
"first": "Aitor",
"middle": [],
"last": "Garc\u00eda-Pablos",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IXA research group",
"location": {
"addrLine": "Vicomtech-IK4, Vicomtech-IK4"
}
},
"email": ""
},
{
"first": "Montse",
"middle": [],
"last": "Cuadros",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IXA research group",
"location": {
"addrLine": "Vicomtech-IK4, Vicomtech-IK4"
}
},
"email": "[email protected]"
},
{
"first": "German",
"middle": [],
"last": "Rigau",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IXA research group",
"location": {
"addrLine": "Vicomtech-IK4, Vicomtech-IK4"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "A key point in Sentiment Analysis is to determine the polarity of the sentiment implied by a certain word or expression. In basic Sentiment Analysis systems this sentiment polarity of the words is counted and weighted in different ways to provide a degree of positivity/negativity. Currently words are also modelled as continuous dense vectors, known as word embeddings, which seem to encode interesting semantic knowledge. With regard to Sentiment Analysis, word embeddings are used as features in more complex supervised classification systems to obtain sentiment classifiers. In this paper we compare a set of existing sentiment lexicons and sentiment lexicon generation techniques. We also show a simple but effective technique to calculate a word polarity value for each word in a domain using existing continuous word embedding generation methods. Further, we show that word embeddings calculated on an in-domain corpus capture polarity better than those calculated on a general-domain corpus.",
"pdf_parse": {
"paper_id": "L16-1009",
"_pdf_hash": "",
"abstract": [
{
"text": "A key point in Sentiment Analysis is to determine the polarity of the sentiment implied by a certain word or expression. In basic Sentiment Analysis systems this sentiment polarity of the words is counted and weighted in different ways to provide a degree of positivity/negativity. Currently words are also modelled as continuous dense vectors, known as word embeddings, which seem to encode interesting semantic knowledge. With regard to Sentiment Analysis, word embeddings are used as features in more complex supervised classification systems to obtain sentiment classifiers. In this paper we compare a set of existing sentiment lexicons and sentiment lexicon generation techniques. We also show a simple but effective technique to calculate a word polarity value for each word in a domain using existing continuous word embedding generation methods. Further, we show that word embeddings calculated on an in-domain corpus capture polarity better than those calculated on a general-domain corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A key point in Sentiment Analysis is to determine the polarity of the sentiment implied by a certain word or expression (Taboada et al., 2011). In basic Sentiment Analysis systems this sentiment polarity of the words is counted and weighted in different ways to provide a degree of positivity/negativity of, for example, a customer review. In more sophisticated systems, word polarity is employed as an additional feature for machine learning algorithms. This polarity value can be a categorical value (e.g. positive/neutral/negative) or a real value within a range (e.g. from -1.0 to +1.0), and can be plugged into supervised classification algorithms together with other lexical and semantic features to help discriminate the overall polarity of an expression or a sentence. Currently words are also modelled as continuous dense vectors, known as word embeddings, which seem to encode interesting semantic knowledge. The word vectors are usually computed using very large text corpora, like the English Wikipedia. One of the best known systems to obtain a dense continuous representation of words is Word2Vec (Mikolov et al., 2013c). But Word2Vec is not the only one; in fact there are already many variants, and many researchers are working on different kinds of word embeddings (Le and Mikolov, 2014; Iacobacci et al., 2015; Ji et al., 2015; Hill et al., 2014; Schwartz et al., 2014). With regard to Sentiment Analysis, word embeddings are used as features in more complex supervised classification systems to obtain very precise sentiment classifiers (Tang et al., 2014a; Socher et al., 2013). In this paper we compare a set of existing static sentiment lexicons and dynamic sentiment lexicon generation techniques. We also show a simple but competitive technique to calculate a word polarity value for each word in a domain using continuous word embeddings. Our objective is to see whether word embeddings calculated on an in-domain corpus can be directly used to obtain a polarity measure for the domain vocabulary with no additional supervision. Further, we want to see to what extent word embeddings calculated on an in-domain corpus improve on the ones calculated on a general-domain corpus, and analyse the pros and cons of each compared method. The paper is structured as follows. Section 2. reviews several works related to the generation of sentiment lexicons, providing the context for the rest of the paper. Section 3. describes the lexicons and methods that will be used to make the comparison, focusing on the ones using continuous word representations. Section 4. presents the datasets used to generate some of the lexicons. Section 5. describes the experiments to compare the different approaches and discusses them. Finally, the last section presents the conclusions.",
"cite_spans": [
{
"start": 120,
"end": 142,
"text": "(Taboada et al., 2011)",
"ref_id": "BIBREF40"
},
{
"start": 1116,
"end": 1139,
"text": "(Mikolov et al., 2013c)",
"ref_id": "BIBREF27"
},
{
"start": 1291,
"end": 1313,
"text": "(Le and Mikolov, 2014;",
"ref_id": "BIBREF21"
},
{
"start": 1314,
"end": 1337,
"text": "Iacobacci et al., 2015;",
"ref_id": "BIBREF16"
},
{
"start": 1338,
"end": 1354,
"text": "Ji et al., 2015;",
"ref_id": "BIBREF18"
},
{
"start": 1355,
"end": 1373,
"text": "Hill et al., 2014;",
"ref_id": "BIBREF13"
},
{
"start": 1374,
"end": 1396,
"text": "Schwartz et al., 2014)",
"ref_id": "BIBREF36"
},
{
"start": 1566,
"end": 1586,
"text": "(Tang et al., 2014a;",
"ref_id": "BIBREF41"
},
{
"start": 1587,
"end": 1607,
"text": "Socher et al., 2013)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Sentiment analysis refers to the use of NLP techniques to identify and extract subjective information in digital texts like customer reviews about products or services. Due to the growth of social media and of specialized websites that allow users to post comments and opinions, Sentiment Analysis has been a very prolific research area during the last decade (Pang and Lee, 2008; Zhang and Liu, 2014). A key point in Sentiment Analysis is to determine the polarity of the sentiment implied by a certain word or expression (Taboada et al., 2011). Usually this polarity is also known as Semantic Orientation (SO). SO indicates whether a word or an expression states a positive or a negative sentiment, and can be a continuous value in a range from very positive to very negative, or a categorical value (like the common 5-star rating used to rate products). Further, the SO of a word is a useful feature within more complex Sentiment Analysis systems such as machine learning algorithms (Lin et al., 2009; Jaggi et al., 2014; Tang et al., 2014a). A collection of words and their respective SO is known as a sentiment lexicon. Sentiment lexicons can be constructed manually, by human experts who assign the corresponding SO value to each word of interest. Obviously, this approach is usually too time-consuming to obtain good coverage, and difficult to maintain when the vocabulary evolves or a new language or domain must be analyzed.",
"cite_spans": [
{
"start": 361,
"end": 381,
"text": "(Pang and Lee, 2008;",
"ref_id": "BIBREF29"
},
{
"start": 382,
"end": 402,
"text": "Zhang and Liu, 2014)",
"ref_id": "BIBREF46"
},
{
"start": 525,
"end": 547,
"text": "(Taboada et al., 2011)",
"ref_id": "BIBREF40"
},
{
"start": 997,
"end": 1015,
"text": "(Lin et al., 2009;",
"ref_id": "BIBREF23"
},
{
"start": 1016,
"end": 1035,
"text": "Jaggi et al., 2014;",
"ref_id": "BIBREF17"
},
{
"start": 1036,
"end": 1055,
"text": "Tang et al., 2014a)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "Therefore it is necessary to devise a method to automate the process as much as possible. Some systems employ existing lexical resources like WordNet (Fellbaum, 1998) to bootstrap a list of positive and negative words via different methods. In (Esuli and Sebastiani, 2006) the authors employ the glosses that accompany each WordNet synset 1 to perform a semi-supervised synset classification. The result consists of three scores per synset: positivity, negativity and objectivity. In (Baccianella et al., 2010) version 3.0 of SentiWordNet is introduced with improvements like a random walk approach in the WordNet graph to calculate the SO of the synsets. In (Agerri and Garcia, 2009) another system is introduced, Q-WordNet, which expands the polarities of the WordNet synsets using lexical relations like synonymy. In (Guerini et al., 2013) the authors propose and compare different approaches based on SentiWordNet to improve the polarity determination of the synsets. Other authors try different bootstrapping approaches and evaluate them on WordNets of different languages (Maks et al., 2014; Vicente et al., 2014). A problem with approaches based on resources like WordNet is that they rely on the availability and quality of those resources for new languages. Being a general resource, WordNet also fails to capture domain-dependent semantic orientations. Likewise, other approaches using common dictionaries do not take into account the shifts between domains (Ramos and Marques, 2005). Other methods calculate the SO of the words directly from text. In (Hatzivassiloglou and McKeown, 1997) the authors model the corpus as a graph of adjectives joined by conjunctions. Then, they generate partitions of the graph based on intuitions such as that two adjectives joined by \"and\" will tend to share the same orientation, while two adjectives joined by \"but\" will have opposite orientations. On the other hand, in (Turney, 2002) the SO is obtained by calculating the Pointwise Mutual Information (PMI) between each word and a very positive word (like \"excellent\") and a very negative word (like \"poor\") in a corpus. The result is a continuous numeric value between -1 and +1. These ideas of bootstrapping SO from a corpus have been further explored and refined in more recent works (Popescu and Etzioni, 2005; Brody and Elhadad, 2010; Qiu et al., 2011).",
"cite_spans": [
{
"start": 151,
"end": 167,
"text": "(Fellbaum, 1998)",
"ref_id": null
},
{
"start": 245,
"end": 273,
"text": "(Esuli and Sebastiani, 2006)",
"ref_id": "BIBREF7"
},
{
"start": 485,
"end": 511,
"text": "(Baccianella et al., 2010)",
"ref_id": "BIBREF1"
},
{
"start": 660,
"end": 685,
"text": "(Agerri and Garcia, 2009)",
"ref_id": "BIBREF0"
},
{
"start": 821,
"end": 843,
"text": "(Guerini et al., 2013)",
"ref_id": "BIBREF10"
},
{
"start": 1075,
"end": 1094,
"text": "(Maks et al., 2014;",
"ref_id": "BIBREF24"
},
{
"start": 1095,
"end": 1116,
"text": "Vicente et al., 2014)",
"ref_id": "BIBREF45"
},
{
"start": 1471,
"end": 1496,
"text": "(Ramos and Marques, 2005)",
"ref_id": "BIBREF34"
},
{
"start": 1566,
"end": 1602,
"text": "(Hatzivassiloglou and McKeown, 1997)",
"ref_id": "BIBREF11"
},
{
"start": 1923,
"end": 1937,
"text": "(Turney, 2002)",
"ref_id": "BIBREF44"
},
{
"start": 2294,
"end": 2321,
"text": "(Popescu and Etzioni, 2005;",
"ref_id": "BIBREF32"
},
{
"start": 2322,
"end": 2346,
"text": "Brody and Elhadad, 2010;",
"ref_id": "BIBREF3"
},
{
"start": 2347,
"end": 2364,
"text": "Qiu et al., 2011)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "Continuous word representations (also vector representations or word embeddings) represent each word by an n-dimensional vector. Usually, these vectors encapsulate some semantic information derived from the corpus used and the process applied to derive the vector. Among the best known techniques for deriving vector representations of words and documents are Latent Semantic Indexing (Dumais et al., 1995) and Latent Semantic Analysis (Dumais, 2004). Currently it is becoming very common in the literature to employ Neural Networks and the so-called Deep Learning to compute word embeddings (Bengio et al., 2003; Turian et al.; Huang et al., 2012; Mikolov et al., 2013c). Word embeddings show interesting semantic properties for finding related concepts and word analogies, or for use as features in conventional machine learning algorithms (Socher et al., 2013; Tang et al., 2014b; Pavlopoulos and Androutsopoulos, 2014). Word embeddings are also explored in tasks such as deriving adjectival scales (Kim, 2013).",
"cite_spans": [
{
"start": 384,
"end": 405,
"text": "(Dumais et al., 1995)",
"ref_id": "BIBREF5"
},
{
"start": 435,
"end": 449,
"text": "(Dumais, 2004)",
"ref_id": "BIBREF6"
},
{
"start": 592,
"end": 613,
"text": "(Bengio et al., 2003;",
"ref_id": "BIBREF2"
},
{
"start": 614,
"end": 622,
"text": "Turian 1",
"ref_id": null
},
{
"start": 703,
"end": 722,
"text": "Huang et al., 2012;",
"ref_id": "BIBREF15"
},
{
"start": 723,
"end": 745,
"text": "Mikolov et al., 2013c)",
"ref_id": "BIBREF27"
},
{
"start": 914,
"end": 935,
"text": "(Socher et al., 2013;",
"ref_id": null
},
{
"start": 936,
"end": 955,
"text": "Tang et al., 2014b;",
"ref_id": "BIBREF42"
},
{
"start": 956,
"end": 994,
"text": "Pavlopoulos and Androutsopoulos, 2014)",
"ref_id": "BIBREF30"
},
{
"start": 1075,
"end": 1086,
"text": "(Kim, 2013)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Continuous word representations",
"sec_num": "2.1."
},
{
"text": "Our aim is to compare different existing sentiment lexicons and methods to find out whether continuous word embeddings can be used to easily compute accurate sentiment polarity over the words of a domain, and under which conditions. The experiments are carried out on two specific domains, in particular restaurant and laptop reviews.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicons and methods",
"sec_num": "3."
},
{
"text": "The General Inquirer (GI) (Stone et al., 1966) is a very well-known manually crafted lexicon that includes the polarity of many English words. GI contains about 2000 positive and negative words. It has been used in many different research works over the past years. On the other hand, we have also used Bing Liu's sentiment lexicon (Hu and Liu, 2004). According to the web page 2 it has been compiled and extended over many years. It contains around 6800 words with an assigned categorical polarity (positive or negative).",
"cite_spans": [
{
"start": 26,
"end": 45,
"text": "(Stone et al., 1966",
"ref_id": "BIBREF39"
},
{
"start": 336,
"end": 354,
"text": "(Hu and Liu, 2004)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "General lexicons",
"sec_num": "3.1."
},
{
"text": "SentiWordNet assigns scores to each WordNet synset 3 (Esuli and Sebastiani, 2006). The SentiWordNet polarity consists of three scores per synset: positivity, negativity and objectivity. In (Baccianella et al., 2010) version 3.0 of SentiWordNet is introduced with improvements like a random walk approach in the WordNet graph. We have also used Q-WordNet as Personalized PageRanking Vector (QWN-PPV), which propagates and ranks polarity values on the WordNet graph starting from a few seed words (Vicente et al., 2014).",
"cite_spans": [
{
"start": 53,
"end": 81,
"text": "(Esuli and Sebastiani, 2006)",
"ref_id": "BIBREF7"
},
{
"start": 186,
"end": 212,
"text": "(Baccianella et al., 2010)",
"ref_id": "BIBREF1"
},
{
"start": 497,
"end": 519,
"text": "(Vicente et al., 2014)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Wordnet based lexicons",
"sec_num": "3.2."
},
{
"text": "Following the work in (Turney, 2002), we have also derived some polarity lexicons from a domain corpus using Pointwise Mutual Information (PMI). In short, PMI is used as a measure of relatedness between two events, in this case the co-occurrence of a word with seed words of known polarity. In Turney's original work the co-occurrence value was measured by counting hits in a web search engine (the now-defunct AltaVista) for each word together with the seed word \"excellent\" (for positives) and the seed word \"poor\" (for negatives).",
"cite_spans": [
{
"start": 22,
"end": 36,
"text": "(Turney, 2002)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PMI based lexicons",
"sec_num": "3.3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "SO(w) = PMI(w, POS) \u2212 PMI(w, NEG)",
"eq_num": "(1)"
}
],
"section": "PMI based lexicons",
"sec_num": "3.3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "PMI(w1, w2) = log [p(w1, w2) / (p(w1) \u00d7 p(w2))]",
"eq_num": "(2)"
}
],
"section": "PMI based lexicons",
"sec_num": "3.3."
},
{
"text": "Firstly, we have borrowed the lexicon generated in (Kiritchenko et al., 2014) (named NRC CANADA in the experiment tables), which was generated by computing the PMI between each word and positive reviews (4 or 5 stars in a 5-star rating) and negative reviews (1 or 2 stars), for both the restaurant and laptop review datasets. Because it uses the user ratings, this approach is supervised. As a counterpart, we have calculated another PMI-based lexicon, in which we employ the co-occurrence of words within a five-word window with the word excellent (and analogously with the word terrible for negatives) to calculate the PMI score. This is potentially less accurate but requires no supervised information apart from the two seed words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PMI based lexicons",
"sec_num": "3.3."
},
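The window-based PMI lexicon described above can be sketched as follows. This is a minimal illustration, assuming a toy tokenised corpus and add-one smoothing (the seed words excellent/terrible follow the paper; everything else, including the smoothing and the exact probability estimates, is an assumption):

```python
import math
from collections import Counter

def pmi_so(docs, pos_seed="excellent", neg_seed="terrible", window=5, smoothing=1.0):
    """Semantic Orientation via PMI with two seed words.

    SO(w) = PMI(w, pos_seed) - PMI(w, neg_seed), where co-occurrence is
    counted within a +/-`window` token distance. Add-one smoothing is an
    assumption here, to avoid log(0) on a small corpus.
    """
    word_count = Counter()
    cooc = {pos_seed: Counter(), neg_seed: Counter()}
    total = 0
    for tokens in docs:
        word_count.update(tokens)
        total += len(tokens)
        for i, tok in enumerate(tokens):
            if tok in cooc:
                # count every token within the window around a seed occurrence
                for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                    if j != i:
                        cooc[tok][tokens[j]] += 1

    def pmi(w, seed):
        # ~ log( P(w, seed) / (P(w) * P(seed)) ), with additive smoothing
        joint = cooc[seed][w] + smoothing
        return math.log(joint * total / (word_count[w] * word_count[seed] + smoothing))

    return {w: pmi(w, pos_seed) - pmi(w, neg_seed) for w in word_count}
```

Words that co-occur more often with the positive seed than with the negative one get a positive SO, and vice versa; words that never co-occur with either seed stay near zero, which is why this lexicon does not cover the whole vocabulary.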
{
"text": "We have applied the popular Word2Vec (Mikolov et al., 2013a) and the Stanford GloVe system (Pennington and Manning, ) to calculate word embeddings. We have computed three models for each system: one on a restaurant reviews dataset, another on a laptop reviews dataset, and a third one on a much bigger general-domain dataset (consisting of the first billion characters from the English Wikipedia 4 ). Notice that the employed general-domain dataset is much bigger than the domain-based datasets: the general-domain dataset is a 700 MB raw text file after cleaning, while the restaurant and laptop datasets weigh only 28 and 40 MB respectively. General-domain datasets, like the whole Wikipedia data or news datasets from online newspapers, capture general syntactic and semantic regularities very well. However, to capture in-domain word polarities, a smaller domain-focused dataset might work better (Garc\u00eda-Pablos et al., 2015). Also notice that, at the time of writing this paper, many different techniques to calculate word embeddings are appearing that could work better than plain Word2Vec (Li and Jurafsky, 2015; Rothe et al., 2016), but due to their recency they are not employed in these experiments. In the tables of this section it can be observed how the word embeddings computed for the restaurant and laptop domains seem to capture polarity quite accurately just by using word similarity. This is because the employed datasets are customer reviews of each domain, and the kind of content present in customer reviews helps model the meaning and polarity of the words (adjectives in this case). The tables show the top similarities according to the cosine distance between word vectors computed by each model. Words like excellent and horrible are domain independent, and the most similar words are quite equivalent for both domains. But for the third word, slow, the differences between both domains are more evident. The word slow in the context of restaurants is usually employed to describe the service quality (when judging waiters' and waitresses' serving speed and skills), while in the context of laptops it refers to the performance of hardware and/or software. Another advantage versus a model computed on a general domain is that domain-based models will contain domain jargon words or even commonly misspelled words (as long as they appear often enough in the corresponding dataset). A general-domain dataset is less likely to cover all the vocabulary present in any possible domain.",
"cite_spans": [
{
"start": 37,
"end": 60,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF25"
},
{
"start": 91,
"end": 117,
"text": "(Pennington and Manning, )",
"ref_id": null
},
{
"start": 898,
"end": 926,
"text": "(Garc\u00eda-Pablos et al., 2015)",
"ref_id": "BIBREF9"
},
{
"start": 1103,
"end": 1126,
"text": "(Li and Jurafsky, 2015;",
"ref_id": "BIBREF22"
},
{
"start": 1127,
"end": 1146,
"text": "Rothe et al., 2016)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word embedding lexicons",
"sec_num": "3.4."
},
{
"text": "We have used a simple formula to assign a polarity to the words in the vocabulary, using a single positive seed word and a single negative seed word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word embedding lexicons",
"sec_num": "3.4."
},
{
"text": "pol(w) = sim(w, POS) \u2212 sim(w, NEG)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word embedding lexicons",
"sec_num": "3.4."
},
{
"text": "In the equation, POS is the positive seed word for the domain, represented by its corresponding word vector, and analogously NEG is the vector representation of the negative seed word. In the experiments we have used domain-independent seed words with a very clear, context- and domain-independent polarity, in particular excellent and horrible as positive and negative seeds respectively. sim stands for the cosine similarity between word vectors. Note that this simple formula provides a real number, which in a sense gives a continuous value for the polarity. Obtaining a continuous value for the polarity could be an interesting property to measure the strength of the sentiment, but for now we simply convert the polarity value to a binary label: positive if the value is greater than or equal to zero, and negative otherwise. This makes the comparison with the other examined lexicons easier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word embedding lexicons",
"sec_num": "3.4."
},
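A minimal sketch of this polarity computation, assuming a word-to-vector mapping is already available. The toy 3-dimensional vectors below are purely illustrative; in the paper's setting they would come from a Word2Vec or GloVe model:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def polarity(word, embeddings, pos_seed="excellent", neg_seed="horrible"):
    """pol(w) = sim(w, POS) - sim(w, NEG); the sign gives the binary label."""
    w = embeddings[word]
    return cosine(w, embeddings[pos_seed]) - cosine(w, embeddings[neg_seed])

# Purely illustrative toy vectors, NOT real trained embeddings.
toy = {
    "excellent": [1.0, 0.9, 0.1],
    "horrible": [-1.0, -0.8, 0.2],
    "tasty": [0.8, 1.0, 0.0],
    "slow": [-0.9, -1.0, 0.1],
}
```

With a trained model, `embeddings` would map every vocabulary word to its learned vector, which is why this lexicon covers the whole vocabulary: the binary label is positive when `polarity(w, embeddings) >= 0` and negative otherwise.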
{
"text": "In order to generate the lexicons with the methods that require an in-domain corpus (i.e. the PMI-based one, Word2Vec and GloVe) we have used corpora from two different domains. The first corpus consists of customer reviews about restaurants. It is a 100k-review subset about restaurants obtained from the Yelp dataset 5 (henceforth Yelp-restaurants). We have also used a second corpus of customer reviews about laptops. This corpus contains a subset of about 100k reviews from the Amazon electronic device review dataset from the Stanford Network Analysis Project (SNAP) 6 , after selecting reviews that contain the word \"laptop\" (henceforth Amazon-laptops). The corpora have been processed by removing all non-content words (i.e. everything except adjectives, adverbs, verbs and nouns is removed), and words have been lowercased. For other tasks like word-analogy discovery (Mikolov et al., 2013d) or machine translation (Mikolov et al., 2013b) every word is usually kept (even those that are usually considered stopwords), but as our in-domain datasets are of reduced size 7 , we remove the words that are less informative for modelling the polarity, like pronouns, articles or prepositions. After that, both corpora have been used to feed the target methods, obtaining their respective domain-aware results: in the case of the PMI-based lexicon, a score, and in the case of Word2Vec and GloVe, a vector representation of the words for each domain. In the case of Word2Vec we have employed the implementation contained in the Apache Spark MLlib library 8 . This Word2Vec implementation is based on the Word2Vec Skip-gram architecture, and we have kept the default hyper-parameters and configuration 9 .",
"cite_spans": [
{
"start": 878,
"end": 901,
"text": "(Mikolov et al., 2013d)",
"ref_id": "BIBREF28"
},
{
"start": 926,
"end": 949,
"text": "(Mikolov et al., 2013b)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Domain corpora",
"sec_num": "4."
},
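The preprocessing described above (keep only adjectives, adverbs, verbs and nouns, then lowercase) can be sketched as below. The Penn-Treebank-style tag prefixes are an assumption, since the paper does not name the POS tagger used:

```python
# Keep only content words: adjectives (JJ*), adverbs (RB*), verbs (VB*)
# and nouns (NN*). The tag set is an assumed Penn Treebank convention;
# any POS tagger producing (token, tag) pairs would plug in here.
CONTENT_PREFIXES = ("JJ", "RB", "VB", "NN")

def keep_content_words(tagged_tokens):
    """tagged_tokens: list of (token, pos_tag) pairs from a POS tagger."""
    return [tok.lower() for tok, tag in tagged_tokens
            if tag.startswith(CONTENT_PREFIXES)]
```

The filtered, lowercased token lists are then what gets fed to the PMI counting and to the Word2Vec/GloVe training.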
{
"text": "We have performed two different evaluations. On the one hand, we have used the domain corpora (Yelp-restaurants and Amazon-laptops) to automatically obtain a list of domain adjectives ranked by frequency. From that list we have manually selected the first 200 adjectives with context-independent positive or negative polarity for each domain 10 . Then we have manually assigned a polarity label (positive or negative) to each of the selected adjectives. From now on we will refer to these annotated adjectives as the restaurant-adjectives-test-set and the laptop-adjectives-test-set respectively. [Footnotes: 7 Compared to the billion-word datasets employed in other works. 8 http://spark.apache.org/mllib/ 9 Please refer to the Apache Spark MLlib Word2Vec documentation to see what the default parameters are. 10 With context-independent polarity we refer to those adjectives with unambiguous polarity, not depending on the domain aspect they are modifying (e.g. superb is likely to be always positive, while small could be positive or negative depending on the context).] The restaurant-adjectives-test-set contains 119 positive adjectives and 81 negative adjectives, while the laptop-adjectives-test-set contains 127 positives and 73 negatives 11 . On the other hand, we have used the SemEval 2015 task 12 datasets 12 . The first dataset contains 254 annotated reviews about restaurants (a total of 1,315 sentences). The second dataset contains 277 annotated reviews about laptops (a total of 1,739 sentences). On the restaurant-adjectives-test-set and laptop-adjectives-test-set we measure the polarity accuracy (whether a lexicon assigns the correct polarity) and the coverage (whether a lexicon contains a polarity for the requested word). Tables 3 and 4 show the results for restaurants and laptops respectively. In the tables, the accuracy measures how many word polarities have been correctly tagged among the ones present in each lexicon (i.e. out-of-vocabulary words are not taken as errors). The coverage measures how many words were present in each lexicon regardless of the tagged polarity. The experiment shows that static lexicons like GI and Liu's assign polarities with very high precision, but they suffer from lower coverage. A similar behaviour can be observed for the polarities based on WordNet. On the other hand, the lexicons calculated directly on the domain datasets are less accurate, but they have much higher coverage. The NRC CANADA lexicon achieves a very good result, but it must be noted that it employs supervised information. The window-based PMI lexicon achieves a quite good result despite its simplicity, but it does not cover all the words (i.e. some words do not co-occur in the same context). The lexicons based on word embeddings calculated on the domain achieve 100% coverage, because they model the whole vocabulary, and offer reasonable precision. Word embeddings (both Word2Vec and GloVe) calculated on a general-domain corpus still cover a lot of the adjectives, since they have been trained on a very large corpus, but they show lower accuracy in capturing the polarity of the words.",
"cite_spans": [],
"ref_spans": [
{
"start": 1726,
"end": 1740,
"text": "Tables 3 and 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5."
},
{
"text": "The SemEval 2015 based datasets consist of quintuples of aspect-term, entity-attribute, polarity, and starting and ending position of the aspect-term. We are only interested in using the polarity slots, which refer to the polarity of a particular aspect of each sentence (not to the overall sentence polarity). We have applied the different lexicons to infer the polarity of each sentence, and then we have compared them to the gold annotations that come with the datasets. The process of assigning a polarity to each sentence using the different polarity lexicons is the following: \u2022 Negation words are taken into account to reverse the polarity of the subsequent word, in particular: no, neither, nothing, not, n't, none, any, never, without",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SemEval 2015 datasets based experiments",
"sec_num": "5.2."
},
{
"text": "\u2022 The number of positive and negative words according to each lexicon is counted. If the positives count is greater than or equal to the negatives count, the polarity of all polarity slots of the sentence is assigned as positive; and negative otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SemEval 2015 datasets based experiments",
"sec_num": "5.2."
},
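The two steps above can be sketched as a small function. This is a naive illustration, assuming a pre-tokenised sentence and a lexicon that maps each word to +1 (positive) or -1 (negative); the helper name is hypothetical:

```python
# Negation words listed in the paper; a negator flips the polarity of the
# word that immediately follows it.
NEGATORS = {"no", "neither", "nothing", "not", "n't", "none", "any",
            "never", "without"}

def sentence_polarity(tokens, lexicon):
    """Count positive vs. negative words; ties go to positive, as described."""
    pos = neg = 0
    for i, tok in enumerate(tokens):
        if tok not in lexicon:
            continue  # out-of-lexicon words contribute nothing
        score = lexicon[tok]
        if i > 0 and tokens[i - 1] in NEGATORS:
            score = -score  # negation reverses the subsequent word's polarity
        if score > 0:
            pos += 1
        else:
            neg += 1
    return "positive" if pos >= neg else "negative"
```

The predicted label is then assigned to every polarity slot of the sentence and compared against the SemEval gold annotations.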
{
"text": "Notice that this is a very naive polarity annotation process. It is not intended to obtain good results, but to compare the lexicons against real sentences using the same setting. That is why, in general, the results are lower than in the experiment with the bare adjective lists. This naive polarity annotation process is repeated for every polarity lexicon so that the different lexicons and methods can be compared under the same conditions on real review test sets. Table 6 shows the SemEval 2015 laptop results. Some lexicons seem to be more accurate at capturing positive words and others seem to have better recall. It must be noted that in this case what is being annotated are whole sentences of actual reviews, so there are many factors involved apart from the mere polarity of single words. Also in this case, the domain-based word embeddings work better at capturing the polarity than their general-domain counterparts.",
"cite_spans": [],
"ref_spans": [
{
"start": 466,
"end": 473,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "SemEval 2015 datasets based experiments",
"sec_num": "5.2."
},
{
"text": "In this work we have compared different existing lexicons and methods to obtain a polarity value for words in a particular domain. We have shown a simple yet functional way to quickly get a polarity value using only unlabelled texts and continuous word representations. It is similar in essence to other existing methods that require co-occurrence computations among words, but thanks to the semantic properties of continuous word embeddings it does not require words to co-occur and is easier to compute. In addition, we have shown that the similarity of sentiment-bearing words (mainly adjectives) is better modelled using a smaller in-domain dataset rather than a bigger general dataset. We have observed a similar behaviour in preliminary experiments for other languages such as Spanish, French or Italian. An obvious advantage is that, provided enough unlabelled domain data, the word embeddings and polarity scores can be easily obtained for any language. As further work, we would like to experiment with these in-domain word embeddings (and other variants) within more complex sentiment analysis systems to see if they improve the performance. Also, many machine learning based sentiment analysis approaches in the literature already employ word embeddings as input features, usually computed on very big general corpora. It would be interesting to see how general-domain word embeddings, which provide general language knowledge, and in-domain word embeddings, which provide domain-aware information, can be combined to improve the results of such systems. Also, we would like to explore whether approaches of a more weakly supervised nature, like topic modelling and Latent Dirichlet Allocation based systems, which try to jointly model the polarity and other facets of documents, could benefit from the information coming from in-domain word embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6."
},
{
"text": "This work has been supported by Vicomtech-IK4 and partially funded by TUNER project (TIN2015-65308-C5-1-R).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "7."
},
{
"text": "https://www.cs.uic.edu/\\\u02dcliub/FBS/ sentiment-analysis.html\\#lexicon 3 A WordNet synset in a set of synonym words that denote the same concept",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.yelp.com/dataset_challenge 6 http://snap.stanford.edu/data/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Available at https://dl.dropboxusercontent. com/u/7852658/files/restaur_adjs_test.txt and https://dl.dropboxusercontent.com/u/ 7852658/files/laptops_adjs_test.txt respectively 12 http://alt.qcri.org/semeval2015/task12/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Available at http://alt.qcri.org/semeval2015/ task12/index.php?id=data-and-tools",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Q-WordNet : Extracting Polarity from WordNet Senses. Seventh Conference on International Language Resources and Evaluation Malta Retrieved May",
"authors": [
{
"first": "R",
"middle": [],
"last": "Agerri",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Garcia",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "25",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Agerri, R. and Garcia, A. (2009). Q-WordNet : Extracting Polarity from WordNet Senses. Seventh Conference on International Language Resources and Evaluation Malta Retrieved May, 25:2010.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Senti-WordNet 3.0: An Enhanced Lexical Resource for Sentiment Analysis and Opinion Mining",
"authors": [
{
"first": "S",
"middle": [],
"last": "Baccianella",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Esuli",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Sebastiani",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)",
"volume": "0",
"issue": "",
"pages": "2200--2204",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baccianella, S., Esuli, A., and Sebastiani, F. (2010). Senti- WordNet 3.0: An Enhanced Lexical Resource for Sen- timent Analysis and Opinion Mining. Proceedings of the Seventh International Conference on Language Re- sources and Evaluation (LREC'10), 0:2200-2204.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A Neural Probabilistic Language Model",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Ducharme",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Janvin",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2003,
"venue": "The Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "1137--1155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bengio, Y., Ducharme, R., Vincent, P., and Janvin, C. (2003). A Neural Probabilistic Language Model. The Journal of Machine Learning Research, 3:1137-1155.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "An unsupervised aspect-sentiment model for online reviews",
"authors": [
{
"first": "S",
"middle": [],
"last": "Brody",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Elhadad",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brody, S. and Elhadad, N. (2010). An unsupervised aspect-sentiment model for online reviews. The 2010",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "804--812",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Conference of the North American Chapter of the Association for Computational Linguistics, (June):804- 812.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Latent semantic indexing",
"authors": [
{
"first": "S",
"middle": [],
"last": "Dumais",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Furnas",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Landauer",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Deerwester",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Deerwester",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the Text Retrieval Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dumais, S., Furnas, G., Landauer, T., Deerwester, S., Deer- wester, S., et al. (1995). Latent semantic indexing. In Proceedings of the Text Retrieval Conference.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Latent semantic analysis. Annual review of information science and technology",
"authors": [
{
"first": "S",
"middle": [
"T"
],
"last": "Dumais",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "38",
"issue": "",
"pages": "188--230",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dumais, S. T. (2004). Latent semantic analysis. Annual re- view of information science and technology, 38(1):188- 230.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "SentiWordNet : A Publicly Available Lexical Resource for Opinion Mining",
"authors": [
{
"first": "A",
"middle": [],
"last": "Esuli",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Sebastiani",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of LREC 2006",
"volume": "",
"issue": "",
"pages": "417--422",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Esuli, A. and Sebastiani, F. (2006). SentiWordNet : A Publicly Available Lexical Resource for Opinion Min- ing. Proceedings of LREC 2006, pages 417-422.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Unsupervised word polarity tagging by exploiting continuous word representations",
"authors": [
{
"first": "A",
"middle": [],
"last": "Garc\u00eda-Pablos",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Cuadros",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Rigau",
"suffix": ""
}
],
"year": 2015,
"venue": "Procesamiento del Lenguaje Natural",
"volume": "55",
"issue": "",
"pages": "127--134",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Garc\u00eda-Pablos, A., Cuadros, M., and Rigau, G. (2015). Un- supervised word polarity tagging by exploiting continu- ous word representations. Procesamiento del Lenguaje Natural, 55:127-134.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Sentiment Analysis : How to Derive Prior Polarities from Senti-WordNet. Emnlp",
"authors": [
{
"first": "M",
"middle": [],
"last": "Guerini",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Gatti",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Turchi",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "1259--1269",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guerini, M., Gatti, L., and Turchi, M. (2013). Sentiment Analysis : How to Derive Prior Polarities from Senti- WordNet. Emnlp, pages 1259-1269.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Predicting the semantic orientation of adjectives",
"authors": [
{
"first": "V",
"middle": [],
"last": "Hatzivassiloglou",
"suffix": ""
},
{
"first": "K",
"middle": [
"R"
],
"last": "Mckeown",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and Eighth Conference of the",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hatzivassiloglou, V. and McKeown, K. R. (1997). Pre- dicting the semantic orientation of adjectives. Proceed- ings of the 35th Annual Meeting of the Association for Computational Linguistics and Eighth Conference of the",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Embedding Word Similarity with Neural Machine Translation",
"authors": [
{
"first": "F",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Jean",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Devin",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hill, F., Cho, K., Jean, S., Devin, C., and Bengio, Y. (2014). Embedding Word Similarity with Neural Machine Trans- lation. pages 1-12.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Mining opinion features in customer reviews",
"authors": [
{
"first": "M",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hu, M. and Liu, B. (2004). Mining opinion features in customer reviews. AAAI.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Improving word representations via global context and multiple word prototypes",
"authors": [
{
"first": "E",
"middle": [
"H"
],
"last": "Huang",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers",
"volume": "1",
"issue": "",
"pages": "873--882",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huang, E. H., Socher, R., Manning, C. D., and Ng, A. (2012). Improving word representations via global con- text and multiple word prototypes. In Proceedings of the 50th Annual Meeting of the Association for Computa- tional Linguistics: Long Papers-Volume 1, pages 873- 882.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "SensEmbed: Learning Sense Embeddings for Word and Relational Similarity",
"authors": [
{
"first": "I",
"middle": [],
"last": "Iacobacci",
"suffix": ""
},
{
"first": "M",
"middle": [
"T"
],
"last": "Pilehvar",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "95--105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iacobacci, I., Pilehvar, M. T., and Navigli, R. (2015). SensEmbed: Learning Sense Embeddings for Word and Relational Similarity. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguis- tics and the 7th International Joint Conference on Nat- ural Language Processing (Volume 1: Long Papers), (1):95-105.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Swiss-Chocolate : Sentiment Detection using Sparse SVMs and Part-Of-Speech n -Grams",
"authors": [
{
"first": "M",
"middle": [],
"last": "Jaggi",
"suffix": ""
},
{
"first": "E",
"middle": [
"T H"
],
"last": "Zurich",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Cieliebak",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "601--604",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jaggi, M., Zurich, E. T. H., and Cieliebak, M. (2014). Swiss-Chocolate : Sentiment Detection using Sparse SVMs and Part-Of-Speech n -Grams. (SemEval):601- 604.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "WordRank: Learning Word Embeddings via Robust Ranking",
"authors": [
{
"first": "S",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Yun",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Yanardag",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Matsushima",
"suffix": ""
},
{
"first": "S",
"middle": [
"V N"
],
"last": "Vishwanathan",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ji, S., Yun, H., Yanardag, P., Matsushima, S., and Vish- wanathan, S. V. N. (2015). WordRank: Learning Word Embeddings via Robust Ranking. pages 1-12.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Deriving adjectival scales from continuous space word representations",
"authors": [
{
"first": "J",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2013,
"venue": "Emnlp",
"volume": "",
"issue": "",
"pages": "1625--1630",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kim, J.-k. (2013). Deriving adjectival scales from continuous space word representations. Emnlp, (October):1625-1630.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "NRC-Canada-2014 : Detecting Aspects and Sentiment in Customer Reviews",
"authors": [
{
"first": "S",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "S",
"middle": [
"M"
],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "437--442",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kiritchenko, S., Zhu, X., Cherry, C., Mohammad, S. M., and Mohammad, S. (2014). NRC-Canada-2014 : De- tecting Aspects and Sentiment in Customer Reviews. (SemEval):437-442.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Distributed Representations of Sentences and Documents",
"authors": [
{
"first": "Q",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2014,
"venue": "International Conference on Machine Learning -ICML 2014",
"volume": "32",
"issue": "",
"pages": "1188--1196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Le, Q. and Mikolov, T. (2014). Distributed Representa- tions of Sentences and Documents. International Con- ference on Machine Learning -ICML 2014, 32:1188- 1196.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Do multi-sense embeddings improve natural language understanding?",
"authors": [
{
"first": "J",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1506.01070"
]
},
"num": null,
"urls": [],
"raw_text": "Li, J. and Jurafsky, D. (2015). Do multi-sense embeddings improve natural language understanding? arXiv preprint arXiv:1506.01070.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Joint Sentiment / Topic Model for Sentiment Analysis. Cikm",
"authors": [
{
"first": "C",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "375--384",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin, C., Road, N. P., and Ex, E. (2009). Joint Sentiment / Topic Model for Sentiment Analysis. Cikm, pages 375- 384.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Generating Polarity Lexicons with WordNet propagation in five languages",
"authors": [
{
"first": "I",
"middle": [],
"last": "Maks",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Izquierdo",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Frontini",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Agerri",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Azpeitia",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Vossen",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "1155--1161",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maks, I., Izquierdo, R., Frontini, F., Agerri, R., Azpeitia, A., and Vossen, P. (2014). Generating Polarity Lexi- cons with WordNet propagation in five languages. pages 1155-1161.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Efficient Estimation of Word Representations in Vector Space",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--12",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013a). Efficient Estimation of Word Representations in Vec- tor Space. arXiv preprint arXiv:1301.3781, pages 1-12, January.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Exploiting Similarities among Languages for Machine Translation",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Q",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {
"arXiv": [
"arXiv:1309.4168v1"
]
},
"num": null,
"urls": [],
"raw_text": "Mikolov, T., Le, Q. V., and Sutskever, I. (2013b). Exploit- ing Similarities among Languages for Machine Transla- tion. In arXiv preprint arXiv:1309.4168v1, pages 1-10.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Distributed Representations of Words and Phrases and their Compositionality",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikolov, T., Sutskever, I., Chen, K., Corrado, G., and Dean, J. (2013c). Distributed Representations of Words and Phrases and their Compositionality. arXiv preprint arXiv: . . . , pages 1-9, October.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Linguistic regularities in continuous space word representations",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "W.-T",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "746--751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikolov, T., Yih, W.-t., and Zweig, G. (2013d). Linguis- tic regularities in continuous space word representations. Proceedings of NAACL-HLT, pages 746-751.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Opinion mining and sentiment analysis. Foundations and trends in information retrieval",
"authors": [
{
"first": "B",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "2",
"issue": "",
"pages": "1--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pang, B. and Lee, L. (2008). Opinion mining and sen- timent analysis. Foundations and trends in information retrieval, 2(1-2):1-135.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Aspect term extraction for sentiment analysis: New datasets, new evaluation measures and an improved unsupervised method",
"authors": [
{
"first": "J",
"middle": [],
"last": "Pavlopoulos",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Androutsopoulos",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of LASMEACL",
"volume": "",
"issue": "",
"pages": "44--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pavlopoulos, J. and Androutsopoulos, I. (2014). Aspect term extraction for sentiment analysis: New datasets, new evaluation measures and an improved unsupervised method. Proceedings of LASMEACL, pages 44-52.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pennington, J. and Manning, C. ). Glove: Global vectors for word representation. Emnlp2014.Org.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Extracting product features and opinions from reviews. Natural language processing and text mining",
"authors": [
{
"first": "A",
"middle": [],
"last": "Popescu",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "339--346",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Popescu, A. and Etzioni, O. (2005). Extracting product features and opinions from reviews. Natural language processing and text mining, (October):339-346.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Opinion word expansion and target extraction through double propagation",
"authors": [
{
"first": "G",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bu",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2010,
"venue": "Computational linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qiu, G., Liu, B., Bu, J., and Chen, C. (2011). Opin- ion word expansion and target extraction through double propagation. Computational linguistics, (July 2010).",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Determining the Polarity of Words through a Common Online Dictionary",
"authors": [
{
"first": "C",
"middle": [],
"last": "Ramos",
"suffix": ""
},
{
"first": "N",
"middle": [
"C"
],
"last": "Marques",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramos, C. and Marques, N. C. (2005). Determining the Polarity of Words through a Common Online Dictionary.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Ultradense word embeddings by orthogonal transformation",
"authors": [
{
"first": "S",
"middle": [],
"last": "Rothe",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ebert",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1602.07572"
]
},
"num": null,
"urls": [],
"raw_text": "Rothe, S., Ebert, S., and Sch\u00fctze, H. (2016). Ultradense word embeddings by orthogonal transformation. arXiv preprint arXiv:1602.07572.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Symmetric Pattern Based Word Embeddings for Improved Word Similarity Prediction",
"authors": [
{
"first": "R",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schwartz, R., Reichart, R., and Rappoport, A. (2014). Symmetric Pattern Based Word Embeddings for Im- proved Word Similarity Prediction. 353.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. newdesign.aclweb.org.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "The general inquirer: A computer approach to content analysis",
"authors": [
{
"first": "P",
"middle": [
"J"
],
"last": "Stone",
"suffix": ""
},
{
"first": "D",
"middle": [
"C"
],
"last": "Dunphy",
"suffix": ""
},
{
"first": "M",
"middle": [
"S"
],
"last": "Smith",
"suffix": ""
}
],
"year": 1966,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stone, P. J., Dunphy, D. C., and Smith, M. S. (1966). The general inquirer: A computer approach to content analy- sis.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Lexicon-Based Methods for Sentiment",
"authors": [
{
"first": "M",
"middle": [],
"last": "Taboada",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Brooke",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Tofiloski",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Voll",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Stede",
"suffix": ""
}
],
"year": 2010,
"venue": "Analysis. Computational Linguistics",
"volume": "37",
"issue": "",
"pages": "267--307",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taboada, M., Brooke, J., Tofiloski, M., Voll, K., and Stede, M. (2011). Lexicon-Based Methods for Senti- ment Analysis. Computational Linguistics, 37(Septem- ber 2010):267-307.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Learning Sentiment-Specific Word Embedding",
"authors": [
{
"first": "D",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Qin",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "1555--1565",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tang, D., Wei, F., Yang, N., Zhou, M., Liu, T., and Qin, B. (2014a). Learning Sentiment-Specific Word Embed- ding. Acl, pages 1555-1565.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Learning sentiment-specific word embedding for twitter sentiment classification",
"authors": [
{
"first": "D",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Qin",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1555--1565",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tang, D., Wei, F., Yang, N., Zhou, M., Liu, T., and Qin, B. (2014b). Learning sentiment-specific word embed- ding for twitter sentiment classification. In Proceedings of the 52nd Annual Meeting of the Association for Com- putational Linguistics, pages 1555-1565.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Word Representations: A Simple and General Method for Semisupervised Learning",
"authors": [
{
"first": "J",
"middle": [],
"last": "Turian",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "384--394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Turian, J., Ratinov, L., and Bengio, Y. (2010). Word Rep- resentations: A Simple and General Method for Semi- supervised Learning. Proceedings of the 48th Annual Meeting of the Association for Computational Linguis- tics, pages 384-394.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised Classification of Reviews",
"authors": [
{
"first": "P",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
}
],
"year": 2002,
"venue": "Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Turney, P. D. (2002). Thumbs Up or Thumbs Down? Se- mantic Orientation Applied to Unsupervised Classifica- tion of Reviews. Computational Linguistics, (July):8.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Simple , Robust and ( almost ) Unsupervised Generation of Polarity Lexicons for Multiple Languages",
"authors": [
{
"first": "S",
"middle": [],
"last": "Vicente",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Agerri",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Rigau",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vicente, S., Agerri, R., and Rigau, G. (2014). Simple , Ro- bust and ( almost ) Unsupervised Generation of Polarity Lexicons for Multiple Languages. Eacl2014.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Aspect and entity extraction for opinion mining",
"authors": [
{
"first": "L",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2014,
"venue": "Data mining and knowledge discovery for big data",
"volume": "",
"issue": "",
"pages": "1--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhang, L. and Liu, B. (2014). Aspect and entity extrac- tion for opinion mining. In Data mining and knowledge discovery for big data, pages 1-40. Springer.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"html": null,
"content": "<table><tr><td colspan=\"3\">Laptops dataset computed similarities</td></tr><tr><td>excellent</td><td>horrible</td><td>slow</td></tr><tr><td>outstanding</td><td>terrible</td><td>counterintuitive</td></tr><tr><td>exceptional</td><td>deplorable</td><td>painfully</td></tr><tr><td>awesome</td><td>awful</td><td>unstable</td></tr><tr><td>incredible</td><td>abysmal</td><td>sluggish</td></tr><tr><td>excelent</td><td>poor</td><td>choppy</td></tr><tr><td>amazing</td><td>horrid</td><td>fast</td></tr><tr><td>excellant</td><td>lousy</td><td>buggy</td></tr><tr><td>fantastic</td><td>whining</td><td>slows</td></tr><tr><td>terrific</td><td>horrendous</td><td>frustratingly</td></tr><tr><td>superb</td><td>unprofessional</td><td>flaky</td></tr></table>",
"num": null,
"type_str": "table",
"text": ""
},
"TABREF2": {
"html": null,
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "Most similar words in the word embedding space computed on laptops reviews dataset, according to the cosine similarity, for words excellent, horrible and slow"
},
"TABREF4": {
"html": null,
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "Resturants 200 adjs lexicon results"
},
"TABREF6": {
"html": null,
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "Laptops 200 adjs lexicon results"
},
"TABREF8": {
"html": null,
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "Semeval 2015 restaurants results\u2022 Only adjectives and verbs (e.g. hate, recommend)are taken into account to calculate polarity (common verbs like be and have are omitted)"
},
"TABREF9": {
"html": null,
"content": "<table><tr><td colspan=\"5\">LAPTOPS (SEMEVAL 2015 DATASET)</td><td/></tr><tr><td>Name</td><td/><td>Prec.</td><td>Rec.</td><td>F1</td><td>Acc.</td></tr><tr><td>GI</td><td colspan=\"2\">posit. 0.631 neg. 0.751</td><td>0.939 0.328</td><td>0.755 0.456</td><td>0.651</td></tr><tr><td>BingLiu</td><td>posit. neg.</td><td>0.64 0.821</td><td>0.96 0.343</td><td>0.768 0.484</td><td>0.669</td></tr><tr><td>SWN</td><td>posit. neg.</td><td>0.63 0.671</td><td>0.903 0.345</td><td>0.742 0.456</td><td>0.638</td></tr><tr><td>QWN-PPV</td><td colspan=\"4\">posit. 0.605 0.9411 0.736 neg. 0.675 0.228 0.341</td><td>0.614</td></tr><tr><td>NRC CAN.</td><td colspan=\"2\">posit. 0.653 neg. 0.75</td><td>0.922 0.409</td><td>0.764 0.529</td><td>0.673</td></tr><tr><td>PMI W 5</td><td colspan=\"2\">posit. 0.622 neg. 0.58</td><td>0.841 0.366</td><td>0.715 0.449</td><td>0.611</td></tr><tr><td>W2V DOM</td><td colspan=\"2\">posit. 0.728 neg. 0.673</td><td>0.825 0.636</td><td>0.774 0.654</td><td>0.708</td></tr><tr><td>W2V GEN</td><td colspan=\"2\">posit. 0.533 neg. 0.362</td><td>0.443 0.5</td><td>0.484 0.42</td><td>0.441</td></tr><tr><td>GloVe DOM</td><td>posit. neg.</td><td>0.59 0.762</td><td>0.971 0.159</td><td>0.734 0.263</td><td>0.604</td></tr><tr><td>GloVe GEN</td><td colspan=\"2\">posit. 0.571 neg. 0.528</td><td>0.932 0.120</td><td>0.708 0.196</td><td>0.567</td></tr></table>",
"num": null,
"type_str": "table",
"text": "shows the results for restaurants dataset while table 6 shows the results for laptops dataset. These results have been calculated using the evaluation script provided by the SemEval 2015 organizers during the competition 13 . The results show that there is no a clear winner, and the best performing lexicon vary depending on the domain. Some"
}
}
}
}