{
"paper_id": "R09-1048",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:00:30.496983Z"
},
"title": "Multi-entity Sentiment Scoring",
"authors": [
{
"first": "Karo",
"middle": [],
"last": "Moilanen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Oxford University Computing Laboratory",
"location": {}
},
"email": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Pulman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Oxford University Computing Laboratory",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a compositional framework for modelling entity-level sentiment (sub)contexts, and demonstrate how holistic multi-entity polarity scoring emerges as a by-product of compositional sentiment parsing. A data set of five annotators' multi-entity judgements is presented, and a human ceiling is established for the challenging new task. The accuracy of an initial implementation, which includes both supervised learning and heuristic distance-based scoring methods, is 5.6\u223c6.8 points below the human ceiling amongst sentences and 8.1\u223c8.7 points amongst phrases.",
"pdf_parse": {
"paper_id": "R09-1048",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a compositional framework for modelling entity-level sentiment (sub)contexts, and demonstrate how holistic multi-entity polarity scoring emerges as a by-product of compositional sentiment parsing. A data set of five annotators' multi-entity judgements is presented, and a human ceiling is established for the challenging new task. The accuracy of an initial implementation, which includes both supervised learning and heuristic distance-based scoring methods, is 5.6\u223c6.8 points below the human ceiling amongst sentences and 8.1\u223c8.7 points amongst phrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The ability to detect author sentiment towards various entities in text is a fundamental goal in sentiment analysis, and holds great promise for many applications. Entities, which can comprise anything from mentions of people or organisations to concrete or even abstract objects, condition what a text is ultimately about. Besides the intrinsic value of entity scoring, the success of document-and sentence-level analysis is also decided by how accurately entities in them can be modelled. Deep entity analysis unfortunately presents the most difficult challenges, be they linguistic or computational. One of the most recent developments in the area -compositional semanticshas shown potential for sentence-and expression-level analysis in both logic-oriented [11] , [9] and machine learning-oriented [3] paradigms. Our goal in this paper is to further that avenue by extending it to entity-level sentiment analysis.",
"cite_spans": [
{
"start": 761,
"end": 765,
"text": "[11]",
"ref_id": "BIBREF10"
},
{
"start": 768,
"end": 771,
"text": "[9]",
"ref_id": "BIBREF8"
},
{
"start": 802,
"end": 805,
"text": "[3]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Entity-level approaches have so far involved relatively shallow methods which usually presuppose some pre-given topic or entity of relevance to be classified or scored ( \u00a75.3). Other proposals have attempted specific semantic sentiment roles such as evident sentiment HOLDERs, SOURCEs, TARGETs, or EXPERI-ENCERs ( \u00a75.2). What characterises these approaches is that only a few specific entities in text are analysed while all others are left unanalysed. While shallow approaches can capture some amount of explicitly expressed sentiment, they ignore all layers of implicit sentiment pertaining to a multitude of other entities. We believe that access to these rich layers is required for deeper logical sentiment reasoning in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We take a different view on the problem and investigate the possibility of a holistic multi -entity analysis in that we make no categorical distinctions between individual entity mentions, topics, or sentiment roles of any kind. We instead refer to all base nouns simply as entity markers which may (or may not) serve the above metafunctions, and aim at classifying all such markers in sentences using a single, unified approach. For the sentence in Ex. 1, we envisage a classifier that classifies all of the bracketed entities as positive (+) , neutral (N) , or negative (-) (NB. / = 'or'):",
"cite_spans": [
{
"start": 540,
"end": 543,
"text": "(+)",
"ref_id": null
},
{
"start": 554,
"end": 557,
"text": "(N)",
"ref_id": null
},
{
"start": 572,
"end": 575,
"text": "(-)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) \"Here's the [thing] (N)/(+) : Other [studies] (N)/(+) have found that [clergy] (+) , and not [psychologists] (-)/(+) or other mental [health] (+) [experts] (+)/ (-) , are the most common [source] (+)/(N) of [help] (+) sought in [times] (N)/(-) of psychological [distress] (-) .\"",
"cite_spans": [
{
"start": 16,
"end": 23,
"text": "[thing]",
"ref_id": null
},
{
"start": 83,
"end": 86,
"text": "(+)",
"ref_id": null
},
{
"start": 146,
"end": 149,
"text": "(+)",
"ref_id": null
},
{
"start": 165,
"end": 168,
"text": "(-)",
"ref_id": null
},
{
"start": 232,
"end": 239,
"text": "[times]",
"ref_id": null
},
{
"start": 276,
"end": 279,
"text": "(-)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Note that, in this kind of deep analysis, not only can the polarity of an entity differ from the global, sentential reading but it may also depend heavily on one's subjective point of view: for example, the entity [experts] is logically either positive or negative, arguably. Simple keyword spotting, window-based techniques, and even statistical features have limited power in multi-entity analysis because of the inherently overlapping and interdependent nature of entities. We argue in this paper that the analytical strategy towards this problem needs to be grammatical in nature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Going beyond existing shallow single-entity approaches to deep multi-entity scoring requires the 'conventional' definitional scope of sentiment to be extended to include not only 1) explicit subjective expressions of sentiment, opinions, and emotions, but also 2) implicit subjective expressions and connotations describing some positive (desirable, favourable), negative (undesirable, unfavourable), or neutral (objective) state of affairs in the world. Our classification task is accordingly much wider than most past work in the area. We now illustrate how existing compositional approaches can be extended for multi-entity scoring purposes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We adopted the compositional sentiment model described in [11] as a basis for our scoring framework. In idem., polarity classification is broken down into binary combinatory steps whereby two syntactic input (IN) constituents are combined at a time, and a threevalued polarity logic controlled by a sentiment grammar calculates a polarity for the resultant composite constituent. The process starts with word-level lexical 2.1 Compositional Processes. The model in idem. operates with positive (POS), negative (NEG), and neutral (NTR) polarities, and reversive (\u00ac) and equative (=) polarity shifting values. Non-neutral sentiment propagation is modelled by allowing nonneutral (POS, NEG) constituents to override NTR ones (e.g. \"[funny (+) ",
"cite_spans": [
{
"start": 58,
"end": 62,
"text": "[11]",
"ref_id": "BIBREF10"
},
{
"start": 736,
"end": 739,
"text": "(+)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Parsing",
"sec_num": "2"
},
{
"text": "). The model supports polarity-reversing compositions (cf. [14] ) in which reversive (\u00ac) constituents reverse non-neutral ones (e.g. \"[no [\u00ac] talent (+) ] (-) \"; \"[tax (-) decreases [\u00ac] ] (+) \"), and the resolution of non-neutral polarity conflicts (e.g. \"[bad (-) luck (+) ] (-) \"; \"[cancer (-) cure (+) ] (+) \").",
"cite_spans": [
{
"start": 59,
"end": 63,
"text": "[14]",
"ref_id": "BIBREF13"
},
{
"start": 138,
"end": 141,
"text": "[\u00ac]",
"ref_id": null
},
{
"start": 168,
"end": 171,
"text": "(-)",
"ref_id": null
},
{
"start": 182,
"end": 185,
"text": "[\u00ac]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Parsing",
"sec_num": "2"
},
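{
"text": "To make these combinatory steps concrete, the following minimal Python sketch (our illustration, not the authors' implementation; the rule inventory is reduced to the reversal and dominance behaviour named above) composes two constituents once the grammar has decided which one is superordinate:\nPOS, NTR, NEG = '+', 'N', '-'\nFLIP = {POS: NEG, NEG: POS}\n\ndef compose(spr, sub):\n    # Each constituent is a (polarity, reversive) pair; spr dominates sub.\n    (p1, rev1), (p2, rev2) = spr, sub\n    if rev2 and p1 != NTR:\n        return (FLIP[p1], False)  # reversive dependent flips a non-neutral head: [no [neg] talent (+)] (-)\n    if rev1 and p2 != NTR:\n        return (FLIP[p2], False)  # reversive head flips a non-neutral dependent: [tax (-) decreases [neg]] (+)\n    if p1 == NTR:\n        return (p2, False)        # non-neutral propagation: POS/NEG overrides NTR\n    return (p1, False)            # conflict resolution: the superordinate wins, e.g. [bad (-) luck (+)] (-)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Parsing",
"sec_num": "2"
},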
{
"text": "2. Crucially then, a constituent may be superordinate in one syntactic environment but subordinate somewhere else: consider \"helpline (+) \" in \"[abuse helpline] (+) \" vs. \"[useless helpline] (-) \", for example. The effects of different syntactic environments on IN constituent rankings are specified in a hand-written sentiment grammar which is described in more detail in [11] . Table 1 illustrates some sample grammatical rankings.",
"cite_spans": [
{
"start": 373,
"end": 377,
"text": "[11]",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Parsing",
"sec_num": "2"
},
{
"text": "2.3 Pre-processing. Raw text is first processed with a dependency parser 1 . A flat parse tree is then generated in which each constituent head is linked to zero or more pre-and/or post-head dependents. Each leaf node is assigned a prior sentiment polarity and reversal value. These are obtained from an extensive word-class-specific, general-purpose main sentiment lexicon of 57103 sentiment words (22402 ADJ, 6487 ADV, 19004 N, 9210 V), and from an auxiliary list of 312343 known NTR words. Our main lexicon, which was compiled manually based on WordNet 2.1 synsets and glosses, contains 21341 POS, 7036 NTR, and 28726 NEG entries; 1700 (3%) have (\u00ac) reversal features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Parsing",
"sec_num": "2"
},
{
"text": "2.4 Parsing. Sentiment analysis starts with the main lexical head verb of the root clause (or the head noun of a main clausal NP), and first descends recursively down to its lowermost atomic leaf constituents. Through a recursive bottom-up traversal of the dependency tree, each constituent's internal polarity is 1 Connexor Machinese Syntax (www.connexor.com) resolved before it is combined with its parent constituent. When parsing a constituent, the parser follows a fixed order in combining the constituent head (H i ) first with j post-head (R i+1 : i+j ) dependents and then with k pre-head (L i\u2212k : i\u22121 ) dependents (schematised in Fig. 1 ). Each combinatory step operates on the head and only one of its dependents, and consults the sentiment grammar ( \u00a72.2) to determine which element is SPR and assigns the resultant compositional polarity to the head-dependent pair. ",
"cite_spans": [
{
"start": 314,
"end": 315,
"text": "1",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 639,
"end": 645,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Sentiment Parsing",
"sec_num": "2"
},
{
"text": "[Li\u2212k] [Li\u22121] [Hi] [Ri+1] [Hi : Ri+1] . . . [Ri+j] [Hi : Ri+j] [Li\u22121 : Hi : Ri+j] . . . [Li\u2212k : Hi : Ri+j]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Parsing",
"sec_num": "2"
},
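{
"text": "A compact sketch of this traversal (ours, assuming a simple Node record; compose and the SPR/SUB ranking are supplied by the grammar, as in the sketch above) makes the fixed combination order explicit - post-head dependents first, then pre-head dependents from the innermost outwards:\nclass Node:\n    def __init__(self, polarity, reversive=False, pre=None, post=None):\n        self.polarity, self.reversive = polarity, reversive\n        self.pre = pre or []    # pre-head dependents L(i-k) ... L(i-1)\n        self.post = post or []  # post-head dependents R(i+1) ... R(i+j)\n\ndef parse(node, rank):\n    # Resolve each dependent bottom-up, then fold it into the head one\n    # combinatory step at a time; rank() orders (head, dep) as (SPR, SUB).\n    state = (node.polarity, node.reversive)\n    for dep in node.post + list(reversed(node.pre)):\n        dep_state = parse(dep, rank)\n        state = compose(*rank(state, dep_state))\n    return state",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Parsing",
"sec_num": "2"
},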
{
"text": "Since each constituent -a head with k pre-j post-head dependents -stands for a unique (sub)part of the sentence",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Scoring",
"sec_num": "3"
},
{
"text": "(i.e. [L i\u2212k : H i : R i+j ])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Scoring",
"sec_num": "3"
},
{
"text": ", a constituent and its internal polarity constitutes a sentiment (sub)context in the sentence. Each constituent consequently shapes the polarities of the entity marker(s) inside it. Leafnode (sub)contexts holding but a single entity marker can be seen as intrinsically lexical for they represent atomic pieces of information without alluding to any higher context(s). In contrast, (sub)contexts in which entity markers fall under the influence of other words are extrinsically contextual. Importantly then, the very possibility of expressing opinions and sentiments about an entity means that a sentence can exhibit many contextual polarities for it. These can and often do differ from the atomic lexical polarity of the entity and the polarity of the sentence. In the headline \"[EU opposes [credit] crunch rescue package] (-) \", the entity [credit] is shaped by six (sub)contexts (Ex. 2):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Scoring",
"sec_num": "3"
},
{
"text": "(2) 1: [ [credit] ] (+) 2: [ [credit] crunch ] (-) 3: [ [credit] crunch rescue ] (+) 4: [ [credit] crunch rescue package ] (+) 5: [ opposes [credit] crunch rescue package ] (-) 6: [ EU opposes [credit] crunch rescue package ] (-)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Scoring",
"sec_num": "3"
},
{
"text": "We aim at including in our analysis not only the two extremes (1: atomic lexical, 6: global sentential) but all intermediate levels of sentiment as well. Seen as a stack of (sub)contexts, the occurrences of an entity across all (sub)contexts along the atomic-global continuum give rise to three gradient polarity distribution scores (#POS, #NTR, #NEG). Entity-level sentiment scoring thus involves measuring how many times each entity was found in POS, NTR, and NEG (sub)contexts. The scoring process is incremental in that each time the parser has calculated a compositional polarity for a constituent (i.e. a (sub)context), we locate all entity markers inside the (sub)context, and, for each found entity marker, use the polarity distribution within the (sub)context to increment the entity's polarity counts, accordingly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Scoring",
"sec_num": "3"
},
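{
"text": "A minimal sketch of this incremental loop (our code, not the authors'; score_subcontext stands in for either scoring method below, and the tokens attribute is an assumption):\nfrom collections import defaultdict\n\ncounts = defaultdict(lambda: {'POS': 0.0, 'NTR': 0.0, 'NEG': 0.0})\n\ndef update_entities(subcontext, entity_markers, score_subcontext):\n    dist = score_subcontext(subcontext)       # {POS, NTR, NEG} polarity distribution\n    for entity in entity_markers:\n        if entity in subcontext.tokens:       # marker falls inside this (sub)context\n            for p in ('POS', 'NTR', 'NEG'):\n                counts[entity][p] += dist[p]  # increment the entity's polarity counts",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Scoring",
"sec_num": "3"
},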
{
"text": "The main challenge is how (sub)contexts' polarity distributions are actually measured. We experimented with two possible scoring methods. Our scoring framework is however not restricted to any particular scoring method(s) per se as other scorers can be plugged in.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Scoring",
"sec_num": "3"
},
{
"text": "3.1 Distance Scoring. The most basic method for measuring the polarity distribution of a (sub)context is a bidirectional polarity search around an entity marker word. For polarity p \u2208 {POS, NTR, NEG}, in a (sub)context with n neighbouring words with p around an entity marker word at word ID w m , the following distance scoring function is used within each (sub)context:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Scoring",
"sec_num": "3"
},
{
"text": "dist(p) = n X i=1 1 worddist(wm, w p i ) \u2022 \u0398 clausedist(wm, w p i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Scoring",
"sec_num": "3"
},
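{
"text": "A direct Python transcription of dist(p) (ours; word and clause IDs are assumed to be integers, the marker is not its own neighbour, and clause distance is floored at 1 so that same-clause words receive the full \u0398 boost - a detail the formula leaves implicit):\nTHETA = 1.75  # same-clause boost coefficient (set experimentally)\n\ndef dist(p, marker, neighbours):\n    # marker: (word_id, clause_id) of the entity marker word w_m.\n    # neighbours: (word_id, clause_id, polarity) triples in the (sub)context.\n    w_m, c_m = marker\n    total = 0.0\n    for w_i, c_i, pol in neighbours:\n        if pol != p:\n            continue\n        worddist = abs(w_m - w_i)        # raw word distance\n        clausedist = abs(c_m - c_i) + 1  # assumed floor of 1 (see above)\n        total += (1.0 / worddist) * (THETA / clausedist)\n    return total",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Scoring",
"sec_num": "3"
},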
{
"text": "In addition to the raw distance between the entity marker and a neighbouring word (worddist), the distance between their respective (full) clause IDs is also considered (clausedist). The \u0398 coefficient, which was set experimentally at 1.75, boosts neighbouring words that are in the same (full) clause as the entity marker. Because only some higher-level (sub)contexts contain subregions with contrasting polarities (e.g. multiple clauses), distance scoring often suggests similar polarity distributions for all entities in a given (sub)context. 3.2 Syntactic Scoring. Distance scoring takes no notice of syntactic or lexical evidence around entity markers. Such blanket coverage risks being too broad. For more complex scoring, we used supervised learning with Support Vector Machines 2 . We apply the feature template in Table 2 to \u00b13 words around each entity marker (within a (sub)context). The PRIOR POLARITY and POLARITY REVERSAL features refer to a word's raw prior lexical polarity and polarity reversal values while GLOBAL POLARITY indicates the current (sub)context's internal polarity (as suggested by the parser). The DEPENDENCY TYPE, GRAMMATICAL RELATION, SYNTACTIC ROLE, and WORD CLASS features reflect the tags assigned to each word by the dependency parser. POLARITY WSD TYPE indicates whether a word is tagged in the lexicon as capable of bearing more than one polarity (e.g. \"lean (N)(+)(-) \", \"chicken (N)(-) \", \"bliss (+) \"). UNIGRAM features are also included. In total, 19502 binary features ( \u00a74.1) were used to train a polynomial kernel.",
"cite_spans": [],
"ref_spans": [
{
"start": 822,
"end": 829,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Entity Scoring",
"sec_num": "3"
},
{
"text": "Based on the observed variability in human annotations in the training data ( \u00a74.1), we trained five separate models (one per annotator), and run them as a committee. In each (sub)context, each entity marker word is submitted to the committee and the number of classifiers returning polarity p \u2208 {POS, NTR, NEG} as a class label is used to increment the entity's corresponding polarity counts: 3.3 Weights. The sentiment parsing process scores entities incrementally by measuring the polarity distribution of one (sub)context at a time and updating the entities in it. The cumulative polarity distributions D 1 . . . D n of an entity across all of its hosting (sub)contexts z 1 . . . z n ultimately determine the entity's final sentiment scores. However, simple cumulative sums do not suffice. In particular, individual (sub)contexts' scores need to be weighted because not all of them are equally salient: atomic (sub)contexts are evidently not very important, for example.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Scoring",
"sec_num": "3"
},
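{
"text": "Returning to the committee vote of \u00a73.2, here is a minimal sketch including the post-classification polarity axiom from footnote 2 (ours; features abstracts the Table 2 template around a marker, and the words/prior attributes and model.predict interface are assumptions):\ndef svmvote(p, marker, subcontext, models, features):\n    # Count committee members predicting polarity p for an entity marker.\n    has_nonneutral = any(w.prior in ('POS', 'NEG') for w in subcontext.words)\n    votes = 0\n    for model in models:\n        label = model.predict(features(marker, subcontext))\n        if not has_nonneutral and label != 'NTR':\n            label = 'NTR'  # fully-NTR (sub)context: discard non-neutral predictions\n        votes += (label == p)\n    return votes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Scoring",
"sec_num": "3"
},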
{
"text": "We experimented with three empirically discovered coefficients to control the weight of each (sub)context. g estimates the information gain of a (sub)context over its predecessor by boosting longer (sub)contexts. \u03b2 measures the length of a (sub)context in the sentence: longer (sub)contexts are again boosted. Abrupt polarity changes between (sub)contexts are boosted by v : for example, a NEG (sub)context followed by a POS one may indicate a shift in perspective or negation. For each entity, the cumulative score for polarity p \u2208 {POS, NTR, NEG} in a sentence with n (sub)contexts (z 1 . . . z n ) is obtained as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Scoring",
"sec_num": "3"
},
{
"text": "scr(p) = n X i=1 g\u03b2vDi len(zi) D i = dist(p) or svmvote(p) score from (sub)context z i g = length(z i ) -length(z i\u22121 ) \u03b2 = length(z i ) / length(sentence) v = 1.75 if polarity of z i is not polarity of z i\u22121 , else 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Scoring",
"sec_num": "3"
},
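{
"text": "A direct transcription of scr(p) (our sketch; each (sub)context record carries a length and a resolved polarity, and length(z_0) is taken as 0 for the first step, which the formula leaves implicit):\ndef scr(p, subcontexts, D, sentence_len):\n    # subcontexts: z_1..z_n in composition order; D[i]: dist(p) or svmvote(p) for z_i.\n    total = 0.0\n    prev_len, prev_pol = 0, None\n    for z, d in zip(subcontexts, D):\n        g = z.length - prev_len         # information gain over the predecessor\n        beta = z.length / sentence_len  # relative (sub)context length\n        v = 1.75 if prev_pol is not None and z.polarity != prev_pol else 1.0  # polarity-shift boost\n        total += (g * beta * v * d) / z.length\n        prev_len, prev_pol = z.length, z.polarity\n    return total",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Scoring",
"sec_num": "3"
},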
{
"text": "3.4 Sample Analysis. Consider Ex. 3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Scoring",
"sec_num": "3"
},
{
"text": "(3) \"Finch said the decision to withdraw the application was a 'dispiriting decision which will harm London's reputation as a city which is well governed, and which hitherto has had a welcoming attitude to major overseas investors'.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Scoring",
"sec_num": "3"
},
{
"text": "Since the sentence depicts a state of affairs that is negative/undesirable/unfavourable, all entities in it could be classified uniformly as NEG. However, the sentential negativity does not entail that \"[a city which is well governed] (+) \" and \"[a welcoming attitude to major overseas investors] (+) \" are NEG as such: instead, it merely makes an allusion to their involvement in a NEG context. The same holds for \"[London's reputation] (N)(+) \". We therefore expect the algorithm to assign different degrees of negativity (and positivity and neutrality) to the entities. Ex. 4 visualises the parser's entity scores. The polarity scores of the entity [London] (29% POS, 18% NTR, 53% NEG), which are illustrated in Table 3 , reflect the statement in that (i) [London] is NTR in itself, (ii) it has a positively-evaluated reputation, and (iii) it is affected by a NEG event. The other entities, from the most NEG [decision] to the most NTR [Finch], are tenable, too. Note that the scores represent each entity's involvement in three polarity contexts and may not as such indicate sentiment/polarity strength although small margins amongst the three values signal mixed (sub)contexts while large(r) margins can be equated with pure(r) polarities. We observed that interpreting these kinds of multi-entity scores is similar to interpreting automatically generated summaries in that, due to subjective scaling and class in-and exclusion preferences, the scores often afford many possible interpretations: whether the NEG score for [application] should be .82, SOMEWHAT NEG, or some other arbitrary value, for example, is secondary to the fact that the parser ranked the entity sensibly as NEG NTR POS. [20] or FBS 4 [4] come with incomplete entity annotations as only some entities (e.g. sentiment roles or product features) are usually included per text region. In contrast, we wish to evaluate all entity markers in a given text region. To achieve that, a new multientity data set was compiled from a cross-genre pool of 24 documents' dependency parses. Five annotators (three paid linguistics students, one of the authors, one volunteer) annotated 7904 entity markers as POS, NTR, or NEG (cf. Ex. 1). Cases displaying mixed sentiment or those infected with inescapable ambiguity were marked as ambiguous. In order to preclude misaligned annotations between annotators (cf. [20] : 34-42; [7] : 6), we made a decision to confine ourselves to base nouns only (cf. Ex. 1).",
"cite_spans": [
{
"start": 759,
"end": 767,
"text": "[London]",
"ref_id": null
},
{
"start": 1698,
"end": 1702,
"text": "[20]",
"ref_id": "BIBREF19"
},
{
"start": 1710,
"end": 1711,
"text": "4",
"ref_id": "BIBREF3"
},
{
"start": 2372,
"end": 2376,
"text": "[20]",
"ref_id": "BIBREF19"
},
{
"start": 2386,
"end": 2389,
"text": "[7]",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 715,
"end": 722,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Entity Scoring",
"sec_num": "3"
},
{
"text": "!\"!# !\"$# !\"%# !\"&# !\"'# (\"!# )(# )!\"'# )!\"&# )!\"%# )!\"$# !# !\"$# !\"%# !\"&# !\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Scoring",
"sec_num": "3"
},
{
"text": "The data set contains two subsections. The first (GS PHR) contains 4765 entities from 1500 syntactic constituent phrases of differing lengths (from six documents) while the second (GS SNTC) encompasses 3139 entities from 500 full sentences (from 18 documents). Both subsets were further split into 4/5 training and 1/5 testing sections, yielding for training 2490 entities (GS SNTC) vs. 3877 entities (GS PHR) ( \u00a73.2). 649 and 888 entities are given for testing, respectively. For syntactic scoring, the SVM classifier committee consisted of five separate models, each trained on 6367 entities (with 19502 features) from one annotator's combined GS SNTC and GS PHR training sections ( \u00a73.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Scoring",
"sec_num": "3"
},
{
"text": "4.2 Human Ceiling. In order to estimate human performance in the new task, we compared each annotator against all others, and obtained five average accuracy and Kappa scores (Table 4 ). It is apparent that the task is highly subjective because the figures are only modest in a three-way condition (accuracy 62%; k .43\u223c.45) (see \u00a74.4). However, the task is considerably less vague in a two-way non-neutral condition (86\u223c89% accuracy; k .70\u223c.78).",
"cite_spans": [],
"ref_spans": [
{
"start": 174,
"end": 182,
"text": "(Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Entity Scoring",
"sec_num": "3"
},
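{
"text": "For reference, a minimal sketch of the pairwise comparison behind these figures (ours; plain accuracy and Cohen's kappa over two annotators' label sequences are assumed):\ndef accuracy(a, b):\n    return sum(x == y for x, y in zip(a, b)) / len(a)\n\ndef cohen_kappa(a, b):\n    n = len(a)\n    po = accuracy(a, b)                 # observed agreement\n    pe = sum((a.count(l) / n) * (b.count(l) / n)\n             for l in set(a) | set(b))  # chance agreement from label marginals\n    return (po - pe) / (1 - pe)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Scoring",
"sec_num": "3"
},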
{
"text": "4.3 Error Classification. The inter-annotator agreement levels point towards increased ambiguity with NTR polarity due to differing personal degrees of sensitivity towards neutrality/objectivity. Not all classification errors are then equal for classifying a POS case as NTR is more tolerable than classifying it as NEG, for example. We found it useful to characterise three distinct error classes or disagreements between human H and algorithm A. FATAL errors (H (\u03b1) A (\u00ac\u03b1) \u03b1\u2208{+ -}) are those where the non-neutral polarity is completely wrong: such errors affect the performance of the parser adversely. GREEDY errors (H (N) A (\u03b1) \u03b1\u2208{+ -}) are those where the algorithm wrongly made a decision to jump one way or the other, displaying oversensitivity towards non-neutral polarities. LAZY errors (H (\u03b1) A (N) \u03b1\u2208{+ -}) indicate that the algorithm chose to sit on the fence and displayed oversensitivity towards NTR polarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Scoring",
"sec_num": "3"
},
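{
"text": "The three error classes reduce to a simple decision function (sketch):\ndef error_class(human, algo):\n    # Labels are 'POS', 'NTR', 'NEG'; returns None on agreement.\n    if human == algo:\n        return None\n    if human != 'NTR' and algo != 'NTR':\n        return 'FATAL'   # non-neutral polarity completely wrong\n    if human == 'NTR':\n        return 'GREEDY'  # algorithm jumped to a non-neutral label\n    return 'LAZY'        # algorithm sat on the fence with NTR",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Scoring",
"sec_num": "3"
},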
{
"text": "4.4 Test Conditions. The highest-scoring polarity (1 st rank) amongst each entity's three polarity counts is compared against the gold standard. All ambiguous cases were excluded, as were a few tie scores amongst short phrases. We compare the DIST and SVM scorers against a fully-COMPOSitional baseline that simply uses the internal polarity of a (sub)context to score its entities. A hybrid DIST+SVM method is also evaluated. All experiments were conducted under a (i) three-way ALL POL (POS:NTR:NEG), and a (ii) two-way NON NTR (POS:NEG, with FATAL errors only) classification condition. The proportions of finding a match in the algorithm's 1 st , 2 nd , and 3 rd polarity ranks are included. The algorithm's average figures against five annotators are given in Table 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 765,
"end": 772,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Entity Scoring",
"sec_num": "3"
},
{
"text": "4.5 Results. In absolute terms, the results are modest. But in comparison with the low human ceiling, the algorithm's best scores are only 5.6\u223c8.7 points behind (ALL POL). Both scorers outperformed the fully COMPOSitional baseline -a realisation implying that entity-level sentiment is weakly compositional although, interestingly, non-compositional scoring can be approached compositionally. Shorter constituents with less contextual evidence (GS PHR) were, as expected, more challenging than longer, holistic constituents (GS SNTC). Most notable is the performance of the heuristic DIST method which generally equalled or outperformed the SVM committee. The hybrid combination (DIST+SVM) resulted in a small boost. The two complementary scoring methods appear to neutralise each other's errors as DIST displays oversensitivity towards POS and NEG labels (cf. more GREEDY errors) while SVM suggests NTR in many cases (cf. mostly LAZY errors). The correct label was in the parser's 1 st and 2 nd ranks in 79\u223c85% of the cases (ALL POL) which confirms that the parser generally points at the right direction. Matching past observations in the area, the average gap between three-way ALL POL and two-way NON NTR classification accuracy is noticeable at 20\u223c25 points.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Scoring",
"sec_num": "3"
},
{
"text": "4.6 Future Work. Further research is needed to address cases of 'sentiment overflow' where an entity's scores are incorrectly shaped by (sub)contexts beyond its natural sentiment zone boundaries. Although en- Table 3 : Sample analysis of (sub)contexts containing the entity \"London\" (with POS:NTR:NEG scores) [which will harm London's reputation as a city which is well governed, and which hitherto has had a welcoming attitude to major overseas investors'] (-) [Finch said the decision to withdraw the application was a 'dispiriting decision which will harm London's reputation as a city which is well governed, and which hitherto has had a welcoming attitude to major overseas investors'] (-) Contextual, global , taking discourse structure, Named Entities, semantic roles, and reported speech into account would be beneficial. Entity markers can be chained through anaphora/co-reference resolution which can lead to significant boosts [6] . The values for the weighting coefficients ( \u00a73.3) and the exploratory learning features for syntactic scoring ( \u00a73.2) can be optimised, and other scorers may be employed.",
"cite_spans": [
{
"start": 691,
"end": 694,
"text": "(-)",
"ref_id": null
},
{
"start": 938,
"end": 941,
"text": "[6]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 209,
"end": 216,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Entity Scoring",
"sec_num": "3"
},
{
"text": "SUBCONTEXT TYPE ENTITY MARKER [London's] (N) Lexical",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Scoring",
"sec_num": "3"
},
{
"text": "5.1 Compositional Analysis. A few systems that exploit the compositional properties of sentiment in differing degrees have been proposed. The system closest to our framework is [9] who describe a tool for phrase-and sentence-level classification. A sentiment composition model is described which uses a cascade of transducers relying on lexical sentiment seeds, a phrasal chunker, and hand-written pattern-matching rules. Instead of making use of compositional rules (cf. \u00a72.2), [3] incorporated compositional semantics into structured inference-based learning with lexical, negator, and voting features. [12] describe a hybrid system for detecting sentiment expressions about a topic that combines a rule-based sentiment extractor with a learning-based topic classifier. For the former, phrasal chunking and shallow parsing patterns are used to combine elements in specific syntactic cases. However, no explicit details about compositional processes are given. [17] uses scored prior polarities from sentiment lexica and knowledge bases with dependency parsing to generate verb-centric ACTOR-ACTION-OBJECT frames (each with optional internal modifiers), and calculate contextual polarities at different structural levels using hand-written polarity combination rules. A shallow compositional affect sensing approach with lexical, phrasal, and sentential linking and ranking patterns is proposed in [13] . 5.2 Entities. In classifying raw entity mentions without deep sentiment semantics, the primary focus has been on relatively shallow techniques restricted to specific topical mentions, or product names, features, and attributes. Goalwise, the approach closest to our multi-entity framework is [6] who classify entities (topics) expressed in IR search queries. Matched query entities are expanded through co-reference and meronymy analysis of concrete entities' parts and features to generate a set of topical entity mentions. These are paired with topically relevant sentiment expressions targeting them, and aggregate scores for the query entities are calculated using a sentiment propagation graph. For each sentiment expression, candidate target mentions are ranked with proximity-based, heuristic, and supervised learning-based scorers. The product feature mining and summarisation system described in [5] classifies feature mentions based on neighbouring adjectives and sentential polarity frequencies. [4] propose a more complex approach targeting products' parts and attributes with a holistic lexicon-and distance-based method that exploits local and global clause-, sentence-, and review-level evidence and patterns in disambiguating ambiguous words, irregular/idiomatic constructions, and polarity conflicts. A relaxation labelling technique was used in [15] to classify product feature mentions by sequential analyses of words, features, and sentences with syntactic dependency, lexical, and collocational constraints. [10] extract opinions with fixed opinion frames which capture for a given entity an attribute and a sentiment expression with its HOLDER.",
"cite_spans": [
{
"start": 177,
"end": 180,
"text": "[9]",
"ref_id": "BIBREF8"
},
{
"start": 479,
"end": 482,
"text": "[3]",
"ref_id": "BIBREF2"
},
{
"start": 605,
"end": 609,
"text": "[12]",
"ref_id": "BIBREF11"
},
{
"start": 962,
"end": 966,
"text": "[17]",
"ref_id": "BIBREF16"
},
{
"start": 1399,
"end": 1403,
"text": "[13]",
"ref_id": "BIBREF12"
},
{
"start": 1698,
"end": 1701,
"text": "[6]",
"ref_id": "BIBREF5"
},
{
"start": 2311,
"end": 2314,
"text": "[5]",
"ref_id": "BIBREF4"
},
{
"start": 2413,
"end": 2416,
"text": "[4]",
"ref_id": "BIBREF3"
},
{
"start": 2769,
"end": 2773,
"text": "[15]",
"ref_id": "BIBREF14"
},
{
"start": 2935,
"end": 2939,
"text": "[10]",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "5.3 Sentiment Roles. The inventory of possible semantic roles specific to sentiment is unclear. Past proposals have targeted some of the most obvious roles encompassing opinion HOLDERs, SOURCEs, TAR-GETs, or EXPERIENCERs. [1] model the information filtering structures of opinions and facts with a supervised approach to identify the hierarchical structure of perspective and speech expressions using syntactic dominance features, and to recursively determine local and global parent-child relations amongst such ex- pressions. However, only SOURCEs were targeted. A global Integer Linear Programming-driven constraintbased inference approach was used in [2] for joint extraction of sentiment expressions, SOURCEs, and their link relations using sequence tagging and relation classifiers with lexical, positional, and syntactic frame features. [7] extract HOLDERs and TOPICs using opinion verbs and adjectives, and FrameNet-driven semantic frame role labelling. In detecting HOLDERs, Maximum Entropy modelling with syntactic dependency features between sentiment expressions and candidate entities was used in [8] . [16] , who highlight the insufficiency of automatic semantic role labelling in resolving SOURCEs and TARGETs, discuss the complexity involved in the task ranging from attribution, multiple SOURCEs and TARGETs, semantic scope, referents, discourse structure, inference, and TARGET relations, amongst others. The interrelation between sentiment roles and discourse structures is discussed further in [18] who propose transitive opinion frames for linking TOPICs. The role of co-reference resolution is discussed in [19] alongside a TOPIC annotation scheme that links opinions based on topical co-reference (cf. [6] ).",
"cite_spans": [
{
"start": 222,
"end": 225,
"text": "[1]",
"ref_id": "BIBREF0"
},
{
"start": 655,
"end": 658,
"text": "[2]",
"ref_id": "BIBREF1"
},
{
"start": 844,
"end": 847,
"text": "[7]",
"ref_id": "BIBREF6"
},
{
"start": 1110,
"end": 1113,
"text": "[8]",
"ref_id": "BIBREF7"
},
{
"start": 1116,
"end": 1120,
"text": "[16]",
"ref_id": "BIBREF15"
},
{
"start": 1514,
"end": 1518,
"text": "[18]",
"ref_id": "BIBREF17"
},
{
"start": 1629,
"end": 1633,
"text": "[19]",
"ref_id": "BIBREF18"
},
{
"start": 1725,
"end": 1728,
"text": "[6]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "This paper presents a principled, structural framework for modelling entity-level sentiment (sub)contexts, and in doing that, it sheds light on the role of (non-)compositional semantics in entity-level sentiment analysis. We demonstrated how compositional sentiment parsing lends itself naturally to multi-entity sentiment scoring with minimal modification. Initial results obtained from two scoring methods suggest that, despite the inherent complexity and subjectivity of the task, compositional sentiment parsing can generate sensible analyses that emulate human multi-entity sentiment judgements effectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "http://www.cs.pitt.edu/mpqa/ 4 http://www.cs.uic.edu/~liub/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Playing the telephone game: determining the hierarchical structure of perspective and speech expressions",
"authors": [
{
"first": "E",
"middle": [],
"last": "Breck",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of COLING 2004",
"volume": "",
"issue": "",
"pages": "120--126",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Breck and C. Cardie. Playing the telephone game: deter- mining the hierarchical structure of perspective and speech ex- pressions. In Proceedings of COLING 2004, pages 120-126, Geneva, Aug. 2004.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Joint extraction of entities and relations for opinion recognition",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Breck",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of EMNLP 2006",
"volume": "",
"issue": "",
"pages": "431--439",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Choi, E. Breck, and C. Cardie. Joint extraction of enti- ties and relations for opinion recognition. In Proceedings of EMNLP 2006, pages 431-439, Sydney, Jul. 2006.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning with compositional semantics as structural inference for subsentential sentiment analysis",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of EMNLP 2008",
"volume": "",
"issue": "",
"pages": "793--801",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Choi and C. Cardie. Learning with compositional semantics as structural inference for subsentential sentiment analysis. In Proceedings of EMNLP 2008, pages 793-801, Honolulu, Oct. 2008.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A holistic lexicon-based approach to opinion mining",
"authors": [
{
"first": "X",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "P",
"middle": [
"S"
],
"last": "Yu",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 1st ACM Intl. Conference on Web Search and Data Mining (WSDM 2008)",
"volume": "",
"issue": "",
"pages": "231--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. Ding, B. Liu, and P. S. Yu. A holistic lexicon-based ap- proach to opinion mining. In Proceedings of the 1st ACM Intl. Conference on Web Search and Data Mining (WSDM 2008), pages 231-240, Palo Alto, Feb. 2008.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Mining and summarizing customer reviews",
"authors": [
{
"first": "M",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the ACM SIGKDD Intl. Conference on Knowledge Discovery & Data Mining (KDD 2004)",
"volume": "",
"issue": "",
"pages": "168--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Hu and B. Liu. Mining and summarizing customer re- views. In Proceedings of the ACM SIGKDD Intl. Conference on Knowledge Discovery & Data Mining (KDD 2004), pages 168-177, Seattle, Aug. 2004.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Targeting sentiment expressions through supervised ranking of linguistic configurations",
"authors": [
{
"first": "J",
"middle": [
"S"
],
"last": "Kessler",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Nicolov",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 3rd Intl. Conference on Weblogs and Social Media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. S. Kessler and N. Nicolov. Targeting sentiment expressions through supervised ranking of linguistic configurations. In Pro- ceedings of the 3rd Intl. Conference on Weblogs and Social Media (ICWSM 2009), San Jose, May 2009.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Extracting opinions, opinion holders, and topics expressed in online news media text",
"authors": [
{
"first": "S.-M",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the COLING/ACL 2006 Workshop on Sentiment and Subjectivity in Text",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S.-M. Kim and E. Hovy. Extracting opinions, opinion holders, and topics expressed in online news media text. In Proceed- ings of the COLING/ACL 2006 Workshop on Sentiment and Subjectivity in Text, pages 1-8, Sydney, Jul. 2006.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Identifying and analyzing judgment opinions",
"authors": [
{
"first": "S.-M",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of HLT/NAACL-2006",
"volume": "",
"issue": "",
"pages": "200--207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S.-M. Kim and E. Hovy. Identifying and analyzing judgment opinions. In Proceedings of HLT/NAACL-2006, pages 200- 207, New York, Jun. 2006.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A tool for polarity classification of human affect from panel group texts",
"authors": [
{
"first": "M",
"middle": [],
"last": "Klenner",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Petrakis",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Fahrni",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Intl. Conference on Affective Computing and Intelligent Interaction",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Klenner, S. Petrakis, and A. Fahrni. A tool for polarity classification of human affect from panel group texts. In Pro- ceedings of the Intl. Conference on Affective Computing and Intelligent Interaction (ACII 2009), Amsterdam, Sep. 2009.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Extracting aspectevaluation and aspect-of relations in opinion mining",
"authors": [
{
"first": "N",
"middle": [],
"last": "Kobayashi",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Inui",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of EMNLP/CoNLL",
"volume": "",
"issue": "",
"pages": "1065--1074",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Kobayashi, K. Inui, and Y. Matsumoto. Extracting aspect- evaluation and aspect-of relations in opinion mining. In Pro- ceedings of EMNLP/CoNLL 2007, pages 1065-1074, Prague, Jun. 2007.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Sentiment composition. In Proceedings of RANLP",
"authors": [
{
"first": "K",
"middle": [],
"last": "Moilanen",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Pulman",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "378--382",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Moilanen and S. Pulman. Sentiment composition. In Pro- ceedings of RANLP 2007, pages 378-382, Borovets, Sep. 2007.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Towards a robust metric of opinion",
"authors": [
{
"first": "K",
"middle": [],
"last": "Nigam",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Hurst",
"suffix": ""
}
],
"year": 2006,
"venue": "Computing Attitude and Affect in Text: Theory and Applications",
"volume": "",
"issue": "",
"pages": "265--280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Nigam and M. Hurst. Towards a robust metric of opinion. In Y. Qu, J. Shanahan, and J. Wiebe, editors, Computing At- titude and Affect in Text: Theory and Applications, pages 265-280. Springer, 2006.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Towards semantic affect sensing in sentences",
"authors": [
{
"first": "A",
"middle": [],
"last": "Osherenko",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the AISB 2008 Symposium on Affective Language in Human and Machine",
"volume": "",
"issue": "",
"pages": "41--44",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Osherenko. Towards semantic affect sensing in sentences. In Proceedings of the AISB 2008 Symposium on Affective Lan- guage in Human and Machine, pages 41-44, Aberdeen, Apr. 2008.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Contextual valence shifters",
"authors": [
{
"first": "L",
"middle": [],
"last": "Polanyi",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Zaenen",
"suffix": ""
}
],
"year": 2006,
"venue": "Computing Attitude and Affect in Text: Theory and Applications",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Polanyi and A. Zaenen. Contextual valence shifters. In Y. Qu, J. Shanahan, and J. Wiebe, editors, Computing At- titude and Affect in Text: Theory and Applications, pages 1-10. Springer, 2006.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Extracting product features and opinions from reviews",
"authors": [
{
"first": "A.-M",
"middle": [],
"last": "Popescu",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of HLT/EMNLP 2005",
"volume": "",
"issue": "",
"pages": "339--346",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A.-M. Popescu and O. Etzioni. Extracting product features and opinions from reviews. In Proceedings of HLT/EMNLP 2005, pages 339-346, Vancouver, Oct. 2005.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Finding the sources and targets of subjective expressions",
"authors": [
{
"first": "J",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Somasundaran",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of LREC 2008",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Ruppenhofer, S. Somasundaran, and J. Wiebe. Finding the sources and targets of subjective expressions. In Proceedings of LREC 2008, Marrakech, May 2008.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "An Analytical Approach for Affect Sensing from Text",
"authors": [
{
"first": "M",
"middle": [
"A M"
],
"last": "Shaikh",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. A. M. Shaikh. An Analytical Approach for Affect Sensing from Text. PhD thesis, University of Tokyo, 2008.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Discourse level opinion interpretation",
"authors": [
{
"first": "S",
"middle": [],
"last": "Somasundaran",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of COLING 2008",
"volume": "",
"issue": "",
"pages": "801--808",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Somasundaran, J. Wiebe, and J. Ruppenhofer. Discourse level opinion interpretation. In Proceedings of COLING 2008, pages 801-808, Manchester, Aug. 2008.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Annotating topics of opinion",
"authors": [
{
"first": "V",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of LREC 2008",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Stoyanov and C. Cardie. Annotating topics of opinion. In Proceedings of LREC 2008, Marrakech, May 2008.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Annotating expressions of opinions and emotions in language. Language Resources and Evaluation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "39",
"issue": "",
"pages": "165--210",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Wiebe, T. Wilson, and C. Cardie. Annotating expressions of opinions and emotions in language. Language Resources and Evaluation, 39(2-3):165-210, May 2005.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "Head-dependents combination schema",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "svmvote(p) = # of SVMs classifying entity as pSVMs' predicted class labels are required to fulfill one post-classification polarity axiom: if a (sub)context does not contain any words with POS or NEG prior polarities (i.e. it is fully NTR), non-neutral predictions are discarded and asserted as NTR instead.2 Johnson, M. (2008). SVM.NET 1.4. (www.matthewajohnson. org/software/svm.html). Based on Chang, C. & Lin, C. (2001). LIBSVM. (www.csie.ntu.edu.tw/~cjlin/libsvm/).",
"uris": null
},
"TABREF0": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>Mod:AdjP</td><td>Head:N</td><td>[funny blunders] (+)</td></tr><tr><td>Mod:Nom</td><td>Head:N</td><td>[error reduction] (+)</td></tr><tr><td>Mod:AdvP</td><td>Head:Adj</td><td>[badly decorated] (-)</td></tr><tr><td>Head:Adj</td><td>Comp:PP</td><td>[sick of fame] (-)</td></tr><tr><td>Head:N</td><td>Comp:VP</td><td>[market gone sour] (-)</td></tr><tr><td>Head:Pred</td><td>Comp:DirObj</td><td>[end the hostility] (+)</td></tr><tr><td>Head:Pred . . .</td><td>Adjunct:Adv</td><td>[smiled painfully] (-)</td></tr><tr><td colspan=\"3\">seeds, proceeds recursively via intermediate syntactic</td></tr><tr><td colspan=\"3\">levels, and terminates at the top sentence level.</td></tr></table>",
"text": "Sample Constituent Rankings"
},
"TABREF1": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>IN constituents to dictate whose sentiment dominates:</td></tr><tr><td>the stronger of the two (superordinate (SPR)) domi-</td></tr><tr><td>nates the weaker one (subordinate (SUB)) (i.e. SPR</td></tr><tr><td>SUB). The weights are not stored in any individ-</td></tr><tr><td>ual IN constituents but are latent in specific syntac-</td></tr><tr><td>tic constructions such as [Mod:Adj Head:N] (i.e. ad-</td></tr><tr><td>jectival premodification of head nouns) or [Head:V</td></tr><tr><td>Comp:NP] (i.e. direct object complements of verbs).</td></tr></table>",
"text": "2 Sentiment Grammar. Since the polarity of a composite constituent can differ from the two IN polarities, the IN constituents can not be equally salient. The model assigns relative weights to the two"
},
"TABREF2": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>PRIOR POLARITY</td><td>GLOBAL POLARITY</td><td>UNIGRAM</td></tr><tr><td>POLARITY REVERSAL</td><td>POLARITY WSD TYPE</td><td/></tr><tr><td>DEPENDENCY TYPE</td><td>SYNTACTIC ROLE</td><td/></tr><tr><td colspan=\"2\">GRAMMATICAL RELATION WORD CLASS</td><td/></tr></table>",
"text": "SVM entity feature template"
},
"TABREF7": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td/><td/><td>GS SNTC (3139)</td><td/><td/><td/><td>GS PHR (4765)</td><td/></tr><tr><td colspan=\"2\">k Human-1 .50</td><td>.82</td><td>66.82</td><td>90.99</td><td>.49</td><td>.74</td><td>66.83</td><td>87.90</td></tr><tr><td colspan=\"2\">Human-2 .48</td><td>.77</td><td>65.03</td><td>88.67</td><td>.49</td><td>.71</td><td>66.87</td><td>86.43</td></tr><tr><td colspan=\"2\">Human-3 .34</td><td>.79</td><td>52.79</td><td>89.60</td><td>.33</td><td>.72</td><td>55.09</td><td>86.73</td></tr><tr><td colspan=\"2\">Human-4 .51</td><td>.80</td><td>66.90</td><td>89.70</td><td>.47</td><td>.66</td><td>64.46</td><td>82.88</td></tr><tr><td colspan=\"2\">Human-5 .40</td><td>.72</td><td>58.80</td><td>86.21</td><td>.36</td><td>.69</td><td>54.89</td><td>85.14</td></tr><tr><td>Avg</td><td>.45</td><td>.78</td><td>62.07</td><td>89.03</td><td>.43</td><td>.70</td><td>61.63</td><td>85.81</td></tr><tr><td colspan=\"5\">tity markers (and any sentiment roles therein) are</td><td/><td/><td/><td/></tr><tr><td colspan=\"5\">linked through a variety of complex means [16][18][6]</td><td/><td/><td/><td/></tr></table>",
"text": "Human accuracy and inter-annotator agreement scores on the gold standard ALL POL k NON NTR Acc ALL POL Acc NON NTR k ALL POL k NON NTR Acc ALL POL Acc NON NTR"
},
"TABREF8": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td/><td colspan=\"2\">ALL POL</td><td colspan=\"2\">NON NTR</td><td/><td colspan=\"2\">Ranks (ALL POL)</td><td/><td colspan=\"3\">Errors (ALL POL)</td></tr><tr><td>Data set</td><td>Scoring</td><td>Acc</td><td>k</td><td>Acc</td><td>k</td><td>1</td><td>2</td><td>3</td><td>1+2</td><td>FATAL</td><td colspan=\"2\">GREEDY LAZY</td></tr><tr><td>GS SNTC</td><td>HUMAN</td><td>62.07</td><td>.45</td><td>89.03</td><td>.78</td><td/><td/><td/><td/><td>17.99</td><td>41.01</td><td>41.01</td></tr><tr><td/><td>COMPOS</td><td>52.20</td><td>.28</td><td>71.71</td><td>.45</td><td/><td/><td/><td/><td>38.66</td><td>38.13</td><td>23.20</td></tr><tr><td/><td>DIST</td><td colspan=\"2\">56.44 .35</td><td>79.32</td><td>.59</td><td colspan=\"4\">56.44 28.04 15.52 84.48</td><td>28.32</td><td>35.69</td><td>35.99</td></tr><tr><td/><td>SVM</td><td>50.04</td><td>.28</td><td>79.49</td><td>.58</td><td>50.04</td><td colspan=\"3\">30.64 19.31 80.69</td><td colspan=\"2\">14.60 14.11</td><td>71.28</td></tr><tr><td/><td>DIST+SVM</td><td>54.12</td><td>.33</td><td colspan=\"2\">82.21 .64</td><td>54.12</td><td colspan=\"3\">30.31 15.56 84.44</td><td>16.03</td><td>19.56</td><td>64.42</td></tr><tr><td>GS PHR</td><td>HUMAN</td><td>61.63</td><td>.43</td><td>85.81</td><td>.70</td><td/><td/><td/><td/><td>18.38</td><td>40.81</td><td>40.81</td></tr><tr><td/><td>COMPOS</td><td>48.70</td><td>.24</td><td>65.56</td><td>.34</td><td/><td/><td/><td/><td>32.28</td><td>44.48</td><td>23.23</td></tr><tr><td/><td>DIST</td><td>51.42</td><td>.27</td><td>68.73</td><td>.40</td><td>51.42</td><td colspan=\"3\">27.51 21.07 78.93</td><td>27.41</td><td>39.68</td><td>32.91</td></tr><tr><td/><td>SVM</td><td>52.74</td><td>.25</td><td colspan=\"2\">77.70 .52</td><td>52.74</td><td colspan=\"3\">24.73 22.53 77.47</td><td colspan=\"2\">12.42 20.70</td><td>66.88</td></tr><tr><td/><td>DIST+SVM</td><td colspan=\"2\">52.92 .27</td><td>73.60</td><td>.48</td><td colspan=\"4\">52.92 26.08 21.00 79.00</td><td>18.52</td><td>28.71</td><td>52.77</td></tr></table>",
"text": "Multi-entity scoring results"
}
}
}
}