{
"paper_id": "E12-1033",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:36:28.278525Z"
},
"title": "Generalization Methods for In-Domain and Cross-Domain Opinion Holder Extraction",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Wiegand",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Spoken Language Systems Saarland University",
"location": {
"postCode": "D-66123",
"settlement": "Saarbr\u00fccken",
"country": "Germany"
}
},
"email": ""
},
{
"first": "Dietrich",
"middle": [],
"last": "Klakow",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Spoken Language Systems Saarland University",
"location": {
"postCode": "D-66123",
"settlement": "Saarbr\u00fccken",
"country": "Germany"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we compare three different generalization methods for in-domain and cross-domain opinion holder extraction being simple unsupervised word clustering, an induction method inspired by distant supervision and the usage of lexical resources. The generalization methods are incorporated into diverse classifiers. We show that generalization causes significant improvements and that the impact of improvement depends on the type of classifier and on how much training and test data differ from each other. We also address the less common case of opinion holders being realized in patient position and suggest approaches including a novel (linguisticallyinformed) extraction method how to detect those opinion holders without labeled training data as standard datasets contain too few instances of this type. (3) Mrs. Bennet does what she can to get Jane and Bingley together and embarrasses her daughters by doing so.",
"pdf_parse": {
"paper_id": "E12-1033",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we compare three different generalization methods for in-domain and cross-domain opinion holder extraction being simple unsupervised word clustering, an induction method inspired by distant supervision and the usage of lexical resources. The generalization methods are incorporated into diverse classifiers. We show that generalization causes significant improvements and that the impact of improvement depends on the type of classifier and on how much training and test data differ from each other. We also address the less common case of opinion holders being realized in patient position and suggest approaches including a novel (linguisticallyinformed) extraction method how to detect those opinion holders without labeled training data as standard datasets contain too few instances of this type. (3) Mrs. Bennet does what she can to get Jane and Bingley together and embarrasses her daughters by doing so.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Opinion holder extraction is one of the most important subtasks in sentiment analysis. The extraction of sources of opinions is an essential component for complex real-life applications, such as opinion question answering systems or opinion summarization systems (Stoyanov and Cardie, 2011) . Common approaches designed to extract opinion holders are based on data-driven methods, in particular supervised learning.",
"cite_spans": [
{
"start": 263,
"end": 290,
"text": "(Stoyanov and Cardie, 2011)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we examine the role of generalization for opinion holder extraction in both indomain and cross-domain classification. Generalization may not only help to compensate the availability of labeled training data but also conciliate domain mismatches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to illustrate this, compare for instance (1) and (2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) Malaysia did not agree to such treatment of Al-Qaeda soldiers as they were prisoners-of-war and should be accorded treatment as provided for under the Geneva Convention. (2) Japan wishes to build a $21 billion per year aerospace industry centered on commercial satellite development.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Though both sentences contain an opinion holder, the lexical items vary considerably. However, if the two sentences are compared on the basis of some higher level patterns, some similarities become obvious. In both cases the opinion holder is an entity denoting a person and this entity is an agent 1 of some predictive predicate (i.e. agree in (1) and wishes in (2)), more specifically, an expression that indicates that the agent utters a subjective statement. Generalization methods ideally capture these patterns, for instance, they may provide a domain-independent lexicon for those predicates. In some cases, even higher order features, such as certain syntactic constructions may vary throughout the different domains. In (1) and (2), the opinion holders are agents of a predictive predicate, whereas the opinion holder her daughters in (3) is a patient 2 of embarrasses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "If only sentences, such as (1) and (2), occur in the training data, a classifier will not correctly extract the opinion holder in (3), unless it obtains additional knowledge as to which predicates take opinion holders as patients.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we will consider three different generalization methods being simple unsupervised word clustering, an induction method and the usage of lexical resources. We show that generalization causes significant improvements and that the impact of improvement depends on how much training and test data differ from each other. We also address the issue of opinion holders in patient position and present methods including a novel extraction method to detect these opinion holders without any labeled training data as standard datasets contain too few instances of them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the context of generalization it is also important to consider different classification methods as the incorporation of generalization may have a varying impact depending on how robust the classifier is by itself, i.e. how well it generalizes even with a standard feature set. We compare two stateof-the-art learning methods, conditional random fields and convolution kernels, and a rule-based method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As a labeled dataset we mainly use the MPQA 2.0 corpus . We adhere to the definition of opinion holders from previous work (Wiegand and Klakow, 2010; Wiegand and Klakow, 2011a; Wiegand and Klakow, 2011b) , i.e. every source of a private state or a subjective speech event is considered an opinion holder.",
"cite_spans": [
{
"start": 123,
"end": 149,
"text": "(Wiegand and Klakow, 2010;",
"ref_id": "BIBREF28"
},
{
"start": 150,
"end": 176,
"text": "Wiegand and Klakow, 2011a;",
"ref_id": "BIBREF29"
},
{
"start": 177,
"end": 203,
"text": "Wiegand and Klakow, 2011b)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "This corpus contains almost exclusively news texts. In order to divide it into different domains, we use the topic labels from (Stoyanov et al., 2004) . By inspecting those topics, we found that many of them can grouped to a cluster of news items discussing human rights issues mostly in the context of combating global terrorism. This means that there is little point in considering every single topic as a distinct (sub)domain and, therefore, we consider this cluster as one single domain ETHICS. 3 For our cross-domain evaluation, we want to have another topic that is fairly different from this set of documents. By visual inspection, we found that the topic discussing issues regarding the International Space Station would suit our purpose. It is henceforth called SPACE. In addition to these two (sub)domains, we chose some text type that is not even news text in order to have a very distant domain. Therefore, we had to use some text not included in the MPQA corpus. Existing text collections containing product reviews (Kessler et al., 2010; Toprak et al., 2010) , which are generally a popular resource for sentiment analysis, were not found suitable as they only contain few distinct opinion holders. We finally used a few summaries of fictional work (two Shakespeare plays and one novel by Jane Austen 4 ) since their language is notably different from that of news texts and they contain a large number of different opinion holders (therefore opinion holder extraction is a meaningful task on this text type). These texts make up our third domain FICTION. We manually labeled it with opinion holder information by applying the annotation scheme of the MPQA corpus. Table 1 lists the properties of the different domain corpora. Note that ETHICS is the largest domain. We consider it our primary (source) domain as it serves both as a training and (in-domain) test set. Due to their size, the other domains only serve as test sets (target domains).",
"cite_spans": [
{
"start": 127,
"end": 150,
"text": "(Stoyanov et al., 2004)",
"ref_id": "BIBREF23"
},
{
"start": 499,
"end": 500,
"text": "3",
"ref_id": null
},
{
"start": 1029,
"end": 1051,
"text": "(Kessler et al., 2010;",
"ref_id": "BIBREF12"
},
{
"start": 1052,
"end": 1072,
"text": "Toprak et al., 2010)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 1679,
"end": 1686,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "For some of our generalization methods, we also need a large unlabeled corpus. We use the North American News Text Corpus (LDC95T21).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "The simplest generalization method that is considered in this paper is word clustering. By that, we understand the automatic grouping of words occurring in similar contexts. Such clusters are usually computed on a large unlabeled corpus. Unlike lexical features, features based on clusters are less sparse and have been proven to significantly improve data-driven classifiers in related tasks, such as named-entity recognition (Turian et I. Madrid, Dresden, Bordeaux, Istanbul, Caracas, Manila, ... II. Toby, Betsy, Michele, Tim, Jean-Marie, Rory, Andrew, ... III. detest, resent, imply, liken, indicate, suggest, owe, expect, ... IV. disappointment, unease, nervousness, dismay, optimism, ... V. remark, baby, book, saint, manhole, maxim, coin, batter, ... al., 2010). Such a generalization is, in particular, attractive as it is cheaply produced. As a stateof-the-art clustering method, we consider Brown clustering (Brown et al., 1992) as implemented in the SRILM-toolkit (Stolcke, 2002) . We induced 1000 clusters which is also the configuration used in (Turian et al., 2010) . 5 Table 2 illustrates a few of the clusters induced from our unlabeled dataset introduced in Section ( \u00a7) 2. Some of these clusters represent location or person names (e.g. I. & II.). This exemplifies why clustering is effective for named-entity recognition. We also find clusters that intuitively seem to be meaningful for our task (e.g. III. & IV.) but, on the other hand, there are clusters that contain words that with the exception of their part of speech do not have anything in common (e.g. V.).",
"cite_spans": [
{
"start": 427,
"end": 757,
"text": "(Turian et I. Madrid, Dresden, Bordeaux, Istanbul, Caracas, Manila, ... II. Toby, Betsy, Michele, Tim, Jean-Marie, Rory, Andrew, ... III. detest, resent, imply, liken, indicate, suggest, owe, expect, ... IV. disappointment, unease, nervousness, dismay, optimism, ... V. remark, baby, book, saint, manhole, maxim, coin, batter, ...",
"ref_id": null
},
{
"start": 918,
"end": 938,
"text": "(Brown et al., 1992)",
"ref_id": "BIBREF3"
},
{
"start": 975,
"end": 990,
"text": "(Stolcke, 2002)",
"ref_id": "BIBREF22"
},
{
"start": 1058,
"end": 1079,
"text": "(Turian et al., 2010)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 1084,
"end": 1091,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Word Clustering (Clus)",
"sec_num": "3.1"
},
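To make the use of such clusters concrete, here is a minimal sketch (not the authors' code) of how cluster IDs induced offline can replace sparse lexical features; the "word<TAB>cluster_id" file format and the feature names are assumptions for illustration.

```python
# Minimal sketch: mapping words to Brown-cluster IDs and using the cluster ID
# as a less sparse stand-in for the lexical feature.

def load_clusters(path):
    """Read a word-to-cluster mapping produced offline by a Brown-clustering tool."""
    word2cluster = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, cluster_id = line.rstrip("\n").split("\t")
            word2cluster[word] = cluster_id
    return word2cluster

def cluster_feature(token, word2cluster):
    """Return a generalized feature: the cluster ID if known, else an OOV marker."""
    return "CLUSTER-" + word2cluster.get(token.lower(), "OOV")

# Both "agree" in (1) and "wishes" in (2) may fall into the same cluster, so the
# two sentences share a feature even though the lexical items differ.
toy_map = {"agree": "35265", "wishes": "35265", "madrid": "1012"}
print(cluster_feature("agree", toy_map))   # CLUSTER-35265
print(cluster_feature("wishes", toy_map))  # CLUSTER-35265
```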
{
"text": "The major shortcoming of word clustering is that it lacks any task-specific knowledge. The opposite type of generalization is the usage of manually compiled lexicons comprising predicates that indicate the presence of opinion holders, such as supported, worries or disappointed in (4)-(6). We follow Wiegand and Klakow (2011b) who found that those predicates can be best obtained by using a subset of Levin's verb classes (Levin, 1993) and the strong subjective expressions of the Subjectivity Lexicon . For those predicates it is also important to consider in which argument position they usually take an opinion holder. Bethard et al. (2004) found the majority of holders are agents (4). A certain number of predicates, however, also have opinion holders in patient position, e.g. (5) and (6). Wiegand and Klakow (2011b) found that many of those latter predicates are listed in one of Levin's verb classes called amuse verbs. While on the evaluation on the entire MPQA corpus, opinion holders in patient position are fairly rare (Wiegand and Klakow, 2011b) , we may wonder whether the same applies to the individual domains that we consider in this work. Table 3 lists the proportion of those opinion holders (computed manually) based on a random sample of 100 opinion holder mentions from those corpora. The table shows indeed that on the domains from the MPQA corpus, i.e. ETHICS and SPACE, those opinion holders play a minor role but there is a notably higher proportion on the FICTION-domain.",
"cite_spans": [
{
"start": 300,
"end": 326,
"text": "Wiegand and Klakow (2011b)",
"ref_id": "BIBREF30"
},
{
"start": 422,
"end": 435,
"text": "(Levin, 1993)",
"ref_id": "BIBREF18"
},
{
"start": 622,
"end": 643,
"text": "Bethard et al. (2004)",
"ref_id": "BIBREF2"
},
{
"start": 796,
"end": 822,
"text": "Wiegand and Klakow (2011b)",
"ref_id": "BIBREF30"
},
{
"start": 1031,
"end": 1058,
"text": "(Wiegand and Klakow, 2011b)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [
{
"start": 1157,
"end": 1164,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Manually Compiled Lexicons (Lex)",
"sec_num": "3.2"
},
{
"text": "Lexical resources are potentially much more expressive than word clustering. This knowledge, however, is usually manually compiled, which makes this solution much more expensive. Wiegand and Klakow (2011a) present an intermediate solution for opinion holder extraction inspired by distant supervision (Mintz et al., 2009) . The output of that method is also a lexicon of predicates but it is automatically extracted from a large unlabeled corpus. This is achieved by collecting predicates that frequently co-occur with prototypical opinion holders, i.e. common nouns such as opponents (7) or critics (8), if they are an agent of that predicate. The rationale behind this is that those nouns act very much like actual opinion holders and therefore can be seen as a proxy. 7Opponents say these arguments miss the point. (8) Critics argued that the proposed limits were unconstitutional.",
"cite_spans": [
{
"start": 301,
"end": 321,
"text": "(Mintz et al., 2009)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distant Supervision with Prototypical Opinion Holders",
"sec_num": "3.3.1"
},
{
"text": "This method reduces the human effort to specifying a small set of such prototypes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distant Supervision with Prototypical Opinion Holders",
"sec_num": "3.3.1"
},
{
"text": "Following the best configuration reported in (Wiegand and Klakow, 2011a) , we extract 250 verbs, 100 nouns and 100 adjectives from our unlabeled corpus ( \u00a72).",
"cite_spans": [
{
"start": 45,
"end": 72,
"text": "(Wiegand and Klakow, 2011a)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distant Supervision with Prototypical Opinion Holders",
"sec_num": "3.3.1"
},
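The induction step can be pictured as a simple counting procedure over a parsed corpus. The following sketch is illustrative only and assumes the corpus has already been reduced to (predicate lemma, part of speech, agent head) triples; the prototype list is a placeholder, not the authors' exact configuration.

```python
# Illustrative sketch of the distant-supervision-style induction step.
from collections import Counter

PROTOTYPES = {"critics", "opponents", "supporters", "activists", "analysts"}  # placeholder list

def induce_predicates(triples, pos_tag, top_n):
    """Keep the predicates that most often take a prototypical opinion holder as agent."""
    counts = Counter(
        pred for pred, pos, agent in triples
        if pos == pos_tag and agent in PROTOTYPES
    )
    return [pred for pred, _ in counts.most_common(top_n)]

# Mimicking the reported configuration of 250 verbs, 100 nouns and 100 adjectives:
# verbs = induce_predicates(triples, "VERB", 250)
# nouns = induce_predicates(triples, "NOUN", 100)
# adjs  = induce_predicates(triples, "ADJ", 100)
```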
{
"text": "The downside of using prototypical opinion holders as a proxy for opinion holders is that it anguish * , astonish, astound, concern, convince, daze, delight, disenchant * , disappoint, displease, disgust, disillusion, dissatisfy, distress, embitter * , enamor * , engross, enrage, entangle * , excite, fatigue * , flatter, fluster, flummox * , frazzle * , hook * , humiliate, incapacitate * , incense, interest, irritate, obsess, outrage, perturb, petrify * , sadden, sedate * , shock, stun, tether * , trouble is limited to agentive opinion holders. Opinion holders in patient position, such as the ones taken by amuse verbs in (5) and (6), are not covered. Wiegand and Klakow (2011a) show that considering less restrictive contexts significantly drops classification performance. So the natural extension of looking for predicates having prototypical opinion holders in patient position is not effective. Sentences, such as (9), would mar the result.",
"cite_spans": [
{
"start": 659,
"end": 685,
"text": "Wiegand and Klakow (2011a)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extension for Opinion Holders in Patient Position",
"sec_num": "3.3.2"
},
{
"text": "(9) They criticized their opponents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extension for Opinion Holders in Patient Position",
"sec_num": "3.3.2"
},
{
"text": "In (9) the prototypical opinion holder opponents (in the patient position) is not a true opinion holder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extension for Opinion Holders in Patient Position",
"sec_num": "3.3.2"
},
{
"text": "Our novel method to extract those predicates rests on the observation that the past participle of those verbs, such as shocked in (10), is very often identical to some predicate adjective (11) having a similar if not identical meaning. For the predicate adjective, the opinion holder is, however, its subject/agent and not its patient.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extension for Opinion Holders in Patient Position",
"sec_num": "3.3.2"
},
{
"text": "(10) He had shocked verb me. holder:patient (11) I was shocked adj . holder:agent Instead of extracting those verbs directly (10), we take the detour via their corresponding predicate adjectives (11). This means that we collect all those verbs (from our large unlabeled corpus ( \u00a72)) for which there is a predicate adjective that coincides with the past participle of the verb.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extension for Opinion Holders in Patient Position",
"sec_num": "3.3.2"
},
{
"text": "To increase the likelihood that our extracted predicates are meaningful for opinion holder extraction, we also need to check the semantic type in the relevant argument position, i.e. make sure that the agent of the predicate adjective (which would be the patient of the corresponding verb) is an entity likely to be an opinion holder. Our initial attempts with prototypical opinion holders were too restrictive, i.e. the number of prototypical opinion holders co-occurring with those adjectives was too small. Therefore, we widen the semantic type of this position from prototypical opinion holders to persons. This means that we allow personal pronouns (i.e. I, you, he, she and we) to appear in this position. We believe that this relaxation can be done in that particular case, as adjectives are much more likely to convey opinions a priori than verbs .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extension for Opinion Holders in Patient Position",
"sec_num": "3.3.2"
},
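A minimal sketch of this extraction idea, under the assumption that the unlabeled corpus has been preprocessed into copula pairs ("PRON be ADJ") and a table mapping each verb lemma to its past-participle form; names and the cut-off are illustrative, not the authors' exact setup.

```python
# Sketch of the patient-position extension.
from collections import Counter

PERSON_PRONOUNS = {"i", "you", "he", "she", "we"}

def extract_patient_predicates(copula_pairs, participle_of, top_n=250):
    """Collect verbs whose past participle coincides with a predicate adjective
    that takes a person pronoun as its subject/agent (e.g. 'I was shocked')."""
    adj_counts = Counter(
        adj for subj, adj in copula_pairs if subj.lower() in PERSON_PRONOUNS
    )
    verb_scores = Counter()
    for verb, participle in participle_of.items():
        if participle in adj_counts:
            verb_scores[verb] = adj_counts[participle]
    return [verb for verb, _ in verb_scores.most_common(top_n)]

print(extract_patient_predicates([("I", "shocked"), ("she", "amused"), ("it", "broken")],
                                 {"shock": "shocked", "amuse": "amused", "break": "broken"}))
# ['shock', 'amuse']   ("broken" is filtered out: "it" is not a person pronoun)
```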
{
"text": "An intrinsic evaluation of the predicates that we thus extracted from our unlabeled corpus is difficult. The 250 most frequent verbs exhibiting this special property of coinciding with adjectives (this will be the list that we use in our experiments) contains 42% entries of the amuse verbs ( \u00a73.2). However, we also found many other potentially useful predicates on this list that are not listed as amuse verbs (Table 4) . As amuse verbs cannot be considered a complete golden standard for all predicates taking opinion holders as patients, we will focus on a task-based evaluation of our automatically extracted list ( \u00a76).",
"cite_spans": [],
"ref_spans": [
{
"start": 412,
"end": 421,
"text": "(Table 4)",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Extension for Opinion Holders in Patient Position",
"sec_num": "3.3.2"
},
{
"text": "In the following, we present the two supervised classifiers we use in our experiments. Both classifiers incorporate the same levels of representations, including the same generalization methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data-driven Methods",
"sec_num": "4"
},
{
"text": "The supervised classifier most frequently used for information extraction tasks, in general, are conditional random fields (CRF) (Lafferty et al., 2001 ). Using CRF, the task of opinion holder extraction is framed as a tagging problem in which given a sequence of observations x = x 1 x 2 . . . x n (words in a sentence) a sequence of output tags y = y 1 y 2 . . . y n indicating the boundaries of opinion holders is computed by modeling the conditional probability P (x|y).",
"cite_spans": [
{
"start": 129,
"end": 151,
"text": "(Lafferty et al., 2001",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields (CRF)",
"sec_num": "4.1"
},
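As a minimal illustration of this sequence-labeling framing (independent of the CRF++ setup used in the paper), opinion holder spans can be encoded as BIO labels over the token sequence; the label names are an assumption for illustration.

```python
# Illustrative BIO encoding of opinion holder spans for sequence labeling.
def to_bio(tokens, holder_spans):
    """holder_spans: (start, end) token indices of opinion holders, end-exclusive."""
    labels = ["O"] * len(tokens)
    for start, end in holder_spans:
        labels[start] = "B-HOLDER"
        for i in range(start + 1, end):
            labels[i] = "I-HOLDER"
    return labels

tokens = ["Malaysia", "did", "not", "agree", "to", "such", "treatment"]
print(to_bio(tokens, [(0, 1)]))
# ['B-HOLDER', 'O', 'O', 'O', 'O', 'O', 'O']
```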
{
"text": "The features we use (Table 5) are mostly inspired by Choi et al. (2005) and by the ones used for plain support vector machines (SVMs) in (Wiegand and Klakow, 2010) . They are organized into groups. The basic group Plain does not contain any generalization method. Each other group is dedicated to one specific generalization method that we want to examine (Clus, Induc and Lex). Apart from considering generalization features indicating the presence of generalization types, we also consider those types in conjunction with semantic roles. As already indicated above, semantic roles are especially important for the detection of opinion holders. Unfortunately, the cor- responding feature from the Plain feature group that also includes the lexical form of the predicate is most likely a sparse feature. For the opinion holder me in (10), for example, it would correspond to A1 shock. Therefore, we introduce for each generalization method an additional feature replacing the sparse lexical item by a generalization label, i.e. Clus: A1 CLUSTER-35265, Induc: A1 INDUC-PRED and Lex: A1 LEX-PRED. 6 For this learning method, we use CRF++. 7 We choose a configuration that provides good performance on our source domain (i.e. ETHICS). 8 For semantic role labeling we use SWIRL 9 , for chunk parsing CASS (Abney, 1991) and for constituency parsing Stanford Parser (Klein and Manning, 2003) . Named-entity information is provided by Stanford Tagger (Finkel et al., 2005) .",
"cite_spans": [
{
"start": 53,
"end": 71,
"text": "Choi et al. (2005)",
"ref_id": "BIBREF4"
},
{
"start": 137,
"end": 163,
"text": "(Wiegand and Klakow, 2010)",
"ref_id": "BIBREF28"
},
{
"start": 1095,
"end": 1096,
"text": "6",
"ref_id": null
},
{
"start": 1137,
"end": 1138,
"text": "7",
"ref_id": null
},
{
"start": 1232,
"end": 1233,
"text": "8",
"ref_id": null
},
{
"start": 1301,
"end": 1314,
"text": "(Abney, 1991)",
"ref_id": "BIBREF0"
},
{
"start": 1360,
"end": 1385,
"text": "(Klein and Manning, 2003)",
"ref_id": "BIBREF15"
},
{
"start": 1444,
"end": 1465,
"text": "(Finkel et al., 2005)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 20,
"end": 29,
"text": "(Table 5)",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Conditional Random Fields (CRF)",
"sec_num": "4.1"
},
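The back-off from the sparse role-plus-lexical-item feature to the generalization labels can be sketched as follows; the feature names follow the text above (with underscores instead of spaces), while the lexicons and cluster map are placeholders rather than the real resources.

```python
# Sketch of the generalized semantic-role features for one candidate.
def role_features(role, predicate, lex_preds, induc_preds, word2cluster):
    feats = {"plain": f"{role}_{predicate}"}                    # sparse lexical feature
    if predicate in lex_preds:
        feats["lex"] = f"{role}_LEX-PRED"
    if predicate in induc_preds:
        feats["induc"] = f"{role}_INDUC-PRED"
    if predicate in word2cluster:
        feats["clus"] = f"{role}_CLUSTER-{word2cluster[predicate]}"
    return feats

print(role_features("A1", "shock", {"shock"}, {"shock"}, {"shock": "35265"}))
# {'plain': 'A1_shock', 'lex': 'A1_LEX-PRED', 'induc': 'A1_INDUC-PRED', 'clus': 'A1_CLUSTER-35265'}
```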
{
"text": "Convolution kernels (CK) are special kernel functions. A kernel function K : X \u00d7 X \u2192 R computes the similarity of two data instances x i and x j (x i \u2227 x j \u2208 X). It is mostly used in SVMs that estimate a hyperplane to separate data instances from different classes H( x) = w \u2022 x + b = 0 where w \u2208 R n and b \u2208 R (Joachims, 1999) . In convolution kernels, the structures to be compared within the kernel function are not vectors comprising manually designed features but the underlying discrete structures, such as syntactic parse trees or part-of-speech sequences. Since they are directly provided to the learning algorithm, a classifier can be built without taking the effort of implementing an explicit feature extraction.",
"cite_spans": [
{
"start": 311,
"end": 327,
"text": "(Joachims, 1999)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Convolution Kernels (CK)",
"sec_num": "4.2"
},
{
"text": "We take the best configuration from (Wiegand and Klakow, 2010) that comprises a combination of three different tree kernels being two tree kernels based on constituency parse trees (one with predicate and another with semantic scope) and a tree kernel encoding predicate-argument structures based on semantic role information. These representations are illustrated in Figure 1 . The resulting kernels are combined by plain summation.",
"cite_spans": [
{
"start": 36,
"end": 62,
"text": "(Wiegand and Klakow, 2010)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 368,
"end": 376,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Convolution Kernels (CK)",
"sec_num": "4.2"
},
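A minimal sketch of the kernel machinery described above: a kernel measures the similarity of two instances, several kernels are combined by plain summation, and the kernelized decision function uses the combined kernel in place of explicit feature vectors. Support instances, alphas and the bias would come from SVM training (SVMLight-TK in the paper) and are only placeholders here; the individual tree kernels are treated as black boxes.

```python
# Illustrative kernel combination and kernelized decision function.
def summed_kernel(kernels):
    """K(x_i, x_j) = K_1(x_i, x_j) + ... + K_m(x_i, x_j)."""
    return lambda x_i, x_j: sum(k(x_i, x_j) for k in kernels)

def svm_decision(x, support, alphas, labels, bias, K):
    """f(x) = sum_i alpha_i * y_i * K(x_i, x) + b; f(x) > 0 -> predicted opinion holder."""
    return sum(a * y * K(s, x) for s, a, y in zip(support, alphas, labels)) + bias
```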
{
"text": "In order to integrate our generalization methods into the convolution kernels, the input structures, i.e. the linguistic tree structures, have to be augmented. For that we just add additional nodes whose labels correspond to the respective generalization types (i.e. Clus: CLUSTER-ID, Induc: INDUC-PRED and Lex: LEX-PRED). The nodes are added in such a way that they (directly) dominate the leaf node for which they provide a generalization. 10 If several generalization methods are used and several of them apply for the same lexical unit, then the (vertical) order of the generalization nodes is LEX-PRED INDUC-PRED CLUSTER-ID. 11 Figure 2 illustrates the predicate argument structure from Figure 1 augmented with INDUC-PRED and CLUSTER-IDs.",
"cite_spans": [
{
"start": 442,
"end": 444,
"text": "10",
"ref_id": null
}
],
"ref_spans": [
{
"start": 633,
"end": 641,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 692,
"end": 700,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Convolution Kernels (CK)",
"sec_num": "4.2"
},
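The node augmentation can be sketched as follows; the nested (label, children) representation is illustrative rather than SVMLight-TK's input format, and the exact attachment point below the preterminal is an assumption. Generalization nodes are stacked in the vertical order LEX-PRED > INDUC-PRED > CLUSTER-ID above the leaf they generalize.

```python
# Sketch of augmenting a parse tree leaf with generalization nodes.
def augment_word(pos, word, word2cluster, induc_preds, lex_preds):
    node = word                                       # the lexical leaf
    if word in word2cluster:
        node = (f"CLUSTER-{word2cluster[word]}", [node])
    if word in induc_preds:
        node = ("INDUC-PRED", [node])
    if word in lex_preds:
        node = ("LEX-PRED", [node])
    return (pos, [node])                              # preterminal dominates the chain

print(augment_word("VBD", "shocked", {"shocked": "35265"}, {"shocked"}, {"shocked"}))
# ('VBD', [('LEX-PRED', [('INDUC-PRED', [('CLUSTER-35265', ['shocked'])])])])
```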
{
"text": "For this learning method, we use the SVMLight-TK toolkit. 12 Again, we tune the parameters to our source domain (ETHICS). 13",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolution Kernels (CK)",
"sec_num": "4.2"
},
{
"text": "Finally, we also consider rule-based classifiers (RB). The main difference towards CRF and CK is that it is an unsupervised approach not requiring training data. We re-use the framework by Wiegand and Klakow (2011b). The candidate set are all noun phrases in a test set. A candidate is classified as an opinion holder if all of the following conditions hold:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule-based Classifiers (RB)",
"sec_num": "5"
},
{
"text": "\u2022 The candidate denotes a person or group of persons.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule-based Classifiers (RB)",
"sec_num": "5"
},
{
"text": "\u2022 There is a predictive predicate in the same sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule-based Classifiers (RB)",
"sec_num": "5"
},
{
"text": "\u2022 The candidate has a pre-specified semantic role in the event that the predictive predicate evokes (default: agent-role).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule-based Classifiers (RB)",
"sec_num": "5"
},
{
"text": "The set of predicates is obtained from a given lexicon. For predicates that take opinion holders as patients, the default agent-role is overruled. We consider several classifiers that differ in the lexicon they use. RB-Lex uses the combination of the manually compiled lexicons presented in \u00a73.2. RB-Induc uses the predicates that have been automatically extracted from a large unlabeled corpus using the methods presented in \u00a73.3. RB-Induc+Lex considers the union of those lexicons. In order to examine the impact of modeling opinion holders in patient position, we also introduce two versions of each lexicon. AG just considers predicates in agentive position while AG+PT also considers predicates that take opinion holders as patients. For example, RB-Induc AG+P T is a classifier that uses automatically extracted predicates in order to detect opinion holders in both agent and patient argument position, i.e. RB-Induc AG+P T also covers our novel extraction method for patients ( \u00a73.3.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule-based Classifiers (RB)",
"sec_num": "5"
},
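A compact sketch of this rule-based decision, with illustrative data structures: the lexicon maps each predictive predicate to the semantic role in which it takes its opinion holder ('A0' by default, 'A1' for the amuse-type entries of the AG+PT variants), and the person check stands in for the named-entity component.

```python
# Sketch of the RB decision for one candidate noun phrase.
def is_opinion_holder(candidate_roles, sentence_predicates, lexicon, is_person):
    """candidate_roles: dict predicate -> semantic role the candidate NP fills."""
    if not is_person:                                   # condition 1: person or group
        return False
    for pred in sentence_predicates:                    # condition 2: predictive predicate
        required_role = lexicon.get(pred)
        if required_role is not None and candidate_roles.get(pred) == required_role:
            return True                                 # condition 3: required role matches
    return False

lexicon_ag_pt = {"agree": "A0", "wish": "A0", "embarrass": "A1"}   # toy AG+PT lexicon
print(is_opinion_holder({"embarrass": "A1"}, ["embarrass"], lexicon_ag_pt, True))  # True
```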
{
"text": "The output of clustering will exclusively be evaluated in the context of learning-based meth- ods, since there is no straightforward way of incorporating this output into a rule-based classifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule-based Classifiers (RB)",
"sec_num": "5"
},
{
"text": "CK and RB have an instance space that is different from the one of CRF. While CRF produces a prediction for every word token in a sentence, CK and RB only produce a prediction for every noun phrase. For evaluation, we project the predictions from RB and CK to word token level in order to ensure comparability. We evaluate the sequential output with precision, recall and F-score as defined in (Johansson and Moschitti, 2010; Johansson and Moschitti, 2011) . only be measured on the FICTION-domain since this is the only domain with a significant proportion of those opinion holders (Table 3) . Table 7 shows the performance of the learningbased methods CRF and CK on an in-domain evaluation (ETHICS-domain) using different amounts of labeled training data. We carry out a 5-fold cross-validation and use n% of the training data in the training folds. The table shows that CK is more robust than CRF. The fewer training data are used the more important generalization becomes. CRF benefits much more from generalization than CK. Interestingly, the CRF configuration with the best generalization is usually as good as plain CK. This proves the effectiveness of CK. In principle, Lex is the strongest generalization method while Clus is by far the weakest. For Clus, systematic improvements towards no generalization (even though they are minor) can only be observed with CRF. As far as combinations are concerned, either Lex+Induc or All performs best. This in-domain evaluation proves that opinion holder extraction is different from namedentity recognition. Simple unsupervised generalization, such as word clustering, is not effective and popular sequential classifiers are less robust than margin-based tree-kernels. Table 8 complements Table 7 in that it compares the learning-based methods with the best rule-based classifier and also displays precision and recall. RB achieves a high recall, whereas the learning-based methods always excel RB in precision. 14 Applying generalization to the learningbased methods results in an improvement of both recall and precision if few training data are used. The impact on precision decreases, however, the more training data are added. There is always a significant increase in recall but learning-based methods may not reach the level of RB even though they use the same resources. This is a side-effect of preserving a much higher precision. It also explains why learning-based methods with generalization may have a lower F-score than RB.",
"cite_spans": [
{
"start": 394,
"end": 425,
"text": "(Johansson and Moschitti, 2010;",
"ref_id": "BIBREF10"
},
{
"start": 426,
"end": 456,
"text": "Johansson and Moschitti, 2011)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 583,
"end": 592,
"text": "(Table 3)",
"ref_id": "TABREF3"
},
{
"start": 595,
"end": 602,
"text": "Table 7",
"ref_id": "TABREF10"
},
{
"start": 1720,
"end": 1747,
"text": "Table 8 complements Table 7",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
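The projection to token level and the token-based precision/recall/F-score can be sketched as follows; the exact matching criterion of the cited works may differ from this simplified version.

```python
# Sketch of projecting NP-level predictions to token level and scoring them.
def project_to_tokens(n_tokens, spans):
    flags = [False] * n_tokens
    for start, end in spans:              # end-exclusive token indices
        for i in range(start, end):
            flags[i] = True
    return flags

def precision_recall_f1(gold, pred):
    tp = sum(1 for g, p in zip(gold, pred) if g and p)
    prec = tp / sum(pred) if any(pred) else 0.0
    rec = tp / sum(gold) if any(gold) else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

gold = project_to_tokens(7, [(0, 1)])
pred = project_to_tokens(7, [(0, 2)])
print(precision_recall_f1(gold, pred))    # (0.5, 1.0, 0.666...)
```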
{
"text": "Learning-based Methods Table 9 presents the results of out-of-domain classifiers. The complete ETHICS-dataset is used for training. Some properties are similar to the previous experiments: CK always outperforms CRF. RB provides a high recall whereas the learningbased methods maintain a higher precision. Similar to the in-domain setting using few labeled training data, the incorporation of generalization increases both precision and recall. Moreover, a combination of generalization methods is better than just using one method on average, although Lex is again a fairly robust individual generalization method. Generalization is more effective in this setting than on the in-domain evaluation using all training data, in particular for CK, since the training and test data are much more different from each other and suitable generalization methods partly close that gap. There is a notable difference in precision between the SPACE-and FICTION-domain (and also the source domain ETHICS (Table 8) ). We strongly assume that this is due to the distribution of opinion holders in those datasets ( Table 1) . The FICTION-domain contains much more opinion holders, therefore the chance that a predicted opinion holder is correct is much higher.",
"cite_spans": [],
"ref_spans": [
{
"start": 23,
"end": 30,
"text": "Table 9",
"ref_id": "TABREF15"
},
{
"start": 991,
"end": 1000,
"text": "(Table 8)",
"ref_id": "TABREF12"
},
{
"start": 1099,
"end": 1107,
"text": "Table 1)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Out-of-Domain Evaluation of",
"sec_num": "6.3"
},
{
"text": "With regard to recall, a similar level of performance as in the ETHICS-domain can only be achieved in the SPACE-domain, i.e. CK achieves a recall of 60%. In the FICTION-domain, however, the recall is much lower (best recall of CK is below 47%). This is no surprise as the SPACEdomain is more similar to the source domain than the FICTION-domain since ETHICS and SPACE are news texts. FICTION contains more out-ofdomain language. Therefore, RB (which exclusively uses domain-independent knowledge) outperforms both learning-based methods including the ones incorporating generalization. Similar results have been observed for rule-based classifiers from other tasks in cross-domain sentiment analysis, such as subjectivity detection and polarity classification. High-level information as it is encoded in a rule-based classifier generalizes better than learning-based methods (Andreevskaia and Bergler, 2008; Lambov et al., 2009) .",
"cite_spans": [
{
"start": 875,
"end": 907,
"text": "(Andreevskaia and Bergler, 2008;",
"ref_id": "BIBREF1"
},
{
"start": 908,
"end": 928,
"text": "Lambov et al., 2009)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Out-of-Domain Evaluation of",
"sec_num": "6.3"
},
{
"text": "We set up another experiment exclusively for the FICTION-domain in which we combine the output of our best learning-based method, i.e. CK, with the prediction of a rule-based classifier. The combined classifier will predict an opinion holder, if either classifier predicts one. The motivation for this is the following: The FICTION-domain is the only domain to have a significant proportion of opinion holders appearing as patients. We want to know how much of them can be recognized with the best out-of-domain classifier using training data with only very few instances of this type and what benefit the addition of using various RBs which have a clearer notion of these constructions brings about. Moreover, we already observed that the learning-based methods have a bias towards preserving a high precision and this may have as a consequence that the generalization features incorporated into CK will not receive sufficiently large weights. Unlike the SPACE-domain where a sufficiently high recall is already achieved with CK (presumably due to its stronger similarity towards the source domain) the FICTION-domain may be more severely affected by this bias and evidence from RB may compensate for this. Table 10 shows the performance of those combined classifiers. For all generalization types considered, there is, indeed, an improvement by adding information from RB resulting in a large boost in recall. Already the application of our induction approach Induc results in an increase of more than 8% points compared to plain CK. The table also shows that there is always some improvement if RB considers opinion holders as patients (AG+PT). This can be considered as some evidence that (given the available data we use) opinion holders in patient position can only be effectively extracted with the help of RBs. It is also further evidence that our novel approach to extract those predicates ( \u00a73.3.2) is effective. The combined approach in Table 10 not only outperforms CK (discussed above) but also RB (Table 6 ). We manually inspected the output of the classifiers to find also cases in which CK detect opinion holders that RB misses. CK has the advantage that it is not only bound to the relationship between candidate holder and predicate. It learns further heuristics, e.g. that sentence-initial mentions of persons are likely opinion holders. In (12), for example, this heuristics fires while RB overlooks this instance as to give someone a share of advice is not part of the lexicon. ",
"cite_spans": [],
"ref_spans": [
{
"start": 1208,
"end": 1216,
"text": "Table 10",
"ref_id": "TABREF1"
},
{
"start": 1948,
"end": 1956,
"text": "Table 10",
"ref_id": "TABREF1"
},
{
"start": 2011,
"end": 2019,
"text": "(Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Out-of-Domain Evaluation of",
"sec_num": "6.3"
},
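The combination scheme itself is a simple union over the two classifiers' predictions, e.g. on the token-flag representation used in the evaluation sketch above.

```python
# Sketch of the CK + RB combination for the FICTION domain: a token is predicted
# as (part of) an opinion holder if either classifier predicts it.
def combine_union(ck_flags, rb_flags):
    return [c or r for c, r in zip(ck_flags, rb_flags)]

print(combine_union([True, False, False], [False, False, True]))
# [True, False, True]
```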
{
"text": "The research on opinion holder extraction has been focusing on applying different data-driven approaches. Choi et al. (2005) and Choi et al. (2006) those methods has not yet been attempted. In this work, we compare the popular state-of-the-art learning algorithms conditional random fields and convolution kernels for the first time. All these data-driven methods have been evaluated on the MPQA corpus. Some generalization methods are incorporated but unlike this paper they are neither systematically compared nor combined. The role of resources that provide the knowledge of argument positions of opinion holders is not covered in any of these works. This kind of knowledge should be directly learnt from the labeled training data. In this work, we found, however, that the distribution of argument positions of opinion holders varies throughout the different domains and, therefore, cannot be learnt from any arbitrary out-of-domain training set. Bethard et al. (2004) and Kim and Hovy (2006) explore the usefulness of semantic roles provided by FrameNet (Fillmore et al., 2003) . Bethard et al. (2004) use this resource to acquire labeled training data while in (Kim and Hovy, 2006) FrameNet is used within a rule-based classifier mapping frame-elements of frames to opinion holders. Bethard et al. (2004) only evaluate on an artificial dataset (i.e. a subset of sentences from FrameNet and PropBank (Kingsbury and Palmer, 2002) ). The only realistic test set on which Kim and Hovy (2006) evaluate their approach are news texts. Their method is compared against a simple rule-based baseline and, unlike this work, not against a robust data-driven algorithm. (Wiegand and Klakow, 2011b) is similar to (Kim and Hovy, 2006) in that a rule-based approach is used relying on the relationship towards predictive predicates. Diverse resources are considered for obtaining such words, however, they are only evaluated on the entire MPQA corpus.",
"cite_spans": [
{
"start": 106,
"end": 124,
"text": "Choi et al. (2005)",
"ref_id": "BIBREF4"
},
{
"start": 129,
"end": 147,
"text": "Choi et al. (2006)",
"ref_id": "BIBREF5"
},
{
"start": 951,
"end": 972,
"text": "Bethard et al. (2004)",
"ref_id": "BIBREF2"
},
{
"start": 977,
"end": 996,
"text": "Kim and Hovy (2006)",
"ref_id": "BIBREF13"
},
{
"start": 1050,
"end": 1082,
"text": "FrameNet (Fillmore et al., 2003)",
"ref_id": null
},
{
"start": 1085,
"end": 1106,
"text": "Bethard et al. (2004)",
"ref_id": "BIBREF2"
},
{
"start": 1167,
"end": 1187,
"text": "(Kim and Hovy, 2006)",
"ref_id": "BIBREF13"
},
{
"start": 1289,
"end": 1310,
"text": "Bethard et al. (2004)",
"ref_id": "BIBREF2"
},
{
"start": 1405,
"end": 1433,
"text": "(Kingsbury and Palmer, 2002)",
"ref_id": "BIBREF14"
},
{
"start": 1663,
"end": 1690,
"text": "(Wiegand and Klakow, 2011b)",
"ref_id": "BIBREF30"
},
{
"start": 1705,
"end": 1725,
"text": "(Kim and Hovy, 2006)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "The only cross-domain evaluation of opinion holder extraction is reported in (Li et al., 2007) using the MPQA corpus as a training set and the NT-CIR collection as a test set. A low cross-domain performance is obtained and the authors conclude that this is due to the very different annotation schemes of those corpora.",
"cite_spans": [
{
"start": 77,
"end": 94,
"text": "(Li et al., 2007)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "We examined different generalization methods for opinion holder extraction. We found that for indomain classification, the more labeled training data are used, the smaller is the impact of generalization. Robust learning methods, such as convolution kernels, benefit less from generalization than weaker classifiers, such as conditional random fields. For cross-domain classification, generalization is always helpful. Distant domains are problematic for learning-based methods, however, rule-based methods provide a reasonable recall and can be effectively combined with the learning-based methods. The types of generalization that help best are manually compiled lexicons followed by an induction method inspired by distant supervision. Finally, we examined the case of opinion holders as patients and also presented a novel automatic extraction method that proved effective. Such dedicated extraction methods are important as common labeled datasets (from the news domain) do not provide sufficient training data for these constructions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "By agent we always mean constituents being labeled as A0 in PropBank(Kingsbury and Palmer, 2002).2 By patient we always mean constituents being labeled as A1 in PropBank.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The cluster is the union of documents with the following MPQA-topic labels: axisofevil, guantanamo, humanrights, mugabe and settlements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "available at: www.absoluteshakespeare.com/ guides/{othello|twelfth night}/summary/ {othello|twelfth night} summary.htm www.wikisummaries.org/Pride and Prejudice",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We also experimented with other sizes but they did not produce a better overall performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Predicates in patient position are given the same generalization label as the predicates in agent position. Specially marking them did not result in a notable improvement.7 http://crfpp.sourceforge.net 8 The soft margin parameter \u2212c is set to 1.0 and all features occurring less than 3 times are removed.9 http://www.surdeanu.name/mihai/swirl",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that even for the configuration Plain the trees are already augmented with named-entity information.11 We chose this order as it roughly corresponds to the specificity of those generalization types.12 disi.unitn.it/moschitti 13 The cost parameter \u2212j(Morik et al., 1999) was set to 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The reason for RB having a high recall is extensively discussed in(Wiegand and Klakow, 2011b).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was funded by the German Federal Ministry of Education and Research (Software-Cluster) under grant no. \"01IC10S01\". The authors thank Alessandro Moschitti, Benjamin Roth and Josef Ruppenhofer for their technical support and interesting discussions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Parsing By Chunks",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Abney",
"suffix": ""
}
],
"year": 1991,
"venue": "Principle-Based Parsing. Kluwer Academic Publishers",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Abney. 1991. Parsing By Chunks. In Robert Berwick, Steven Abney, and Carol Tenny, editors, Principle-Based Parsing. Kluwer Academic Pub- lishers, Dordrecht.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "When Specialists and Generalists Work Together: Overcoming Domain Dependence in Sentiment Tagging",
"authors": [
{
"first": "Alina",
"middle": [],
"last": "Andreevskaia",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Bergler",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL/HLT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alina Andreevskaia and Sabine Bergler. 2008. When Specialists and Generalists Work Together: Over- coming Domain Dependence in Sentiment Tagging. In Proceedings of the Annual Meeting of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies (ACL/HLT), Columbus, OH, USA.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Extracting Opinion Propositions and Opinion Holders using Syntactic and Lexical Cues",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Ashley",
"middle": [],
"last": "Thornton",
"suffix": ""
},
{
"first": "Vasileios",
"middle": [],
"last": "Hatzivassiloglou",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2004,
"venue": "Computing Attitude and Affect in Text: Theory and Applications",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bethard, Hong Yu, Ashley Thornton, Vasileios Hatzivassiloglou, and Dan Jurafsky. 2004. Extract- ing Opinion Propositions and Opinion Holders us- ing Syntactic and Lexical Cues. In Computing At- titude and Affect in Text: Theory and Applications. Springer-Verlag.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Class-based n-gram models of natural language",
"authors": [
{
"first": "F",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"V"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Desouza",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J"
],
"last": "Mercer",
"suffix": ""
},
{
"first": "Jenifer",
"middle": [
"C"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lai",
"suffix": ""
}
],
"year": 1992,
"venue": "Computational Linguistics",
"volume": "18",
"issue": "",
"pages": "467--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Peter V. deSouza, Robert L. Mer- cer, Vincent J. Della Pietra, and Jenifer C. Lai. 1992. Class-based n-gram models of natural lan- guage. Computational Linguistics, 18:467-479.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Identifying Sources of Opinions with Conditional Random Fields and Extraction Patterns",
"authors": [
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "Siddharth",
"middle": [],
"last": "Patwardhan",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yejin Choi, Claire Cardie, Ellen Riloff, and Sid- dharth Patwardhan. 2005. Identifying Sources of Opinions with Conditional Random Fields and Extraction Patterns. In Proceedings of the Con- ference on Human Language Technology and Em- pirical Methods in Natural Language Processing (HLT/EMNLP), Vancouver, BC, Canada.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Joint Extraction of Entities and Relations for Opinion Recognition",
"authors": [
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Breck",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yejin Choi, Eric Breck, and Claire Cardie. 2006. Joint Extraction of Entities and Relations for Opinion Recognition. In Proceedings of the Conference on Empirical Methods in Natural Language Process- ing (EMNLP), Sydney, Australia.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Background to",
"authors": [
{
"first": ".",
"middle": [
"J"
],
"last": "Charles",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"R"
],
"last": "Fillmore",
"suffix": ""
},
{
"first": "Miriam",
"middle": [
"R"
],
"last": "Johnson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Petruck",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles. J. Fillmore, Christopher R. Johnson, and Miriam R. Petruck. 2003. Background to",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Incorporating Non-local Information into Information Extraction Systems by Gibbs Sampling",
"authors": [
{
"first": "Jenny",
"middle": [
"Rose"
],
"last": "Finkel",
"suffix": ""
},
{
"first": "Trond",
"middle": [],
"last": "Grenager",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating Non-local Informa- tion into Information Extraction Systems by Gibbs Sampling. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), Ann Arbor, MI, USA.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Making Large-Scale SVM Learning Practical",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 1999,
"venue": "Advances in Kernel Methods -Support Vector Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Joachims. 1999. Making Large-Scale SVM Learning Practical. In B. Sch\u00f6lkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods - Support Vector Learning. MIT Press.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Reranking Models in Fine-grained Opinion Analysis",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Johansson",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the International Conference on Computational Linguistics (COLING), Bejing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Johansson and Alessandro Moschitti. 2010. Reranking Models in Fine-grained Opinion Anal- ysis. In Proceedings of the International Confer- ence on Computational Linguistics (COLING), Be- jing, China.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Extracting Opinion Expressions and Their Polarities -Exploration of Pipelines and Joint Models",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Johansson",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Johansson and Alessandro Moschitti. 2011. Extracting Opinion Expressions and Their Polari- ties -Exploration of Pipelines and Joint Models. In Proceedings of the Annual Meeting of the Associa- tion for Computational Linguistics (ACL), Portland, OR, USA.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The ICWSM JDPA 2010 Sentiment Corpus for the Automotive Domain",
"authors": [
{
"first": "Jason",
"middle": [
"S"
],
"last": "Kessler",
"suffix": ""
},
{
"first": "Miriam",
"middle": [],
"last": "Eckert",
"suffix": ""
},
{
"first": "Lyndsay",
"middle": [],
"last": "Clarke",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Nicolov",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the International AAAI Conference on Weblogs and Social Media Data Challange Workshop (ICWSM-DCW)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason S. Kessler, Miriam Eckert, Lyndsay Clarke, and Nicolas Nicolov. 2010. The ICWSM JDPA 2010 Sentiment Corpus for the Automotive Do- main. In Proceedings of the International AAAI Conference on Weblogs and Social Media Data Challange Workshop (ICWSM-DCW), Washington, DC, USA.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Extracting Opinions, Opinion Holders, and Topics Expressed in Online News Media Text",
"authors": [
{
"first": "Min",
"middle": [],
"last": "Soo",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the ACL Workshop on Sentiment and Subjectivity in Text",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soo-Min Kim and Eduard Hovy. 2006. Extracting Opinions, Opinion Holders, and Topics Expressed in Online News Media Text. In Proceedings of the ACL Workshop on Sentiment and Subjectivity in Text, Sydney, Australia.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "From TreeBank to PropBank",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Kingsbury",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Conference on Language Resources and Evaluation (LREC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Kingsbury and Martha Palmer. 2002. From TreeBank to PropBank. In Proceedings of the Conference on Language Resources and Evaluation (LREC), Las Palmas, Spain.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Accurate Unlexicalized Parsing",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher D. Manning. 2003. Accu- rate Unlexicalized Parsing. In Proceedings of the Annual Meeting of the Association for Computa- tional Linguistics (ACL), Sapporo, Japan.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional Random Fields: Prob- abilistic Models for Segmenting and Labeling Se- quence Data. In Proceedings of the International Conference on Machine Learning (ICML).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Sentiment Classification across Domains",
"authors": [
{
"first": "Dinko",
"middle": [],
"last": "Lambov",
"suffix": ""
},
{
"first": "Ga\u00ebl",
"middle": [],
"last": "Dias",
"suffix": ""
},
{
"first": "Veska",
"middle": [],
"last": "Noncheva",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Portuguese Conference on Artificial Intelligence (EPIA)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dinko Lambov, Ga\u00ebl Dias, and Veska Noncheva. 2009. Sentiment Classification across Domains. In Proceedings of the Portuguese Conference on Artifi- cial Intelligence (EPIA), Aveiro, Portugal. Springer- Verlag.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "English Verb Classes and Alternations: A Preliminary Investigation",
"authors": [
{
"first": "Beth",
"middle": [],
"last": "Levin",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beth Levin. 1993. English Verb Classes and Alter- nations: A Preliminary Investigation. University of Chicago Press.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Experiments of Opinion Analysis on the Corpora MPQA and NTCIR-6",
"authors": [
{
"first": "Yangyong",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Kalina",
"middle": [],
"last": "Bontcheva",
"suffix": ""
},
{
"first": "Hamish",
"middle": [],
"last": "Cunningham",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the NTCIR-6 Workshop Meeting",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yangyong Li, Kalina Bontcheva, and Hamish Cun- ningham. 2007. Experiments of Opinion Analy- sis on the Corpora MPQA and NTCIR-6. In Pro- ceedings of the NTCIR-6 Workshop Meeting, Tokyo, Japan.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Distant Supervision for Relation Extraction without Labeled Data",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Mintz",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bills",
"suffix": ""
},
{
"first": "Rion",
"middle": [],
"last": "Snow",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the Annual Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing (ACL/IJCNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Dan Juraf- sky. 2009. Distant Supervision for Relation Extrac- tion without Labeled Data. In Proceedings of the Joint Conference of the Annual Meeting of the As- sociation for Computational Linguistics and the In- ternational Joint Conference on Natural Language Processing of the Asian Federation of Natural Lan- guage Processing (ACL/IJCNLP), Singapore.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Combining Statistical Learning with a Knowledge-based Approach -A Case Study in Intensive Care Monitoring",
"authors": [
{
"first": "Katharina",
"middle": [],
"last": "Morik",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Brockhausen",
"suffix": ""
},
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings the International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katharina Morik, Peter Brockhausen, and Thorsten Joachims. 1999. Combining Statistical Learn- ing with a Knowledge-based Approach -A Case Study in Intensive Care Monitoring. In Proceedings the International Conference on Machine Learning (ICML).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Automatically Creating General-Purpose Opinion Summaries from Text",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the International Conference on Spoken Language Processing (ICSLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Stolcke. 2002. SRILM -An Extensible Lan- guage Modeling Toolkit. In Proceedings of the In- ternational Conference on Spoken Language Pro- cessing (ICSLP), Denver, CO, USA. Veselin Stoyanov and Claire Cardie. 2011. Auto- matically Creating General-Purpose Opinion Sum- maries from Text. In Proceedings of Recent Ad- vances in Natural Language Processing (RANLP), Hissar, Bulgaria.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Evaluating an Opinion Annotation Scheme Using a New Multi-Perspective Question and Answer Corpus",
"authors": [
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Diane",
"middle": [],
"last": "Litman",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the AAAI Spring Symposium on Exploring Attitude and Affect in Text",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Veselin Stoyanov, Claire Cardie, Diane Litman, and Janyce Wiebe. 2004. Evaluating an Opinion An- notation Scheme Using a New Multi-Perspective Question and Answer Corpus. In Proceedings of the AAAI Spring Symposium on Exploring Attitude and Affect in Text, Menlo Park, CA, USA.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Sentence and Expression Level Annotation of Opinions in User-Generated Discourse",
"authors": [
{
"first": "Cigdem",
"middle": [],
"last": "Toprak",
"suffix": ""
},
{
"first": "Niklas",
"middle": [],
"last": "Jakob",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cigdem Toprak, Niklas Jakob, and Iryna Gurevych. 2010. Sentence and Expression Level Annotation of Opinions in User-Generated Discourse. In Pro- ceedings of the Annual Meeting of the Associa- tion for Computational Linguistics (ACL), Uppsala, Sweden.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Word Representations: A Simple and General Method for Semi-supervised Learning",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Turian",
"suffix": ""
},
{
"first": "Lev",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word Representations: A Simple and Gen- eral Method for Semi-supervised Learning. In Pro- ceedings of the Annual Meeting of the Associa- tion for Computational Linguistics (ACL), Uppsala, Sweden.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Learning Subjective Language",
"authors": [
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Bruce",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Bell",
"suffix": ""
},
{
"first": "Melanie",
"middle": [],
"last": "Martin",
"suffix": ""
}
],
"year": 2004,
"venue": "Computational Linguistics",
"volume": "30",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Janyce Wiebe, Theresa Wilson, Rebecca Bruce, Matthew Bell, and Melanie Martin. 2004. Learn- ing Subjective Language. Computational Linguis- tics, 30(3).",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Annotating Expressions of Opinions and Emotions in Language. Language Resources and Evaluation",
"authors": [
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "39",
"issue": "",
"pages": "164--210",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating Expressions of Opinions and Emotions in Language. Language Resources and Evaluation, 39(2/3):164-210.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Convolution Kernels for Opinion Holder Extraction",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Wiegand",
"suffix": ""
},
{
"first": "Dietrich",
"middle": [],
"last": "Klakow",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Human Language Technology Conference of the North American Chapter of the ACL (HLT/NAACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Wiegand and Dietrich Klakow. 2010. Convo- lution Kernels for Opinion Holder Extraction. In Proceedings of the Human Language Technology Conference of the North American Chapter of the ACL (HLT/NAACL), Los Angeles, CA, USA.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Prototypical Opinion Holders: What We can Learn from Experts and Analysts",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Wiegand",
"suffix": ""
},
{
"first": "Dietrich",
"middle": [],
"last": "Klakow",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of Recent Advances in Natural Language Processing (RANLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Wiegand and Dietrich Klakow. 2011a. Proto- typical Opinion Holders: What We can Learn from Experts and Analysts. In Proceedings of Recent Ad- vances in Natural Language Processing (RANLP), Hissar, Bulgaria.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "The Role of Predicates in Opinion Holder Extraction",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Wiegand",
"suffix": ""
},
{
"first": "Dietrich",
"middle": [],
"last": "Klakow",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the RANLP Workshop on Information Extraction and Knowledge Acquisition (IEKA)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Wiegand and Dietrich Klakow. 2011b. The Role of Predicates in Opinion Holder Extraction. In Proceedings of the RANLP Workshop on Informa- tion Extraction and Knowledge Acquisition (IEKA), Hissar, Bulgaria.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Recognizing Contextual Polarity in Phraselevel Sentiment Analysis",
"authors": [
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Hoffmann",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing (HLT/EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing Contextual Polarity in Phrase- level Sentiment Analysis. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Process- ing (HLT/EMNLP), Vancouver, BC, Canada.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "I always supported this idea. holder:agent. (5) This worries me. holder:patient (6) He disappointed me. holder:patient",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "The different structures (left: constituency trees, right: predicate argument structure) derived from Sentence (1) for the opinion holder candidate Malaysia used as input for convolution kernels (CK).",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF2": {
"text": "Predicate argument structure augmented with generalization nodes.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF3": {
"text": "She later gives Charlotte her share of advice on running a household.",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF1": {
"text": "Statistics of the different domain corpora.",
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF2": {
"text": "Some automatically induced clusters.",
"content": "<table><tr><td>ETHICS</td><td>SPACE</td><td>FICTION</td></tr><tr><td>1.47</td><td>2.70</td><td>11.59</td></tr></table>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF3": {
"text": "Percentage of opinion holders as patients.",
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF4": {
"text": "Examples of the automatically extracted verbs taking opinion holders as patients ( * : not listed as amuse verb).",
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF5": {
"text": "GroupFeaturesPlainToken features: unigrams and bigrams POS/chunk/named-entity features: unigrams, bigrams and trigrams Constituency tree path to nearest predicate Nearest predicate Semantic role to predicate+lexical form of predicate",
"content": "<table><tr><td/><td>Cluster features: unigrams, bigrams and trigrams</td></tr><tr><td>Clus</td><td>Semantic role to predicate+cluster-id of predicate</td></tr><tr><td/><td>Cluster-id of nearest predicate</td></tr><tr><td/><td>Is there predicate from induced lexicon within win-</td></tr><tr><td/><td>dow of 5 tokens?</td></tr><tr><td>Induc</td><td>Semantic role to predicate, if predicate is contained in</td></tr><tr><td/><td>induced lexicon</td></tr><tr><td/><td>Is nearest predicate contained in induced lexicon?</td></tr><tr><td/><td>Is there predicate from manually compiled lexicons</td></tr><tr><td/><td>within window of 5 tokens?</td></tr><tr><td>Lex</td><td>Semantic role to predicate, if predicate is contained in manually compiled lexicons</td></tr><tr><td/><td>Is nearest predicate contained in manually compiled</td></tr><tr><td/><td>lexicons?</td></tr></table>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF6": {
"text": "Feature set for CRF.",
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF8": {
"text": "",
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF9": {
"text": "Plain CRF 32.14 35.24 41.03 51.05 55.13 CK 42.15 46.34 51.14 56.39 59.52 +Clus CRF 33.06 37.11 43.47 52.05 56.18 CK 42.02 45.86 51.11 56.59 59.77 +Induc CRF 37.28 42.31 46.54 54.27 56.",
"content": "<table><tr><td>shows the cross-domain performance of</td></tr><tr><td>the different rule-based classifiers. RB-Lex per-</td></tr><tr><td>forms better than RB-Induc. In comparison to the</td></tr><tr><td>domains ETHICS and SPACE the difference is</td></tr><tr><td>larger on FICTION. Presumably, this is due to the</td></tr><tr><td>fact that the predicates in Induc are extracted from</td></tr><tr><td>a news corpus ( \u00a72). Thus, Induc may slightly suf-</td></tr><tr><td>fer from a domain mismatch. A combination of</td></tr><tr><td>the two classifiers, i.e. RB-Lex+Induc, results in</td></tr><tr><td>a notable improvement in the FICTION-domain.</td></tr><tr><td>The approaches that also detect opinion holders as</td></tr><tr><td>patients (AG+PT) including our novel approach</td></tr><tr><td>( \u00a73.3.2) are effective. A notable improvement can</td></tr></table>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF10": {
"text": "",
"content": "<table><tr><td>: F-score of in-domain (ETHICS) learning-</td></tr><tr><td>based classifiers.</td></tr></table>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF11": {
"text": "26.61 35.24 58.26 38.47 46.34 All 62.85 35.96 45.75 63.18 41.50 50.10 50 Plain 59.85 44.50 51.05 59.60 53.50 56.39 All 62.99 50.80 56.24 61.91 56.20 58.92 100 Plain 64.14 48.33 55.13 62.38 56.91 59.52 All 64.75 54.32 59.08 63.81 59.24 61.44 RB 47.38 60.32 53.07 47.38 60.32 53.07",
"content": "<table><tr><td/><td/><td>CRF</td><td/><td>CK</td><td/></tr><tr><td>Size</td><td>Feat.</td><td>Prec Rec</td><td>F1</td><td>Prec Rec</td><td>F1</td></tr><tr><td>10</td><td>Plain</td><td>52.17</td><td/><td/><td/></tr></table>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF12": {
"text": "Comparison of best RB with learning-based approaches on in-domain classification.",
"content": "<table><tr><td>Algorithms</td><td colspan=\"2\">Generalization Prec Rec</td><td>F</td></tr><tr><td>CK (Plain)</td><td/><td colspan=\"2\">66.90 41.48 51.21</td></tr><tr><td>CK</td><td>Induc</td><td colspan=\"2\">67.06 45.15 53.97</td></tr><tr><td>CK+RB AG</td><td>Induc</td><td colspan=\"2\">60.22 54.52 57.23</td></tr><tr><td colspan=\"2\">CK+RB AG+P T Induc</td><td colspan=\"2\">61.09 58.14 59.58</td></tr><tr><td>CK</td><td>Lex</td><td colspan=\"2\">69.45 46.65 55.81</td></tr><tr><td>CK+RB AG</td><td>Lex</td><td colspan=\"2\">67.36 59.02 62.91</td></tr><tr><td colspan=\"2\">CK+RB AG+P T Lex</td><td colspan=\"2\">68.25 63.28 65.67</td></tr><tr><td>CK</td><td>Induc+Lex</td><td colspan=\"2\">69.73 46.17 55.55</td></tr><tr><td>CK+RB AG</td><td>Induc+Lex</td><td colspan=\"2\">61.41 65.56 63.42</td></tr><tr><td colspan=\"2\">CK+RB AG+P T Induc+Lex</td><td colspan=\"2\">62.26 70.56 66.15</td></tr></table>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF13": {
"text": "Combination of out-of-domain CK and rulebased classifiers on FICTION (i.e. distant domain).",
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF15": {
"text": "Comparison of best RB with learning-based approaches on out-of-domain classification.",
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null
}
}
}
}