{
"paper_id": "R15-1037",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:57:29.993489Z"
},
"title": "Structural Alignment for Comparison Detection",
"authors": [
{
"first": "Wiltrud",
"middle": [],
"last": "Kessler",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Stuttgart",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Jonas",
"middle": [],
"last": "Kuhn",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Stuttgart",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "There tends to be a substantial proportion of reviews that include explicit textual comparisons between the reviewed item and another product. To the extent that such comparisons can be captured reliably by automatic means, they can provide an extremely helpful input to support a process of choice. As the small amount of available training data limits the development of robust systems to automatically detect comparisons, this paper investigates how to use semi-supervised strategies to expand a small set of labeled sentences. Specifically, we use structural alignment, a method that starts out from a seed set of manually annotated data and finds similar unlabeled sentences to which the labels can be projected. We present several adaptations of the method to our task of comparison detection and show that adding the found expansion sentences slightly improves over a non-expanded baseline in low-resource settings, i.e., when a very small amount of training data is available.",
"pdf_parse": {
"paper_id": "R15-1037",
"_pdf_hash": "",
"abstract": [
{
"text": "There tends to be a substantial proportion of reviews that include explicit textual comparisons between the reviewed item and another product. To the extent that such comparisons can be captured reliably by automatic means, they can provide an extremely helpful input to support a process of choice. As the small amount of available training data limits the development of robust systems to automatically detect comparisons, this paper investigates how to use semi-supervised strategies to expand a small set of labeled sentences. Specifically, we use structural alignment, a method that starts out from a seed set of manually annotated data and finds similar unlabeled sentences to which the labels can be projected. We present several adaptations of the method to our task of comparison detection and show that adding the found expansion sentences slightly improves over a non-expanded baseline in low-resource settings, i.e., when a very small amount of training data is available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Sentiment analysis is an NLP task that has received considerable attention in recent years. If we consider the actual situations in which people are interested in aggregated subjective assessments of some product (or location, service etc.) by other users, a typical scenario is that they are in the process of making some choice -such as a purchase decision among a set of candidate products. It is clear that for this decision a plain polarity scoring for entire review texts is of limited use and we need a more detailed analysis. In this work, we focus on what is presumably the most useful kind of expression when it comes to supporting a pro-cess of choice: there tends to be a substantial proportion of reviews (about 10% of sentences) that include explicit textual comparisons, e.g., \"X is better than Y\". To the extent that such subjective comparisons can be captured reliably by automatic means, they can provide an extremely helpful basis for coming up with a decision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The analysis of comparisons has the disadvantage that data for supervised training can no longer be derived from star ratings. Existing manually annotated sentiment analysis data sets include some proportion of comparisons, however, for a reliable supervised training, a larger data set is required. Moreover, vocabulary differences across product categories make it advisable to use domain-specific training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "If enough (human and/or financial) resources are available, the most effective approach is of course to invest in quality-controlled manual annotation of a relatively large amount of training data. However, since the higher-level semantic structure of comparisons as they appear in reviews is clear-cut, the problem setting could respond favorably to weakly supervised training strategies that start out from a seed set of manually annotated data. The experiments we present in this paper are exploring this very question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Comparisons can be mapped to a predicateargument structure, so we cast the task of detecting them as a semantic role-labeling (SRL) problem (Hou and Li, 2008; Kessler and Kuhn, 2013) . Starting with a small set of labeled seed sentences, we use structural alignment (F\u00fcrstenau and Lapata, 2009) , which has been successfully applied to SRL, to automatically find and annotate sentences that are similar to these seed sentences as a way to get more training data.",
"cite_spans": [
{
"start": 140,
"end": 158,
"text": "(Hou and Li, 2008;",
"ref_id": "BIBREF6"
},
{
"start": 159,
"end": 182,
"text": "Kessler and Kuhn, 2013)",
"ref_id": "BIBREF11"
},
{
"start": 266,
"end": 294,
"text": "(F\u00fcrstenau and Lapata, 2009)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are several challenges that make our task different from a typical SRL setting: Our data is not news, but user-generated data (product reviews), which is much more noisy. We have a smaller, more fixed set of roles for the arguments (two entities that are compared in some aspect), but these arguments are further away from the predicates. And, like all sentiment-related task, we have to deal with subjectivity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work we want to investigate whether structural alignment can successfully be used for getting additional training data for the task of comparison detection. We present some adaptations of the method to our task of comparison detection and experiment with varying numbers of seed sentences and gathered expansion sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Sentiment analysis has in recent years moved from the document-level prediction of polarity or star rating to a more fine-grained analysis. Jindal and Liu (2006a) are the first to specifically distinguish comparison sentences from other sentences in product reviews. In follow-up work, Jindal and Liu (2006b) detect comparison arguments with label sequential rules and Ganapathibhotla and Liu (2008) identify the preferred entity in a ranked comparison. Xu et al. (2011) use Conditional Random Fields in relation extraction approach. We follow previous work (Hou and Li, 2008; Kessler and Kuhn, 2013) and tackle comparisons with a SRL approach, but move from a completely supervised setting to a semi-supervised one.",
"cite_spans": [
{
"start": 140,
"end": 162,
"text": "Jindal and Liu (2006a)",
"ref_id": "BIBREF7"
},
{
"start": 286,
"end": 308,
"text": "Jindal and Liu (2006b)",
"ref_id": "BIBREF8"
},
{
"start": 369,
"end": 399,
"text": "Ganapathibhotla and Liu (2008)",
"ref_id": "BIBREF4"
},
{
"start": 454,
"end": 470,
"text": "Xu et al. (2011)",
"ref_id": "BIBREF14"
},
{
"start": 558,
"end": 576,
"text": "(Hou and Li, 2008;",
"ref_id": "BIBREF6"
},
{
"start": 577,
"end": 600,
"text": "Kessler and Kuhn, 2013)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Several unsupervised or weakly supervised approaches have been presented for SRL. Gildea and Jurafsky (2002) -the first work that tackles SRL as an independent task -use bootstrapping, where an initial system is trained on the available data, applied to a large unlabeled corpus, and the resulting annotations are then used to re-train the model. Abend et al. (2009) do unsupervised argument identification by using pointwise mutual information to determine which constituents are the most probable arguments. Other approaches use the extensive resources that exist for SRL as a basis, e.g., Swier and Stevenson (2005) leverage VerbNet which lists possible argument structures allowable for each predicate. For comparison detection we do not have extensive resources to tap into. We do however think that a small seed set of comparison sentences can be annotated in reasonable time for any new domain or language. This set may not be sufficiently large for bootstrapping, but it can be used as an initial seed set. In this work, we use structural alignment (F\u00fcrstenau and Lapata, 2009) to expand this seed set with similar sentences in a semi-supervised way.",
"cite_spans": [
{
"start": 82,
"end": 108,
"text": "Gildea and Jurafsky (2002)",
"ref_id": "BIBREF5"
},
{
"start": 347,
"end": 366,
"text": "Abend et al. (2009)",
"ref_id": "BIBREF0"
},
{
"start": 592,
"end": 618,
"text": "Swier and Stevenson (2005)",
"ref_id": "BIBREF13"
},
{
"start": 1057,
"end": 1085,
"text": "(F\u00fcrstenau and Lapata, 2009)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "The goal of our work is to get more training data for comparison detection in a semi-supervised way. We implement structural alignment proposed by F\u00fcrstenau and Lapata (2009) and F\u00fcrstenau and Lapata (2012) , a method for finding unlabeled sentences that are similar to existing labeled seed sentences (originally proposed for SRL). The basic hypothesis is that predicates that appear in a similar syntactic and semantic context will behave similarly with respect to their arguments so that the labels from the seed sentences can be projected to the unlabeled sentences. These newly labeled sentences can then be used as additional training data.",
"cite_spans": [
{
"start": 147,
"end": 174,
"text": "F\u00fcrstenau and Lapata (2009)",
"ref_id": "BIBREF2"
},
{
"start": 179,
"end": 206,
"text": "F\u00fcrstenau and Lapata (2012)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "Given a small set of labeled sentences (seed corpus) and a large set of unlabeled sentences (expansion corpus). We collect expansion sentences for a predicate p of a seed sentence s with the follwing steps for every unlabeled sentence u.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Outline of structural alignment",
"sec_num": "3.1"
},
{
"text": "1. Sentence selection: Consider u iff it contains a predicate compatible with p. 2. Argument candidate creation: Get all argument candidates from s and from u. 3. Alignment scoring: Score every possible alignment between the two argument candidate sets. 4. Store best-scoring alignment and its score iff at least one role-bearing node is covered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Outline of structural alignment",
"sec_num": "3.1"
},
{
"text": "When all unlabeled sentences have been processed, we choose the k sentences with the highest alignment similarity scores as expansion sentences for the seed predicate p. We project the labels of the arguments in the seed sentence onto their aligned words in these unlabeled sentences and add the newly labeled sentences to our data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Outline of structural alignment",
"sec_num": "3.1"
},
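{
"text": "As a compact illustration (our sketch, not the authors' code), the expansion loop can be written as follows; candidate_predicates(), compatible(), argument_candidates(), best_alignment() and project_labels() are assumed helpers that correspond to the steps above and are discussed in the following subsections:\n\nfrom heapq import nlargest\n\ndef expand(seed_sentence, p, unlabeled_corpus, k):\n    # Collect the k best-scoring expansion sentences for one seed predicate p.\n    scored = []\n    for u in unlabeled_corpus:\n        for q in candidate_predicates(u):\n            if not compatible(p, q):                           # 1. sentence selection\n                continue\n            cand_s = argument_candidates(seed_sentence, p)     # 2. candidates, seed side\n            cand_u = argument_candidates(u, q)                 #    candidates, unlabeled side\n            alignment, score = best_alignment(cand_s, cand_u)  # 3. alignment scoring\n            if alignment is not None:                          # 4. keep alignments that cover a role-bearing node\n                scored.append((score, u, q, alignment))\n    best = nlargest(k, scored, key=lambda t: t[0])\n    return [project_labels(seed_sentence, p, u, q, a) for _, u, q, a in best]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Outline of structural alignment",
"sec_num": "3.1"
},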
{
"text": "In the following we will discuss the main steps of the expansion algorithm and give some details. Figure 1 illustrates each step for a pair of example sentences from our data.",
"cite_spans": [],
"ref_spans": [
{
"start": 98,
"end": 106,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Outline of structural alignment",
"sec_num": "3.1"
},
{
"text": "We consider all sentences with the exact same lemma for the predicate as possible expansion sentences. In contrast to the original approach, we use the part of speech (POS) tag instead of the lemma for all adjectives and adverbs in comparative or superlative form (see Figure 1 where both predicates are \"JJR\"), as exchanging them is without any influence on the syntactic structure or the arguments of the comparison. Like the original approach, we only consider single-word predicates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence selection",
"sec_num": "3.2"
},
{
"text": "[Figure 1 example (snippets): labeled seed sentence with predicate \"higher/JJR\": \"This camera has just a bit higher learning curve than the Canon SLRs ...\" with arguments \"camera\", \"curve\", \"SLRs\"; unlabeled expansion sentence with compatible predicate \"larger/JJR\": \"This camera has a somewhat larger body than many digital cameras ...\". Dependency-filtered candidates on the unlabeled side: \"somewhat\", \"body\", \"a\", \"cameras\", \"has\", \"camera\"; path-filtered: no candidates found.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence selection",
"sec_num": "3.2"
},
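{
"text": "A minimal, self-contained sketch (ours, not from the paper) of this predicate compatibility test, assuming Penn Treebank POS tags and single-word predicates given as (lemma, POS) pairs:\n\nGRADED = {'JJR', 'JJS', 'RBR', 'RBS'}  # comparative/superlative adjectives and adverbs\n\ndef compatible(seed_pred, cand_pred):\n    seed_lemma, seed_pos = seed_pred\n    cand_lemma, cand_pos = cand_pred\n    if seed_pos in GRADED:\n        return cand_pos == seed_pos   # POS match is enough for graded adjectives/adverbs\n    return cand_lemma == seed_lemma   # otherwise require the exact same lemma\n\nassert compatible(('high', 'JJR'), ('large', 'JJR'))     # 'higher' ~ 'larger'\nassert not compatible(('beat', 'VBZ'), ('top', 'VBZ'))   # verbs must share the lemma",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence selection",
"sec_num": "3.2"
},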
{
"text": "F\u00fcrstenau and Lapata (2009) use the direct descendants and siblings of the predicate as argument candidates (both SRL arguments and nonarguments). In our labeled data, this find only 17% of the actual labeled comparison arguments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument candidate creation",
"sec_num": "3.3"
},
{
"text": "The challenge is to enlarge the set of argument candidates, while keeping the number of candidates manageable so that alignments can be calculated in reasonable time. Similar to what has been proposed for SRL arguments (Xue and Palmer, 2004) , we use all ancestors of the predicate until the root and their direct descendants, plus all descendants of the predicate itself. We remove prepositions (F\u00fcrstenau and Lapata, 2009) and conjunctions (F\u00fcrstenau and Lapata, 2012) which can never be arguments, and add their direct children to the candidate set. We also impose a distance limit and exclude numbers and punctuation. Applied to our labeled data, this dependencyfiltered method finds 87% of all real arguments.",
"cite_spans": [
{
"start": 219,
"end": 241,
"text": "(Xue and Palmer, 2004)",
"ref_id": "BIBREF15"
},
{
"start": 396,
"end": 424,
"text": "(F\u00fcrstenau and Lapata, 2009)",
"ref_id": "BIBREF2"
},
{
"start": 442,
"end": 470,
"text": "(F\u00fcrstenau and Lapata, 2012)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Argument candidate creation",
"sec_num": "3.3"
},
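{
"text": "A rough, self-contained sketch (our reading of the procedure, not the original implementation) of the dependency-filtered candidate creation; the POS tag sets and the distance limit of 8 tokens are illustrative assumptions, as the paper does not specify them:\n\nPREP_CONJ = {'IN', 'CC'}           # prepositions/conjunctions: dropped, their children added\nEXCLUDED = {'CD', '.', ',', ':'}   # numbers and punctuation: dropped outright\n\ndef children(tokens, node_id):\n    return [t for t in tokens if t['head'] == node_id]\n\ndef descendants(tokens, node_id):\n    out = []\n    for c in children(tokens, node_id):\n        out.append(c)\n        out.extend(descendants(tokens, c['id']))\n    return out\n\ndef argument_candidates(tokens, pred, max_dist=8):\n    # tokens: list of dicts with 'id', 'head' (0 = root) and 'pos'; pred: the predicate token.\n    cands = descendants(tokens, pred['id'])              # all descendants of the predicate\n    node = pred\n    while node['head'] != 0:                             # ancestors up to the root ...\n        node = next(t for t in tokens if t['id'] == node['head'])\n        cands.append(node)\n        cands.extend(children(tokens, node['id']))       # ... and their direct descendants\n    result = {}\n    for c in cands:\n        if c['id'] == pred['id'] or c['pos'] in EXCLUDED:\n            continue\n        if c['pos'] in PREP_CONJ:                        # replace prepositions/conjunctions\n            for gc in children(tokens, c['id']):\n                result[gc['id']] = gc\n        elif abs(c['id'] - pred['id']) <= max_dist:      # crude surface distance limit\n            result[c['id']] = c\n    return list(result.values())",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument candidate creation",
"sec_num": "3.3"
},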
{
"text": "As a second method (path-filtered), we get the paths from the predicate to each argument in the labeled sentence and search for the exact same paths (compared by dependency relations) in the unlabeled sentence. All nodes on the path are extracted as candidates (F\u00fcrstenau and Lapata, 2012) . The method is very precise, but also often fails to find any candidates. On labeled side, we only take the actual labeled arguments of the comparison, as our candidate sets are relatively big and noisy and our interest is solely in finding good alignments for the projection of the real arguments. You can see the resulting candidates for the example in Figure 1 .",
"cite_spans": [
{
"start": 261,
"end": 289,
"text": "(F\u00fcrstenau and Lapata, 2012)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 646,
"end": 654,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Argument candidate creation",
"sec_num": "3.3"
},
{
"text": "The similarity of an alignment between two sentences s and u is the averaged sum of all word alignment similarities, themselves the averaged sum of different word similarity measures:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment scoring",
"sec_num": "3.4"
},
{
"text": "score s (s, u) = 1 |M | |M | i=1 1 |S| j\u2208S sim j (w i , \u03c3(w i ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment scoring",
"sec_num": "3.4"
},
{
"text": "where M is the set of candidates on labeled side, w i \u2208 M one of these candidates, \u03c3(w i ) the candidate on unlabeled side aligned with w i , and S is the set of similarities to calculate. Unaligned w i receive a word similarity of zero. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment scoring",
"sec_num": "3.4"
},
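{
"text": "To make the scoring concrete, here is a small self-contained illustration (ours): it computes the score above for a given alignment and finds the best full alignment by exhaustive search, which is feasible because the candidate sets are small; the toy lemma-identity similarity merely stands in for the measures of Table 1:\n\nfrom itertools import permutations\n\ndef alignment_score(pairs, sims, n_labeled):\n    # pairs: aligned (w_i, sigma(w_i)) tuples; unaligned labeled candidates contribute 0.\n    total = sum(sum(sim(w, s) for sim in sims) / len(sims) for w, s in pairs)\n    return total / n_labeled\n\ndef best_alignment(labeled, unlabeled, sims):\n    # Exhaustive search over injective alignments of all labeled candidates; this is\n    # sufficient when len(labeled) <= len(unlabeled) and similarities are non-negative.\n    best, best_score = None, -1.0\n    for perm in permutations(unlabeled, len(labeled)):\n        pairs = list(zip(labeled, perm))\n        score = alignment_score(pairs, sims, len(labeled))\n        if score > best_score:\n            best, best_score = pairs, score\n    return best, best_score\n\nlemma_sim = lambda a, b: 1.0 if a == b else 0.0   # toy stand-in for the Table 1 measures\nprint(best_alignment(['camera', 'curve', 'SLRs'], ['camera', 'body', 'cameras', 'somewhat'], [lemma_sim]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment scoring",
"sec_num": "3.4"
},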
{
"text": "[Table 1 content (fragments): among the word similarity measures are simneigh (vector-space similarity of the words neighbouring the two candidates, e.g., v(canon), v(i) vs. v(digital), ...), simlev (similarity in the number of \"up\"s (\u2191) and \"down\"s (\u2193) on the dependency path from argument to predicate; the \u2191 and \u2193 parts are calculated separately and averaged), and simpath (average simdep of all words on the dependency path from argument to predicate; the \u2191 and \u2193 parts are calculated separately, similarity for unpaired words is 0). See Table 1 for the full list of measures and the values they take for the example from Figure 1.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment scoring",
"sec_num": "3.4"
},
{
"text": "We compare the syntactic and semantic similarity of the two candidates with a variety of similarity measures that are listed in Table 1 along with values they take for the example from Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 128,
"end": 135,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 185,
"end": 193,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Alignment scoring",
"sec_num": "3.4"
},
{
"text": "We use two combinations of similarity measures: flat similarities only (S = {vs, dep}) which corresponds to the similarity measures used in the original work, and similarities that include context (all, S = {vs, neigh, dep, tok, lev, path}).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment scoring",
"sec_num": "3.4"
},
{
"text": "As our core labeled data set we use comparison sentences from English camera reviews 1 (Kessler and Kuhn, 2014) . We divide the data into five folds and use one fold as seed data and the rest as test data. The full seed data contains 342 sentences with 415 predicates. The test data contains 1365 sentences with 1693 predicates.",
"cite_spans": [
{
"start": 87,
"end": 111,
"text": "(Kessler and Kuhn, 2014)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "As the unlabeled expansion data, we use a set of 280.000 camera review sentences from epinions.com. Note that expansion sentences are never used in testing, we always only test on human-annotated data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "To calculate vector space similarities we use co-occurrence vectors (symmetric window of 2 words, retain 2000 most frequent dimensions) extracted from a large set of reviews with a total of 40 million tokens. This set includes the above expansion corpus, the electronics part of the HUGE corpus (Jindal and Liu, 2008) and camera reviews from amazon.com.",
"cite_spans": [
{
"start": 295,
"end": 317,
"text": "(Jindal and Liu, 2008)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
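{
"text": "A compact sketch (not the authors' code) of how such co-occurrence vectors can be built, together with cosine similarity as one common choice of vector similarity; tokenisation and any weighting of the counts are omitted here:\n\nfrom collections import Counter\n\ndef cooccurrence_vectors(sentences, window=2, dims=2000):\n    # sentences: lists of tokens; symmetric window of 2, 2000 most frequent context words.\n    freq = Counter(w for s in sentences for w in s)\n    index = {w: i for i, (w, _) in enumerate(freq.most_common(dims))}\n    vectors = {}\n    for s in sentences:\n        for i, w in enumerate(s):\n            vec = vectors.setdefault(w, [0] * len(index))\n            for j in range(max(0, i - window), min(len(s), i + window + 1)):\n                if j != i and s[j] in index:\n                    vec[index[s[j]]] += 1\n    return vectors\n\ndef cosine(u, v):\n    dot = sum(a * b for a, b in zip(u, v))\n    norm_u = sum(a * a for a in u) ** 0.5\n    norm_v = sum(b * b for b in v) ** 0.5\n    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},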
{
"text": "1 http://www.ims.uni-stuttgart. de/forschung/ressourcen/korpora/ reviewcomparisons/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "We retrain the MATE Semantic Role Labeling system (Bj\u00f6rkelund et al., 2009) 2 on our data and use a typical pipeline setting with three classification steps: predicate identification, argument identification and argument classification. We distinguish three argument types: two entities and one aspect. We use standard SRL features (Johansson and Nugues, 2007) based on the output of the MATE dependency parser. This setup is equivalent to (Kessler and Kuhn, 2013) .",
"cite_spans": [
{
"start": 50,
"end": 75,
"text": "(Bj\u00f6rkelund et al., 2009)",
"ref_id": "BIBREF1"
},
{
"start": 332,
"end": 360,
"text": "(Johansson and Nugues, 2007)",
"ref_id": "BIBREF10"
},
{
"start": 440,
"end": 464,
"text": "(Kessler and Kuhn, 2013)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System for comparison detection",
"sec_num": "4.2"
},
{
"text": "To evaluate whether the found expansion sentences are useful, we add the k best expansion sentences per seed predicate to the seed data and train on this expanded corpus. We use the test data for evaluation and compare classification performance of training on the expanded seed data with the baseline trained on the seed data only.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.3"
},
{
"text": "We test four versions of the expansion:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.3"
},
{
"text": "PATH-FLAT path-filtered candidate creation and flat similarities (closest to the original work). DEP-FLAT dependency-filtered candidate creation and flat similarities. PATH-CONTEXT path-filtered candidate creation and context similarities. DEP-CONTEXT dependency-filtered candidate creation and context similarities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.3"
},
{
"text": "There are two main questions we investigate:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.3"
},
{
"text": "1. How many seed sentences should be used (varying d)? 2. How many expansion sentences should be used per seed (varying k)? We expect that the training data expansion is helpful in low-resource and high-precision settings (i.e., d and k are small). This corresponds to a scenario where only a limited amount of sentences has been annotated for a new domain or language. We consider this to be a more realistic scenario for our task than the one used in (F\u00fcrstenau and Lapata, 2012) , where a fixed number of training examples per frame is used, as in contrast to SRL we do not expect to know predicates or frames for comparisons in advance. Figure 2 shows some results for comparison argument identification in terms of F 1 score. The different curves represent expanding and training on different percentages d of the seed set, from 10% to 100% (full seed set). Note that the lowest setting uses only 34 seed sentences.",
"cite_spans": [
{
"start": 453,
"end": 481,
"text": "(F\u00fcrstenau and Lapata, 2012)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 641,
"end": 649,
"text": "Figure 2",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4.3"
},
{
"text": "The x-axis shows k, the number of expansion sentences added per seed sentence. The value 0 corresponds to the baseline, i.e., training on the seed sentences only. In line with the results reported for SRL, for most cases as k gets larger, the amount of introduced noise outweighs the benefits of additional training data, so performance drops.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "For PATH-FLAT, DEP-FLAT and PATH-CONTEXT, almost no setting manages to improve over the non-expanded baseline, every added expansion sentence only decreases performance. For DEP-CONTEXT, in some cases, especially for low values for d there is a small improvement. To illustrate the different sentences selected by the systems, consider this example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "(1) a. \"I felt more [comfortable] Sentence 1a is the seed sentence, sentence 1b is the sentence selected by DEP-FLAT, sentence 1c is selected by DEP-CONTEXT. While choosing \"comfortable\" in sentence 1b to be aligned with the labeled aspect seems like a perfect match in isolation, 1c is a much better choice in context. Figure 3 shows learning curves for argument identification for each system with the best setting for k (usually 1, 10 for DEP-CONTEXT). All systems except DEP-CONTEXT are nearly always below the baseline. The best value of k for DEP-CONTEXT in our experiments is 10, which is shown in the graph. The results are very similar for all k \u2265 5, for lower values of k, the results drop below the baseline. The best setting manages to improve over the non-expanded baseline in low resource settings, but the curves get closer to each other when more seed data is added and the effect disappears at the end.",
"cite_spans": [
{
"start": 20,
"end": 33,
"text": "[comfortable]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 320,
"end": 328,
"text": "Figure 3",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "Due to space restrictions we are only able to show argument identification results, but the trends are very similar for predicate identification and argument classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "If we look at the sentences found by the expansion systems, we can identify two main problems with the extracted sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.5"
},
{
"text": "One problem that affects all sentiment-related tasks is subjectivity. Often sentiment words (or in our case comparison words) appear in nonsentiment (non-comparative) contexts, but these contexts are very hard to distinguish from each other. Consider this example: Sentence 2a is the seed sentence, sentence 2b is the best sentence selected by the context-aware system. Though the two phrases \"smaller SD media\" and \"higher quality pics\" are a very good match, the word \"higher\" in sentence 2b does not express a product comparison. Instead, it describes a type of picture. Such uses are relatively frequent and often mistakenly chosen as expansion sentences. Such \"false positives\" mainly affect predicate identification, but errors in this first step are propagated through the pipeline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.5"
},
{
"text": "Another type of error is caused by the nonaligned part of sentences. Sentences are sometimes rather long and contain other predicates besides the expanded predicate. Consider this example (3a seed, 3b context-aware system):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.5"
},
{
"text": "(3) a. \"That said, the larger LCD [screen]aspect is really an improvement.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.5"
},
{
"text": "b. \"The smaller 2-inch [screen]aspect has higher resolution of 118,000 pixels!\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.5"
},
{
"text": "The additional predicate \"higher\" in the expansion sentence is not detected, thereby creating a \"false negative\" example for the predicate identification classifier and the subsequent steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.5"
},
{
"text": "In this paper we investigate whether structural alignment, a semi-supervised method that has been successfully used for projecting SRL annotations to unlabeled sentences, can be adapted to the task of detecting comparisons. We find that some adjustments are necessary in order for the method to be applicable. First, we need to adapt the method of candidate selection to reflect that our arguments are further away from the predicate, while at the same time keeping the number of candidates manageable. Second, we need to adapt the similarity measure for scoring argument alignments to include context-aware measures. When we add the found expansion sentences to our training data, we can slightly improve over a nonexpanded baseline in low-resource settings, i.e., when only a very small amount of training data in the desired domain or language is available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "There are many directions for future work. We have presented one possible context-aware similarity measure, but there are many other possibilities that can be explored. Two main issues are false positive and false negative predicates found by the expansion, the former being introduced by not detecting non-subjective usage of comparative words, the latter through other predicates besides the identified one being present in an expansion sentence. Doing subjectivity analysis to filter out non-comparative usages, and simplifying sentences or pre-selecting only short and simple sentences for expansion could improve results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "http://code.google.com/p/mate-tools/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The work reported in this paper was supported by a Nuance Foundation grant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Unsupervised argument identification for semantic role labeling",
"authors": [
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of ACL '09",
"volume": "",
"issue": "",
"pages": "28--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omri Abend, Roi Reichart, and Ari Rappoport. 2009. Unsupervised argument identification for semantic role labeling. In Proceedings of ACL '09, pages 28- 36.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Multilingual Semantic Role Labeling",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "Bj\u00f6rkelund",
"suffix": ""
},
{
"first": "Love",
"middle": [],
"last": "Hafdell",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Nugues",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of CoNLL '09 Shared Task",
"volume": "",
"issue": "",
"pages": "43--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders Bj\u00f6rkelund, Love Hafdell, and Pierre Nugues. 2009. Multilingual Semantic Role Labeling. In Pro- ceedings of CoNLL '09 Shared Task, pages 43-48.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Semisupervised semantic role labeling",
"authors": [
{
"first": "Hagen",
"middle": [],
"last": "F\u00fcrstenau",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of EACL '09",
"volume": "",
"issue": "",
"pages": "220--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hagen F\u00fcrstenau and Mirella Lapata. 2009. Semi- supervised semantic role labeling. In Proceedings of EACL '09, pages 220-228.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Semisupervised semantic role labeling via structural alignment",
"authors": [
{
"first": "Hagen",
"middle": [],
"last": "F\u00fcrstenau",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2012,
"venue": "Computational Linguistics",
"volume": "38",
"issue": "1",
"pages": "135--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hagen F\u00fcrstenau and Mirella Lapata. 2012. Semi- supervised semantic role labeling via structural alignment. Computational Linguistics, 38(1):135- 171.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Mining opinions in comparative sentences",
"authors": [
{
"first": "Murthy",
"middle": [],
"last": "Ganapathibhotla",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of COLING '08",
"volume": "",
"issue": "",
"pages": "241--248",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Murthy Ganapathibhotla and Bing Liu. 2008. Mining opinions in comparative sentences. In Proceedings of COLING '08, pages 241-248.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Automatic labeling of semantic roles",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2002,
"venue": "Computational Linguistics",
"volume": "238",
"issue": "3",
"pages": "245--288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguis- tics, 238(3):245-288.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Mining Chinese comparative sentences by semantic role labeling",
"authors": [
{
"first": "Feng",
"middle": [],
"last": "Hou",
"suffix": ""
},
{
"first": "Guo-Hui",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ICMLC '08",
"volume": "",
"issue": "",
"pages": "2563--2568",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Feng Hou and Guo-hui Li. 2008. Mining Chinese comparative sentences by semantic role labeling. In Proceedings of ICMLC '08, pages 2563-2568.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Identifying comparative sentences in text documents",
"authors": [
{
"first": "Nitin",
"middle": [],
"last": "Jindal",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of SIGIR '06",
"volume": "",
"issue": "",
"pages": "244--251",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitin Jindal and Bing Liu. 2006a. Identifying compar- ative sentences in text documents. In Proceedings of SIGIR '06, pages 244-251.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Mining comparative sentences and relations",
"authors": [
{
"first": "Nitin",
"middle": [],
"last": "Jindal",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of AAAI '06",
"volume": "",
"issue": "",
"pages": "1331--1336",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitin Jindal and Bing Liu. 2006b. Mining comparative sentences and relations. In Proceedings of AAAI '06, pages 1331-1336.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Opinion spam and analysis",
"authors": [
{
"first": "Nitin",
"middle": [],
"last": "Jindal",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of WSDM '08",
"volume": "",
"issue": "",
"pages": "219--230",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitin Jindal and Bing Liu. 2008. Opinion spam and analysis. In Proceedings of WSDM '08, pages 219- 230, New York, NY, USA. ACM.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Syntactic representations considered for frame-semantic analysis",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Johansson",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Nugues",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of TLT Workshop '07",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Johansson and Pierre Nugues. 2007. Syntactic representations considered for frame-semantic anal- ysis. In Proceedings of TLT Workshop '07, page 12.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Detection of product comparisons -How far does an out-of-thebox semantic role labeling system take you?",
"authors": [
{
"first": "Wiltrud",
"middle": [],
"last": "Kessler",
"suffix": ""
},
{
"first": "Jonas",
"middle": [],
"last": "Kuhn",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of EMNLP '13",
"volume": "",
"issue": "",
"pages": "1892--1897",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wiltrud Kessler and Jonas Kuhn. 2013. Detection of product comparisons -How far does an out-of-the- box semantic role labeling system take you? In Pro- ceedings of EMNLP '13, pages 1892-1897.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A corpus of comparisons in product reviews",
"authors": [
{
"first": "Wiltrud",
"middle": [],
"last": "Kessler",
"suffix": ""
},
{
"first": "Jonas",
"middle": [],
"last": "Kuhn",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of LREC '14",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wiltrud Kessler and Jonas Kuhn. 2014. A corpus of comparisons in product reviews. In Proceedings of LREC '14.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Exploiting a verb lexicon in automatic semantic role labelling",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Swier",
"suffix": ""
},
{
"first": "Suzanne",
"middle": [],
"last": "Stevenson",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of HLT/EMNLP '05",
"volume": "",
"issue": "",
"pages": "883--890",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Swier and Suzanne Stevenson. 2005. Exploit- ing a verb lexicon in automatic semantic role la- belling. In Proceedings of HLT/EMNLP '05, pages 883-890.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Mining comparative opinions from customer reviews for competitive intelligence",
"authors": [
{
"first": "Kaiquan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"Shaoyi"
],
"last": "Liao",
"suffix": ""
},
{
"first": "Jiexun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yuxia",
"middle": [],
"last": "Song",
"suffix": ""
}
],
"year": 2011,
"venue": "Decis. Support Syst",
"volume": "50",
"issue": "4",
"pages": "743--754",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaiquan Xu, Stephen Shaoyi Liao, Jiexun Li, and Yuxia Song. 2011. Mining comparative opinions from customer reviews for competitive intelligence. Decis. Support Syst., 50(4):743-754, March.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Calibrating features for semantic role labeling",
"authors": [
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of EMNLP '04",
"volume": "",
"issue": "",
"pages": "88--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nianwen Xue and Martha Palmer. 2004. Calibrating features for semantic role labeling. In Proceedings of EMNLP '04, pages 88-94.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"uris": null,
"type_str": "figure",
"text": "score for best alignment (solid lines): scores(s, u) = 1/3 \u2022 (0.68 + 0.82 + 0.73) = 0.74 Figure 1: Steps of structural alignment for an example seed and an example unlabeled sentence.",
"num": null
},
"FIGREF3": {
"uris": null,
"type_str": "figure",
"text": "F 1 score for argument identification when using different percentages d of the corpus as seed data (top to bottom: 100%, 50%, 25%, 10%) and expanding with different k numbers of candidates.",
"num": null
},
"FIGREF5": {
"uris": null,
"type_str": "figure",
"text": "Learning curves (F 1 score for argument identification) with varying amounts of seed data d.",
"num": null
},
"FIGREF6": {
"uris": null,
"type_str": "figure",
"text": "(2) a. \"This is largely a function of the much smaller [SD media]entity.\"b. \"Plan for 8 higher quality [pics]entity or about 24 medium quality pics with internal memory .\"",
"num": null
},
"TABREF1": {
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Similarity measures for word alignment similarity. Columns 2-4 give the compared values and similarities for the example from"
}
}
}
}