|
{ |
|
"paper_id": "K16-1007", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:10:57.586757Z" |
|
}, |
|
"title": "Learning to Jointly Predict Ellipsis and Comparison Structures", |
|
"authors": [ |
|
{ |
|
"first": "Omid", |
|
"middle": [], |
|
"last": "Bakhshandeh", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Rochester", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Wellwood", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Northwestern University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Allen", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Rochester", |
|
"location": {} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Domain-independent meaning representation of text has received a renewed interest in the NLP community. Comparison plays a crucial role in shaping objective and subjective opinion and measurement in natural language, and is often expressed in complex constructions including ellipsis. In this paper, we introduce a novel framework for jointly capturing the semantic structure of comparison and ellipsis constructions. Our framework models ellipsis and comparison as interconnected predicate-argument structures, which enables automatic ellipsis resolution. We show that a structured prediction model trained on our dataset of 2,800 gold annotated review sentences yields promising results. Together with this paper we release the dataset and an annotation tool which enables two-stage expert annotation on top of tree structures.", |
|
"pdf_parse": { |
|
"paper_id": "K16-1007", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Domain-independent meaning representation of text has received a renewed interest in the NLP community. Comparison plays a crucial role in shaping objective and subjective opinion and measurement in natural language, and is often expressed in complex constructions including ellipsis. In this paper, we introduce a novel framework for jointly capturing the semantic structure of comparison and ellipsis constructions. Our framework models ellipsis and comparison as interconnected predicate-argument structures, which enables automatic ellipsis resolution. We show that a structured prediction model trained on our dataset of 2,800 gold annotated review sentences yields promising results. Together with this paper we release the dataset and an annotation tool which enables two-stage expert annotation on top of tree structures.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Representing the underlying meaning of text has been a long-standing topic of interest in computational linguistics. Recently there has been a renewed interest in representation of meaning for various tasks such as semantic parsing, where the task is to map a natural language sentence into its corresponding formal meaning representation (Zelle and Mooney, 1996; Berant and Liang, 2014) . Open-domain and broad-coverage semantic representation of text (Banarescu et al., 2013; Bos, 2008; Allen et al., 2008) is crucial for many language understanding tasks such as reading comprehension tests and question answering.", |
|
"cite_spans": [ |
|
{ |
|
"start": 339, |
|
"end": 363, |
|
"text": "(Zelle and Mooney, 1996;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 364, |
|
"end": 387, |
|
"text": "Berant and Liang, 2014)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 453, |
|
"end": 477, |
|
"text": "(Banarescu et al., 2013;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 478, |
|
"end": 488, |
|
"text": "Bos, 2008;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 489, |
|
"end": 508, |
|
"text": "Allen et al., 2008)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "With the rise of continuous-space models there is even more interest in capturing deeper generic semantics of text as opposed to surface word representations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "One of the most common ways for expressing evaluative sentiment towards different entities is using comparison. A simple natural language example of comparison is Their pizza is the best. Capturing the underlying meaning of comparison structures, as opposed to their surface wording, is required for accurate evaluation of qualities and quantities. For instance, given a more complex comparison example, The pizza was great, but it was not as awesome as the sandwich, the state-ofthe-art sentiment analysis system (Manning et al., 2014) assigns an overall 'neutral' sentiment value, which clearly lacks deeper understanding of the comparison happening in the sentence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 514, |
|
"end": 536, |
|
"text": "(Manning et al., 2014)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Consider the generic meaning representation depicted in in Figure 1 according to frame semantic parsing 1 (Das et al., 2014) for the following sentence:", |
|
"cite_spans": [ |
|
{ |
|
"start": 106, |
|
"end": 124, |
|
"text": "(Das et al., 2014)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 59, |
|
"end": 67, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(1) My Mazda drove faster than his Hyundai.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "It is evident that this meaning representation does not fully capture how the semantics of the adjective fast relates to the driving event, and what it actually means for a car to drive faster than another car. More importantly, there is an ellipsis in this sentence, the resolution of which results in complete understood reading of My Mazda drove faster than his Hyundai drove fast , which is in no way captured in Figure 1 tures which can capture the mentioned phenomena can enable the development of computational semantic models which are suitable for various reasoning tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 417, |
|
"end": 425, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper we introduce a joint theoretical model for comprehensive semantic representation of the structure of comparison and ellipsis in natural language. We jointly model comparison and ellipsis as inter-connected predicateargument structures, which enables automatic ellipsis resolution. The main contributions of this paper can be summarized as follows: (1) introducing a novel framework for jointly representing the semantics of comparison and ellipsis on top of syntactic trees, (2) releasing a dataset of 2,800 expert annotated user review comparison instances 3 , which significantly increases the size of the available resources on comparison structures in the community, (3) presenting a new structured prediction model for automatic extraction of semantic structures of comparison text together with ellipsis resolution, (4) releasing an interactive tool for tree-based human annotation of corpora, which can be helpful for many other annotation tasks in NLP.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To our knowledge, this paper presents the first comprehensive computational framework of its kind for ellipsis and comparison constructions. Our semantic model can be incorporated as a part of any broad-coverage semantic parser (Banarescu et al., 2013; Allen et al., 2008; Bos, 2008) for augmenting their meaning representation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 228, |
|
"end": 252, |
|
"text": "(Banarescu et al., 2013;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 253, |
|
"end": 272, |
|
"text": "Allen et al., 2008;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 273, |
|
"end": 283, |
|
"text": "Bos, 2008)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Broadly, elliptical constructions involve the omission of one or more phrases from a clause (such as 'drove fast' phrase at the end of example (1)) whose content can still be fully recovered from the unelided words of the sentence (Kennedy, 2003; Merchant, 2013) . Resolving ellipsis is crucial for deep language understanding. Although ellipsis has been studied in great depth in linguistics, there only have been a few computational studies of el-lipsis, most of which have focused on Verb Phrase Ellipsis (VPE) (Nielsen, 2004; Schiehlen, 2002; Hardt, 1997 ) such as Larry is not telling the truth, neither is Jim \u2206. where \u2206 is a verb phrase ellipsis site, which can be resolved to 'telling the truth'.", |
|
"cite_spans": [ |
|
{ |
|
"start": 231, |
|
"end": 246, |
|
"text": "(Kennedy, 2003;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 247, |
|
"end": 262, |
|
"text": "Merchant, 2013)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 514, |
|
"end": 529, |
|
"text": "(Nielsen, 2004;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 530, |
|
"end": 546, |
|
"text": "Schiehlen, 2002;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 547, |
|
"end": 558, |
|
"text": "Hardt, 1997", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background and Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In 2010, a SemEval task was organized with the goals of (1) automatically detecting VPE in text, and (2) resolving the antecedent of each VPE (Bos and Spenader, 2011) . For this task, they manually annotated a portion of OntoNotes corpus, consisting of Wall Street Journal (WSJ) articles. Throughout all the 25 sections of WSJ, they found 487 instances of VPE (ranging from predicative ellipsis, deletion, and comparative constructions, to pseudo-gapping) in about 53,600 sentences. Among 487 ellipsis items, there were 96 comparative constructions. They show that simply searching the parse trees for empty VPs achieves a high precision (0.95) but low recall (0.58). Our work presents the first attempt on comparison ellipsis resolution of various types, within a semantically rich framework of comparisons.", |
|
"cite_spans": [ |
|
{ |
|
"start": 142, |
|
"end": 166, |
|
"text": "(Bos and Spenader, 2011)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background and Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The syntax and semantics of comparison structures in natural language have been the subject of extensive systematic research in linguistics for a long time (Bresnan, 1973; Cresswell, 1976; Von Stechow, 1984) . Measurement in language is mainly expressed by using comparative morphemes such as more, less, -er, as, too, enough,est, etc 4 . The main component of the sentence carrying out the measurement can have either of adjective (JJ), adverb (RB), noun (NN), or verb (VB) parts of speech. The earliest efforts on the computational modeling of comparatives have been in the context of sentiment analysis, ranging from works on identifying sentences containing comparisons (Jindal and Liu, 2006b) to identifying the components of the comparisons in the form of triplets or other templatic patterns (Jindal and Liu, 2006a; Xu et al., 2011; Kessler and Kuhn, 2014) . These works provide a basis for computational analysis of comparatives, however, they lack depth and broader coverage as they are limited to only a few comparison patterns.", |
|
"cite_spans": [ |
|
{ |
|
"start": 156, |
|
"end": 171, |
|
"text": "(Bresnan, 1973;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 172, |
|
"end": 188, |
|
"text": "Cresswell, 1976;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 189, |
|
"end": 207, |
|
"text": "Von Stechow, 1984)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 674, |
|
"end": 697, |
|
"text": "(Jindal and Liu, 2006b)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 799, |
|
"end": 822, |
|
"text": "(Jindal and Liu, 2006a;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 823, |
|
"end": 839, |
|
"text": "Xu et al., 2011;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 840, |
|
"end": 863, |
|
"text": "Kessler and Kuhn, 2014)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background and Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The most recent work on the computational semantics of comparison (Bakhshandeh and Allen, 2015) sets the stage for a deeper semantic representation of comparisons. Bakshandeh and Allen introduce the first computational semantic frame-work for representing the meaning of comparatives in natural language. This framework models comparisons as predicate-argument pairs interconnected via semantic role links. Our framework differs in the following crucial aspects: \u2212 Joint Ellipsis and Comparison Modeling: Effective modeling and reasoning on comparison structures requires addressing ellipsis as well. While Bakhshandeh and Allen only model comparisons, we provide a novel semantic framework for comprehensive annotation of ellipsis structures within comparison structures (details in Section 3.2). \u2212 Tree-based Structure Modeling: Bakhshandeh and Allen use span-based predicate-argument treatment, which is often prone to errors and lower inter-annotator agreement. We base our framework on top of constituency syntactic parse trees, which leads to more accurate 5 capture of semantic structures. \u2212 Reviews Dataset: While Bakhshandeh and Allen use newswire text, we shift our focus to the actual user reviews, which contain more comparison structures (Section 4.2). Furthermore, while their dataset included 531 sentences, we collect gold annotations for 2,800 sentences, which significantly increases the size of the available data for the community.", |
|
"cite_spans": [ |
|
{ |
|
"start": 66, |
|
"end": 95, |
|
"text": "(Bakhshandeh and Allen, 2015)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background and Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In this Section we introduce a novel semantic framework of comparison structures which incorporates ellipsis. Our framework extends and improves the state-of-the-art semantic framework for comparison structures in various ways (outlined in Section 2). We follow the model of interconnected predicate-argument structures. In this model the predicates are either comparison or ellipsis operators, and each predicate takes a set of arguments called its semantic frame. For instance, in [Sam] is the tallest [student] [in the gym], the morpheme -est expresses a comparison operator and the brackets delimit its various arguments. In this Section we provide details about our semantic framework.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Comprehensive Semantic Framework for Comparison", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Comparison structures are modeled as sets of inter-connected predicate-arguments. We base our comparison framework on Bakhshandeh and Allen (Bakhshandeh and Allen, 2015) , however, we extend and improve on the set of predicate types and arguments to capture more diverse structures which results in a different semantic framework.", |
|
"cite_spans": [ |
|
{ |
|
"start": 140, |
|
"end": 169, |
|
"text": "(Bakhshandeh and Allen, 2015)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison Structures", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We consider two main categories of comparison predicates, each of which can grade any of the four parts of speech including adjectives, adverbs, nouns, and verbs. 1. Ordering: Indicates how two or more entities are ordered along a scale. The subtypes of this predicate are the following: -Comparatives with '>', '<' indicate that one degree is greater or lesser than another; expressed by the morphemes more/-er and less.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Predicates", |
|
"sec_num": "3.1.1" |
|
}, |
|
{ |
|
"text": "(2) The steak is tastier than the potatoes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Predicates", |
|
"sec_num": "3.1.1" |
|
}, |
|
{ |
|
"text": "(3) Tom ate more soup.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Predicates", |
|
"sec_num": "3.1.1" |
|
}, |
|
{ |
|
"text": "-Equatives involving '\u2265' indicate that one degree meets or exceeds another; expressed by as in constructions such as as tall or as much.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Predicates", |
|
"sec_num": "3.1.1" |
|
}, |
|
{ |
|
"text": "(4) The Mazda drives as fast as the Nissan.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Predicates", |
|
"sec_num": "3.1.1" |
|
}, |
|
{ |
|
"text": "-Superlatives indicate an entity or event has the 'highest' or 'lowest' degree on a scale; expressed by most/-est and least.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Predicates", |
|
"sec_num": "3.1.1" |
|
}, |
|
{ |
|
"text": "(5) That chef made the best soup.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Predicates", |
|
"sec_num": "3.1.1" |
|
}, |
|
{ |
|
"text": "2. Extreme: Indicates having too much or enough of a quality or quantity. The subtypes of this predicate are the following: -Excessive indicate that an entity or event is 'too high' on a scale; expressed by too.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Predicates", |
|
"sec_num": "3.1.1" |
|
}, |
|
{ |
|
"text": "-Assetive indicate that an entity or event has 'enough' of a degree; expressed by enough.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Predicates", |
|
"sec_num": "3.1.1" |
|
}, |
|
{ |
|
"text": "Each predicate takes a set of arguments that we refer to as the predicate's 'semantic frame'. Following are the arguments included in our framework: -Scale (-/neutral/+) is the scale for the comparison, such as size, beauty, temperature. We assign the generic sentiment values positive (+), neutral, and negative (-) to the underlying scales.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Arguments", |
|
"sec_num": "3.1.2" |
|
}, |
|
{ |
|
"text": "-Figure (", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Arguments", |
|
"sec_num": "3.1.2" |
|
}, |
|
{ |
|
"text": "-Standard (Std) is the reason a degree is 'too much' (excessive predicates) or 'enough' (assetive predicates). An individual j may be 'too tall to reach the top shelf ' but 'tall enough to get on this ride'.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Arguments", |
|
"sec_num": "3.1.2" |
|
}, |
|
{ |
|
"text": "-Differential (Diff) is an explicit phrase indicating the 'size' of a difference between degrees. For instance, '2 inches taller' or '6 degrees warmer'.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Arguments", |
|
"sec_num": "3.1.2" |
|
}, |
|
{ |
|
"text": "-Domain (Dom) is an explicit expression of the type of domain in which the comparison takes place (superlatives). An individual m may be 'the tallest girl' but not 'the tallest student'.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Arguments", |
|
"sec_num": "3.1.2" |
|
}, |
|
{ |
|
"text": "-Domain Specifier (D-Spec) is the specification of the domain argument, further narrowing the scope of the domain. An individual m may be 'the tallest girl in the class' but not 'the tallest girl in the country'. The Case of Copulas: A copula is a form of the verb to be that links the subject of a sentence with a predicate, such as was in the sentence She was a doctor. Comparison structures are often formed on the basis of copular constructions, for example (6a). Compare this with (6b), and their corresponding comparison structures. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Arguments", |
|
"sec_num": "3.1.2" |
|
}, |
|
{ |
|
"text": "Perhaps the most common type of comparison structure is the comparative construction, with (13) as an example, where \u2206 marks an ellipsis site. Roughly, (13) is interpreted as a greaterthan relation between 'how appetizingly the steak sizzles' and 'how appetizingly the hamburger sizzles', which might be formalized as in 14with e 1 and e 2 representing the two sizzling events. 7The steak sizzled more appetizingly than the hamburger \u2206.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ellipsis Structures", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "(8) appetizingness(e1) > appetizingness(e2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ellipsis Structures", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "On the surface, the sentence in (13) does not relate sizzle or appetizingly to the hamburger; these must be filled in for \u2206 by a process called ellipsis resolution-finding the antecedent of an ellipsis. Speakers of English are readily able to infer from the surface material in (13) that the dependent clause is interpreted as in 9, where the resolved ellipsis is written in subscript.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ellipsis Structures", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "(9) than the hamburgersizzled appetizingly It is clear that resolving ellipsis in comparison structures is crucial for language understanding and failure to do so would deliver an incorrect meaning representation. Numerous subtypes of elliptical constructions are distinguished in linguistics (Kennedy, 2003; Merchant, 2013; Yoshida et al., 2016) . In our framework we mainly include six types that can be detected in comparison structures: 'VP-deletion', 'Stripping' 6 , 'Pseudogapping', 'Gapping', 'Sluicing', and 'Subdeletion'. Ellipsis more often occurs in comparative and equative constructions (applicable to any of the four parts of speech) as follows.", |
|
"cite_spans": [ |
|
{ |
|
"start": 293, |
|
"end": 308, |
|
"text": "(Kennedy, 2003;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 309, |
|
"end": 324, |
|
"text": "Merchant, 2013;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 325, |
|
"end": 346, |
|
"text": "Yoshida et al., 2016)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ellipsis Structures", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 Comparatives: Ellipsis takes place in the dependent clause headed by than. We indicate the three ellipsis possibilities for these clauses resuming (10), a nominal comparative. The elided materials are written in subscript.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ellipsis Structures", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "(10) Mary ate more rice ...", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ellipsis Structures", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "-VP-deletion (aka 'Comparative Deletion'): ... than John did eat rice. -Stripping (aka 'Phrasal Comparative'):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ellipsis Structures", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "... than John ate rice. -Gapping:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ellipsis Structures", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "... than John, ate how-much soup. -Pseudogapping:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ellipsis Structures", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "... than John did eat soup. -Sluicing:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ellipsis Structures", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "... than someone, but I don't remember than who ate how-much rice.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ellipsis Structures", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "-Subdeletion: ... than John ate how-much soup.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ellipsis Structures", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 Equatives: Ellipsis takes place in the dependent clause headed by as. We indicate the possibilities for these clauses resuming (11), a nominal equative.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ellipsis Structures", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "(11) Mary ate as much rice ...", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ellipsis Structures", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "-VP-deletion: ... as John did eat how-much rice.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ellipsis Structures", |
|
"sec_num": "3.2" |
|
}, |
|
|
{ |
|
"text": "Mary is a lot more intelligent than Larry. Now that we have the ellipsis predicate types, we want to empirically model ellipsis constructions as predicate-argument structures with reference to an antecedent, where each ellipsis predicate is associated with its corresponding comparative predicate. The question is how to represent the ellipsis construction in a sentence. Consider the example of VP-deletion in the following adverbial comparative:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparative>", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(12) The steak was cooked more carefully than the burger \u2206.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparative>", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where \u2206 should be resolved to was cooked howcarefully. How is called the null operator, which serves as the placeholder for the measurement of a degree. In order to represent the resolution of the elided material such as \u2206, we first annotate the predicate of an ellipsis construction as an 'attachment' site in the syntactic tree, right next to the node that the elided material should follow. Hence, in (12), the token the burger will be annotated as the ellipsis predicate, which signifies the start of an ellipsis construction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparative>", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Defining the arguments for ellipsis predicates can be complicated. Here the goal is to thoroughly construct the antecedent of the elided material by annotating the existing words of the context sentence. In order to address this, we define the following three argument types for ellipsis:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparative>", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "-Reference is the constituency node which is the base of an antecedent.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparative>", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "-EXclude (Ex) is the constituency node which should be excluded from the Reference.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparative>", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "-How-much (?) is the constituency node which should be replaced by a null operator such as how or how-much; this is always the argument matching more/-er or as (much) in the context sentence. Following the above annotation schema the ellipsis site in (12) will be annotated as shown in red in Figure 2 . This shows how to do automatic ellipsis resolution given our representation: one should start after the node 'the burger', and perform the following: [was cooked more How ? carefully than the burger] Ref erence \u2212 [than the burger] EXclude = was cooked how carefully. Another important thing to note in Figure 2 is our treatment of the comparison structure (in green) jointly with ellipsis: The argument F (Figure) of the comparison predicate more is cooked. The G argument (Ground), is the second elided 'cooked' event, which should come from the ellipsis construction. We thus annotate the explicit node cooked as the Ground-Ellipsis (G/E) which also links the comparison construction to the ellipsis predicate.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 293, |
|
"end": 301, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF10" |
|
}, |
|
{ |
|
"start": 606, |
|
"end": 614, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF10" |
|
}, |
|
{ |
|
"start": 709, |
|
"end": 717, |
|
"text": "(Figure)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparative>", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The sentences used for annotation play a significant role in the diversity and comprehensiveness of the comparison structures represented in our dataset. Earlier work (Bakhshandeh and Allen, 2015) experimented with annotating semantic structures on OntoNotes dataset. We shift our focus to actual product and restaurant reviews, which inherently include many natural comparison instances. For this purpose we mainly use Google English Web Treebank 8 . This corpus contains more than 250,000 words in about 10,000 8 https://catalog.ldc.upenn.edu/ LDC2012T13 sentences of English weblogs, newsgroups, email, reviews (product, restaurant, etc.) and questionanswers, annotated with gold syntactic trees. This corpus is suitable for our task since it provides a good coverage of web domain text, mainly reviews.", |
|
"cite_spans": [ |
|
{ |
|
"start": 167, |
|
"end": 196, |
|
"text": "(Bakhshandeh and Allen, 2015)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison Instance Sampling", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In order to augment the volume of review content, we also use the Movie Reviews dataset (Pang and Lee, 2005) . This dataset consists of 11,855 sentences extracted from movie reviews. Given that these Movie reviews do not come with the syntactic trees, we used the Berkeley parser (Petrov et al., 2006) , which outperformed the other off-the-shelf parsers on comparison syntactic structure. Of course it is not efficient to include any arbitrary sentence of a corpus for manual annotation. We employ various linguistic filters to filter the sentences which potentially contain comparison. The details of this process can be found in the supplementary material.", |
|
"cite_spans": [ |
|
{ |
|
"start": 88, |
|
"end": 108, |
|
"text": "(Pang and Lee, 2005)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 280, |
|
"end": 301, |
|
"text": "(Petrov et al., 2006)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison Instance Sampling", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We trained six linguists to do the semantic annotation for comparison and ellipsis structures for the sampled comparison instances according to the framework presented in Section 3. The annotations were done via our interactive two-stage treebased annotation tool. In this tool, each annotator can be assigned with a set of tree-based annotation assignments, where pairing annotators to do the same task for inter-annotator analysis is also feasible. For this task, the annotations were done on top of constituency parse trees, and the annotators were instructed to choose the top-most constituency node when choosing the predicate or arguments. 9 Annotating on gold-standard syntactic trees helps with resolving ambiguous instances which have multiple interpretations. Furthermore, it gives annotators syntactic signals for choosing the types of predicates (e.g., adverbial vs adjectival comparatives), all of which increase the accuracy of our annotation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tree-based Annotation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Our annotation tool sets up the data collection as a two-stage expert annotation process: (1) for each sentence, one expert annotates and submits the annotation, (2) another expert reviews the submission and either returns the submission with feedback or marks it as a gold. This recursive process ensures higher annotation quality. We iterate over the sentences until getting 100% interannotator agreement. On average, annotating every sentence takes about one minute and revising controversial sentences (12% of the time) takes about 4 minutes of expert annotation time. This process yields a total of 2,800 annotated sentences with 100% agreement. Figure 3 visualizes the distribution of various predicate types from the various resources. In order, these resources each include 11,855, 3,813, 3,488, 4,900, and 2,391 sentences. As this Figure depicts, reviews are indeed the richest resource for comparisons, with more comparison instances than any other resource of even a bigger size. There are a total of 5,564 comparison arguments in our dataset, with the distribution summarized in Table 2. The total number of ellipsis predicates is 240, with 197 Stripping, 31 VP-deletion and 12 Pseudo-gapping.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 651, |
|
"end": 659, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF11" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tree-based Annotation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In this Section we describe our methodology for joint prediction of comparison and ellipsis structure for a given sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Predicting Semantic Structures", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We model the problem as a joint predicateargument prediction of comparison and ellipsis structures. It is important to note that our predicate-argument semantic structure itself looks similar to a dependency parse tree, however, as explained earlier, we base this representation on top of constituency parse trees. For each training sentence, we denote the underlying constituency tree as T . The set of all constituency nodes in T is V T . Each v \u2208 V T can be tagged as a comparison predicate c \u2208 C = {Comp, Sup, Eq, Exc, Ast} 10 , a comparison argument a c \u2208 A C = allcomparison-arguments, an ellipsis predicate e = 'Ellipsis', an ellipsis argument a e \u2208 A E = {Reference, Ex, '?'}, or NONE.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Modeling", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "In Equation 1, we define a globally normalized model for the probability distribution of comparison labels over all v \u2208 V T if CompF ilter(T ) = True. We define CompF ilter to filter the following:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Modeling", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "-Any sentence containing a word with POS tag equal to JJR, RBR, JJS, or RBS.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Modeling", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "-Any sentence containing a comparison morpheme such as more, most, less, enough, too. The next step is to define the probability distribution in Equation 2 for ellipsis labels, conditioning on the comparison label. This is motivated by the fact that the Ellipsis predicate is dependent on its corresponding comparison predicate. Given the comparison and ellipsis predicate labels, for each comparison and ellipsis argument type we define a binomial probability distribution as defined in Equations 3 and 4.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Modeling", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "pC (c|v, T, \u03b8C ) \u221d exp(f C (c, T ) T \u03b8C ) (1) pE(e|c, v, T, \u03b8E) \u221d exp(f E (e, c, T ) T \u03b8E) (2) pA c (ac|c, e, v, T, \u03b8a c ) \u221d exp(f A C (c, e, T ) T \u03b8a c ) (3)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Modeling", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "pA e (ae|c, e, v, T, \u03b8a e ) \u221d exp(f A E (e, c, T ) T \u03b8a e ) (4)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Modeling", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "In each of the above equations, f is the corresponding feature function. For predicates the main features are lexical features, bigram features, node's constituency position, node's minimum distance from leaves, and node's parent constituency label. For the arguments, we use the same feature-set as for the predicates, but also including the leftmost verb (for the case of copulas), the constituency path between argument and the predicate, and the predicate type. \u03b8 C , \u03b8 E , \u03b8 ac and \u03b8 ae are the parameters of the log-linear model. We calculate these parameters using Stochastic Gradient Descent algorithm.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Modeling", |
|
"sec_num": "5.1" |
|
}, |
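As an illustration of the log-linear formulation in Equations 1-4 and the SGD training mentioned above, here is a minimal sketch (not the authors' implementation); the label set follows the paper, but the feature names and toy training data are invented:

```python
import math

LABELS = ["Comp", "Sup", "Eq", "Exc", "Ast", "NONE"]

def score(features, theta, label):
    # f(c, T)^T theta: sum of weights for the (feature, label) pairs that fire
    return sum(theta.get((feat, label), 0.0) for feat in features)

def label_probs(features, theta):
    # log-linear distribution: p(c) proportional to exp(f^T theta)
    scores = {c: score(features, theta, c) for c in LABELS}
    m = max(scores.values())
    exps = {c: math.exp(s - m) for c, s in scores.items()}  # stable softmax
    z = sum(exps.values())
    return {c: e / z for c, e in exps.items()}

def sgd_step(features, gold, theta, lr=0.1):
    # log-likelihood gradient: observed features minus expected features
    probs = label_probs(features, theta)
    for c in LABELS:
        target = 1.0 if c == gold else 0.0
        for feat in features:
            key = (feat, c)
            theta[key] = theta.get(key, 0.0) + lr * (target - probs[c])

# toy training data: (node features, gold comparison label)
data = [({"pos=JJR", "word=taller"}, "Comp"),
        ({"pos=JJS", "word=tallest"}, "Sup"),
        ({"pos=NN", "word=steak"}, "NONE")]
theta = {}
for _ in range(200):
    for feats, gold in data:
        sgd_step(feats, gold, theta)

probs = label_probs({"pos=JJR", "word=taller"}, theta)
pred = max(probs, key=probs.get)
```

The same template covers Equations 2-4 by extending the feature function with the conditioning predicate labels.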
|
{ |
|
"text": "For inference we model the problem as a structured prediction task. Given the syntactic tree of a given sentence, for each node we first select the predicate type with the highest p C . Then for each selected comparison predicate, we find the corresponding ellipsis predicate that has the highest p E probability. Define tc, te \u2208 R, where R is the set of all tuples of corresponding comparison and ellipsis predicates, tc is the index of the comparison predicate and te is the index of the ellipsis predicate.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Inference of Ellipsis and Comparison", |
|
"sec_num": "5.2" |
|
}, |
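The two-step selection just described can be sketched as follows; `p_comp` and `p_ell` stand in for the learned distributions p_C and p_E, and the probabilities below are invented for illustration:

```python
def select_predicates(nodes, p_comp, p_ell):
    """For each node, pick the comparison label with highest p_C; then, for
    each predicted comparison predicate, pick the node with highest p_E as
    its corresponding ellipsis predicate."""
    comp = {}
    for v in nodes:
        label = max(p_comp[v], key=p_comp[v].get)
        if label != "NONE":
            comp[v] = label
    pairs = []
    for tc in comp:
        # best candidate ellipsis node for this comparison predicate
        te = max(nodes, key=lambda v: p_ell[(tc, v)])
        pairs.append((tc, te))
    return comp, pairs

# toy scores over three constituency nodes
nodes = ["n1", "n2", "n3"]
p_comp = {"n1": {"Comp": 0.7, "NONE": 0.3},
          "n2": {"Comp": 0.1, "NONE": 0.9},
          "n3": {"Sup": 0.2, "NONE": 0.8}}
p_ell = {("n1", "n1"): 0.05, ("n1", "n2"): 0.8, ("n1", "n3"): 0.15}
comp, pairs = select_predicates(nodes, p_comp, p_ell)
```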
|
{ |
|
"text": "We tackle the problem of argument assignment by using Integer Linear Programming, where one can pose domain-specific knowledge as constraints. We define a binary variable b ij and b ik where i is the a node in tree, j is a comparison argument label and k is a ellipsis argument label. For each tc, te , we maximize the linear Equation 5, subject to a few linguistically-motivated constraints.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Inference of Ellipsis and Comparison", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "max b ij ,b jk \u2208{0,1} i\u2208V T ,j\u2208A C ,k\u2208A E bijpA c (tc, te, i, j)+ b ik pA e (tc, te, i, k) (5)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Inference of Ellipsis and Comparison", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "ILP Constraints: Any specific comparison label calls for a unique set of constraints in the ILP formulation, which ensures the validity of predictions. For instance, the Superlative predicate type never takes any Ground arguments, or the argument Standard is only applicable to the excessive predicate type. We implement the semantic frame (as listed in Table 1 ) of each predicate type using hard ILP constraints. For example, in order to encode the semantic frame for predicate type Excessive, we employ the ILP constraints in Equation 6, which simply enforce this predicate to have 0 Ground arguments and maximum 1 Figure arguments. i\u2208V T ,j=Ground", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 354, |
|
"end": 361, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
}, |
|
{ |
|
"start": 618, |
|
"end": 636, |
|
"text": "Figure arguments.", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Joint Inference of Ellipsis and Comparison", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "bij = 0, i\u2208V T ,j=F igure bij \u2264 1", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "Joint Inference of Ellipsis and Comparison", |
|
"sec_num": "5.2" |
|
}, |
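Because the argument-assignment program for one (tc, te) pair is small, its behavior can be illustrated with an exhaustive 0/1 search in place of an ILP solver. The scores below are invented, and the constraints encode the Excessive frame of Equation 6 plus one generic at-most-one-label-per-node restriction; this is a sketch, not the paper's implementation:

```python
from itertools import product

NODES = ["n1", "n2"]
ARGS = ["Figure", "Ground", "Scale"]

# toy p_{A_c}(tc, te, node, arg) scores for one predicate pair
score = {("n1", "Figure"): 0.9, ("n1", "Ground"): 0.6, ("n1", "Scale"): 0.1,
         ("n2", "Figure"): 0.4, ("n2", "Ground"): 0.7, ("n2", "Scale"): 0.8}

def feasible(assign):
    # Equation 6 (Excessive frame): no Ground arguments, at most one Figure
    if sum(assign[i, "Ground"] for i in NODES) != 0:
        return False
    if sum(assign[i, "Figure"] for i in NODES) > 1:
        return False
    # generic constraint: at most one argument label per node
    return all(sum(assign[i, a] for a in ARGS) <= 1 for i in NODES)

best, best_val = None, float("-inf")
for bits in product([0, 1], repeat=len(NODES) * len(ARGS)):
    assign = dict(zip(product(NODES, ARGS), bits))
    if not feasible(assign):
        continue
    val = sum(assign[k] * score[k] for k in assign)  # Equation 5 objective
    if val > best_val:
        best, best_val = assign, val

chosen = sorted(k for k, v in best.items() if v)
```

With these scores, the search assigns Figure to n1 and Scale to n2; a real ILP solver would return the same optimum for the same objective and constraints.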
|
{ |
|
"text": "We incorporate a few other ILP constraints for encoding our knowledge regarding ellipsis structures as well as comparison. For more details of these knowledge-driven constraints please refer to the supplementary material.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Inference of Ellipsis and Comparison", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We divide our dataset into train and train-dev (70%), and test (30%) sets. For evaluation of a given system prediction against the reference gold annotation, for each constituency node in the reference, we give the system a point in two ways: (1) Exact: the label assigned to the node by the system exactly matches the gold label;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Result", |
|
"sec_num": "6" |
|
}, |
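Under the Exact scheme, scoring reduces to precision/recall/F1 over labeled constituency nodes; the following is a minimal sketch with an invented gold and predicted labeling:

```python
def prf1(gold, pred):
    """Precision/recall/F1 of predicted node labels against gold,
    ignoring NONE labels on both sides (Exact matching)."""
    gold_set = {(n, l) for n, l in gold.items() if l != "NONE"}
    pred_set = {(n, l) for n, l in pred.items() if l != "NONE"}
    tp = len(gold_set & pred_set)
    p = tp / len(pred_set) if pred_set else 0.0
    r = tp / len(gold_set) if gold_set else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# toy labelings: one true positive, one wrong label, one spurious label
gold = {"n1": "Figure", "n2": "Scale", "n3": "NONE"}
pred = {"n1": "Figure", "n2": "Ground", "n3": "Scale"}
p, r, f1 = prf1(gold, pred)
```

Head matching differs only in how a predicted node is matched (via its head word) before the same counts are taken.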
|
{ |
|
"text": "(2) Head: the reference label matches the label of the head word of the node in system's prediction. We report on Precision (P), Recall (R) and F1 score. We test three models: our comprehensive ILP model (detailed in Section 5), our model without the ILP constraints, and a rule-based baseline. The baseline encodes the same linguistically motivated ILP constraints via rules. It further uses a few pattern extraction functions for pinpointing comparison morphemes which detect comparison and ellipsis predicates. More details about the baseline can be found in the supplementary material. The results on predicate prediction is shown in Table 3 . Given that our ILP constraints only encode argument structures, in this Table we only compare the baseline with our full ILP model. As the results show, overall, the scores are high for predicting the predicates, with ellipsis predicates being the most challenging. The baseline has a near perfect prediction on Assetive and Superlative types, which shows that the linguistic patterns can capture these types well. Our model performs the poorest on Equatives. If we look at the specific cases it misses, it is often regarding the morpheme 'as', which takes part in many various linguistics constructions, many of which are not comparatives. 
For example, for the test sentence We will let them manage our other investment properties as well as us getting older., our system wrongly classifies 'as' as an equative ILP Model (Exact/Head) ILP No Constraints (Exact/Head) Baseline (Exact/Head) P R F1 P R F1 P R F1 Standard 0.40/0.80 0.42/0.84 0.41/0.82 0.00/0.00 0.71/1.00 0.00/0.00 0.00/0.00 0.00/0.00 0.00/0.00 Scale 0.58/0.64 0.89/0.99 0.70/0.78 0.02/0.02 0.94/1.00 0.04/0.04 0.47/0.69 0.67/0.98 0.55/0.81 Ground 0.27/0.48 0.46/0.84 0.34/0.61 0.00/0.00 0.98/1.00 0.01/0.01 0.06/0.18 0.24/0.71 0.10/0.29 Figure 0 .38/0.81 0.44/0.94 0.41/0.87 0.02/0.02 0.94/1.00 0.03/0.03 0.09/0.43 0.17/0.80 0.12/0.56 D-Specifier 0.41/0.63 0.57/0.87 0.48/0.73 0.00/0.00 1.00/1.00 0.01/0.01 0.00/0.00 0.00/0.00 0.00/0.00 Domain 0.56/0.76 0.66/0.91 0.61/0.83 0.01/0.01 0.99/1.00 0.01/0.01 0.00/0.39 0.00/0.55 0.00/0.46 Exclude 0.33/0.56 0.49/0.84 0.39/0.67 0.01/0.01 0.63/1.00 0.02/0.02 0.00/0.00 0.00/0.00 0.00/0.00 Ref 0.18/0.53 0.28/0.80 0.22/0.63 0.01/0.01 0.61/1.00 0.01/0.02 0.00/0.00 0.00/0.00 0.00/0.00 How-much 0.27/0.36 0.65/0.88 0.38/0.51 0.01/0.01 0.96/1.00 0.01/0.01 0.00/0.00 0.00/0.00 0.00/0.00 Average 0.37/0.61 0.54/0.87 0.43/0.71 0.01/0.01 0.86/1.00 0.10/0.10 0.20/0.42 0.36/0.73 0.25/0.52 Table 4 : Results of argument prediction on test set. The average for the models only takes into account non-zero results.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 638, |
|
"end": 645, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
}, |
|
|
{ |
|
"start": 2537, |
|
"end": 2544, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Result", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "predicate, which is clearly an ambiguous and challenging test sentence. Analysis shows that the errors are often due to inaccuracies in automatically generated parse trees, e.g., challenging long sentences (average length > 12 tokens) with informal language which are generally hard to parse. The task of predicting arguments is a more demanding task. As you can see in Table 4 , the baseline model often fails at predicting the arguments. Our comprehensive ILP model consistently outperforms the No Constraints model, showing the effectiveness of our linguistically motivated ILP constraints. Our ILP model performs the best on Scale and Domain argument types, which is partly due to the frequency of these types in our dataset. We are planning on annotating more data to improve the argument prediction in future.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 370, |
|
"end": 377, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Result", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Systems that can understand comparison and make inferences about how entities and events compare in natural language are crucial for various NLP applications, ranging from question answering to product review analysis. Having a comprehensive semantic framework which can represent the underlying meaning of comparison structures is the first step toward enabling such an inference. In this paper we introduced a novel semantic framework for jointly capturing the meaning of comparison and ellipsis constructions. We modeled the problem as inter-connected predicateargument prediction. Based on this framework, we trained experts to annotate a dataset of ellipsis and comparison structures, which we are making publicly available 11 . Furthermore, we introduced a structured prediction model which can automatically extract comparison structures and perform ellipsis resolution for a given text, which performs reasonably well for major predicate and argument types.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "In future, we are planning on improving our joint prediction models for further improving the performance. Moreover, we plan on using our semantic framework for text comprehension applications.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Elliptical constructions involve the omission of one or more phrases from a clause, while the content can still be understood from the rest of the sentence (Kennedy, 2003; Merchant, 2013) . Resolving ellipsis in comparison structures is crucial for language understanding. Failure to do so for (13) as an example, would deliver an incorrect representation, something like 'how appetizingly the steak sizzled is greater than the hamburger'. To arrive at an interpretation equivalent to (14) in a way that systematically relates to the syntax of (13) requires a semantics for comparatives based on 'events' and 'degrees'.", |
|
"cite_spans": [ |
|
{ |
|
"start": 156, |
|
"end": 171, |
|
"text": "(Kennedy, 2003;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 172, |
|
"end": 187, |
|
"text": "Merchant, 2013)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supplementary Material Background on Ellipsis", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(13) The steak sizzled more appetizingly than the hamburger \u2206.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supplementary Material Background on Ellipsis", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(14) appetizingness(e1) > appetizingness e2In event semantics, sentences like (15) and (16) are interpreted as existential statements about events (Davidson, 1967) . For example, (15) is interpreted as 'there is an event e whose Theme (T h) is the steak, and e is a sizzling event' (Parsons, 1990 ). A comparative like (13) is built on top of two clauses much like (15) and (16) (Bresnan, 1973) . In concert with appetizingly in (13), more introduces a greater-than relation between the degrees to which the two events are appetizing (Wellwood, 2015) . 'Degrees' represent points on a scale, said to be the output of a 'measure function' like appetizing (Cresswell, 1976; Kennedy, 1999) . In what follows, we first introduce this framework in the simpler case where no dependent clause appears in the sentence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 147, |
|
"end": 163, |
|
"text": "(Davidson, 1967)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 282, |
|
"end": 296, |
|
"text": "(Parsons, 1990", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 379, |
|
"end": 394, |
|
"text": "(Bresnan, 1973)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 534, |
|
"end": 550, |
|
"text": "(Wellwood, 2015)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 654, |
|
"end": 671, |
|
"text": "(Cresswell, 1976;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 672, |
|
"end": 686, |
|
"text": "Kennedy, 1999)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supplementary Material Background on Ellipsis", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In the 'implicit' comparison 17, what is compared to must be recovered from the use context; this is indicated by the free variable \u03b4, standing for some degree. The interpretation of this sentence is read, 'there is an event e in which the steak sizzles, and e is appetizing to a degree greater than \u03b4'. When the dependent clause is present, the combination of ellipsis resolution and semantic composition delivers a degree that takes the place of \u03b4 in a representation like that in (17). (18) is read as, 'the maximal degree d to which there is an event e of the hamburger sizzling, and e is appetizing to at least degree d'. Semantically, the maximal degree (max d) is introduced by a null operator that we will call how (Kennedy, 2002) Putting the pieces together, (13) in fact has the richer and more accurate meaning representation in (20).", |
|
"cite_spans": [ |
|
{ |
|
"start": 723, |
|
"end": 738, |
|
"text": "(Kennedy, 2002)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supplementary Material Background on Ellipsis", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(20) \u2203e1[T h(e1)(s) & sizz(e1) & appetiz(e1) > max d.] \u2203e2[T h(e2)(h) & sizz(e2) & appetiz(e2) \u2265 d]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supplementary Material Background on Ellipsis", |
|
"sec_num": null |
|
}, |
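The event-and-degree representation in (20) can be typeset more readably as follows; this is a sketch of the same formula, with the bracketing reconstructed from the surrounding discussion (the second existential scoping under the maximality operator):

```latex
\exists e_1\,[\,\mathrm{Th}(e_1)(s) \wedge \mathrm{sizz}(e_1) \wedge
  \mathrm{appetiz}(e_1) > \max d.\,
  \exists e_2\,[\,\mathrm{Th}(e_2)(h) \wedge \mathrm{sizz}(e_2) \wedge
  \mathrm{appetiz}(e_2) \geq d\,]\,]
```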
|
{ |
|
"text": "In comparatives with more/-er, and equatives with as, how the 'scale' is introduced in the dependent clause differs according to the major part of speech of the comparison structure. For adjectival and adverbial comparisons (taller, as quickly), the scale is provided by those categories (height, appetizingness) and the null operator is simply how. For nominal and verbal comparisons (more rice, sizzle as much), much introduces a variable scale (\u00b5), and the null operator is called how-much.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supplementary Material Background on Ellipsis", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In addition to the major characteristics pointed out in the paper, our framework improves on the following issues as compared with Bakhshandeh and Allen (Bakhshandeh and Allen, 2015) :", |
|
"cite_spans": [ |
|
{ |
|
"start": 153, |
|
"end": 182, |
|
"text": "(Bakhshandeh and Allen, 2015)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "-While we also model comparison structures as predicate-argument pairs, we do not use additional semantic role links. We retain all semantic information on predicate and argument types, which results in better semantic generalization across all predicates (Section 3).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "-We categorize arguments into semantic frames associated with each predicate type. This enables addressing complex cases such as 'copulas' (Section 3.1.2) which play a crucial role in asserting properties about entities. Furthermore, we introduce a more comprehensive set of argument types which more accurately capture the syntactic and semantic properties of various predicate types. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Overall, our ILP constraints (which encode restrictions on the arguments of predicates) are either applied in general (to any predicate type) or are tailored to encode the semantic frame of a specific predicate. Following are our generic constraints: 1. The maximum number of arguments per node is 3. 2. The maximum number of arguments in the entire syntactic tree is 10. We incorporate the following ILP constraints for encoding knowledge regarding Ellipsis predicates:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Integer Linear Programming Constraints", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "1 ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Integer Linear Programming Constraints", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "One approach for extracting sentences containing comparisons is to mine the text for some (automatically or manually created) patterns, then train a classifier for labeling comparison and noncomparison sentences (Jindal and Liu, 2006b) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 212, |
|
"end": 235, |
|
"text": "(Jindal and Liu, 2006b)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Collection Methodology", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "However, the variety of comparison structures is so vast that being limited to some specific patterns or syntactic structures will not result in good coverage of comparisons. Instead, we use the following filter (CompF ilter) with a set of basic comparison structure linguistic markers for extracting potential comparison instances:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Collection Methodology", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "-Any sentence containing a word with POS tag equal to JJR, RBR, JJS, or RBS.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Collection Methodology", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "-Any sentence containing a comparison morpheme such as more, most, less, enough, too. This filter is guaranteed not to have any false negatives since it is exhaustive enough to capture any possible comparison sentence. We applied this filter to the English Web Corpus and the Movie Reviews dataset and extracted a pool of 2,800 sentences for final annotation in the next step. It is important to note that this filter will capture some cases which look like comparison instances at the surface level, but which are not so semantically (e.g., (21)-(22), extracted from the Google Web Treebank). Such negative examples help the quality of the final prediction models. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Collection Methodology", |
|
"sec_num": null |
|
}, |
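The CompFilter described here reduces to a POS and lexical check; the following sketch assumes tokenized, POS-tagged input from an upstream tagger (the token data is invented):

```python
COMP_POS = {"JJR", "RBR", "JJS", "RBS"}
COMP_MORPHEMES = {"more", "most", "less", "enough", "too"}

def comp_filter(tagged_sentence):
    """Return True if the sentence potentially contains a comparison:
    any comparative/superlative POS tag, or a comparison morpheme."""
    return any(pos in COMP_POS or word.lower() in COMP_MORPHEMES
               for word, pos in tagged_sentence)

s1 = [("The", "DT"), ("steak", "NN"), ("sizzled", "VBD"),
      ("more", "RBR"), ("appetizingly", "RB")]
s2 = [("I", "PRP"), ("like", "VBP"), ("steak", "NN")]
```

Because the check is a disjunction over exhaustive markers, it can over-generate (surface-level matches that are not semantic comparisons) but never under-generates.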
|
{ |
|
"text": "We implemented a rule-based baseline for predicate-argument structure prediction. This model mainly uses POS and lexical wording rules for predicate prediction. For example, we have the following rule for predicate prediction: Any JJS POS tag can be tagged as a superlative predicate. For argument prediction, we mainly implement our knowledge-driven ILP constraints as rules. Furthermore, this baseline uses rules such as the following: in any than-clause, the first NP should be tagged as Ground argument. Also, the subject (if any) should be tagged as Figure argument , and the closest adjective to the comparison morpheme is the Scale indicator.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 555, |
|
"end": 570, |
|
"text": "Figure argument", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Baseline Model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We are releasing our interactive two-stage treebased annotation tool with this paper. In this tool each annotator can be assigned with a set of treebased annotation assignments, where pairing annotators to do the same task for inter-annotator analysis is also feasible. This annotation tool sets up the data collection as a two-stage expert annotation process: (1) for each sentence, one expert annotates and submits the annotation, (2) another expert reviews the submission and either returns the submission with feedback or marks it as a gold. Figure 4 shows a screen-shot of this tool.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 546, |
|
"end": 554, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF14" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Two-stage Tree-based Annotation Tool", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We used Semafor tool: http://demo.ark.cs.cmu.edu/parse 2 The same shortcomings are shared among other generic meaning representations such as LinGO English Resource Grammar (ERG)(Flickinger, 2011), Boxer(Bos, 2008), or AMR(Banarescu et al., 2013), among others.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Throughout this paper we refer to any statement comparing two or more entities as a comparison instance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "These morphemes are often referred to as the comparison operators.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This is crucial, given the fact that the syntactic structure of many comparison instances are complex, e.g., The server was the rudest ever and made me feel as I was wasting her time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "VP-deletion and stripping are the more frequent types.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Whether this construction is grammatical is controversial.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This enables accurate capturing of arguments, e.g., in I am the tallest [in our school], the constituency node corresponding to the entire phrase in brackets is annotated as Domain-specifier.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Each predicate should be further tagged with one of the four possible POS tags (JJ, RB, NN, VB), resulting in a total of 20 predicate types.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In order to access the dataset and our interactive twostage tree-based annotation tool please refer to http:// cs.rochester.edu/~omidb.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank the anonymous reviewers for their invaluable comments and Brian Rinehart and other annotators for their great work on the annotations. This work was supported in part by Grant W911NF-15-1-0542 with the US Defense Advanced Research Projects Agency (DARPA) and the Army Research Office (ARO).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgment", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Deep semantic analysis of text", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Allen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mary", |
|
"middle": [], |
|
"last": "Swift", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Will De", |
|
"middle": [], |
|
"last": "Beaumont", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 2008 Conference on Semantics in Text Processing, STEP '08", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "343--354", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James F. Allen, Mary Swift, and Will de Beaumont. 2008. Deep semantic analysis of text. In Proceed- ings of the 2008 Conference on Semantics in Text Processing, STEP '08, pages 343-354, Stroudsburg, PA, USA. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Semantic framework for comparison structures in natural language", |
|
"authors": [ |
|
{ |
|
"first": "Omid", |
|
"middle": [], |
|
"last": "Bakhshandeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Allen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "993--1002", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Omid Bakhshandeh and James Allen. 2015. Semantic framework for comparison structures in natural lan- guage. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Process- ing, pages 993-1002, Lisbon, Portugal, September. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Abstract meaning representation for sembanking", |
|
"authors": [ |
|
{ |
|
"first": "Laura", |
|
"middle": [], |
|
"last": "Banarescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Bonial", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shu", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Madalina", |
|
"middle": [], |
|
"last": "Georgescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kira", |
|
"middle": [], |
|
"last": "Griffitt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ulf", |
|
"middle": [], |
|
"last": "Hermjakob", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nathan", |
|
"middle": [], |
|
"last": "Schneider", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "178--186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguis- tic Annotation Workshop and Interoperability with Discourse, pages 178-186, Sofia, Bulgaria, August. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Semantic parsing via paraphrasing", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Berant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan Berant and Percy Liang. 2014. Semantic parsing via paraphrasing. In Association for Com- putational Linguistics (ACL).", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "An annotated corpus for the analysis of vp ellipsis. Language Resources and Evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Johan", |
|
"middle": [], |
|
"last": "Bos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jennifer", |
|
"middle": [], |
|
"last": "Spenader", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "45", |
|
"issue": "", |
|
"pages": "463--494", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Johan Bos and Jennifer Spenader. 2011. An annotated corpus for the analysis of vp ellipsis. Language Re- sources and Evaluation, 45(4):463-494.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Wide-coverage semantic analysis with boxer", |
|
"authors": [ |
|
{ |
|
"first": "Johan", |
|
"middle": [], |
|
"last": "Bos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Conference Proceedings, Research in Computational Semantics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "277--286", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Johan Bos. 2008. Wide-coverage semantic analy- sis with boxer. In Johan Bos and Rodolfo Del- monte, editors, Semantics in Text Processing. STEP 2008 Conference Proceedings, Research in Compu- tational Semantics, pages 277-286. College Publi- cations.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Syntax of the comparative clause construction in English", |
|
"authors": [ |
|
{ |
|
"first": "Joan", |
|
"middle": [], |
|
"last": "Bresnan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1973, |
|
"venue": "Linguistic Inquiry", |
|
"volume": "4", |
|
"issue": "3", |
|
"pages": "275--343", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joan Bresnan. 1973. Syntax of the comparative clause construction in English. Linguistic Inquiry, 4(3):275-343.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "The semantics of degree", |
|
"authors": [ |
|
{ |
|
"first": "Max", |
|
"middle": [], |
|
"last": "Cresswell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1976, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "261--292", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Max Cresswell. 1976. The semantics of degree. Bar- bara Hall Partee (ed.), pages 261-292.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Frame-semantic parsing", |
|
"authors": [ |
|
{ |
|
"first": "Dipanjan", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Desai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Andr\u00e9", |

"middle": [ |

"F. T." |

], |

"last": "Martins", |

"suffix": "" |

}, |

{ |

"first": "Nathan", |

"middle": [], |

"last": "Schneider", |

"suffix": "" |

}, |

{ |

"first": "Noah", |

"middle": [ |

"A" |

], |

"last": "Smith", |

"suffix": "" |

} |
|
], |
|
"year": 2014, |
|
"venue": "Computational Linguistics", |
|
"volume": "40", |
|
"issue": "", |
|
"pages": "9--56", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dipanjan Das, Desai Chen, Andr\u00c3l' F. T. Martins, Nathan Schneider, and Noah A. Smith. 2014. Frame-semantic parsing. Computational Linguis- tics, 40:1:9-56.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "The logical form of action sentences", |
|
"authors": [ |
|
{ |
|
"first": "Donald", |
|
"middle": [], |
|
"last": "Davidson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1967, |
|
"venue": "The Logic of Decision and Action", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "81--95", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Donald Davidson. 1967. The logical form of action sentences. In Nicholas Rescher, editor, The Logic of Decision and Action, pages 81-95. Pittsburgh Uni- versity Press, Pittsburgh.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Accuracy vs. robustness in grammar engineering", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Flickinger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Language from a Cognitive Perspective: Grammar, Usage and Processing", |
|
"volume": "201", |
|
"issue": "", |
|
"pages": "31--50", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Flickinger. 2011. Accuracy vs. robustness in grammar engineering. In Emily M. Bender and Jen- nifer E. Arnold, editors, Language from a Cogni- tive Perspective: Grammar, Usage and Processing, number 201, pages 31-50. CSLI Publications, Stan- ford.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "An empirical approach to vp ellipsis", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Hardt", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Comput. Linguist", |
|
"volume": "23", |
|
"issue": "4", |
|
"pages": "525--541", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Hardt. 1997. An empirical approach to vp ellip- sis. Comput. Linguist., 23(4):525-541, December.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Identifying comparative sentences in text documents", |
|
"authors": [ |
|
{ |
|
"first": "Nitin", |
|
"middle": [], |
|
"last": "Jindal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '06", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "244--251", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nitin Jindal and Bing Liu. 2006a. Identifying com- parative sentences in text documents. In Proceed- ings of the 29th Annual International ACM SIGIR Conference on Research and Development in Infor- mation Retrieval, SIGIR '06, pages 244-251, New York, NY, USA. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Mining comparative sentences and relations", |
|
"authors": [ |
|
{ |
|
"first": "Nitin", |
|
"middle": [], |
|
"last": "Jindal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 21st National Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1331--1336", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nitin Jindal and Bing Liu. 2006b. Mining comparative sentences and relations. In Proceedings of the 21st National Conference on Artificial Intelligence -Vol- ume 2, AAAI'06, pages 1331-1336. AAAI Press.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Projecting the Adjective: The Syntax and Semantics of Gradability and Comparison", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Kennedy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Kennedy. 1999. Projecting the Adjective: The Syntax and Semantics of Gradability and Compari- son. Garland, New York.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Comparative deletion and optimality in syntax. Natural Language and Linguistic Theory", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Kennedy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "", |
|
"volume": "20", |
|
"issue": "", |
|
"pages": "553--621", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Kennedy. 2002. Comparative deletion and opti- mality in syntax. Natural Language and Linguistic Theory, 20(3):553-621.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Ellipsis and syntactic representation", |
|
"authors": [ |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Kennedy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "The Interfaces: Deriving and Interpreting Omitted Structures, number 61 in Linguistics Aktuell", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "29--54", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher Kennedy. 2003. Ellipsis and syntactic rep- resentation. In Kerstin Schwabe and Susanne Win- kler, editors, The Interfaces: Deriving and Inter- preting Omitted Structures, number 61 in Linguis- tics Aktuell, pages 29-54. John Benjamins.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Detecting comparative sentiment expressions -a case study in annotation design decisions", |
|
"authors": [ |
|
{ |
|
"first": "Wiltrud", |
|
"middle": [], |
|
"last": "Kessler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonas", |
|
"middle": [], |
|
"last": "Kuhn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of KONVENS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wiltrud Kessler and Jonas Kuhn. 2014. Detecting comparative sentiment expressions -a case study in annotation design decisions. In Proceedings of KONVENS, Hildesheim, Germany.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "The Stanford CoreNLP natural language processing toolkit", |
|
"authors": [ |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Bauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jenny", |
|
"middle": [], |
|
"last": "Finkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Bethard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Mc-Closky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "55--60", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David Mc- Closky. 2014. The Stanford CoreNLP natural lan- guage processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computa- tional Linguistics: System Demonstrations, pages 55-60.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Voice and ellipsis", |
|
"authors": [ |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Merchant", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Linguistic Inquiry", |
|
"volume": "44", |
|
"issue": "1", |
|
"pages": "77--108", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jason Merchant. 2013. Voice and ellipsis. Linguistic Inquiry, 44(1):77-108.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Verb phrase ellipsis detection using automatically parsed text", |
|
"authors": [ |
|
{ |
|
"first": "Leif", |
|
"middle": [ |
|
"Arda" |
|
], |
|
"last": "Nielsen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the 20th International Conference on Computational Linguistics, COLING '04", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Leif Arda Nielsen. 2004. Verb phrase ellipsis detection using automatically parsed text. In Proceedings of the 20th International Conference on Computational Linguistics, COLING '04, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales", |
|
"authors": [ |
|
{ |
|
"first": "Bo", |
|
"middle": [], |
|
"last": "Pang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lillian", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "115--124", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bo Pang and Lillian Lee. 2005. Seeing stars: Exploit- ing class relationships for sentiment categorization with respect to rating scales. In Proceedings of ACL, pages 115-124.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Events in the semantics of English: A study in subatomic semantics", |
|
"authors": [ |
|
{ |
|
"first": "Terence", |
|
"middle": [], |
|
"last": "Parsons", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Current Studies in Linguistics Series", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Terence Parsons. 1990. Events in the semantics of En- glish: A study in subatomic semantics. In Current Studies in Linguistics Series no. 19, page 334. MIT Press, Cambridge, Massachusetts.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Learning accurate, compact, and interpretable tree annotation", |
|
"authors": [ |
|
{ |
|
"first": "Slav", |
|
"middle": [], |
|
"last": "Petrov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leon", |
|
"middle": [], |
|
"last": "Barrett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Romain", |
|
"middle": [], |
|
"last": "Thibaux", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, ACL-44", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "433--440", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the As- sociation for Computational Linguistics, ACL-44, pages 433-440, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Ellipsis resolution with underspecified scope", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Schiehlen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "72--79", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Schiehlen. 2002. Ellipsis resolution with un- derspecified scope. In Proceedings of the 40th An- nual Meeting on Association for Computational Lin- guistics, ACL '02, pages 72-79, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Comparing semantic theories of comparison", |
|
"authors": [ |
|
{ |

"first": "Arnim", |

"middle": [ |

"von" |

], |

"last": "Stechow", |

"suffix": "" |

} |
|
], |
|
"year": 1984, |
|
"venue": "Journal of Semantics", |
|
"volume": "3", |
|
"issue": "1", |
|
"pages": "1--77", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Arnim Von Stechow. 1984. Comparing semantic the- ories of comparison. Journal of Semantics, 3(1):1- 77.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "On the semantics of comparison across categories", |
|
"authors": [ |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Wellwood", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Linguistics and Philosophy", |
|
"volume": "38", |
|
"issue": "1", |
|
"pages": "67--101", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexis Wellwood. 2015. On the semantics of compar- ison across categories. Linguistics and Philosophy, 38(1):67-101.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Mining comparative opinions from customer reviews for competitive intelligence", |
|
"authors": [ |
|
{ |
|
"first": "Kaiquan", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [ |
|
"Shaoyi" |
|
], |
|
"last": "Liao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiexun", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuxia", |
|
"middle": [], |
|
"last": "Song", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Decision Support Systems", |
|
"volume": "50", |
|
"issue": "4", |
|
"pages": "743--754", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kaiquan Xu, Stephen Shaoyi Liao, Jiexun Li, and Yuxia Song. 2011. Mining comparative opinions from customer reviews for competitive intelligence. Decision Support Systems, 50(4):743-754.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Learning to parse database queries using inductive logic programming", |
|
"authors": [ |
|
{ |

"first": "John", |

"middle": [ |

"M" |

], |

"last": "Zelle", |

"suffix": "" |

}, |

{ |

"first": "Raymond", |

"middle": [ |

"J" |

], |

"last": "Mooney", |

"suffix": "" |

} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of the Thirteenth National Conference on Artificial Intelligence", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "1050--1055", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John M. Zelle and Raymond J. Mooney. 1996. Learn- ing to parse database queries using inductive logic programming. In Proceedings of the Thirteenth Na- tional Conference on Artificial Intelligence -Volume 2, AAAI'96, pages 1050-1055. AAAI Press.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "The frame-semantic parsing of the sentence My Mazda drove faster than his Hyundai." |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "Fig)is the main role which is being compared.-Ground is the main role Figure is compared to." |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "(6) a. This was the best pizza in town.b. I ate the best pizza in town." |
|
}, |
|
"FIGREF3": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "Figure Scale/+ Domain sup I ate the most delicious pizza ." |
|
}, |
|
"FIGREF4": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "see, in (6a) was links this to pizza. In this sentence the argumentFigure isthis. On the other hand, in (6b), the word pizza takes the role of both Figure and Domain." |
|
}, |
|
"FIGREF5": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "Figure Differential Scale/+ Ground" |
|
}, |
|
"FIGREF6": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "Figure Scale/+" |
|
}, |
|
"FIGREF7": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "Figure Scale/+ Ground" |
|
}, |
|
"FIGREF8": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "Figure Scale/+ Standard" |
|
}, |
|
"FIGREF9": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "Figure" |
|
}, |
|
"FIGREF10": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "Full tree-based annotation of comparison and ellipsis structures for the sentence presented in example 12. The tag 'Es' refers to the Stripping predicate type." |
|
}, |
|
"FIGREF11": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "The number of various predicate types across different resources." |
|
}, |
|
"FIGREF12": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "The steak sizzled. \u2203e1[T h(e1)(steak) & sizzle(e1)] (16) The hamburger sizzled. \u2203e2[T h(e2)(hamburger) & sizzle(e2)]" |
|
}, |
|
"FIGREF13": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "The steak sizzled more appetizingly. \u2203e[T h(e)(s) & sizzle(e) & appetizing(e) > \u03b4]" |
|
}, |
|
"FIGREF14": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "A screen-shot of our two-stage tree-based annotation tool." |
|
}, |
|
"FIGREF15": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": ". The constituency span of comparison predicate's F igure and Ground should overlap with the Ref erence argument of ellipsis predicate, if any. 2. The constituency node of Exclude argument should be a child of the Ref erence. 3. One node can only have more than one comparison argument type if those types are F igure and Ground. The constraints for encoding the semantic frame of the other comparison predicate types follows straightforwardly from the semantic frames presented in the paper." |
|
}, |
|
"FIGREF16": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "Very nice ambiance and friendly staff too.(22) We had sesame chicken and kung pao chicken as well as cheese puffs." |
|
}, |
|
"TABREF0": { |
|
"num": null, |
|
"content": "<table><tr><td>--Sluicing:</td></tr><tr><td>... as someone, but I don't remember as who ate how-much rice. 7</td></tr><tr><td>-Subdeletion:</td></tr><tr><td>.</td></tr></table>", |
|
"html": null, |
|
"text": "Predicates together with their semantic frames shown in example sentences. Stripping: ... as John eat how-much rice.-Gapping:... as John, ate how-much soup. -Pseudogapping:... as John did ate how-much soup.", |
|
"type_str": "table" |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"2\">Scale Fig</td><td colspan=\"4\">Ground Dom D-Spec Diff Std</td></tr><tr><td>38.8</td><td colspan=\"2\">31.5 6.33</td><td>9.31</td><td>7.01</td><td>4.09 2.98</td></tr></table>", |
|
"html": null, |
|
"text": "The percentage of each argument type.", |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"content": "<table><tr><td>Each cell contains scores according to Exact/Head</td></tr><tr><td>measurement.</td></tr></table>", |
|
"html": null, |
|
"text": "Predicate prediction results on test set.", |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"content": "<table><tr><td>throughout this</td></tr><tr><td>paper.</td></tr><tr><td>resolve ellipsis</td></tr><tr><td>(19) ...the hamburger didsizzle how-appetizingly</td></tr><tr><td>max d.\u2203e[T h(e)(h) & sizzle(e) & appetiz(e) \u2265</td></tr><tr><td>d]</td></tr></table>", |
|
"html": null, |
|
"text": "18) ...than the hamburger did.", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |