|
{ |
|
"paper_id": "C08-1043", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T12:26:15.837775Z" |
|
}, |
|
"title": "Using Discourse Commitments to Recognize Textual Entailment", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Hickl", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Language Computer Corporation", |
|
"location": { |
|
"addrLine": "1701 North Collins Boulevard Suite", |
|
"postCode": "2000, 75080", |
|
"settlement": "Richardson", |
|
"region": "Texas", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper, we introduce a new framework for recognizing textual entailment (RTE) which depends on extraction of the set of publicly-held beliefs-known as discourse commitments-that can be ascribed to the author of a text (t) or a hypothesis (h). We show that once a set of commitments have been extracted from a t-h pair, the task of recognizing textual entailment is reduced to the identification of the commitments from a t which support the inference of the h. Our system correctly identified entailment relationships in more than 80% of t-h pairs taken from all three of the previous PASCAL RTE Challenges, without the need for additional sources of training data.", |
|
"pdf_parse": { |
|
"paper_id": "C08-1043", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper, we introduce a new framework for recognizing textual entailment (RTE) which depends on extraction of the set of publicly-held beliefs-known as discourse commitments-that can be ascribed to the author of a text (t) or a hypothesis (h). We show that once a set of commitments have been extracted from a t-h pair, the task of recognizing textual entailment is reduced to the identification of the commitments from a t which support the inference of the h. Our system correctly identified entailment relationships in more than 80% of t-h pairs taken from all three of the previous PASCAL RTE Challenges, without the need for additional sources of training data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Systems participating in the PASCAL Recognizing Textual Entailment (RTE) Challenges (Dagan et al., 2005) have successfully employed a variety of \"shallow\" techniques in order to recognize instances of textual entailment, including methods based on: (1) sets of heuristics (Vanderwende et al., 2006) , (2) measures of term overlap (Jijkoun and de Rijke, 2005 ) (or other measures of semantic \"relatedness\" , (3) the alignment of graphs created from syntactic or semantic dependencies (Haghighi et al., 2005) , or (4) statistical classifiers which leverage a wide range of features, including the output of paraphrase generation (Hickl et al., 2006) , inference rule generac 2008.", |
|
"cite_spans": [ |
|
{ |
|
"start": 84, |
|
"end": 104, |
|
"text": "(Dagan et al., 2005)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 272, |
|
"end": 298, |
|
"text": "(Vanderwende et al., 2006)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 330, |
|
"end": 357, |
|
"text": "(Jijkoun and de Rijke, 2005", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 483, |
|
"end": 506, |
|
"text": "(Haghighi et al., 2005)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 627, |
|
"end": 647, |
|
"text": "(Hickl et al., 2006)", |
|
"ref_id": "BIBREF11" |
|
},

{

"start": 675,

"end": 698,

"text": "(Szpektor et al., 2007)",

"ref_id": null

},

{

"start": 726,

"end": 749,

"text": "(Bos and Markert, 2006)",

"ref_id": "BIBREF3"

}

],
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
|
|
{ |
|
"text": "While these relatively \"shallow\" approaches have shown much promise in RTE for entailment pairs where the text and hypothesis remain short, we expect that performance of these types of systems will ultimately degrade as longer and more syntactically complex entailment pairs are considered. For example, given a \"short\" t-h pair (as in (1)), we might expect that a feature-based comparison of the t and the h would be sufficient to identify that the t textually entailed the h.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(1) Short t-h Pair a. Text: Mack Sennett was involved in the production of \"The Extra Girl\". b. Hypothesis: \"The Extra Girl\" was produced by Sennett.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The additional information included in a longer t (like the one in (2)) can make for a much more challenging entailment computation. While the evidence supporting the h is included in the t, systems must be able to establish that (1) Mack Sennett was involved in producing a Mabel Normand vehicle and (2) that \"The Extra Girl\" and the Mabel Normand vehicle refer to the same film.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(2) Long t-h Pair a. Text: \"The Extra Girl\" (1923) is the story of a small-town girl, Sue Graham (played by Mabel Normand) who comes to Hollywood to be in the pictures. This Mabel Normand vehicle, produced by Mack Sennett, followed earlier films about the film industry and also paved the way for later films about Hollywood, such as King Vidor's \"Show People\" (1928). b. Hypothesis: \"The Extra Girl\" was produced by Sennett.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In order to remain effective as texts get longer, we believe that RTE systems will need to employ techniques that will enable them to enumerate the set of propositions which are inferable from a texthypothesis pair.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We introduce a new framework for recognizing textual entailment which depends on extraction of a subset of the publicly-held beliefs -or discourse commitments -available from the linguistic meaning of a text or hypothesis. We show that once even a small set of discourse commitments have been extracted from a text-hypothesis pair, the task of RTE can be reduced to the identification of the one (or more) commitments from the t which are most likely to support the inference of each commitment extracted from the h.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We have found that a commitment-based approach to RTE provides state-of-the-art results on the PASCAL RTE task even when large external knowledge resources are not available. While our approach does depend on a set of specially-tailored heuristics which makes it possible to enumerate some of the commitments from a t-h pair, we show that reasonably high levels of performance are possible (as much as 84.9% accuracy), given even a small number of extractors.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The rest of this paper is organized in the following way. Section 2 describes the organization of most current statistical systems for RTE, while Sections 3, 4, and 5 describe details of the algorithms we have developed for the RTE system we discuss in this paper. Section 6 presents our experimental results, and Section 7 presents our conclusions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Recognizing whether the information expressed in a h can be inferred from the information expressed in a t can be cast either as (1) a classification problem or (2) a formal textual inference problem, performed either by theorem proving or model checking. While these approaches apply radically different solutions to the same problem, both meth-ods involve the translation of natural language into some sort of suitable meaning representation, such as real-valued features (in the case of classification), or axioms or models (in the case of formal methods). We argue that performing this translation necessarily requires systems to acquire forms of (linguistic and/or real-world) knowledge which may not be derivable from the surface form of a t or h. (See Figure 1a for an illustration of the architecture of a prototypical RTE system.)", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 759, |
|
"end": 768, |
|
"text": "Figure 1a", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Recognizing Textual Entailment", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In order to acquire forms of linguistic knowledge for RTE, we have developed a novel framework which depends on the extraction of discourse commitments from a text-hypothesis pair. Following (Gunlogson, 2001; Stalnaker, 1979) , we assume discourse commitments represent the set of propositions which can necessarily be inferred to be true given a conventional reading of a text. (Figure 2 lists the set of commitments that were extracted from a t-h pair included in the PASCAL RTE-3 Test Set. 1 ) Formally, we assume that given a commitment set {c t } consisting of the set of discourse commitments inferable from a text t and a hypothesis h, we define the task of RTE as a search for the commitment c \u2208 {c t } which maximizes the likelihood that c textually entails h.", |
|
"cite_spans": [ |
|
{ |
|
"start": 191, |
|
"end": 208, |
|
"text": "(Gunlogson, 2001;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 209, |
|
"end": 225, |
|
"text": "Stalnaker, 1979)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 379, |
|
"end": 388, |
|
"text": "(Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Recognizing Textual Entailment", |
|
"sec_num": "2" |
|
}, |
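
{

"text": "To make this search formulation concrete, consider a minimal Python sketch (not taken from the paper: the overlap scorer and the 0.7 threshold are illustrative stand-ins for the learned entailment model described in Sections 4 and 5):\n\ndef overlap(c_t, c_h):\n    # Toy entailment score: fraction of hypothesis-commitment tokens\n    # that also appear in the text commitment.\n    t, h = set(c_t.lower().split()), set(c_h.lower().split())\n    return len(t & h) / len(h)\n\ndef rte_judgment(t_commitments, h_commitments, score=overlap, threshold=0.7):\n    # YES only if every commitment from h is supported by some commitment\n    # from t whose score clears the threshold.\n    return all(\n        max((score(c_t, c_h) for c_t in t_commitments), default=0.0) >= threshold\n        for c_h in h_commitments\n    )",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Recognizing Textual Entailment",

"sec_num": "2"

},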
|
{ |
|
"text": "In our architecture (illustrated in Figure 1b ), discourse commitments are first extracted from both the t and the h using the approach described in Section 3. Once commitment sets have been extracted for the t and the h, we then use a commitment selection module (described in Section 4) in order to perform a term-based alignment of each commitment extracted from the t against each commitment extracted from the h. The top-ranked pair of commitments (c t i , c h i ) is then sent to an Text: \"The Extra Girl\" (1923) is a story of a small\u2212town girl, Sue Graham (played by Mabel Normand) who comes to Hollywood to be in the pictures. This Mabel Normand vehicle, produced by Mack Sennett, followed earlier films about the film industry and also paved the way for later films about Hollywood, such as King Vidor's \"Show People\" (1928) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 827, |
|
"end": 833, |
|
"text": "(1928)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 36, |
|
"end": 45, |
|
"text": "Figure 1b", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Recognizing Textual Entailment", |
|
"sec_num": "2" |
|
}, |
|
|
{ |
|
"text": "Work in semantic parsing (Wong and Mooney, 2007; Zettlemoyer and Collins, 2005) has used statistical and symbolic techniques to convert natural language texts into a logical meaning representation (MR) which can be leveraged by formal reasoning systems. While this work has explored how output from syntactic parsers can be used to represent the meaning of a text independent of its actual surface form, these approaches have focused on the propositional semantics explicitly encoded by predicates and have not addressed other phenomena (such as conversational implicature or linguistic presupposition) -which are not encoded overtly in the syntax. Our work focuses on how an approach based on lightweight extraction rules can be used to enumerating a subset of the discourse commitments that are inferable from a t-h pair. While heuristically \"unpacking\" all of commitments of a t or h may be (nearly) impossible, we believe that our work represents an important first step towards determining the relative value of these additional commitments for textual inference applications, such as RTE. Our commitment extraction algorithm is presented in Algorithm 1.", |
|
"cite_spans": [ |
|
{ |
|
"start": 25, |
|
"end": 48, |
|
"text": "(Wong and Mooney, 2007;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 49, |
|
"end": 79, |
|
"text": "Zettlemoyer and Collins, 2005)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extracting Discourse Commitments from Natural Language Texts", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Commitments are extracted from each t and h using an implementation of the probabilistic finitestate transducer (FST)-based extraction framework described in (Eisner, 2002; Eisner, 2003) . Given a Algorithm 1 Extracting Discourse Commitments 1: Input: A set S of sentences from the t or h 2: Output: A set of discourse commitments 3: loop 4:", |
|
"cite_spans": [ |
|
{ |
|
"start": 158, |
|
"end": 172, |
|
"text": "(Eisner, 2002;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 173, |
|
"end": 186, |
|
"text": "Eisner, 2003)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extracting Discourse Commitments from Natural Language Texts", |
|
"sec_num": "3" |
|
}, |
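
{

"text": "A minimal sketch of the fixed-point loop in Algorithm 1 (illustrative only: the extractors stand in for the weighted FST rules, and are modeled here as callables mapping one sentence to zero or more new commitment strings):\n\ndef extract_commitments(sentences, extractors):\n    # Iterate extraction until no new commitment strings are produced,\n    # mirroring the resubmission loop of Algorithm 1.\n    commitments = set(sentences)\n    while True:\n        new = {c for s in commitments for extract in extractors for c in extract(s)}\n        if new <= commitments:\n            return commitments\n        commitments |= new",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Extracting Discourse Commitments from Natural Language Texts",

"sec_num": "3"

},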
|
|
{ |
|
"text": "return S 16: end if 17: end loop syntactically and semantically-parsed input string, our system returns a series of output representations which can be mapped (given a set of generation heuristics) to natural language sentences which represent each of the individual commitments which can be extracted from that string. Commitments were extracted using a series of weighted regular expressions (which we present in the form of rules for convenience); weights were learned for each regular expression r \u2208 R using our implementation of (Eisner, 2002) . After each candidate commitment was processed by the FST, the natural language form of each returned commitment was then resubmitted to the FST for additional round(s) of extraction until no additional commitments could be extracted from the input string.", |
|
"cite_spans": [ |
|
{ |
|
"start": 534, |
|
"end": 548, |
|
"text": "(Eisner, 2002)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extracting Discourse Commitments from Natural Language Texts", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Text-hypothesis pairs are initially submitted to a preprocessing module which performed (1) partof-speech tagging, (2) named entity recognition, (3) syntactic dependency parsing, (4) semantic dependency parsing, (5) normalized temporal expres-sions, and (6) coreference resolution. 2 Pairs are then submitted to a sentence decomposition module, which uses a set of heuristics in order to transform complex sentences containing subordination, relative clauses, lists, and coordination into sets of well-formed simple sentences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 282, |
|
"end": 283, |
|
"text": "2", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extracting Discourse Commitments from Natural Language Texts", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Propositional Content: In order to capture assertions encoded by predicates and predicate nominals, we used semantic dependency information output by a predicate-based semantic parser to generate \"simplified\" commitments for each possible combination of their optional and obligatory arguments. (Here, arguments assigned a PropBankstyle role label of ARG m -or ARG 2 or higherwere considered to be optional arguments.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extracting Discourse Commitments from Natural Language Texts", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Supplemental Expressions: Recent work by (Potts, 2005; Huddleston and Pullum, 2002) has demonstrated that the class of supplemental expressions -including appositives, as-clauses, parentheticals, parenthetical adverbs, nonrestrictive relative clauses, and epithets -trigger conventional implicatures (CI) whose truth is necessarily presupposed, even if the truth conditions of a sentence are not satisfied. Rules to extract supplemental expressions were implemented in our weighted FST framework; generation heuristics were then used to create new sentences which specify the CI conveyed by the expression.", |
|
"cite_spans": [ |
|
{ |
|
"start": 41, |
|
"end": 54, |
|
"text": "(Potts, 2005;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 55, |
|
"end": 83, |
|
"text": "Huddleston and Pullum, 2002)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extracting Discourse Commitments from Natural Language Texts", |
|
"sec_num": "3" |
|
}, |
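
{

"text": "As an illustration of the kind of rule involved (a single unweighted regular expression, standing in for the paper's weighted FST rules), an appositive can be turned into a commitment that asserts the conventional implicature directly:\n\nimport re\n\n# Hypothetical pattern: NP, appositive, rest-of-sentence.\nAPPOSITIVE = re.compile(r'^(?P<np>[A-Z][^,]+), (?P<app>[^,]+?), (?P<rest>.+)$')\n\ndef appositive_commitment(sentence):\n    # Generate the implicated proposition as a standalone sentence.\n    m = APPOSITIVE.match(sentence)\n    return f'{m.group(\"np\")} is {m.group(\"app\")}.' if m else None\n\nprint(appositive_commitment('John Smith, the CEO of Acme, resigned.'))\n# -> John Smith is the CEO of Acme.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Extracting Discourse Commitments from Natural Language Texts",

"sec_num": "3"

},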
|
{ |
|
"text": "(3) \"The Extra Girl\" (1923) is a story of a small-town girl, Sue Graham (played by Mabel Normand) who comes to Hollywood to be in the pictures. Relation Extraction: We used an in-house, heuristic-based relation extraction system to recognize six types 3 of semantic relations between named entities, including: (1) artifact (e.g. OWNER-OF), (2) general affiliation (e.g. 2 We used publicly-available software when possible to facilitate comparison with other researchers' work in this area. We used the C&C Tools (Curran et al., 2007) to perform part-of-speech tagging and parsing and a version of the alias-i LingPipe named entity recognizer in conjunction with Language Computer Corporation's own systems for syntactic and semantic dependency parsing, named entity recognition (CiceroLite), temporal normalization, and coreference resolution.", |
|
"cite_spans": [ |
|
{ |
|
"start": 371, |
|
"end": 372, |
|
"text": "2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 513, |
|
"end": 534, |
|
"text": "(Curran et al., 2007)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extracting Discourse Commitments from Natural Language Texts", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "3 These six types were selected because they performed at better than than 70% F-Measure on a sample of t-h pairs selected from the PASCAL RTE datasets. Other relation types with lesser performance were not used in our experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extracting Discourse Commitments from Natural Language Texts", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "(3) organization affiliation (e.g. EMPLOYEE-OF), (4) part-whole, (5) social affiliation (e.g. RELATED-TO), and (6) physical location (e.g. LOCATED-NEAR) relations. Coreference Resolution: We use our own implementation of (Ng, 2005) to resolve instances of pronominal and nominal coreference in order to expand the number of commitments available to the system. After a set of co-referential entity mentions were detected (e.g. \"The Extra Girl\", this Mabel Normand vehicle), new commitments were generated from the existing set of commitments which incorporated each co-referential mention.", |
|
"cite_spans": [ |
|
{ |
|
"start": 221, |
|
"end": 231, |
|
"text": "(Ng, 2005)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LOCATION-OF),", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(5) Coreference: (\"The Extra Girl\",this Mabel Normand vehicle) a.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LOCATION-OF),", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "[\"The Extra Girl\"] [was] produced by Mack Sennett. b. [\"The Extra Girl\"] followed earlier films about the film industry. c. [\"The Extra Girl\"] also paved the way for later films about Hollywood.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LOCATION-OF),", |
|
"sec_num": null |
|
}, |
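
{

"text": "A minimal sketch of this expansion step (illustrative: coreference chains are modeled as sets of mention strings, and substitution is plain string replacement):\n\ndef expand_with_coreference(commitments, chains):\n    # For every commitment mentioning a member of a chain, emit variants\n    # with each coreferent mention substituted in, as in example (5).\n    expanded = set(commitments)\n    for c in commitments:\n        for chain in chains:\n            for mention in chain:\n                if mention in c:\n                    expanded |= {c.replace(mention, other)\n                                 for other in chain if other != mention}\n    return expanded",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Extracting Discourse Commitments from Natural Language Texts",

"sec_num": "3"

},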
|
{ |
|
"text": "We used a lightweight, knowledge-lean paraphrasing approach in order to expand the set of commitments considered by our system. In order to identify other possible linguistic encodings for each commitment -without generating a large number of spurious paraphrases which could introduce errorful knowledge into a commitment set -we focused only on generating paraphrases of two-place predicates (i.e. predicates which encode a semantic dependency between two arguments).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Paraphrasing:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The algorithm we use is presented in Algorithm 2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Paraphrasing:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Under this method, a semantic parser (trained on the semantic dependencies in PropBank and NomBank) was used to identify pairs of arguments ( a i , a j ) (where i, j \u2208 {a 0 , a 1 , a 2 , a m }) from each c; each pair of arguments identified in c are then used to generate paraphrases from sets of sentences containing both a 0 and a i . 4 The top 1000 sentences containing each pair of arguments were then retrieved from the WWW; sentences containing both arguments were then filtered and clustered into sets that were presumed to be likely paraphrases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Paraphrasing:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Algorithm 2 Paraphrase Clustering 1: Input: Pairs of arguments, ai , aj , where i, j \u2208 {a0 , a1 , a2 , am } 2: Output: Sets of paraphrased sentences, {s cl i ...cln } 3: for all pairs ai , aj do 4:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Paraphrasing:", |
|
"sec_num": null |
|
}, |
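
{

"text": "A minimal sketch of the retrieval-filtering step of Algorithm 2 (illustrative: arguments are treated as single tokens, and the retrieved sentences are assumed to be given as a list of strings):\n\ndef span_filter(sentences, a_i, a_j, lo=2, hi=8):\n    # Keep sentences where the token distance between the two arguments\n    # falls within [lo, hi], per the filtering step of Algorithm 2.\n    kept = []\n    for s in sentences:\n        toks = s.split()\n        if a_i in toks and a_j in toks:\n            d = abs(toks.index(a_i) - toks.index(a_j))\n            if lo <= d <= hi:\n                kept.append(s)\n    return kept",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Paraphrasing:",

"sec_num": null

},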
|
|
{ |
|
"text": "Add all sentences {s cl i ...cln } to commitment set C 11: end for Parameters were computed using maximum likelihood estimation (and normalized to sum to 1), based on a linear interpolation of three measures of the \"goodness\" of p (as compared to the original input sentence, s).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Paraphrasing:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "para(p|s) = \u03bbwn para(p|s)+\u03bb freq para(p|s)+\u03bb dist para(p|s)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Paraphrasing:", |
|
"sec_num": null |
|
}, |
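
{

"text": "A minimal sketch of this interpolation (illustrative: the three component scorers and the normalized weights are placeholders for the WordNet-, frequency-, and distance-based measures):\n\ndef para_score(p, s, scorers, lambdas):\n    # Linear interpolation of the three goodness measures; lambdas are\n    # assumed to be MLE-estimated and normalized to sum to 1.\n    return sum(lambdas[k] * scorers[k](p, s) for k in ('wn', 'freq', 'dist'))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Paraphrasing:",

"sec_num": null

},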
|
{ |
|
"text": "Each candidate paraphrase was then assigned a paraphrase score as in . The likelihood that a word w p from a paraphrase was a valid paraphrase of a word from an original commitment w c was computed as in (2), where p(w p ) and p(w o ) computed from the relative frequency of the occurrence of w p and w o in the set of clusters generated for c, and p(k) was computed from the frequency of each \"overlapping\" term found in the paraphrase and the original c. The top 5 paraphrases generated for each c were then used to generate new versions of each commitment.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Paraphrasing:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "ppara (wp|wo) = p(wp)p(wo) i=1 n p(ki )", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Paraphrasing:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Following Commitment Extraction, we used a lexical alignment technique first introduced in (Taskar et al., 2005b) in order to select the commitment extracted from t (henceforth, c t ) which represents the best alignment for each of the individual commitments extracted from h (henceforth, c h ). We assume that the alignment of two discourse commitments can be cast as a maximum weighted matching problem in which each pair of words (t i ,h j ) in an commitment pair (c t ,c h ) is assigned a score s ij (t, h) corresponding to the likelihood that t i is aligned to h j . 5 As with (Taskar et al., 2005b) , we use the large-margin structured prediction model introduced in (Taskar et al., 2005a) in order to compute a set of parameters w (computed with respect to a set of features f ) which maximize the number of correct alignment predictions (\u0233 i ) made given a set of training examples (x i ), as in Equation 3.", |
|
"cite_spans": [ |
|
{ |
|
"start": 91, |
|
"end": 113, |
|
"text": "(Taskar et al., 2005b)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 582, |
|
"end": 604, |
|
"text": "(Taskar et al., 2005b)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 673, |
|
"end": 695, |
|
"text": "(Taskar et al., 2005a)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Commitment Selection", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "y i = arg max\u0233 i \u2208Y w f (x i ,\u0233 i ), \u2200i", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Commitment Selection", |
|
"sec_num": "4" |
|
}, |
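
{

"text": "The matching step itself can be sketched with an off-the-shelf assignment solver (illustrative only: the paper learns the scores s_ij with the large-margin model above, which this sketch takes as a given matrix):\n\nimport numpy as np\nfrom scipy.optimize import linear_sum_assignment\n\ndef align(score_matrix):\n    # Maximum-weight bipartite matching over token-pair scores s_ij;\n    # linear_sum_assignment minimizes cost, so the scores are negated.\n    rows, cols = linear_sum_assignment(-np.asarray(score_matrix))\n    return list(zip(rows.tolist(), cols.tolist()))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Commitment Selection",

"sec_num": "4"

},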
|
{ |
|
"text": "We used three sets of features in our model: (1) string features (including Levenshtein edit distance, string equality, and stemmed string equality), (2) lexico-semantic features (including Word-Net Similarity (Pedersen et al., 2004) ), and (3) word association features (computed using the Dice coefficient (Dice, 1945) 6 ). Training data came from hand-annotated token alignments for each of the 800 entailment pairs included in the RTE-3 Development Set", |
|
"cite_spans": [ |
|
{ |
|
"start": 210, |
|
"end": 233, |
|
"text": "(Pedersen et al., 2004)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 308, |
|
"end": 322, |
|
"text": "(Dice, 1945) 6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Commitment Selection", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Following alignment, we used the sum of the edge scores ( i,j =1 n s ij (t i , h j )) computed for each of the possible (c t , c h ) pairs in order to search for the c t which represented the reciprocal best hit (Mushegian and Koonin, 1996) of each c h extracted from the hypothesis. This was performed by selecting a commitment pair (c t , c h ) where c t was the top-scoring alignment candidate for c h and c h was the top-scoring alignment candidate for c t . If no reciprocal best-hit could be found for any of the commitments extracted from the h, the system automatically returned a TE judgment of NO.", |
|
"cite_spans": [ |
|
{ |
|
"start": 212, |
|
"end": 240, |
|
"text": "(Mushegian and Koonin, 1996)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Commitment Selection", |
|
"sec_num": "4" |
|
}, |
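
{

"text": "A minimal sketch of the reciprocal-best-hit selection (illustrative: scores is assumed to be a complete dict mapping (i, j) pairs of text- and hypothesis-commitment indices to alignment scores):\n\ndef reciprocal_best_hits(scores):\n    # Return (i, j) pairs where i is the best-scoring c_t for c_h j\n    # and, reciprocally, j is the best-scoring c_h for c_t i.\n    ts = {i for i, _ in scores}\n    hs = {j for _, j in scores}\n    best_t = {j: max(ts, key=lambda i: scores[(i, j)]) for j in hs}\n    best_h = {i: max(hs, key=lambda j: scores[(i, j)]) for i in ts}\n    return [(i, j) for j, i in best_t.items() if best_h[i] == j]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Commitment Selection",

"sec_num": "4"

},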
|
{ |
|
"text": "We used a decision tree to estimate the likelihood that a commitment pair represented a valid instance of textual entailment. Confidence values associated with each leaf node (i.e. YES or NO) were normalized and used to rank examples for the official submission. Features were selected manually by performing ten-fold cross validation on the combined development sets from the three previous PASCAL RTE Challenges (2400 examples).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entailment Classification", |
|
"sec_num": "5" |
|
}, |
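
{

"text": "A minimal sketch of this classification step (illustrative: the feature matrix X and gold labels y are toy stand-ins for the alignment, dependency, and semantic/pragmatic features listed in Figure 3):\n\nimport numpy as np\nfrom sklearn.tree import DecisionTreeClassifier\n\n# Toy feature vectors for four commitment pairs and their YES/NO labels.\nX = np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]])\ny = np.array([1, 0, 1, 0])\n\nclf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)\n# Leaf-level class probabilities serve as normalized confidences for ranking.\nconfidences = clf.predict_proba(X)[:, 1]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Entailment Classification",

"sec_num": "5"

},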
|
{ |
|
"text": "Features used in our classifier were selected from a number of sources, including (Hickl et al., 2006; Zanzotto et al., 2006; .", |
|
"cite_spans": [ |
|
{ |
|
"start": 82, |
|
"end": 102, |
|
"text": "(Hickl et al., 2006;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 103, |
|
"end": 125, |
|
"text": "Zanzotto et al., 2006;", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entailment Classification", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "A partial list of the features used in the Entailment Classifier used in our system is provided in Figure 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 107, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Entailment Classification", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "ALIGNMENT FEATURES: Derived from the results of the alignment of each pair of commitments performed during Commitment Selection.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entailment Classification", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "1 LONGEST COMMON STRING: This feature represents the longest contiguous string common to both texts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entailment Classification", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "2 UNALIGNED CHUNK: This feature represents the number of chunks in one text that are not aligned with a chunk from the other 3 LEXICAL ENTAILMENT PROBABILITY: Defined as in .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entailment Classification", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "DEPENDENCY FEATURES: Computed from the semantic dependencies identified by the PropBank-and NomBank-based semantic parsers. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entailment Classification", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We evaluated the performance of our commitmentbased system for RTE against the 1600 examples found in the PASCAL RTE-2 and RTE-3 datasets. 7 Table 1 presents results from our system when trained on the 1600 examples taken from the RTE-2 and RTE-3 Test Sets.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 141, |
|
"end": 148, |
|
"text": "Table 1", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments and Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Accuracy varied significantly (p <0.05) across each of the four tasks. Performance (in terms of accuracy and average precision) was highest on the 7 Data created for the PASCAL RTE-2 and RTE-3 challenges was organized into four datasets which sought to approximate the kinds of inference required by four different NLP applications: information extraction (IE), information retrieval (IR), question-answering (QA), and summarization (SUM). The RTE-3 Test Sets includes 683 \"short\" examples and 117 \"long\" examples; the RTE-2 Test Set includes 800 \"short\" examples. QA set (88.9% accuracy) and lowest on the IE set (76.9%). The length of the text (either \"short\" or \"long\") did not significantly impact performance, however; in fact, as can be seen in Table 1 , average accuracy was nearly the same for examples featuring \"short\" or \"long\" texts.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 751, |
|
"end": 758, |
|
"text": "Table 1", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments and Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In order to quantify the impact that additional sources of training data could have on the performance an RTE system (and to facilitate comparisons with top systems like (Hickl et al., 2006) , which were trained on tens of thousands of entailment pairs), we used the techniques described in (Bensley and Hickl, 2008) to generate a large training set of 100,000 text-hypothesis pairs in order to train our entailment classifier. Unlike (Hickl et al., 2006) , we experienced only a small increase (3%) in overall accuracy when training on increasingly larger corpora of examples. While large training corpora may provide an important source of knowledge for RTE, these results suggest that our commitment extractionbased approach may nullify the gains in performance seen by pure classification-based approaches. We believe that by training an entailment classification model based on the output of a commitment extraction module, we can reduce the number of deleterious features included in the model -and thereby, reduce the overall number of training examples needed to achieve the same level of performance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 170, |
|
"end": 190, |
|
"text": "(Hickl et al., 2006)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 291, |
|
"end": 316, |
|
"text": "(Bensley and Hickl, 2008)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 435, |
|
"end": 455, |
|
"text": "(Hickl et al., 2006)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In a second experiment, we investigated the performance gains that could be attributed to the choice of weighting function used to select commitments from a commitment set. In order to", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "With Paraphrasing \u2206 Term overlap (Zanzotto et al., 2006) 0.5950 0.6750 +0.0800 Approximate Tree Edit Distance (Schilder and McInnes, 2006) 0.6550 0.5933 -0.0617 LEP 0.6000 0.6800 +0.0800 Lexical Similarity (Adams, 2006) 0.6200 0.6788 +0.0588 Graph Matching (MacCartney et al., 2006) 0.6433 0.6533 +0.0100 Classification-Based Alignment (Hickl et al., 2006) 0.7650 0.7700 + 0.0050 Structured Prediction-Based Alignment (Taskar et al., 2005a) 0.7900 0.8493 + 0.0593 Table 3 : Impact of Commitment Selection Learning.", |
|
"cite_spans": [ |
|
{ |
|
"start": 33, |
|
"end": 56, |
|
"text": "(Zanzotto et al., 2006)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 110, |
|
"end": 138, |
|
"text": "(Schilder and McInnes, 2006)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 206, |
|
"end": 219, |
|
"text": "(Adams, 2006)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 248, |
|
"end": 282, |
|
"text": "Matching (MacCartney et al., 2006)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 336, |
|
"end": 356, |
|
"text": "(Hickl et al., 2006)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 418, |
|
"end": 440, |
|
"text": "(Taskar et al., 2005a)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 464, |
|
"end": 471, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Approach Without Paraphrasing", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "perform this comparison, we implemented a total of 7 different functions previously investigated by teams participating in the previous PASCAL RTE Challenges, including (1) a simple term-overlap measure introduced as a baseline in (Zanzotto et al., 2006) , (2) the approximate tree edit distance metric used by (Schilder and McInnes, 2006) , 's measure of lexical entailment probability (LEP), (4) the lexical similarity measure described in (Adams, 2006) , (5) our interpretation of the semantic graph-matching approach described in (MacCartney et al., 2006) , (6) the classification-based term alignment approach described in (Hickl et al., 2006) , and (7) the structured prediction-based alignment approach introduced in this paper. Results from this 7-way comparison are presented in Table 3 . While we found that the choice of mechanism used to weight commitments did significantly impact RTE performance (p < 0.05), the inclusion of generated paraphrases appeared to only have a slight positive impact on overall performance, boosting RTE accuracy by an average of 3.1% (across the 7 methods), and by a total of 5.9% in the approach we describe in this paper. While paraphrasing can be used to enhance the performance of some current RTE systems (see performance of the Tree Edit Distance-based RTE system for a case where paraphrasing negatively impacted performance), realized gains are still relatively modest across most approaches.", |
|
"cite_spans": [ |
|
{ |
|
"start": 231, |
|
"end": 254, |
|
"text": "(Zanzotto et al., 2006)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 311, |
|
"end": 339, |
|
"text": "(Schilder and McInnes, 2006)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 442, |
|
"end": 455, |
|
"text": "(Adams, 2006)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 534, |
|
"end": 559, |
|
"text": "(MacCartney et al., 2006)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 628, |
|
"end": 648, |
|
"text": "(Hickl et al., 2006)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 788, |
|
"end": 795, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Approach Without Paraphrasing", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In a third experiment, we found that RTE performance depended on both the type -and the number -of commitments extracted from a t-h pair, regardless of the learning algorithm used in commitment selection. Table 4 presents results from experiments when (1) no commitment extraction was conducted, (2) extraction strategies were run in isolation 9 , (3) combinations of extraction strategies were considered, or (4) all of possible extraction strategies (listed in Section 3) were considered. The best-performing condition for each RTE 9 Four strategies were considered: (1) syntactic decomposition (SD), (2) supplemental expression (SE) extraction, (3) coreference resolution (Coref), (4) and paraphrasing (Para). strategy is presented in bold.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 205, |
|
"end": 212, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Approach Without Paraphrasing", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Increasing the amount of linguistic knowledge available from a commitment set did significantly (p < 0.05) impact the performance of RTE, regardless of the actual learning algorithm used to compute the likelihood of an entailment relationship. Combining all four extraction strategies proved best for four RTE approaches (Term Overlap, Lexical Entailment Probability, Graph Matching, and Structured Prediction-based Alignment). Including coreference-based commitments reduced the accuracy of two RTE strategies (Lexical Similarity and Classification-based Alignment). This is most likely due to the fact that coreference-based commitments reincorporate the antecedent of pronouns and other referring expressions into the generated commitments, thereby adding additional lexical information to the sets of features used in computing entailment.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach Without Paraphrasing", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This paper introduced a new framework for recognizing textual entailment which depends on the extraction of the discourse commitments that can be inferred from a conventional interpretation of a text passage. By explicitly enumerating the set of inferences that can be drawn from a t or h, our approach is able to reduce the task of RTE to the identification of the set of commitments that support the inference of each corresponding commitment extracted from a hypothesis. This approach correctly classified more than 80% of examples from the PASCAL RTE Test Sets, without the need for additional sources of training data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Under our approach, commitments are extracted from both the t and the h. Our system failed to extract any commitments from the h used in this example, however.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Arguments in PropBank and NomBank are assigned an index corresponding to their semantic role.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In order to ensure that content from the h is reflected in the t, we assume that each word from the h is aligned to exactly one or zero words from the t.6 The Dice coefficient was computed as Dice(i) =2C th (i) C t (i)C h (i), where C th is equal to the number of times a word i was found in both the t and an h of a single entailment pair, while Ct and C h were equal to the number of times a word was found in any t or h, respectively. A hand-crafted corpus of 100,000 entailment pairs was used to compute values for Ct , C h , and C th .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
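
{

"text": "A minimal sketch of the footnoted coefficient, under the assumption that the flattened denominator was originally the standard Dice sum:\n\ndef dice(c_th, c_t, c_h):\n    # Dice(i) = 2 * C_th(i) / (C_t(i) + C_h(i))\n    return 2.0 * c_th / (c_t + c_h)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "",

"sec_num": null

},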
|
{ |
|
"text": "ENTITY-ARG MATCH: This is a boolean feature which fires when aligned entities were assigned the same argument role label.2 ENTITY-NEAR-ARG MATCH: This feature is collapsing the arguments Arg1 and Arg2 (as well as the ArgM subtypes) into single categories for the purpose of counting matches.3 PREDICATE-ARG MATCH: This boolean feature is flagged when at least two aligned arguments have the same role.4 PREDICATE-NEAR-ARG MATCH:This feature is collapsing the arguments Arg1 and Arg2 (as well as the ArgM subtypes) into single categories for the purpose of counting matches. SEMANTIC/PRAGMATIC FEATURES: Extracted during preprocessing.1 NAMED ENTITY CLASS: This feature has a different value for each of the 150 named entity classes.2 TEMPORAL NORMALIZATION: This boolean feature is flagged when the temporal expressions are normalized to the same ISO 8601 equivalents.3 MODALITY MARKER:This boolean feature is flagged when the two texts use the same modal verbs.4 SPEECH-ACT: This boolean feature is flagged when the lexicons indicate the same speech act in both texts.5 FACTIVITY MARKER:This boolean feature is flagged when the factivity markers indicate either TRUE or FALSE in both texts simultaneously.6 BELIEF MARKER: This boolean feature is set when the belief markers indicate either TRUE or FALSE in both texts simultaneously.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In the RTE evaluations, accuracy is defined as the percentage of entailment judgments correctly identified by the system. Average precision is defined as \"the average of the system's precision values at all points in the ranked list in which recall increases\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This material is based upon work funded in whole or in part by the U.S. Government and any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the U.S. Government.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Textual entailment through extended lexical overlap", |
|
"authors": [ |
|
{ |
|
"first": "Rod", |
|
"middle": [], |
|
"last": "Adams", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Second PASCAL Recognising Textual Entailment Challenge", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adams, Rod. 2006. Textual entailment through extended lexical overlap. In Proceedings of the Second PASCAL Recognising Textual Entailment Challenge (RTE-2).", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Table 4: Impact of Commitment Availability", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Table 4: Impact of Commitment Availability.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Unsupervised Resource Creation for Textual Inference Applications", |
|
"authors": [ |
|
{ |
|
"first": "Jeremy", |
|
"middle": [], |
|
"last": "Bensley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Hickl", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "LREC 2008", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bensley, Jeremy and Andrew Hickl. 2008. Unsupervised Resource Creation for Textual Inference Applications. In LREC 2008, Marrakech.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "When logical inference helps in determining textual entailment (and when it doesn't)", |
|
"authors": [ |
|
{ |
|
"first": "Johan", |
|
"middle": [], |
|
"last": "Bos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katya", |
|
"middle": [], |
|
"last": "Markert", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Second PASCAL Recognizing Textual Entailment Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bos, Johan and Katya Markert. 2006. When logical infer- ence helps in determining textual entailment (and when it doesn't). In Proceedings of the Second PASCAL Recogniz- ing Textual Entailment Conference, Venice, Italy.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Linguistically Motivated Large-Scale NLP with C-and-C and Boxer", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Curran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johan", |
|
"middle": [], |
|
"last": "Bos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "ACL 2007 (Demonstration Session)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Curran, James, Stephen Clark, and Johan Bos. 2007. Lin- guistically Motivated Large-Scale NLP with C-and-C and Boxer. In ACL 2007 (Demonstration Session), Prague.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "The PASCAL Recognizing Textual Entailment Challenge", |
|
"authors": [ |
|
{ |
|
"first": "Ido", |
|
"middle": [], |
|
"last": "Dagan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Glickman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bernardo", |
|
"middle": [], |
|
"last": "Magnini", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the PASCAL Challenges Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dagan, Ido, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL Recognizing Textual Entailment Challenge. In Proceedings of the PASCAL Challenges Workshop.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Measures of the Amount of Ecologic Association Between Species", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Dice", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1945, |
|
"venue": "In Journal of Ecology", |
|
"volume": "26", |
|
"issue": "", |
|
"pages": "297--302", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dice, L.R. 1945. Measures of the Amount of Ecologic As- sociation Between Species. In Journal of Ecology, vol- ume 26, pages 297-302.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "A Probabilistic Setting and Lexical Co-occurrence Model for Textual Entailment", |
|
"authors": [ |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Glickman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ido", |
|
"middle": [], |
|
"last": "Dagan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the ACL Workshop on Empirical Modeling of Semantic Equivalence and Entailment", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Glickman, Oren and Ido Dagan. 2005. A Probabilistic Set- ting and Lexical Co-occurrence Model for Textual Entail- ment. In Proceedings of the ACL Workshop on Empirical Modeling of Semantic Equivalence and Entailment, Ann Arbor, USA.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Web based textual entailment", |
|
"authors": [ |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Glickman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ido", |
|
"middle": [], |
|
"last": "Dagan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Moshe", |
|
"middle": [], |
|
"last": "Koppel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the First PAS-CAL Recognizing Textual Entailment Work-shop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Glickman, Oren, Ido Dagan, and Moshe Koppel. 2005. Web based textual entailment. In Proceedings of the First PAS- CAL Recognizing Textual Entailment Work-shop.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "True to Form: Rising and Falling Declaratives as Questions in English", |
|
"authors": [ |
|
{ |
|
"first": "Christine", |
|
"middle": [], |
|
"last": "Gunlogson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gunlogson, Christine. 2001. True to Form: Rising and Falling Declaratives as Questions in English. Ph.D. thesis, University of California, Santa Cruz.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Robust textual inference via graph matching", |
|
"authors": [ |
|
{ |
|
"first": "Aria", |
|
"middle": [], |
|
"last": "Haghighi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "387--394", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Haghighi, Aria, Andrew Ng, and Christopher Manning. 2005. Robust textual inference via graph matching. In Pro- ceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 387-394.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Recognizing Textual Entailment with LCC's Groundhog System", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Hickl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Williams", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeremy", |
|
"middle": [], |
|
"last": "Bensley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kirk", |
|
"middle": [], |
|
"last": "Roberts", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bryan", |
|
"middle": [], |
|
"last": "Rink", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ying", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Second PASCAL Challenges Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hickl, Andrew, John Williams, Jeremy Bensley, Kirk Roberts, Bryan Rink, and Ying Shi. 2006. Recognizing Textual En- tailment with LCC's Groundhog System. In Proceedings of the Second PASCAL Challenges Workshop.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "The Cambridge Grammar of the English Language", |
|
"authors": [ |
|
{ |
|
"first": "Rodney", |
|
"middle": [], |
|
"last": "Huddleston", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Pullum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Huddleston, Rodney and Geoffrey Pullum, editors, 2002. The Cambridge Grammar of the English Language. Cam- bridgeUniversity Press.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Recognizing Textual Entailment Using Lexical Similarity", |
|
"authors": [ |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Jijkoun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "De Rijke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the First PASCAL Challenges Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jijkoun, V. and M. de Rijke. 2005. Recognizing Textual En- tailment Using Lexical Similarity. In Proceedings of the First PASCAL Challenges Workshop.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Learning to recognize features of valid textual entailments", |
|
"authors": [ |
|
{ |
|
"first": "Bill", |
|
"middle": [], |
|
"last": "Maccartney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trond", |
|
"middle": [], |
|
"last": "Grenager", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie-Catherine", |
|
"middle": [], |
|
"last": "De Marneffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Cer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Human Language Technology Conference of the NAACL, Main Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "41--48", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "MacCartney, Bill, Trond Grenager, Marie-Catherine de Marn- effe, Daniel Cer, and Christopher D. Manning. 2006. Learning to recognize features of valid textual entailments. In Proceedings of the Human Language Technology Con- ference of the NAACL, Main Conference, pages 41-48, New York City, USA, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "A minimal gene set for cellular life derived by compraison of complete bacterial genomes", |
|
"authors": [ |
|
{ |
|
"first": "Arcady", |
|
"middle": [], |
|
"last": "Mushegian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eugene", |
|
"middle": [], |
|
"last": "Koonin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of the National Academies of Science", |
|
"volume": "93", |
|
"issue": "", |
|
"pages": "10268--10273", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mushegian, Arcady and Eugene Koonin. 1996. A minimal gene set for cellular life derived by compraison of com- plete bacterial genomes. In Proceedings of the National Academies of Science, volume 93, pages 10268-10273.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Word-Net::Similarity -Measuring the Relatedness of Concepts", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Pedersen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Patwardhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Michelizzi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the Nineteenth National Conference on Artificial Intelligence (AAAI-04)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pedersen, T., S. Patwardhan, and J. Michelizzi. 2004. Word- Net::Similarity -Measuring the Relatedness of Concepts. In Proceedings of the Nineteenth National Conference on Artificial Intelligence (AAAI-04), San Jose, CA.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "The Logic of Conventional Implicatures", |
|
"authors": [ |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Potts, Christopher, editor, 2005. The Logic of Conventional Implicatures. Oxford University Press.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "TLR at DUC 2006: Approximate Tree Similarity and a New Evaluation Regime", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Schilder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Thomson Mcinnes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of HLT-NAACL Document Understanding Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Schilder, F. and B. Thomson McInnes. 2006. TLR at DUC 2006: Approximate Tree Similarity and a New Evaluation Regime. In Proceedings of HLT-NAACL Document Un- derstanding Workshop (DUC 2006).", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Instance-based evaluation of entailment rule acquisition", |
|
"authors": [], |
|
"year": null, |
|
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "456--463", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Instance-based evaluation of entailment rule acquisition. In Proceedings of the 45th Annual Meeting of the Associa- tion of Computational Linguistics, pages 456-463, Prague, Czech Republic, June. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Structured prediction via the extragradient method", |
|
"authors": [ |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Taskar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simone", |
|
"middle": [], |
|
"last": "Lacoste-Julien", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Jordan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Taskar, Ben, Simone Lacoste-Julien, and Michael Jordan. 2005a. Structured prediction via the extragradient method. In Proceedings of Neural Information Processing Systems.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "A discriminative matching approach to word alignment", |
|
"authors": [ |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Taskar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simone", |
|
"middle": [], |
|
"last": "Lacoste-Julien", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of Human Language Technology Conference and Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Taskar, Ben, Simone Lacoste-Julien, and Dan Klein. 2005b. A discriminative matching approach to word alignment. In Proceedings of Human Language Technology Conference and Empirical Methods in Natural Language Processing (HLT/EMNLP 2005).", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Microsoft Research at RTE-2: Syntactic Contributions in the Entailment Task: an implementation", |
|
"authors": [ |
|
{ |
|
"first": "Lucy", |
|
"middle": [], |
|
"last": "Vanderwende", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arul", |
|
"middle": [], |
|
"last": "Menezes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rion", |
|
"middle": [], |
|
"last": "Snow", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Second PASCAL Challenges Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vanderwende, Lucy, Arul Menezes, and Rion Snow. 2006. Microsoft Research at RTE-2: Syntactic Contributions in the Entailment Task: an implementation. In Proceedings of the Second PASCAL Challenges Workshop.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Learning synchronous grammars for semantic parsing with lambda calculus", |
|
"authors": [ |
|
{ |

"first": "Yuk", |

"middle": [ |

"Wah" |

], |

"last": "Wong", |

"suffix": "" |

}, |

{ |

"first": "Raymond", |

"middle": [], |

"last": "Mooney", |

"suffix": "" |

} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "960--967", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wong, Yuk Wah and Raymond Mooney. 2007. Learning syn- chronous grammars for semantic parsing with lambda cal- culus. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 960-967, Prague, Czech Republic, June. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Learning textual entailment from examples", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Zanzotto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Moschitti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Pennacchiotti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Pazienza", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Second PASCAL Challenges Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zanzotto, F., A. Moschitti, M. Pennacchiotti, and M. Pazienza. 2006. Learning textual entailment from examples. In Proceedings of the Second PASCAL Challenges Workshop.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of UAI-05", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zettlemoyer, L. S. and M. Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Proceedings of UAI- 05.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"text": "Two Architectures of RTE Systems.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"text": "Text Commitments Extracted from Example 4 (RTE-3).", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"text": "Sue Graham [came] to Hollywood to be in the pictures. a. location-of: Sue Graham [was located in] Hollywood. b. location-of: The pictures [were located in] Hollywood.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF3": { |
|
"type_str": "figure", |
|
"text": "Features used in the Entailment Classifier", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null, |
|
"text": "T1. \"The Extra Girl\" [took place in] 1923. T2. \"The Extra Girl\" is a story of a small\u2212town girl. T3. \"The Extra Girl\" is a story of Sue Graham. T4. Sue Graham is a small\u2212town girl. T5. Sue Graham[was] played by Mabel Normand. T6. Sue Graham comes to Hollywood to be in the pictures. T11. Mack Sennett is a producer. T7. Sue Graham [was located in] Hollywood. T8. A Mabel Normand vehicle was produced by Mack Sennett. T14. [There were] films about the film industry [before] a Mabel Normand vehicle. T13. A Mabel Normand vehicle paved the way for later films about Hollywood. T12. A Mabel Normand vehicle followed earlier films about the film industry. T15. [There were] films about Hollywood [after] a Mabel Normand vehicle.T16. \"The Extra Girl\" followed earlier films about the film industry. T17. \"The Extra Girl\" paved the way for later films about Hollywood. T18. [There were] films about the film industry [before] \"The Extra Girl\". T19. [There were] films about Hollywood [after] \"The Extra Girl\". T20. King Vidor [was associated with] \"Show People\". T21. \"Show People\" [took place in] 1928. T22. \"Show People\" was a film about Hollywood." |
|
}, |
|
"TABREF3": { |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null, |
|
"text": "Performance of Commitment-based RTE." |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"3\">sum-marizes the performance of our RTE system on the RTE-2 and RTE-3 Test Sets when trained on in-creasing amounts of training data. 8</td></tr><tr><td>Training Corpus 800 pairs (RTE-2 Dev) 10,000 pairs 25,000 pairs 50,000 pairs 100,000 pairs</td><td>Accuracy 0.8493 0.8550 0.8489 0.8575 0.8850</td><td>Average Precision 0.8611 0.8742 0.8322 0.8505 0.8785</td></tr></table>", |
|
"html": null, |
|
"text": "" |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null, |
|
"text": "Impact of Training Corpus Size." |
|
} |
|
} |
|
} |
|
} |