{
"paper_id": "N16-1011",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:38:11.746139Z"
},
"title": "Knowledge-Guided Linguistic Rewrites for Inference Rule Verification",
"authors": [
{
"first": "Prachi",
"middle": [],
"last": "Jain",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "A corpus of inference rules between a pair of relation phrases is typically generated using the statistical overlap of argument-pairs associated with the relations (e.g., PATTY, CLEAN). We investigate knowledge-guided linguistic rewrites as a secondary source of evidence and find that they can vastly improve the quality of inference rule corpora, obtaining 27 to 33 point precision improvement while retaining substantial recall. The facts inferred using cleaned inference rules are 29-32 points more accurate.",
"pdf_parse": {
"paper_id": "N16-1011",
"_pdf_hash": "",
"abstract": [
{
"text": "A corpus of inference rules between a pair of relation phrases is typically generated using the statistical overlap of argument-pairs associated with the relations (e.g., PATTY, CLEAN). We investigate knowledge-guided linguistic rewrites as a secondary source of evidence and find that they can vastly improve the quality of inference rule corpora, obtaining 27 to 33 point precision improvement while retaining substantial recall. The facts inferred using cleaned inference rules are 29-32 points more accurate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The visions of machine reading (Etzioni, 2007) and deep language understanding (Dorr, 2012) emphasize the ability to draw inferences from text to discover implicit information that may not be explicitly stated (Schubert, 2002) . This has natural applications to textual entailment (Dagan et al., 2013) , KB completion (Socher et al., 2013) , and effective querying over Knowledge Bases (KBs) .",
"cite_spans": [
{
"start": 31,
"end": 46,
"text": "(Etzioni, 2007)",
"ref_id": "BIBREF7"
},
{
"start": 79,
"end": 91,
"text": "(Dorr, 2012)",
"ref_id": "BIBREF5"
},
{
"start": 210,
"end": 226,
"text": "(Schubert, 2002)",
"ref_id": "BIBREF22"
},
{
"start": 281,
"end": 301,
"text": "(Dagan et al., 2013)",
"ref_id": "BIBREF3"
},
{
"start": 318,
"end": 339,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF23"
},
{
"start": 386,
"end": 391,
"text": "(KBs)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One popular approach for fact inference is to use a set of inference rules along with probabilistic models such as Markov Logic Networks (Schoenmackers et al., 2008) or Bayesian Logic Programs (Raghavan et al., 2012) to produce humaninterpretable proof chains. While scalable (Niu et al., 2011; Domingos and Webb, 2012) , this is bound by the coverage and quality of the background knowledge -the set of inference rules that enable the inference (Clark et al., 2014) .",
"cite_spans": [
{
"start": 137,
"end": 165,
"text": "(Schoenmackers et al., 2008)",
"ref_id": "BIBREF20"
},
{
"start": 193,
"end": 216,
"text": "(Raghavan et al., 2012)",
"ref_id": "BIBREF16"
},
{
"start": 276,
"end": 294,
"text": "(Niu et al., 2011;",
"ref_id": "BIBREF14"
},
{
"start": 295,
"end": 319,
"text": "Domingos and Webb, 2012)",
"ref_id": "BIBREF4"
},
{
"start": 446,
"end": 466,
"text": "(Clark et al., 2014)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Consequent Y/N? (X, make a note of, Y) (X, write down, Y) Y (X, offer wide range of, Y) (X, offer variety of, Y) Y (X, make full use of, Y) (Y, be used by, X) Y (X, be wounded in, Y) (X, be killed in, Y) N (X, be director of, Y) (X, be vice president of, Y) N (X, be a student at, Y) (X, be enrolled at, Y) N Figure 1 : Sample rules verified (Y) and filtered (N) by our method. Rules #4, #5 were correctly and #6 wrongly filtered.",
"cite_spans": [],
"ref_spans": [
{
"start": 309,
"end": 317,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Antecedent",
"sec_num": null
},
{
"text": "The paper focuses on generating a high precision subset of inference rules over Open Information Extraction (OpenIE) (Etzioni et al., 2011) relation phrases (see Fig 1) . OpenIE systems generate a schema-free KB where entities and relations are represented via normalized but not disambiguated textual strings. Such OpenIE KBs scale to the Web.",
"cite_spans": [
{
"start": 117,
"end": 139,
"text": "(Etzioni et al., 2011)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 162,
"end": 168,
"text": "Fig 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Antecedent",
"sec_num": null
},
{
"text": "Most existing large-scale corpora of inference rules are generated using distributional similarity, like argument-pair overlap (Schoenmackers et al., 2010; , but often eschew any linguistic or compositional insights. Our early analysis revealed that such inference rules have very low precision, not enough to be useful for many real tasks. For human-facing applications (such as IE-based demos), high precision is critical. Inference rules have a multiplicative impact, since one poor rule could potentially generate many bad KB facts. Contributions: We investigate the hypothesis that \"knowledge-guided linguistic rewrites can provide independent verification for statistically-generated Open IE inference rules.\" Our system KGLR's rewrites exploit the compositional structure of Open IE relation phrases alongside knowledge in resources like Wordnet and thesaurus. KGLR independently verifies rules from existing inference rule corpora Pavlick et al., 2015) and can be seen as additional annotation on existing inference rules. The verified rules are 27 to 33 points more accurate than the original corpora and still retain a substantial recall. The precision of inferred knowledge also has a precision boost of over 29 points. We release our KGLR implementation, its annotations on two popular rule corpora along with gold set used for evaluation and the annotation guidelines for further use (available at https://github.com/dair-iitd/kglr.git).",
"cite_spans": [
{
"start": 127,
"end": 155,
"text": "(Schoenmackers et al., 2010;",
"ref_id": "BIBREF21"
},
{
"start": 939,
"end": 960,
"text": "Pavlick et al., 2015)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Antecedent",
"sec_num": null
},
{
"text": "Methods for inference over text include random walks over knowledge graphs (Lao et al., 2011) , matrix completion (Riedel et al., 2013) , deep neural networks (Socher et al., 2013; Rockt\u00e4schel et al., 2015a) , natural logic inference (MacCartney and Manning, 2007) and graphical models (Schoenmackers et al., 2008; Raghavan et al., 2012) . Most of these need (or benefit from) a background knowledge of inference rules, including matrix completion (Rockt\u00e4schel et al., 2015b) .",
"cite_spans": [
{
"start": 75,
"end": 93,
"text": "(Lao et al., 2011)",
"ref_id": "BIBREF10"
},
{
"start": 114,
"end": 135,
"text": "(Riedel et al., 2013)",
"ref_id": "BIBREF17"
},
{
"start": 159,
"end": 180,
"text": "(Socher et al., 2013;",
"ref_id": "BIBREF23"
},
{
"start": 181,
"end": 207,
"text": "Rockt\u00e4schel et al., 2015a)",
"ref_id": "BIBREF18"
},
{
"start": 234,
"end": 264,
"text": "(MacCartney and Manning, 2007)",
"ref_id": "BIBREF12"
},
{
"start": 286,
"end": 314,
"text": "(Schoenmackers et al., 2008;",
"ref_id": "BIBREF20"
},
{
"start": 315,
"end": 337,
"text": "Raghavan et al., 2012)",
"ref_id": "BIBREF16"
},
{
"start": 448,
"end": 475,
"text": "(Rockt\u00e4schel et al., 2015b)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Inference rules are predominantly generated via extended distributional similarity -two phrases having a high degree of argument overlap are similar, and thus candidates for a unidirectional or a bidirectional inference rule. Methods vary on the base representation, e.g., KB relations (Gal\u00e1rraga et al., 2013; Grycner et al., 2015) , Open IE relation phrases (Schoenmackers et al., 2010) , syntacticontological-lexical (SOL) patterns (Nakashole et al., 2012) , and dependency paths (Lin and Pantel, 2001 ). An enhancement is global transitivity (TNCF algorithm) for improving recall . The highest precision setting of TNCF (\u03bb = 0.1) was released as a corpus (informally called CLEAN) of Open IE inference rules. 1 Distributional similarity approaches have two fundamental limitations. First, they miss obvious commonsense facts, e.g., (X, married, Y) \u2192 (X, knows, Y) -text will rarely say that a couple know each other. Second, they are consistently affected by statistical noise and end up generating a wide variety of inaccurate rules (see rules #4, and #5 in Figure 1 ).",
"cite_spans": [
{
"start": 273,
"end": 310,
"text": "KB relations (Gal\u00e1rraga et al., 2013;",
"ref_id": null
},
{
"start": 311,
"end": 332,
"text": "Grycner et al., 2015)",
"ref_id": "BIBREF9"
},
{
"start": 360,
"end": 388,
"text": "(Schoenmackers et al., 2010)",
"ref_id": "BIBREF21"
},
{
"start": 435,
"end": 459,
"text": "(Nakashole et al., 2012)",
"ref_id": "BIBREF13"
},
{
"start": 483,
"end": 504,
"text": "(Lin and Pantel, 2001",
"ref_id": "BIBREF11"
},
{
"start": 713,
"end": 714,
"text": "1",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1063,
"end": 1071,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Our early experiments with CLEAN revealed its precision to be about 0.49, not enough to be useful in practice, especially for human-facing applications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Similar to our paper, some past works have used alternative sources of knowledge. Weisman et al. (2012) study inference between verbs (e.g., startle \u2192 surprise ), but they get low (0.4) precision. Wordnet corpus to generate inference rules for natural logic (Angeli and Manning, 2014) improved noun-based inference. But, they recognize relation entailments as a key missing piece. Recently, natural logic semantics is added to a paraphrase corpus (PPDB2.0). Many of their features, e.g., lexical/orthographic, multilingual translation based, are complimentary to our method.",
"cite_spans": [
{
"start": 258,
"end": 284,
"text": "(Angeli and Manning, 2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "We test our KGLR algorithm on CLEAN and entailment/paraphrase subset of PPDB2.0 (which we call PPDB e ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "3 Knowledge-Guided Linguistic Rewrites",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "(KGLR) Given a rule (X, r 1 , Y) \u2192 (X, r 2 , Y) or (X, r 1 , Y) \u2192 (Y, r 2 , X)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "we present KGLR, a series of rewrites of relation phrase r 1 to prove r 2 (examples in Fig 1) . The last two rewrites deal with reversal of argument order in r 2 ; others are for the first case.",
"cite_spans": [],
"ref_spans": [
{
"start": 87,
"end": 93,
"text": "Fig 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Thesaurus Synonyms: Thesauri typically provide an expansive set of potential synonyms, encompassing near-synonyms and contextually synonymous words. Thesaurus synonyms are not that helpful for generating inference rules (or else we will generate rules like produce \u2192 percolate ). However, they are excellent in rule verification as they provide evidence independent from statistical overlap metrics. We allow any word/phrase w 1 in r 1 to be replaced by any word/phrase w 2 from its thesaurus synsets as long as (1) w 2 has same part-of-speech as w 1 and (2) w 2 is seen in r 2 at the same distance from left of the phrase as w 1 in phrase r 1 , but ignoring words dropped due to other rules whose details follows next. To define a thesaurus synset, we tag w 1 with its POS and look for all thesaurus synsets of that POS containing w 1 . We allow this rewrite if PMI(w 1 , w 2 ) > \u03bb (=-2.5 based on a devset). We calculate PMI as log (#w 1 occurs in synsets of w 2 +#w 2 occurs in synsets of w 1 ) (# of synsets of w 1 \u00d7# of synsets of w 2 ) . Some words can be both synonyms and antonyms in different situations. For example, thesaurus lists 'bad' as both a synonym and antonym of 'good'. We don't allow such antonyms in these rewrites.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
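{
"text": "As an illustration, the PMI filter described above can be sketched in a few lines of Python. This is a minimal sketch, not the authors' implementation: the toy THESAURUS dictionary and the function names are hypothetical; only the PMI formula and the devset-tuned threshold lambda = -2.5 come from the text.

```python
import math

# Hypothetical toy thesaurus: word -> list of synsets (sets of synonyms).
THESAURUS = {
    'offer': [{'provide', 'give'}, {'bid', 'tender'}],
    'provide': [{'offer', 'give'}],
}

def pmi(w1, w2, thesaurus=THESAURUS):
    # PMI as defined in the text: log of (# synsets of w2 containing w1
    # + # synsets of w1 containing w2) over (# synsets of w1 * # synsets of w2).
    s1, s2 = thesaurus.get(w1, []), thesaurus.get(w2, [])
    co = sum(w1 in s for s in s2) + sum(w2 in s for s in s1)
    if not s1 or not s2 or co == 0:
        return float('-inf')
    return math.log(co / (len(s1) * len(s2)))

LAMBDA = -2.5  # devset-tuned threshold from the paper

def synonym_rewrite_allowed(w1, w2):
    return pmi(w1, w2) > LAMBDA
```

In this toy thesaurus, pmi('offer', 'provide') evaluates to log(2/2) = 0, which clears the threshold, so the synonym rewrite would be allowed; a pair with no shared synsets gets negative infinity and is rejected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},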
{
"text": "Thesarus synonyms can verify offer a vast range of \u2192 provide a wide range of , since offer-provide, and vast-wide are thesaurus synonyms. We use Roget's 21 st Century Thesaurus in KGLR implementation. Negating rules: We reject rules where r 2 explicitly negates r 1 or vice versa. We reject a rule if r 2 is same as r 1 if we drop 'not' from one of them. For example, the rule be the president of \u2192 be not the president of , will be rejected. Wordnet Hypernyms: We replace word/phrase w in r 1 by its Wordnet hypernym if it is in r 2 . We prove be highlight of \u2192 be component of , as Wordnet lists 'component' as a hypernym of 'highlight'. Dropping Modifiers: We drop any adjective, adverb, superlatives or comparatives (e.g., 'more', 'most') from r 1 . This lets us verify be most important part of \u2192 be part of . Gerund-Infinitive Equivalence: We convert infinitive constructions into gerunds and vice versa. For example, starts to drink \u2194 starts drinking . Deverbal Nouns: We use Wordnet's derivationally related forms to compute a verb-noun pair list. We allow back and forth conversions from \"be noun of\" to related verb. So, we verify be cause of \u2192 cause . Light Verbs and Serial Verbs: If a light verb precede a word with derivationally related noun sense, we delete it. Similarly, if a serial verb precede a word with derivationally related verb sense, we delete it. We identify light verbs via the verbs that frequently precede a (a|an) (verb|deverbal noun) pair in Wikipedia. Serial verbs are identified as the verbs that frequently precede another verb in Wikipedia. Thus we can convert take a look at \u2192 look at . Preposition Synonyms: We manually create a list of preposition near-synonyms such as into-to, in-at, atnear. We replace a preposition by its near-synonym. This proves translated into \u2192 translated to . Be-Words & Determiners: We drop be-words ('is', 'was', 'be', etc.) and determiners from r 1 and r 2 . 
Active-Passive: We allow (X, verb, Y) to be rewritten as (Y, be verb by, X). Redundant Prepositions: We find that often prepositions other than 'by' can be alternatively used with passive forms of some verbs. Moreover, some prepositions can be redundantly used in active forms too. For example, (X, absorb, Y) \u2194 (Y, be absorbed in, X) , or similarly, (X, attack, Y) \u2194 (X, attack on, Y) . To create such a list of verb-preposition pairs, we simply trust the argument-overlap statistics. Statistics here does not make that many errors since the base verb in both relations is the same.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
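{
"text": "The Active-Passive and Redundant Prepositions rewrites above can be sketched on (X, relation, Y) triples. This is a minimal sketch under stated assumptions: the PARTICIPLES table and the verb-preposition set below are tiny hypothetical stand-ins for real participle morphology and the Wikipedia/statistics-derived verb-preposition list.

```python
# Hypothetical irregular-participle table; regular verbs just get '-ed'.
PARTICIPLES = {'write': 'written', 'take': 'taken'}

def past_participle(verb):
    return PARTICIPLES.get(verb, verb + 'ed')

def active_to_passive(triple):
    # Active-Passive rewrite: (X, verb, Y) -> (Y, be <participle> by, X)
    x, verb, y = triple
    return (y, 'be ' + past_participle(verb) + ' by', x)

def drop_redundant_preposition(triple, verb_preps=frozenset({('attack', 'on')})):
    # Redundant Prepositions rewrite: (X, attack on, Y) -> (X, attack, Y)
    # for verb-preposition pairs known to be interchangeable.
    x, rel, y = triple
    words = rel.split()
    if len(words) == 2 and tuple(words) in verb_preps:
        return (x, words[0], y)
    return triple
```

For example, active_to_passive(('X', 'absorb', 'Y')) yields ('Y', 'be absorbed by', 'X'), mirroring the absorb example in the text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},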
{
"text": "KGLR allows repeated application of these rewrites to modify r 1 and r 2 . If it achieves r 1 = r 2 it verifies the inference rule. For tractable implementation KGLR uses a depth first search approach where a search node maintains both r 1 and r 2 . Search does not allow rewrites that introduce any new lexical (lemmatized) entries not in original words(r 1 ) \u222a words(r 2 ). If it can't apply any rewrite to get a new node, it returns failure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "3.1"
},
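{
"text": "The search procedure above can be sketched as a depth-first traversal. This is a hedged illustration, not the released implementation: rewrites are modeled as functions from a phrase to candidate phrases, the lexical guard is approximated over surface words rather than lemmas, and the depth bound is an assumed implementation detail.

```python
def kglr_verify(r1, r2, rewrites, max_depth=6):
    # Depth-first search over rewrite applications to both phrases;
    # succeeds when the two relation phrases become identical.
    vocab = set(r1.split()) | set(r2.split())  # original lexical entries

    def dfs(p1, p2, depth, seen):
        if p1 == p2:
            return True
        if depth == 0:
            return False
        for rw in rewrites:
            candidates = [(c, p2) for c in rw(p1)] + [(p1, c) for c in rw(p2)]
            for q1, q2 in candidates:
                # reject rewrites that introduce words outside the original vocabulary
                if not (set(q1.split()) | set(q2.split())) <= vocab:
                    continue
                if (q1, q2) in seen:
                    continue
                seen.add((q1, q2))
                if dfs(q1, q2, depth - 1, seen):
                    return True
        return False  # no applicable rewrite yields a new node: failure

    return dfs(r1, r2, max_depth, {(r1, r2)})

# Toy rewrite: drop the modifier 'major' anywhere in the phrase.
def drop_major(phrase):
    words = phrase.split()
    return [' '.join(words[:i] + words[i + 1:])
            for i, w in enumerate(words) if w == 'major']
```

With only this toy rewrite, kglr_verify('be major cause of', 'be cause of', [drop_major]) succeeds after one step, while an unrelated pair fails.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "3.1"
},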
{
"text": "Many rules are proved by a sequence of rewrites. E.g., to prove (X, be a major cause of, Y) \u2192 (Y, be caused by, X) , the proof proceeds as: (X, be a major cause of, Y) \u2192 (X, be major cause of, Y) \u2192 (X, be cause of, Y) \u2192 (X, cause, Y) \u2192 (Y, be caused by, X) by dropping determiner, dropping adjective, deverbal noun, and active-passive transformation respectively. Similarly, (X, helps to protect, Y) \u2192 (X, look after, Y) follows from gerund-infinitive conversion (helps protect), dropping support from serial verbs (protect), and thesaurus synonym (look after).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "3.1"
},
{
"text": "KGLR verifies a subset of rules from CLEAN and PPDB e to produce, VCLEAN and VPPDB e . Our experiments answer these research questions: (1) What is the precision and size of the verified subsets compared to original corpora?, (2) How does additional knowledge generated after performing inference using these rules compare with each other? and (3) Which rewrites are critical to KGLR performance?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Comparison of CLEAN and VCLEAN: The original CLEAN corpus has about 102K rules. KGLR verifies about 36K rules and filter 66K rules out. To estimate the precisions of CLEAN and VCLEAN we independently sampled a random subset of 200 inference rules from each and asked two annotators (graduate level NLP students) to label the rules as correct or incorrect. Rules were mixed together and the annotators were blind to the system that generated a rule. Our initial annotation guideline was similar to that of textual entailment -label a rule as correct if the consequent can usually be inferred given the antecedent, for most naturally occurring argument-pairs for the antecedent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Our annotators faced one issue with the guideline -some inference rules were valid if (X,Y) were bound to specific types, but not for others. For example, (X, be born in, Y) \u2192 (Y, be birthplace of, X) is valid if Y is a location, not if Y is a year. Even seemingly correct inference rules, e.g., (X, is the father of, Y) \u2192 (Y, is the child of, X) , can make unusual incorrect inferences: (Gandhi, is the father of, India) does not imply (India, is the child of, Gandhi). Unfortunately, these corpora don't associate argumenttype information with their inference rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "To mitigate this we refined the annotation guidelines to accept inference rules as correct as long as they are valid for some type-pair. The interannotator agreement with this modification was 94% (\u03ba = 0.88). On the subset of the tags where the two annotators agreed we find the precision of CLEAN to be 48.9%, whereas VCLEAN was evaluated to be 82.5% precise -much more useful for real-world applications. Multiplying the precision with their sizes, we find the effective yield 2 of CLEAN to be 50K compared to 30K for VCLEAN. Overall, we find that VCLEAN obtains a 33 point precision improvement with an effective yield of about 60%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
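{
"text": "The effective-yield arithmetic in the paragraph above can be checked directly; the corpus sizes and precisions used here are the paper's own estimates, and the variable names are illustrative.

```python
def effective_yield(precision, size):
    # Effective yield = precision x size (proportional to recall).
    return precision * size

clean_yield = effective_yield(0.489, 102_000)   # about 50K
vclean_yield = effective_yield(0.825, 36_000)   # about 30K
retention = vclean_yield / clean_yield          # about 60 percent
```

Rounding to the nearest thousand recovers the 50K and 30K figures reported in the text, and the ratio matches the stated effective yield of about 60%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},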
{
"text": "Error Analysis: Most of VCLEAN errors are due to erroneous (or unusual) thesaurus synonyms. For missed recall, we analyzed CLEAN's sample missed by VCLEAN. We find that only about 13% of those are world knowledge rules (e.g., rule #6 in Figure 1 ). Other missed recall is because of some missing rewrites, missing thesaurus synonyms, spelling mistakes. These can potentially be captured by using other resources and adding rewrite rules.",
"cite_spans": [],
"ref_spans": [
{
"start": 237,
"end": 246,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Comparison of PPDB e and VPPDB e : Unlike CLEAN, PPDB2.0 associates a confidence value for each rule, which can be varied to obtain different levels of precision and yield. We control for yield so that we can compare precisions directly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We operate on PPDB e subset that has an Open IE-2 Yield is proportional to recall like relation phrase on both sides; this was identified by matching to ReVerb syntactic patterns (Etzioni et al., 2011) . This subset is of size 402K. KGLR on this produces 85K verified rules (VPPDB e ). We find the threshold for confidence values in PPDB e that achieves the same yield (confidence > 0.342). We perform annotation on PPDB e (0.342) and VPPDB e using same annotation guidelines as before. The inter-annotator agreement was 91% (\u03ba = 0.82). On the subset of the tags where the two annotators agreed we find the precision of PPDB e to be low -44.2%, whereas VPPDB e was evaluated to be 71.4% precise. We notice that about 4 in 5 PPDB relation phrases are of length 1 or 2 (whereas 50% of CLEAN relation phrases are of length \u2265 3). This contributes to a slightly lower precision of VPPDB e , as most rules are proved by thesaurus synonymy and the power of KGLR to handle compositionality of longer relation phrases does not get exploited.",
"cite_spans": [
{
"start": 179,
"end": 201,
"text": "(Etzioni et al., 2011)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "A typical use case of inference rules is in generating new facts by applying inference rules to a KB. We independently apply VCLEAN's, CLEAN's, PPDB e 's and VPPDB e 's inference rules on a public corpus of 4.2 million ReVerb triples. 3 Since ReVerb itself has significant extraction errors (our estimate is 20% errors) and our goal is to evaluate the quality of inference, we restrict this evaluation to only the subset of accurate ReVerb extractions.",
"cite_spans": [
{
"start": 235,
"end": 236,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison of Inferred Facts:",
"sec_num": null
},
{
"text": "VCLEAN and CLEAN facts: We sampled about 200 facts inferred by VCLEAN rules and CLEAN rules each (applied over accurate ReVerb extractions) and gave the original sentence as well as inferred facts to two annotators. We obtained a high inter-annotator agreement of 96.3%(\u03ba = 0.92) and we discarded disagreements from final analysis. Overall, facts inferred by CLEAN achieved a precision of about 49.1% and those inferred by VCLEAN obtained a 81.6% precision. The estimated yields of fact corpora (precision\u00d7size) are 7 and 4.5 million for CLEAN and VCLEAN respectively. This yield estimate does not include the initial 4.2 million facts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison of Inferred Facts:",
"sec_num": null
},
{
"text": "PPDB e and VPPDB e facts: As done previously, we sampled 200 facts inferred by PPDB e and VPPDB e rules, which were annotated by two annotators. We obtained a good inter annotator agree- ment of 90.0%(\u03ba = 0.8) and we discarded disagreements from final analysis. Overall, facts inferred by PPDB e achieved a really poor precision -22.2% and those inferred by VPPDB e obtained an improvement of about 29 points (51.3% precision). Short relation phrases (mostly of length 1 or 2, which forms 80% of PPDB e ) contribute to low precision of VPPDB e . Example low precision VPPDB e rules include (X, be, Y) \u2192 (X, obtain, Y) , (X, include, Y) \u2192 (X, come, Y) , which were inaccurately verified due to thesaurus errors. The estimated yields of fact corpora are 41 million and 35 million for PPDB e and VPPDB e respectively. Ablation Study of KGLR rewrites: We evaluate the efficacy of different rewrites in KGLR by performing an ablation study (see Table 3 ). We ran KGLR by turning off one rewrite on a sample of 600 CLEAN rules (our development set) and calculating its precision and recall. The ablation study highlights that most rewrites add some value to the performance of KGLR, however Antonyms and Dropping modifiers are particularly important for precision and Active-Passive and Redundant Preposition add substantial recall.",
"cite_spans": [],
"ref_spans": [
{
"start": 940,
"end": 947,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison of Inferred Facts:",
"sec_num": null
},
{
"text": "KGLR's value is in precision-sensitive tasks such as a human-facing demo, or downstream NLP application (like question answering) where error multiplication is highly undesirable. Along with high precision, it still obtains acceptably good yield. Our annotators observe the importance of typerestriction of arguments for inference rules (similar to rules in (Schoenmackers et al., 2010) notation of existing inference rule corpora is an important step for obtaining high precision and clarity. Inference rules are typically of two types -linguistic/synonym rewrites, which are captured by our work, and world knowledge rules (see rule #6 in Fig 1) , which are not. We were surprised to estimate that about 87% of CLEAN, which is a statisticallygenerated corpus, is just linguistic rewrites! Obtaining world knowledge or common-sense rules at high precision and scale continues to be the key NLP challenge in this area.",
"cite_spans": [
{
"start": 358,
"end": 386,
"text": "(Schoenmackers et al., 2010)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 641,
"end": 648,
"text": "Fig 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "We present Knowledge-guided Linguistic Rewrites (KGLR) which exploits the compositionality of relation phrases, guided by existing knowledge sources, such as Wordnet and thesaurus to identify a high precision subset of an inference rule corpus. Validated CLEAN has a high precision of 83% (vs 49%) at a yield of 60%. Validated PPDB e has a precision of 71% (vs 44%) at same yield. The precision of inferred facts has about 29-32 pt precision gain. We expect KGLR to be effective for precision-sensitive applications of inference. The complete code and data has been released for the research community.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "http://u.cs.biu.ac.il/\u02dcnlp/resources/downloads/predicativeentailment-rules-learned-using-local-and-global-algorithms",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Available at http://reverb.cs.washington.edu",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Acknowledgments: We thank Ashwini Vaidya and the anonymous reviewers for their helpful suggestions and feedback. We thank Abhishek, Aditya, Ankit, Jatin, Kabir, and Shikhar for helping with the data annotation. This work was supported by Google language understanding and knowledge discovery focused research grants to Mausam, a KISTI grant and a Bloomberg grant also to Mausam. Prachi was supported by a TCS fellowship.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Naturalli: Natural logic inference for common sense reasoning",
"authors": [
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabor Angeli and Christopher D Manning. 2014. Nat- uralli: Natural logic inference for common sense rea- soning. In Empirical Methods in Natural Language Processing (EMNLP).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Efficient tree-based approximation for entailment graph learning",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Meni",
"middle": [],
"last": "Adler",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Goldberger",
"suffix": ""
}
],
"year": 2012,
"venue": "The 50th Annual Meeting of the Association for Computational Linguistics, Proceedings of the System Demonstrations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Berant, Ido Dagan, Meni Adler, and Jacob Goldberger. 2012. Efficient tree-based approxima- tion for entailment graph learning. In The 50th Annual Meeting of the Association for Computational Linguis- tics, Proceedings of the System Demonstrations, July 10, 2012, Jeju Island, Korea.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic construction of inference-supporting knowledge bases",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Niranjan",
"middle": [],
"last": "Balasubramanian",
"suffix": ""
},
{
"first": "Sumithra",
"middle": [],
"last": "Bhakthavatsalam",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Humphreys",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Kinkead",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Sabharwal",
"suffix": ""
},
{
"first": "Oyvind",
"middle": [],
"last": "Tafjord",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Clark, Niranjan Balasubramanian, Sumithra Bhak- thavatsalam, Kevin Humphreys, Jesse Kinkead, Ashish Sabharwal, and Oyvind Tafjord. 2014. Auto- matic construction of inference-supporting knowledge bases.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Recognizing Textual Entailment: Models and Applications. Synthesis Lectures on Human Language Technologies",
"authors": [
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Sammons",
"suffix": ""
},
{
"first": "Fabio",
"middle": [
"Massimo"
],
"last": "Zanzotto",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ido Dagan, Dan Roth, Mark Sammons, and Fabio Mas- simo Zanzotto. 2013. Recognizing Textual Entail- ment: Models and Applications. Synthesis Lectures on Human Language Technologies. Morgan & Clay- pool Publishers.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A tractable first-order probabilistic logic",
"authors": [
{
"first": "Pedro",
"middle": [
"M"
],
"last": "Domingos",
"suffix": ""
},
{
"first": "William",
"middle": [
"Austin"
],
"last": "Webb",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pedro M. Domingos and William Austin Webb. 2012. A tractable first-order probabilistic logic. In Proceed- ings of the Twenty-Sixth AAAI Conference on Artifi- cial Intelligence, July 22-26, 2012, Toronto, Ontario, Canada.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Language programs at Darpa. AKBC-WEKEX",
"authors": [
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bonnie Dorr. 2012. Language programs at Darpa. AKBC-WEKEX 2012 Invited Talk.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Open information extraction: The second generation",
"authors": [
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Fader",
"suffix": ""
},
{
"first": "Janara",
"middle": [],
"last": "Christensen",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Mausam",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2011,
"venue": "IJCAI",
"volume": "11",
"issue": "",
"pages": "3--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oren Etzioni, Anthony Fader, Janara Christensen, Stephen Soderland, and Mausam. 2011. Open infor- mation extraction: The second generation. In IJCAI, volume 11, pages 3-10.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Machine reading of web text",
"authors": [
{
"first": "Oren",
"middle": [
"Etzioni"
],
"last": "",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 4th International Conference on Knowledge Capture (K-CAP 2007)",
"volume": "",
"issue": "",
"pages": "1--4",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oren Etzioni. 2007. Machine reading of web text. In Proceedings of the 4th International Conference on Knowledge Capture (K-CAP 2007), October 28-31, 2007, Whistler, BC, Canada, pages 1-4.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Amie: association rule mining under incomplete evidence in ontological knowledge bases",
"authors": [
{
"first": "Luis",
"middle": [],
"last": "Antonio Gal\u00e1rraga",
"suffix": ""
},
{
"first": "Christina",
"middle": [],
"last": "Teflioudi",
"suffix": ""
},
{
"first": "Katja",
"middle": [],
"last": "Hose",
"suffix": ""
},
{
"first": "Fabian",
"middle": [],
"last": "Suchanek",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 22nd international conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "413--422",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luis Antonio Gal\u00e1rraga, Christina Teflioudi, Katja Hose, and Fabian Suchanek. 2013. Amie: association rule mining under incomplete evidence in ontological knowledge bases. In Proceedings of the 22nd interna- tional conference on World Wide Web, pages 413-422. International World Wide Web Conferences Steering Committee.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Relly: Inferring hypernym relationships between relational phrases",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Grycner",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
},
{
"first": "Jay",
"middle": [],
"last": "Pujara",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Foulds",
"suffix": ""
},
{
"first": "Lise",
"middle": [],
"last": "Getoor",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "971--981",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Grycner, Gerhard Weikum, Jay Pujara, James Foulds, and Lise Getoor. 2015. Relly: Inferring hypernym relationships between relational phrases. In Proceedings of the 2015 Conference on Empiri- cal Methods in Natural Language Processing, pages 971-981, Lisbon, Portugal, September. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Random walk inference and learning in a large scale knowledge base",
"authors": [
{
"first": "Ni",
"middle": [],
"last": "Lao",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "529--539",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ni Lao, Tom Mitchell, and William W Cohen. 2011. Random walk inference and learning in a large scale knowledge base. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 529-539. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Dirt@ sbt@ discovery of inference rules from text",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "323--328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin and Patrick Pantel. 2001. Dirt@ sbt@ dis- covery of inference rules from text. In Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining, pages 323- 328. ACM.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Natural logic for textual inference",
"authors": [
{
"first": "Bill",
"middle": [],
"last": "Maccartney",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing",
"volume": "",
"issue": "",
"pages": "193--200",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bill MacCartney and Christopher D Manning. 2007. Natural logic for textual inference. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pages 193-200. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Patty: a taxonomy of relational patterns with semantic types",
"authors": [
{
"first": "Ndapandula",
"middle": [],
"last": "Nakashole",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
},
{
"first": "Fabian",
"middle": [],
"last": "Suchanek",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "1135--1145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ndapandula Nakashole, Gerhard Weikum, and Fabian Suchanek. 2012. Patty: a taxonomy of relational patterns with semantic types. In Proceedings of the 2012 Joint Conference on Empirical Methods in Nat- ural Language Processing and Computational Natu- ral Language Learning, pages 1135-1145. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Tuffy: Scaling up statistical inference in markov logic networks using an RDBMS",
"authors": [
{
"first": "Feng",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
},
{
"first": "Anhai",
"middle": [],
"last": "Doan",
"suffix": ""
},
{
"first": "Jude",
"middle": [
"W"
],
"last": "Shavlik",
"suffix": ""
}
],
"year": 2011,
"venue": "PVLDB",
"volume": "4",
"issue": "6",
"pages": "373--384",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Feng Niu, Christopher R\u00e9, AnHai Doan, and Jude W. Shavlik. 2011. Tuffy: Scaling up statistical inference in markov logic networks using an RDBMS. PVLDB, 4(6):373-384.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Adding semantics to data-driven paraphrasing",
"authors": [
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Bos",
"suffix": ""
},
{
"first": "Malvina",
"middle": [],
"last": "Nissim",
"suffix": ""
},
{
"first": "Charley",
"middle": [],
"last": "Beller",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellie Pavlick, Johan Bos, Malvina Nissim, Charley Beller, Benjamin Van Durme, and Chris Callison- Burch. 2015. Adding semantics to data-driven para- phrasing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL 2015).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Learning to \"read between the lines\" using bayesian logic programs",
"authors": [
{
"first": "Sindhu",
"middle": [],
"last": "Raghavan",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
},
{
"first": "Hyeonseo",
"middle": [],
"last": "Ku",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "349--358",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sindhu Raghavan, Raymond J. Mooney, and Hyeonseo Ku. 2012. Learning to \"read between the lines\" using bayesian logic programs. pages 349-358, July.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Relation extraction with matrix factorization and universal schemas",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [
"M"
],
"last": "Marlin",
"suffix": ""
}
],
"year": 2013,
"venue": "Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, Proceedings",
"volume": "",
"issue": "",
"pages": "74--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M. Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In Human Language Technologies: Conference of the North American Chapter of the Association of Com- putational Linguistics, Proceedings, June 9-14, 2013, Westin Peachtree Plaza Hotel, Atlanta, Georgia, USA, pages 74-84.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Karl Moritz Hermann, Tom\u00e1s Kocisk\u00fd, and Phil Blunsom",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Rockt\u00e4schel, Edward Grefenstette, Karl Moritz Her- mann, Tom\u00e1s Kocisk\u00fd, and Phil Blunsom. 2015a. Reasoning about entailment with neural attention. CoRR, abs/1509.06664.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Injecting Logical Background Knowledge into Embeddings for Relation Extraction",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2015,
"venue": "Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Rockt\u00e4schel, Sameer Singh, and Sebastian Riedel. 2015b. Injecting Logical Background Knowledge into Embeddings for Relation Extraction. In Annual Con- ference of the North American Chapter of the Associ- ation for Computational Linguistics (NAACL).",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Scaling textual inference to the web",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Schoenmackers",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "79--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan Schoenmackers, Oren Etzioni, and Daniel S Weld. 2008. Scaling textual inference to the web. In Pro- ceedings of the Conference on Empirical Methods in Natural Language Processing, pages 79-88. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Learning first-order horn clauses from web text",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Schoenmackers",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Daniel",
"suffix": ""
},
{
"first": "Jesse",
"middle": [
"Davis"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1088--1098",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan Schoenmackers, Oren Etzioni, Daniel S Weld, and Jesse Davis. 2010. Learning first-order horn clauses from web text. In Proceedings of the 2010 Confer- ence on Empirical Methods in Natural Language Pro- cessing, pages 1088-1098. AssociaFrition for Compu- tational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Can we derive general world knowledge from texts?",
"authors": [
{
"first": "Lenhart",
"middle": [],
"last": "Schubert",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the second international conference on Human Language Technology Research",
"volume": "",
"issue": "",
"pages": "94--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lenhart Schubert. 2002. Can we derive general world knowledge from texts? In Proceedings of the second international conference on Human Language Tech- nology Research, pages 94-97. Morgan Kaufmann Publishers Inc.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Reasoning with neural tensor networks for knowledge base completion",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "926--934",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. Reasoning with neural ten- sor networks for knowledge base completion. In Advances in Neural Information Processing Systems, pages 926-934.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Learning verb inference rules from linguistically-motivated evidence",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Hila Weisman",
"suffix": ""
},
{
"first": "Idan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Szpektor",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "194--204",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hila Weisman, Jonathan Berant, Idan Szpektor, and Ido Dagan. 2012. Learning verb inference rules from linguistically-motivated evidence. In Proceed- ings of the 2012 Joint Conference on Empirical Meth- ods in Natural Language Processing and Computa- tional Natural Language Learning, EMNLP-CoNLL 2012, pages 194-204.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "The precision and yield of inference rules after KGLR validation, and that of KB generated by inference using these rule-sets. Comparison with PPDBe is yield-controlled.",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "Ablation study of rule verification using KGLR rewrites on our devset of 600 CLEAN rules",
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"num": null,
"text": "). Type an-",
"html": null,
"type_str": "table",
"content": "<table><tr><td>System</td><td colspan=\"2\">Precision Recall</td></tr><tr><td>KGLR (all rules)</td><td>85.4%</td><td>62.0%</td></tr><tr><td>w/o Negating Rules</td><td>85.4%</td><td>62.0%</td></tr><tr><td>w/o Antonyms</td><td>84.2%</td><td>62.0%</td></tr><tr><td>w/o Wordnet Hypernyms</td><td>86.1%</td><td>59.3%</td></tr><tr><td>w/o Dropping Modifiers</td><td>84.9%</td><td>59.6%</td></tr><tr><td colspan=\"2\">w/o Gerund-Infinitive Equivalence 85.2%</td><td>61.0%</td></tr><tr><td>w/o Light and Serial Verbs</td><td>85.0%</td><td>59.9%</td></tr><tr><td>w/o Deverbal Nouns</td><td>85.4%</td><td>62.0%</td></tr><tr><td>w/o Preposition Synonyms</td><td>86.9%</td><td>56.9%</td></tr><tr><td>w/o Active-Passive</td><td>85.0%</td><td>54.5%</td></tr><tr><td>w/o Redundant Prepositions</td><td>86.1%</td><td>61.6%</td></tr></table>"
}
}
}
}