{
"paper_id": "Q15-1038",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:08:07.267593Z"
},
"title": "Large-Scale Information Extraction from Textual Definitions through Deep Syntactic and Semantic Analysis",
"authors": [
{
"first": "Claudio",
"middle": [],
"last": "Delli Bovi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sapienza University of Rome",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Luca",
"middle": [],
"last": "Telesca",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sapienza University of Rome",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sapienza University of Rome",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present DEFIE, an approach to largescale Information Extraction (IE) based on a syntactic-semantic analysis of textual definitions. Given a large corpus of definitions we leverage syntactic dependencies to reduce data sparsity, then disambiguate the arguments and content words of the relation strings, and finally exploit the resulting information to organize the acquired relations hierarchically. The output of DEFIE is a high-quality knowledge base consisting of several million automatically acquired semantic relations. 1",
"pdf_parse": {
"paper_id": "Q15-1038",
"_pdf_hash": "",
"abstract": [
{
"text": "We present DEFIE, an approach to largescale Information Extraction (IE) based on a syntactic-semantic analysis of textual definitions. Given a large corpus of definitions we leverage syntactic dependencies to reduce data sparsity, then disambiguate the arguments and content words of the relation strings, and finally exploit the resulting information to organize the acquired relations hierarchically. The output of DEFIE is a high-quality knowledge base consisting of several million automatically acquired semantic relations. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The problem of knowledge acquisition lies at the core of Natural Language Processing. Recent years have witnessed the massive exploitation of collaborative, semi-structured information as the ideal middle ground between high-quality, fully-structured resources and the larger amount of cheaper (but noisy) unstructured text (Hovy et al., 2013) . Collaborative projects, like Freebase (Bollacker et al., 2008) and Wikidata (Vrande\u010di\u0107, 2012) , have been being developed for many years and are continuously being improved. A great deal of research also focuses on enriching available semi-structured resources, most notably Wikipedia, thereby creating taxonomies (Ponzetto and Strube, 2011; Flati et al., 2014) , ontologies (Mahdisoltani et al., 2015) and semantic networks (Navigli and Ponzetto, 2012; Nastase and Strube, 2013) . These solutions, however, 1 http://lcl.uniroma1.it/defie are inherently constrained to small and often prespecified sets of relations. A more radical approach is adopted in systems like TEXTRUNNER (Etzioni et al., 2008) and REVERB (Fader et al., 2011) , which developed from the Open Information Extraction (OIE) paradigm (Etzioni et al., 2008) and focused on the unconstrained extraction of a large number of relations from massive unstructured corpora. Ultimately, all these endeavors were geared towards addressing the knowledge acquisition problem and tackling long-standing challenges in the field, such as Machine Reading (Mitchell, 2005) .",
"cite_spans": [
{
"start": 324,
"end": 343,
"text": "(Hovy et al., 2013)",
"ref_id": "BIBREF20"
},
{
"start": 384,
"end": 408,
"text": "(Bollacker et al., 2008)",
"ref_id": "BIBREF0"
},
{
"start": 422,
"end": 439,
"text": "(Vrande\u010di\u0107, 2012)",
"ref_id": "BIBREF52"
},
{
"start": 660,
"end": 687,
"text": "(Ponzetto and Strube, 2011;",
"ref_id": "BIBREF44"
},
{
"start": 688,
"end": 707,
"text": "Flati et al., 2014)",
"ref_id": "BIBREF15"
},
{
"start": 721,
"end": 748,
"text": "(Mahdisoltani et al., 2015)",
"ref_id": "BIBREF26"
},
{
"start": 771,
"end": 799,
"text": "(Navigli and Ponzetto, 2012;",
"ref_id": "BIBREF40"
},
{
"start": 800,
"end": 825,
"text": "Nastase and Strube, 2013)",
"ref_id": "BIBREF39"
},
{
"start": 1025,
"end": 1047,
"text": "(Etzioni et al., 2008)",
"ref_id": "BIBREF10"
},
{
"start": 1059,
"end": 1079,
"text": "(Fader et al., 2011)",
"ref_id": "BIBREF11"
},
{
"start": 1150,
"end": 1172,
"text": "(Etzioni et al., 2008)",
"ref_id": "BIBREF10"
},
{
"start": 1456,
"end": 1472,
"text": "(Mitchell, 2005)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While earlier OIE approaches relied mostly on dependencies at the level of surface text (Etzioni et al., 2008; Fader et al., 2011) , more recent work has focused on deeper language understanding at the level of both syntax and semantics (Nakashole et al., 2012; and tackled challenging linguistic phenomena like synonymy and polysemy. However, these issues have not yet been addressed in their entirety. Relation strings are still bound to surface text, lacking actual semantic content. Furthermore, most OIE systems do not have a clear and unified ontological structure and require additional processing steps, such as statistical inference mappings (Dutta et al., 2014) , graphbased alignments of relational phrases (Grycner and Weikum, 2014) , or knowledge base unification procedures (Delli Bovi et al., 2015) , in order for their potential to be exploitable in real applications.",
"cite_spans": [
{
"start": 88,
"end": 110,
"text": "(Etzioni et al., 2008;",
"ref_id": "BIBREF10"
},
{
"start": 111,
"end": 130,
"text": "Fader et al., 2011)",
"ref_id": "BIBREF11"
},
{
"start": 237,
"end": 261,
"text": "(Nakashole et al., 2012;",
"ref_id": "BIBREF38"
},
{
"start": 651,
"end": 671,
"text": "(Dutta et al., 2014)",
"ref_id": "BIBREF8"
},
{
"start": 718,
"end": 744,
"text": "(Grycner and Weikum, 2014)",
"ref_id": "BIBREF17"
},
{
"start": 788,
"end": 813,
"text": "(Delli Bovi et al., 2015)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In DEFIE the key idea is to leverage the linguistic analysis of recent semantically-enhanced OIE techniques while moving from open text to smaller corpora of dense prescriptive knowledge. The aim is then to extract as much information as possible by unifying syntactic analysis and state-of-the-art disambiguation and entity linking. Using this strategy, from an input corpus of textual definitions (short and concise descriptions of a given concept or entity) we are able to harvest fully disambiguated relation instances on a large scale, and integrate them automatically into a high-quality taxonomy of semantic relations. As a result a large knowledge base is produced that shows competitive accuracy and coverage against state-of-the-art OIE systems based on much larger corpora. Our contributions can be summarized as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose an approach to IE that ties together syntactic dependencies and unified entity linking/word sense disambiguation, designed to discover semantic relations from a relatively small corpus of textual definitions;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We create a large knowledge base of fully disambiguated relation instances, ranging over named entities and concepts from available resources like WordNet and Wikipedia;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We exploit our semantified relation patterns to automatically build a rich, high-quality relation taxonomy, showing competitive results against state-of-the-art approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our approach comprises three stages. First, we extract from our input corpus an initial set of semantic relations (Section 2); each relation is then scored and augmented with semantic type signatures (Section 3); finally, the augmented relations are used to build a relation taxonomy (Section 4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Here we describe the first stage of our approach, where a set of semantic relations is extracted from the input corpus. In the following, we refer to a relation instance as a triple t = a i , r, a j with a i and a j being the arguments and r the relation pattern. From each relation pattern r k the associated relation R k is identified by the set of all relation instances where r = r k . In order to extract a large set of fully disambiguated relation instances we bring together syntactic and semantic analysis on a corpus of plain textual definitions. Each definition is first parsed and disambiguated ( Figure 1a -b, Section 2.1); syntactic and semantic information is combined into a structured graph representation (Figure 1c , Section 2.2) and relation patterns are then extracted as shortest paths between concept pairs (Section 2.3).",
"cite_spans": [],
"ref_spans": [
{
"start": 608,
"end": 617,
"text": "Figure 1a",
"ref_id": "FIGREF0"
},
{
"start": 722,
"end": 732,
"text": "(Figure 1c",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Relation Extraction",
"sec_num": "2"
},
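{
"text": "As a minimal Python sketch of this data model (the class and function names are illustrative, not part of the original system), a relation instance can be stored as a triple t = \u27e8a_i, r, a_j\u27e9 and each relation R_k recovered by grouping the instances that share the pattern r_k:\n\nfrom collections import defaultdict\nfrom typing import NamedTuple\n\nclass Triple(NamedTuple):  # t = <a_i, r, a_j>\n    a_i: str  # left argument (disambiguated sense)\n    r: str    # relation pattern\n    a_j: str  # right argument (disambiguated sense)\n\ndef group_relations(triples):\n    # R_k = set of all instances whose pattern equals r_k\n    relations = defaultdict(set)\n    for t in triples:\n        relations[t.r].add(t)\n    return relations",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Extraction",
"sec_num": "2"
},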
{
"text": "The semantics of our relations draws on BabelNet (Navigli and Ponzetto, 2012) , a wide-coverage multilingual semantic network obtained from the automatic integration of WordNet, Wikipedia and other resources. This choice is not mandatory; however, inasmuch as it is a superset of these resources, Ba-belNet brings together lexicographic and encyclopedic knowledge, enabling us to reach higher coverage while still being able to accommodate different disambiguation strategies. For each relation instance t extracted, both a i , a j and the content words appearing in r are linked to the BabelNet inventory. In the remainder of the paper we identify BabelNet concepts or entities using a subscript-superscript notation where, for instance, band i bn refers to the i-th BabelNet sense for the English word band.",
"cite_spans": [
{
"start": 49,
"end": 77,
"text": "(Navigli and Ponzetto, 2012)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Extraction",
"sec_num": "2"
},
{
"text": "The first step of the process is the automatic extraction of syntactic information (typed dependencies) and semantic information (word senses and named entity mentions) from each textual definition. Each definition undergoes the following steps: Syntactic Analysis. Each textual definition d is parsed to obtain a dependency graph G d (Figure 1a ). Parsing is carried out using C&C (Clark and Curran, 2007) , a log-linear parser based on Combinatory Categorial Grammar (CCG). Although our algorithm seamlessly works with any syntactic formalism, CCG rules are especially suited to longer definitions and linguistic phenomena like coordinating conjunctions (Steedman, 2000) .",
"cite_spans": [
{
"start": 382,
"end": 406,
"text": "(Clark and Curran, 2007)",
"ref_id": "BIBREF3"
},
{
"start": 656,
"end": 672,
"text": "(Steedman, 2000)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [
{
"start": 335,
"end": 345,
"text": "(Figure 1a",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Textual Definition Processing",
"sec_num": "2.1"
},
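{
"text": "A minimal sketch of this step, assuming spaCy as a stand-in for the C&C parser used in the paper (any parser producing typed dependencies would fit; only the dependency formalism differs):\n\nimport networkx as nx\nimport spacy\n\nnlp = spacy.load('en_core_web_sm')\n\ndef dependency_parse(definition):\n    # Build the dependency graph G_d: one vertex per token,\n    # one edge per typed dependency (head -> dependent).\n    doc = nlp(definition)\n    g = nx.DiGraph()\n    for tok in doc:\n        g.add_node(tok.i, word=tok.text, pos=tok.pos_)\n    for tok in doc:\n        if tok.head.i != tok.i:  # skip the root's self-reference\n            g.add_edge(tok.head.i, tok.i, dep=tok.dep_)\n    return g",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Textual Definition Processing",
"sec_num": "2.1"
},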
{
"text": "Semantic Analysis. Semantic analysis is based on Babelfy (Moro et al., 2014) , a joint, stateof-the-art approach to entity linking and word sense disambiguation. Given a lexicalized semantic network as underlying structure, Babelfy uses a dense subgraph algorithm to identify high-coherence semantic interpretations of words and multi-word expressions across an input text. We apply Babelfy to each definition d, obtaining a sense mapping S d from surface text (words and entity mentions) to word senses and named entities (Figure 1b) .",
"cite_spans": [
{
"start": 57,
"end": 76,
"text": "(Moro et al., 2014)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [
{
"start": 523,
"end": 534,
"text": "(Figure 1b)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Textual Definition Processing",
"sec_num": "2.1"
},
{
"text": "As a matter of fact, any disambiguation or entity linking strategy can be used at this stage. However, a knowledge-based unified approach like Babelfy is best suited to our setting, where context is limited and exploiting definitional knowledge as much as possible is key to attaining high-coverage results (as we show in Section 6.4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Textual Definition Processing",
"sec_num": "2.1"
},
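{
"text": "A sketch of how a sense mapping S_d might be obtained programmatically, assuming the public Babelfy HTTP endpoint (URL, parameters and response fields as documented at babelfy.org; the API key and error handling are omitted, and the field names should be checked against the current API):\n\nimport requests\n\nBABELFY = 'https://babelfy.io/v1/disambiguate'\n\ndef disambiguate(definition, key='YOUR_KEY'):\n    # Returns S_d as a list of (char_start, char_end, synset_id) matches.\n    params = {'text': definition, 'lang': 'EN', 'key': key}\n    resp = requests.get(BABELFY, params=params).json()\n    return [(m['charFragment']['start'],\n             m['charFragment']['end'],\n             m['babelSynsetID']) for m in resp]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Textual Definition Processing",
"sec_num": "2.1"
},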
{
"text": "The information extracted by parsing and disambiguating a given definition d is unified into a syntactic-semantic graph G sem d where concepts and entities identified in d are arranged in a graph structure encoding their syntactic dependencies ( Figure 1c ). We start from the dependency graph G d , as provided by the syntactic analysis of d in Section 2.1. Semantic information from the sense mappings S d can be incorporated directly in the vertices of G d by attaching available matches between words and senses to the corresponding vertices. Dependency graphs, however, encode dependencies solely on a word basis, while our sense mappings may include multi-word expressions (e.g. Pink Floyd 1 bn ). In order to extract consistent information, subsets of vertices referring to the same concept or entity are merged to a single semantic node, which replaces the subgraph covered in the original dependency structure. Consider the example in Figure 1 : an entity like Pink Floyd 1 bn covers two distinct and connected vertices in the dependency graph G d , one for the noun Floyd and one for its modifier Pink. In the actual semantics of the sentence, as encoded in G sem d (Figure 1c ), these two vertices are merged to a single node referring to the entity Pink Floyd 1 bn (the English rock band), instead of being assigned individual word interpretations.",
"cite_spans": [],
"ref_spans": [
{
"start": 246,
"end": 256,
"text": "Figure 1c",
"ref_id": "FIGREF0"
},
{
"start": 945,
"end": 953,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1177,
"end": 1187,
"text": "(Figure 1c",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Syntactic-Semantic Graph Construction",
"sec_num": "2.2"
},
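{
"text": "A sketch of the merging step under the assumptions of the parsing sketch above (token-indexed vertices, networkx graphs); nx.contracted_nodes collapses the vertices covered by one multi-word mention into a single semantic node:\n\nimport networkx as nx\n\ndef build_semantic_graph(g_d, sense_mapping):\n    # sense_mapping: {synset_id: [token indices covered by the mention]}\n    g = g_d.copy()\n    for synset_id, tokens in sense_mapping.items():\n        keep = tokens[0]\n        for other in tokens[1:]:\n            # e.g. merge the vertices of 'Pink' and 'Floyd' into one node\n            g = nx.contracted_nodes(g, keep, other, self_loops=False)\n        g.nodes[keep]['sense'] = synset_id  # attach the semantic label\n    return g",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic-Semantic Graph Construction",
"sec_num": "2.2"
},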
{
"text": "Our procedure for building Figure 1c ). Then, the remaining vertices and edges are added as in G d , discarding nondisambiguated adjuncts and modifiers (like the and fifth in Figure 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 27,
"end": 36,
"text": "Figure 1c",
"ref_id": "FIGREF0"
},
{
"start": 175,
"end": 183,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Syntactic-Semantic Graph Construction",
"sec_num": "2.2"
},
{
"text": "At this stage, all the information in a given definition d has been extracted and encoded in the corresponding graph G sem d (Section 2.2). We now consider those paths connecting entity pairs across the graph and extract the relation pattern r between two entities and/or concepts as the shortest path between the two corresponding vertices in G sem d . This enables us to exclude less relevant information (typically carried by adjuncts or modifiers) and reduce data sparsity in the overall extraction process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Pattern Identification",
"sec_num": "2.3"
},
{
"text": "Our algorithm works as follows: given a textual definition d, we consider every pair of identified concepts or entities and compute the corresponding shortest path in G sem d using the Floyd-Warshall algorithm (Floyd, 1962) . The only constraint we enforce is that resulting paths must include at least one verb node. This condition filters out meaningless single-node patterns (e.g. two concepts connected",
"cite_spans": [
{
"start": 210,
"end": 223,
"text": "(Floyd, 1962)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Pattern Identification",
"sec_num": "2.3"
},
{
"text": "Algorithm 1 Relation Extraction procedure EXTRACTRELATIONSFROM(D) 1: R := \u2205 2: for each d in D do 3: G d := dependencyP arse(d) 4: S d := disambiguate(d) 5: G sem d := buildSemanticGraph(G d , S d ) 6: for each s i , s j in S d do 7: s i , r ij , s j := shortestP ath(s i , s j ) 8: R := R \u222a { s i , r ij , s j } 9: f ilterP atterns(R, \u03c1) return R;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Pattern Identification",
"sec_num": "2.3"
},
{
"text": "with a preposition) and, given the prescriptive nature of d, is unlikely to discard semantically relevant attributes compacted in noun phrases. As an example, consider the two sentences \"Mutter is the third album by German band Rammstein\" and \"Atom Heart Mother is the fifth album by English band Pink Floyd\". In both cases, two valid shortest-path patterns are extracted. The first extracted shortest-path pattern is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Pattern Identification",
"sec_num": "2.3"
},
{
"text": "X \u2192 is \u2192 album^1_bn \u2192 by \u2192 Y, with a_i = Mutter^3_bn and a_j = Rammstein^1_bn for the first sentence, and a_i = Atom Heart Mother^1_bn and a_j = Pink Floyd^1_bn for the second one. The second extracted shortest-path pattern is: X \u2192 is \u2192 Y, with a_i = Mutter^3_bn and a_j = album^1_bn for the first sentence, and a_i = Atom Heart Mother^1_bn and a_j = album^1_bn for the second one. In fact, our extraction process seamlessly discovers both general knowledge (e.g. that Mutter^3_bn and Atom Heart Mother^1_bn are instances of the concept album^1_bn) and facts (e.g. that the entities Rammstein^1_bn and Pink Floyd^1_bn have an isAlbumBy relation with the two recordings).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Pattern Identification",
"sec_num": "2.3"
},
{
"text": "A pseudo-code for the entire extraction algorithm is shown in Algorithm 1: given a set of textual definitions D, a set of relations is generated over extractions R, with each relation R \u2282 R comprising relation instances extracted from D. Each d \u2208 D is first parsed and disambiguated to produce a syntactic-semantic graph G sem d (Sections 2.1-2.2); then all the concept pairs s i , s j are examined to detect relation instances as shortest paths. Finally, we filter out from the resulting set all relations for which the number of extracted instances is below a fixed threshold \u03c1. 2 The overall algorithm extracts over 20 million relation instances in our experimental setup (Section 5) with almost 256,000 distinct relations.",
"cite_spans": [
{
"start": 581,
"end": 582,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Pattern Identification",
"sec_num": "2.3"
},
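{
"text": "A compact Python rendering of Algorithm 1, under the assumptions of the earlier sketches (node attributes 'sense' and 'pos'; helper names are hypothetical); for clarity it computes per-pair shortest paths instead of Floyd-Warshall and enforces the verb-node constraint:\n\nimport itertools\nfrom collections import Counter\nimport networkx as nx\n\ndef extract_relations(semantic_graphs, rho=10):\n    triples = []\n    for g in semantic_graphs:  # one G^sem_d per definition\n        senses = [n for n, d in g.nodes(data=True) if 'sense' in d]\n        for s_i, s_j in itertools.combinations(senses, 2):\n            try:\n                path = nx.shortest_path(g.to_undirected(), s_i, s_j)\n            except nx.NetworkXNoPath:\n                continue\n            # keep only patterns containing at least one verb node\n            if any(g.nodes[n].get('pos') == 'VERB' for n in path[1:-1]):\n                r = ' -> '.join(str(n) for n in path[1:-1])\n                triples.append((s_i, r, s_j))\n    # drop relations whose pattern has fewer than rho instances\n    counts = Counter(r for _, r, _ in triples)\n    return [t for t in triples if counts[t[1]] >= rho]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Pattern Identification",
"sec_num": "2.3"
},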
{
"text": "We further characterize the semantics of our relations by computing semantic type signatures for each R \u2282 R, i.e. by attaching a proper semantic class to both its domain and range (the sets of arguments occurring on the left and right of the pattern). As every element in the domain and range of R is disambiguated, we retrieve the corresponding senses and collect their direct hypernyms. Then we select the hypernym covering the largest subset of arguments as the representative semantic class for the domain (or range) of R. We extract hypernyms using BabelNet, where taxonomic information covers both general concepts (from the WordNet taxonomy (Fellbaum, 1998) ) and named entities (from the Wikipedia Bitaxonomy (Flati et al., 2014) ).",
"cite_spans": [
{
"start": 648,
"end": 664,
"text": "(Fellbaum, 1998)",
"ref_id": "BIBREF12"
},
{
"start": 717,
"end": 737,
"text": "(Flati et al., 2014)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Type Signatures and Scoring",
"sec_num": "3"
},
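{
"text": "A sketch of this selection step (the hypernym lookup is assumed to be provided by the underlying semantic network):\n\nfrom collections import Counter\n\ndef type_signature(arguments, direct_hypernyms):\n    # arguments: disambiguated senses in the domain (or range) of R\n    # direct_hypernyms(sense) -> set of direct hypernym senses\n    counts = Counter()\n    for a in arguments:\n        counts.update(direct_hypernyms(a))\n    # representative class = hypernym covering the most arguments\n    hypernym, covered = counts.most_common(1)[0]\n    return hypernym, covered / len(arguments)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Type Signatures and Scoring",
"sec_num": "3"
},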
{
"text": "From the distribution of direct hypernyms over domain and range arguments of R we estimate the quality of R and associate a confidence value with its relation pattern r. Intuitively we want to assign higher confidence to relations where the corresponding distributions have low entropy. For instance, if both sets have a single hypernym covering all arguments, then R arguably captures a well-defined semantic relation and should be assigned high confidence. For each relation R, we compute:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Type Signatures and Scoring",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "H R = \u2212 n i=1 p(h i ) log 2 p(h i )",
"eq_num": "(1)"
}
],
"section": "Relation Type Signatures and Scoring",
"sec_num": "3"
},
{
"text": "where we therefore consider two additional factors, i.e. the number of extracted instances for R and the length of the associated pattern r, obtaining the following empirical measure:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Type Signatures and Scoring",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "score(R) = |S R | (H R + 1) length(r)",
"eq_num": "(2)"
}
],
"section": "Relation Type Signatures and Scoring",
"sec_num": "3"
},
{
"text": "with S R being the set of extracted relation instances for R. The +1 term accounts for cases where H R = 0. As shown in the examples of Table 1 , relations with rather general patterns (such as X known for Y) achieve higher scores compared to very specific ones (like X is village 2 bn founded in 1912 in Y) despite higher entropy values. We validated our measure on the samples of Section 6.1, computing the overall precision for different score thresholds. The monotonic decrease of sample precision in Figure 2a shows that our measure captures the quality of extracted patterns better than H R (Figure 2b ).",
"cite_spans": [],
"ref_spans": [
{
"start": 136,
"end": 143,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 505,
"end": 515,
"text": "Figure 2a",
"ref_id": "FIGREF1"
},
{
"start": 598,
"end": 608,
"text": "(Figure 2b",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Relation Type Signatures and Scoring",
"sec_num": "3"
},
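{
"text": "The two measures in executable form, as a direct transcription of Equations 1 and 2:\n\nimport math\nfrom collections import Counter\n\ndef entropy(hypernyms):\n    # H_R over the distribution of direct hypernyms (Equation 1)\n    counts = Counter(hypernyms)\n    total = sum(counts.values())\n    return -sum((c / total) * math.log2(c / total)\n                for c in counts.values())\n\ndef score(num_instances, h_r, pattern_length):\n    # score(R) = |S_R| / ((H_R + 1) * length(r))  (Equation 2)\n    return num_instances / ((h_r + 1) * pattern_length)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Type Signatures and Scoring",
"sec_num": "3"
},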
{
"text": "In the last stage of our approach our set of extracted relations is arranged automatically in a relation taxonomy. The process is carried out by comparing relations pairwise, looking for hypernymyhyponymy relationships between the corresponding relation patterns; we then build our taxonomy by connecting with an edge those relation pairs for which such a relationship is found. Both the relation taxonomization procedures described here examine noun nodes across each relation pattern r, and consider for taxonomization only those relations whose patterns are identical except for a single noun node. 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Taxonomization",
"sec_num": "4"
},
{
"text": "A direct way of identifying hypernym/hyponym noun nodes across relation patterns is to analyze the semantic information attached to them. Given two relation patterns r i and r j , differing only in respect of the noun nodes n i and n j , we first look at the associated concepts or entities, c i and c j , and retrieve the corresponding hypernym sets, H(c i ) and H(c j ). Hypernym sets are obtained by iteratively collecting the superclasses of c i and c j from the semantic network of BabelNet, up to a fixed height. For instance, given c i = album 1 bn , H(c i ) = {work of art 1 bn , creation 2 bn , artifact 1 bn }, and given c j = Rammstein 1 bn , H(c j ) = {band 2 bn , musical ensemble 1 bn , organization 1 bn }. Once we have H(c i ) and H(c j ), we just check whether c j \u2208 H(c i ) or c i \u2208 H(c j ) (Figure 3a ). According to which is the case, we conclude that r j is a generalization of r i , or that r i is a generalization of r j .",
"cite_spans": [],
"ref_spans": [
{
"start": 809,
"end": 819,
"text": "(Figure 3a",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Hypernym Generalization",
"sec_num": "4.1"
},
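{
"text": "A sketch of this check, assuming a superclass lookup over the semantic network and a fixed collection height:\n\ndef hypernym_set(concept, superclasses, height=3):\n    # iteratively collect superclasses of `concept` up to `height` levels\n    frontier, collected = {concept}, set()\n    for _ in range(height):\n        frontier = set().union(*(superclasses(c) for c in frontier)) - collected\n        collected |= frontier\n    return collected\n\ndef hypernym_generalizes(c_i, c_j, superclasses):\n    # returns which relation pattern generalizes the other, if either\n    if c_j in hypernym_set(c_i, superclasses):\n        return 'r_j generalizes r_i'\n    if c_i in hypernym_set(c_j, superclasses):\n        return 'r_i generalizes r_j'\n    return None",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypernym Generalization",
"sec_num": "4.1"
},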
{
"text": "The second procedure focuses on the noun (or compound) represented by the node. Given two relation patterns, r i and r j , we apply the following heuristic: from one of the two nouns, be it n i , any adjunct or modifier is removed, retaining the sole head wordn i . Then,n i is compared with n j and, ifn i = n j , we assume that the relation r j is a generalization of r i (Figure 3b ). Input. The input corpus used for the relation extraction procedure is the full set of English textual definitions in BabelNet 2.5 (Navigli and Ponzetto, 2012) . 4 In fact, any set of textual definitions can be provided as input to DEFIE, ranging from existing dictionaries (like WordNet or Wiktionary) to the set of first sentences of Wikipedia articles. 5 As it is a merger for various different resources of this kind, BabelNet provides a large heterogeneous set comprising definitions from WordNet, Wikipedia, Wiktionary, Wikidata and OmegaWiki. To the best of our knowledge, this set constitutes the largest available corpus of definitional knowledge. We therefore worked on a total of 4,357,327 textual definitions from the English synsets of BabelNet's knowledge base. We then used the same version of BabelNet as the underlying semantic network structure for disambiguating with Babelfy. 6",
"cite_spans": [
{
"start": 518,
"end": 546,
"text": "(Navigli and Ponzetto, 2012)",
"ref_id": "BIBREF40"
},
{
"start": 549,
"end": 550,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 374,
"end": 384,
"text": "(Figure 3b",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Substring Generalization",
"sec_num": "4.2"
},
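{
"text": "Returning to the head-word heuristic at the start of this section, a minimal sketch (taking the last token of a compound as the head word is a simplifying assumption; the paper operates on parsed noun nodes):\n\ndef head_word(noun_phrase):\n    # strip adjuncts/modifiers, keep the sole head word\n    return noun_phrase.split()[-1]\n\ndef substring_generalizes(n_i, n_j):\n    # r_j generalizes r_i if n_j equals the head word of n_i\n    return head_word(n_i) == n_j\n\nprint(substring_generalizes('Mayan language', 'language'))  # True",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Substring Generalization",
"sec_num": "4.2"
},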
{
"text": "Statistics. Comparative statistics are shown in Table 2 . DEFIE extracts 20,352,903 relation instances, out of which 13,753,133 feature a fully disambiguated pattern, yielding an average of 3.15 disambiguated relation instances extracted from each definition. After the extraction process, our knowledge base comprises 255,881 distinct semantic relations, 94% of which also have disambiguated content words in their patterns. DEFIE extracts a considerably larger amount of relation instances compared to similar approaches, despite the much smaller amount of text used. For example, we managed to harvest over 5 million relation instances more than PATTY, using a much smaller corpus (sin-gle sentences as opposed to full Wikipedia articles) and generating a number of distinct relations that was six times less than PATTY's. As a result, we obtained an average number of extractions that was substantially higher than those of our OIE competitors. This suggests that DEFIE is able to exploit the nature of textual definitions effectively and generalize over relation patterns. Furthermore, our semantic analysis captured 2,398,982 distinct arguments (either concept or named entities), outperforming almost all open-text systems examined.",
"cite_spans": [],
"ref_spans": [
{
"start": 48,
"end": 55,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Substring Generalization",
"sec_num": "4.2"
},
{
"text": "Evaluation. All the evaluations carried out in Section 6 were based on manual assessment by two human judges, with an inter-annotator agreement, as measured by Cohen's kappa coefficient, above 70% in all cases. In these evaluations we compared DE-FIE with the following OIE approaches:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Substring Generalization",
"sec_num": "4.2"
},
{
"text": "\u2022 NELL (Carlson et al., 2010) with knowledge base beliefs updated to November 2014;",
"cite_spans": [
{
"start": 7,
"end": 29,
"text": "(Carlson et al., 2010)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Substring Generalization",
"sec_num": "4.2"
},
{
"text": "\u2022 PATTY (Nakashole et al., 2012) with Freebase types and pattern synsets from the English Wikipedia dump of June 2011;",
"cite_spans": [
{
"start": 8,
"end": 32,
"text": "(Nakashole et al., 2012)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Substring Generalization",
"sec_num": "4.2"
},
{
"text": "\u2022 REVERB (Fader et al., 2011) , using the set of normalized relation instances from the ClueWeb09 dataset;",
"cite_spans": [
{
"start": 9,
"end": 29,
"text": "(Fader et al., 2011)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Substring Generalization",
"sec_num": "4.2"
},
{
"text": "\u2022 WISENET (Moro and Navigli, 2012; with relational phrases from the English Wikipedia dump of December 2012.",
"cite_spans": [
{
"start": 10,
"end": 34,
"text": "(Moro and Navigli, 2012;",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Substring Generalization",
"sec_num": "4.2"
},
{
"text": "In addition, we also compared our knowledge base with up-to-date human-contributed resources, namely Freebase (Bollacker et al., 2008) and DBpedia (Lehmann et al., 2014) , both from the dumps of April/May 2014. We first assessed the quality and the semantic consistency of our relations using manual evaluation. We ranked our relations according to their score (Section 3) and then created two samples (of size 100 and 250 respectively) of the top scoring relations. In order to evaluate the long tail of less confident relations, we created another two samples of the same size with randomly extracted relations. We presented these samples to our human judges, accompanying each relation with a set of 50 argument pairs and the corresponding textual definitions from BabelNet. For each item in the sample we asked whether it represented a meaningful relation and whether the extracted argument pairs were consistent with this relation and the corresponding definitions. If the answer was positive, the relation was considered as correct. Finally we estimated the overall precision of the sample as the proportion of correct items. Results are reported in Table 3 and compared to those obtained by our closest competitor, PATTY, in the setting of Section 5. In PATTY the confidence of a given pattern was estimated from its statistical strength (Nakashole et al., 2012). As shown in Table 3 , DEFIE achieved a comparable level of accuracy in every sample. An error analysis identified most errors as related to the vagueness of some short and general patterns, e.g. X take Y, X make Y. Others were related to parsing (e.g. in labeling the head word of complex noun phrases) or disambiguation. In addition, we used the same samples to estimate the novelty of the extracted information in comparison to currently available resources. We examined each correct relation pattern and looked manually for an equivalent relation in the knowledge bases Table 4 for both the top 100 sample and the random sample. The high proportion of relations not appearing in existing resources (especially across the random samples) suggests that DEFIE is capable of discovering information not obtainable from available knowledge bases, including very specific relations (X is blizzard in Y, X is Mayan language spoken by Y, X is governmentowned corporation in Y), as well as general but unusual ones (X used by writer of Y).",
"cite_spans": [
{
"start": 110,
"end": 134,
"text": "(Bollacker et al., 2008)",
"ref_id": "BIBREF0"
},
{
"start": 147,
"end": 169,
"text": "(Lehmann et al., 2014)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 1156,
"end": 1163,
"text": "Table 3",
"ref_id": "TABREF6"
},
{
"start": 1383,
"end": 1390,
"text": "Table 3",
"ref_id": "TABREF6"
},
{
"start": 1944,
"end": 1951,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Substring Generalization",
"sec_num": "4.2"
},
{
"text": "To assess the coverage of DEFIE we first tested our extracted relations on a public dataset described in (Nakashole et al., 2012) and consisting of 163 semantic relations manually annotated from five Wikipedia pages about musicians. Following the line of previous works (Nakashole et al., 2012; , for each annotation we sought a relation in our knowledge base carrying the same semantics. Results are reported in Table 5 . Consistently with the results in Table 4 , the proportion of novel information places DEFIE in line with its closest competitors, achieving a coverage of 80.3% with respect to the gold standard. Examples of relations not covered by our competitors are hasFatherInLaw and hasDaughterInLaw. Furthermore, relations holding between entities and general concepts (e.g. critizedFor, praisedFor, sentencedTo), are captured only by DEFIE and REVERB (which, however, lacks any argument semantics).",
"cite_spans": [
{
"start": 105,
"end": 129,
"text": "(Nakashole et al., 2012)",
"ref_id": "BIBREF38"
},
{
"start": 270,
"end": 294,
"text": "(Nakashole et al., 2012;",
"ref_id": "BIBREF38"
}
],
"ref_spans": [
{
"start": 413,
"end": 420,
"text": "Table 5",
"ref_id": "TABREF9"
},
{
"start": 456,
"end": 463,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Coverage of Relations",
"sec_num": "6.2"
},
{
"text": "We also assessed the coverage of resources based Table 6 , DEFIE reports a coverage between 81% and 89% depending on the resource, failing to cover mostly relations that refer to numerical properties (e.g. numberOfMembers).",
"cite_spans": [],
"ref_spans": [
{
"start": 49,
"end": 56,
"text": "Table 6",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Coverage of Relations",
"sec_num": "6.2"
},
{
"text": "Finally, we tested the coverage of DEFIE over individual relation instances. We selected a random sample of 100 triples from the two closest competitors exploiting textual corpora, i.e. PATTY and WISENET. For each selected triple a i , r, a j , we sought an equivalent relation instance in our knowledge base, i.e. one comprising a i and a j and a relation pattern expressing the same semantic relation of r. Results in Table 7 show a coverage greater than 65% over both samples. Given the dramatic reduction of corpus size and the high precision of the items extracted, these figures demonstrate that definitional knowledge is extremely valuable for relation extraction approaches. This might suggest that, even in large-scale OIE-based resources, a substantial amount of knowledge is likely to come from a rather smaller subset of definitional sentences within the source corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 420,
"end": 427,
"text": "Table 7",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Coverage of Relations",
"sec_num": "6.2"
},
{
"text": "We evaluated our relation taxonomy by manually assessing the accuracy of our taxonomization heuristics. Then we compared our results against PATTY, the only system among our closest competitors that generates a taxonomy of relations. The setting for this evaluation was the same of that of Section 6.1. However, as we lacked a confidence measure in this case, we just extracted a random sample of 200 hypernym edges for each generalization procedure. We presented these samples to our human judges and, for each hypernym edge, we asked whether the corresponding pair of relations represented a correct generalization. We then estimated the overall precision as the proportion of edges regarded as correct.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of Relation Taxonomization",
"sec_num": "6.3"
},
{
"text": "Results are reported in Table 8 , along with PATTY's results in the setting of Section 5; as PATTY's edges are ranked by confidence, we considered both its top confident 100 subsumptions and a random sample of the same size. As shown in Table 8 , DEFIE outperforms PATTY in terms of precision, and generates more than twice the number of edges overall. HARPY (Grycner and Weikum, 2014) enriches PATTY's taxonomy with 616,792 hypernym edges, but its alignment algorithm, in the setting of Section 5, also includes transitive edges and still yields a sparser taxonomy compared to ours, with a graph density of 2.32 \u00d7 10 \u22127 . Generalization errors in our taxonomy are mostly related to disambiguation errors or flaws in the Wikipedia Bitaxonomy (e.g. the concept Titular Church 1 bn marked as hyponym of Cardinal 1 bn ).",
"cite_spans": [
{
"start": 360,
"end": 386,
"text": "(Grycner and Weikum, 2014)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 24,
"end": 31,
"text": "Table 8",
"ref_id": "TABREF12"
},
{
"start": 237,
"end": 245,
"text": "Table 8",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "Quality of Relation Taxonomization",
"sec_num": "6.3"
},
{
"text": "We evaluated the disambiguation stage of DEFIE (Section 2.1) by comparing Babelfy against other state-of-the-art entity linking systems. In order to compare different disambiguation outputs we selected a random sample of 60,000 glosses from the input corpus of textual definitions (Section 5) and ran the relation extraction algorithm (Sections 2.1-2.3) using a different competitor in the disambiguation step each time. We eventually used the mappings in BabelNet to express each output using a common dictionary and sense inventory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of Entity Linking and Disambiguation",
"sec_num": "6.4"
},
{
"text": "The coverage obtained by each competitor was assessed by looking at the number of distinct relations extracted in the process, the total number of relation instances extracted, the number of distinct concepts or entities involved, and the average number of semantic nodes within the relation patterns. For each competitor, we also assessed the precision obtained by evaluating the quality and semantic consistency of the relation patterns, in the same manner as in Tables 9 and 10 for Babelfy and the following systems:",
"cite_spans": [],
"ref_spans": [
{
"start": 465,
"end": 480,
"text": "Tables 9 and 10",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Quality of Entity Linking and Disambiguation",
"sec_num": "6.4"
},
{
"text": "\u2022 TagME 2.0 7 (Ferragina and Scaiella, 2012) , which links text fragments to Wikipedia based on measures like sense commonness and keyphraseness (Mihalcea and Csomai, 2007) ;",
"cite_spans": [
{
"start": 14,
"end": 44,
"text": "(Ferragina and Scaiella, 2012)",
"ref_id": "BIBREF13"
},
{
"start": 145,
"end": 172,
"text": "(Mihalcea and Csomai, 2007)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of Entity Linking and Disambiguation",
"sec_num": "6.4"
},
{
"text": "\u2022 WAT (Piccinno and Ferragina, 2014) , an entity annotator that improves over TagME and features a re-designed spotting, disambiguation and pruning pipeline;",
"cite_spans": [
{
"start": 6,
"end": 36,
"text": "(Piccinno and Ferragina, 2014)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of Entity Linking and Disambiguation",
"sec_num": "6.4"
},
{
"text": "\u2022 DBpedia Spotlight 8 (Mendes et al., 2011) , which annotates text documents with DBpedia URIs using scores such as prominence, topical relevance and contextual ambiguity;",
"cite_spans": [
{
"start": 22,
"end": 43,
"text": "(Mendes et al., 2011)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of Entity Linking and Disambiguation",
"sec_num": "6.4"
},
{
"text": "\u2022 Wikipedia Miner 9 (Milne and Witten, 2013) , which combines parallelized processing of Wikipedia dumps, relatedness measures and annotation features.",
"cite_spans": [
{
"start": 20,
"end": 44,
"text": "(Milne and Witten, 2013)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of Entity Linking and Disambiguation",
"sec_num": "6.4"
},
{
"text": "As shown in Table 12 : Impact of each source on the extraction step with 2.37 semantic nodes on the average per sentence. This reflects on the quality of semantic relations, reported in Table 10 , with an overall increase of precision both in terms of relations and in terms of individual instances; even though WAT shows slightly higher precision over relations, its considerably lower coverage yields semantically poor patterns (0.39 semantic nodes on the average) and impacts on the overall quality of relations, where some ambiguity is necessarily retained. As an example, the pattern X is station in Y, extracted from WAT's disambiguation output, covers both railway stations and radio broadcasts. Babelfy produces, instead, two distinct relation patterns for each sense, tagging station as railway station 1 bn for the former and station 5 bn for the latter.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Table 12",
"ref_id": "TABREF2"
},
{
"start": 186,
"end": 194,
"text": "Table 10",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Quality of Entity Linking and Disambiguation",
"sec_num": "6.4"
},
{
"text": "We carried out an empirical analysis over the input corpus in our experimental setup, studying the impact of each source of textual definitions in isolation. In fact, as explained in Section 5, BabelNet's textual definitions come from various resources: WordNet, Wikipedia, Wikidata, Wiktionary and OmegaWiki. Table 11 shows the composition of the input corpus with respect to each of these definition sources. The distribution is rather skewed, with the vast majority of definitions coming from Wikipedia (almost 90% of the input corpus).",
"cite_spans": [],
"ref_spans": [
{
"start": 310,
"end": 318,
"text": "Table 11",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Impact of Definition Sources",
"sec_num": "6.5"
},
{
"text": "We ran the relation extraction algorithm (Sections 2.1-2.3) on each subset of the input corpus. As in previous experiments, we report the number of relation instances extracted, the number of distinct re- lations, and the average number of extractions for each relation. Results, as shown in Table 12 , are consistent with the composition of the input corpus in Table 11 : by relying solely on Wikipedia's first sentences, the extraction algorithm discovered 98% of all the distinct relations identified across the whole input corpus, and 93% of the total number of extracted instances. Wikidata provides more than 1 million extractions (5% of the total) but definitions are rather short and most of them (44.2%) generate only is-a relation instances. The remaining sources (WordNet, Wiktionary, OmegaWiki) account for less than 2% of the extractions.",
"cite_spans": [],
"ref_spans": [
{
"start": 292,
"end": 300,
"text": "Table 12",
"ref_id": "TABREF2"
},
{
"start": 362,
"end": 370,
"text": "Table 11",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Impact of Definition Sources",
"sec_num": "6.5"
},
{
"text": "6.6 Impact of the Approach vs. Impact of the Data DEFIE's relation extraction algorithm is explicitly designed to target textual definitions. Hence, the result it achieves is due to the mutual contribution of two key features: an OIE approach and the use of definitional data. In order to decouple these two factors and study their respective impacts, we carried out two experiments: first we applied DEFIE to a sample of non-definitional text; then we applied our closest competitor, PATTY, on the same definition corpus described in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Impact of Definition Sources",
"sec_num": "6.5"
},
{
"text": "Extraction from non-definitional text. We selected a random sample of Wikipedia pages from the English Wikipedia dump of October 2012. We processed each sentence as in Sections 2.1-2.2 and extracted instances of those relations produced by DEFIE in the original definitional setting (Section 5); we then automatically filtered out those instances where the arguments' hypernyms did not agree with the semantic types of the relation. We evaluated manually the quality of extractions on a sample of 100 items (as in Section 6.1) for both the full set of extracted instances and for the subset of extractions from the top 100 scoring relations. Results are reported in Table 13 : in both cases, precision figures show that extraction quality drops consistently in comparison to Section 6.1, suggesting that our extraction approach by itself is less accurate when moving to more complex sentences (with, e.g., subordinate clauses or coreferences).",
"cite_spans": [],
"ref_spans": [
{
"start": 666,
"end": 674,
"text": "Table 13",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Impact of Definition Sources",
"sec_num": "6.5"
},
{
"text": "PATTY on textual definitions. Since no opensource implementation of PATTY is available, we implemented a version of the algorithm which uses BABELFY for named entity disambiguation. We then ran it on our corpus of BabelNet definitions and compared the results against those originally obtained by PATTY (on the entire Wikipedia corpus) and those obtained by DEFIE. Figures are reported in Table 14 in terms of number of extracted relation instances, distinct relations and hypernym edges in the relation taxonomy. Results show that the dramatic reduction of corpus size affects the support sets of PATTY's relations, worsening both coverage and generalization capability.",
"cite_spans": [],
"ref_spans": [
{
"start": 389,
"end": 397,
"text": "Table 14",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Impact of Definition Sources",
"sec_num": "6.5"
},
{
"text": "To further investigate the potential of our approach, we explored the application of DEFIE to the enrichment of existing resources. We focused on BabelNet as a case study. In BabelNet's semantic network, nodes representing concepts and entities are only connected via lexicograhic relationships from WordNet (hypernymy, meronymy, etc.) or unlabeled edges derived from Wikipedia hyperlinks. Our extraction algorithm has the potential to provide useful information to both augment unlabeled edges with labels and explicit semantic content, and create additional connections based on semantic relations. Examples are shown in We carried out a preliminary analysis over all disambiguated relations with at least 10 extracted instances. For each relation pattern r, we first examined the concept pairs associated with its type signatures and looked in BabelNet for an unlabeled edge connecting the pair. Then we examined the whole set of extracted relation instances in R and looked in BabelNet for an unlabeled edge connecting the arguments a i and a j . Results in Table 16 show that only 27.7% of the concept pairs representing relation type signatures are connected in BabelNet, and most of these connections are unlabeled. By the same token, more than 4 million distinct argument pairs (53.5%) do not share any edge in the semantic network and, among those that do, less than 14% have a labeled relationship. These proportions suggest that our relations provide a potential enrichment of the underlying knowledge base in terms of both connectivity and labeling of existing edges. In BabelNet, our case study, cross-resource mappings might also propagate this information across other knowledge bases and rephrase semantic relations in terms of, e.g., automatically generated Wikipedia hyperlinks.",
"cite_spans": [
{
"start": 308,
"end": 335,
"text": "(hypernymy, meronymy, etc.)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1062,
"end": 1070,
"text": "Table 16",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Preliminary Study: Resource Enrichment",
"sec_num": "6.7"
},
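{
"text": "A sketch of the connectivity check described above, treating the semantic network as a networkx graph whose edges may carry an optional relation label (the attribute name is hypothetical):\n\nimport networkx as nx\n\ndef edge_status(network, pairs):\n    # classify each argument (or type-signature) pair against the network\n    stats = {'unconnected': 0, 'unlabeled': 0, 'labeled': 0}\n    for u, v in pairs:\n        if not network.has_edge(u, v):\n            stats['unconnected'] += 1\n        elif network.edges[u, v].get('label') is None:\n            stats['unlabeled'] += 1\n        else:\n            stats['labeled'] += 1\n    return stats",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminary Study: Resource Enrichment",
"sec_num": "6.7"
},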
{
"text": "From the earliest days, OIE systems had to cope with the dimension and heterogeneity of huge unstructured sources of text. The first systems employed statistical techniques and relied heavily on information redundancy. Then, as soon as semistructured resources came into play (Hovy et al., 2013) , researchers started developing learning systems based on self-supervision (Wu and Weld, 2007) and distant supervision (Mintz et al., 2009; Krause et al., 2012) . Crucial issues in distant supervision, like noisy training data, have been addressed in various ways: probabilistic graphical models (Riedel et al., 2010; Hoffmann et al., 2011) , sophisticated multi-instance learning algorithms (Surdeanu et al., 2012) , matrix factorization techniques (Riedel et al., 2013) , labeled data infusion (Pershina et al., 2014) or crowd-based human computing (Kondreddi et al., 2014) . A different strategy consists of moving from open text extraction to more constrained settings. For instance, the KNOWLEDGE VAULT (Dong et al., 2014) combines Web-scale extraction with prior knowledge from existing knowledge bases; BIPER-PEDIA relies on schema-level attributes from the query stream in order to create an ontology of class-attribute pairs; RENOUN (Yahya et al., 2014) in turn exploits BIPERPEDIA to extract facts expressed as noun phrases. DEFIE focuses, instead, on smaller and denser corpora of prescriptive knowledge. Although early works, such as MindNet (Richardson et al., 1998) , had already highlighted the potential of textual definitions for extracting reliable semantic information, no OIE approach to the best of our knowledge has exploited definitional data to extract and disambiguate a large knowledge base of semantic relations. The direction of most papers (especially in the recent OIE literature) seems rather the opposite, namely, to target Web-scale corpora. In contrast, we manage to extract a large amount of high-quality information by combining an OIE unsupervised approach with definitional data.",
"cite_spans": [
{
"start": 276,
"end": 295,
"text": "(Hovy et al., 2013)",
"ref_id": "BIBREF20"
},
{
"start": 372,
"end": 391,
"text": "(Wu and Weld, 2007)",
"ref_id": "BIBREF54"
},
{
"start": 416,
"end": 436,
"text": "(Mintz et al., 2009;",
"ref_id": "BIBREF31"
},
{
"start": 437,
"end": 457,
"text": "Krause et al., 2012)",
"ref_id": "BIBREF22"
},
{
"start": 593,
"end": 614,
"text": "(Riedel et al., 2010;",
"ref_id": "BIBREF47"
},
{
"start": 615,
"end": 637,
"text": "Hoffmann et al., 2011)",
"ref_id": "BIBREF19"
},
{
"start": 689,
"end": 712,
"text": "(Surdeanu et al., 2012)",
"ref_id": "BIBREF51"
},
{
"start": 747,
"end": 768,
"text": "(Riedel et al., 2013)",
"ref_id": "BIBREF48"
},
{
"start": 793,
"end": 816,
"text": "(Pershina et al., 2014)",
"ref_id": "BIBREF42"
},
{
"start": 848,
"end": 872,
"text": "(Kondreddi et al., 2014)",
"ref_id": "BIBREF21"
},
{
"start": 1005,
"end": 1024,
"text": "(Dong et al., 2014)",
"ref_id": "BIBREF7"
},
{
"start": 1239,
"end": 1259,
"text": "(Yahya et al., 2014)",
"ref_id": "BIBREF55"
},
{
"start": 1451,
"end": 1476,
"text": "(Richardson et al., 1998)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "A deeper linguistic analysis constitutes the focus of many OIE approaches. Syntactic dependencies are used to construct general relation patterns (Nakashole et al., 2012) , or to improve the quality of surface pattern realizations . Phenomena like synonymy and polysemy have been addressed with kernel-based similarity measures and soft clustering techniques (Min et al., 2012; , or exploiting the semantic types of relation arguments (Nakashole et al., 2012; Moro and Navigli, 2012) . An appropriate modeling of semantic types (e.g. selectional preferences) constitutes a line of research by itself, rooted in earlier works like (Resnik, 1996) and focused on either class-based (Clark and Weir, 2002) , or similarity-based (Erk, 2007) , approaches. However, these methods are used to model the semantics of verbs rather than arbitrary patterns. More recently some strategies based on topic modeling have been proposed, either to infer latent relation semantic types from OIE relations (Ritter et al., 2010) , or to directly learn an ontological structure from a starting set of relation instances (Movshovitz-Attias and Cohen, 2015) . However, the knowledge generated is often hard to interpret and integrate with existing knowledge bases without human intervention (Ritter et al., 2010) . In this respect, the semantic predicates proposed by Flati and Navigli (2013) seem to be more promising.",
"cite_spans": [
{
"start": 146,
"end": 170,
"text": "(Nakashole et al., 2012)",
"ref_id": "BIBREF38"
},
{
"start": 359,
"end": 377,
"text": "(Min et al., 2012;",
"ref_id": "BIBREF30"
},
{
"start": 435,
"end": 459,
"text": "(Nakashole et al., 2012;",
"ref_id": "BIBREF38"
},
{
"start": 460,
"end": 483,
"text": "Moro and Navigli, 2012)",
"ref_id": "BIBREF33"
},
{
"start": 630,
"end": 644,
"text": "(Resnik, 1996)",
"ref_id": "BIBREF45"
},
{
"start": 679,
"end": 701,
"text": "(Clark and Weir, 2002)",
"ref_id": "BIBREF4"
},
{
"start": 724,
"end": 735,
"text": "(Erk, 2007)",
"ref_id": "BIBREF9"
},
{
"start": 986,
"end": 1007,
"text": "(Ritter et al., 2010)",
"ref_id": "BIBREF49"
},
{
"start": 1098,
"end": 1133,
"text": "(Movshovitz-Attias and Cohen, 2015)",
"ref_id": "BIBREF37"
},
{
"start": 1267,
"end": 1288,
"text": "(Ritter et al., 2010)",
"ref_id": "BIBREF49"
},
{
"start": 1344,
"end": 1368,
"text": "Flati and Navigli (2013)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "A novelty in our approach is that issues like polysemy and synonymy are explicitly addressed with a unified entity linking and disambiguation algorithm. By incorporating explicit semantic content in our relation patterns, not only do we make relations less ambiguous, but we also abstract away from specific lexicalizations of the content words and merge together many patterns conveying the same semantics. Rather than using plain dependencies we also inject explicit semantic content into the dependency graph to generate a unified syntactic-semantic representation. Previous works used similar semantic graph representations to produce filtering rules for relation extraction, but they required a starting set of relation patterns and did not exploit syntactic information. A joint approach of syntacticsemantic analysis of text was used in works such as (Lao et al., 2012) , but they addressed a substantially different task (inference for knowledge base completion) and assumed a radically different setting, with a predefined starting set of semantic relations from a given knowledge base. As we enforce an OIE approach, we do not have such requirements and directly process the input text via parsing and disambiguation. This enables DEFIE to generate relations already integrated with resources like WordNet and Wikipedia, without additional alignment steps (Grycner and Weikum, 2014) , or semantic type propagations . As shown in Section 6.3, explicit semantic content within relation patterns underpins a rich and high-quality relation taxonomy, whereas generalization in (Nakashole et al., 2012) is limited to support set inclusion and leads to sparser and less accurate results.",
"cite_spans": [
{
"start": 858,
"end": 876,
"text": "(Lao et al., 2012)",
"ref_id": "BIBREF23"
},
{
"start": 1366,
"end": 1392,
"text": "(Grycner and Weikum, 2014)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "We presented DEFIE, an approach to OIE that, thanks to a novel unified syntactic-semantic analysis of text, harvests instances of semantic relations from a corpus of textual definitions. DEFIE extracts knowledge on a large scale, reducing data sparsity and disambiguating both arguments and relation patterns at the same time. Unlike previous semantically-enhanced approaches, mostly relying on the semantics of argument types, DEFIE is able to semantify relation phrases as well, by providing explicit links to the underlying knowledge base. We leveraged an input corpus of 4.3 million definitions and extracted over 20 million relation instances, with more than 250,000 distinct relations and almost 2.4 million concepts and entities involved. From these relations we automatically constructed a highquality relation taxonomy by exploiting the explicit semantic content of the relation patterns. In the resulting knowledge base concepts and entities are linked to existing resources, such as WordNet and Wikipedia, via the BabelNet semantic network. We evaluated DEFIE in terms of precision, coverage, novelty of information in comparison to existing resources and quality of disambiguation, and we compared our relation taxonomy against state-of-the-art systems obtaining highly competitive results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "8"
},
{
"text": "A key feature of our approach is its deep syntactic-semantic analysis targeted to textual definitions. In contrast to our competitors, where syntactic constraints are necessary in order to keep precision high when dealing with noisy data, DEFIE shows comparable (or greater) performances by exploiting a dense, noise-free definitional setting. DE-FIE generates a large knowledge base, in line with collaboratively-built resources and state-of-the-art OIE systems, but uses a much smaller amount of input data: our corpus of definitions comprises less than 83 million tokens overall, while other OIE systems exploit massive corpora like Wikipedia (typically more than 1.5 billion tokens), ClueWeb (more than 33 billion tokens), or the Web itself. Furthermore, our semantic analysis based on Babelfy enables the discovery of semantic connections between both general concepts and named entities, with the potential to enrich existing structured and semi-structured resources, as we showed in a preliminary study on BabelNet (cf. Section 6.7).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "8"
},
{
"text": "As the next step, we plan to apply DEFIE to open text and integrate it with definition extraction and automatic gloss finding algorithms (Navigli and Velardi, 2010; Dalvi et al., 2015) . Also, by further exploiting the underlying knowledge base, inference and learning techniques (Lao et al., 2012; Wang et al., 2015) can be applied to complement our model, generating new triples or correcting wrong ones. Fi-nally, another future perspective is to leverage the increasingly large variety of multilingual resources, like BabelNet, and move towards the modeling of language-independent relations.",
"cite_spans": [
{
"start": 137,
"end": 164,
"text": "(Navigli and Velardi, 2010;",
"ref_id": "BIBREF41"
},
{
"start": 165,
"end": 184,
"text": "Dalvi et al., 2015)",
"ref_id": "BIBREF5"
},
{
"start": 280,
"end": 298,
"text": "(Lao et al., 2012;",
"ref_id": "BIBREF23"
},
{
"start": 299,
"end": 317,
"text": "Wang et al., 2015)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "8"
},
{
"text": "In all the experiments of Section 6 we set \u03c1 = 10.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The simplifying assumption here is that two given relation patterns may be in a hypernymy-hyponymy relationship only when their plain syntactic structure is equivalent (e.g. is N1 by and is N2 by, with N1 and N2 being two distinct noun nodes).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
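The structural constraint in this footnote can be sketched as a simple check: represent each relation pattern as a token sequence with a marked noun node, and admit a hypernymy-hyponymy edge only when the two patterns coincide once that node is abstracted away. A minimal sketch follows; the `TOY_ISA` map stands in for the BabelNet taxonomy, and all names are illustrative rather than DEFIE's actual code.

```python
# Sketch of the footnote's constraint: pattern hypernymy is considered only
# between patterns with the same plain syntactic structure, i.e. patterns
# that differ solely in their (disambiguated) noun node.

TOY_ISA = {"composer": "musician"}  # hyponym noun sense -> hypernym noun sense

def pattern_hypernym(p1, p2):
    """p1, p2 are (tokens, noun_index) pairs; True if p1's noun is a direct
    hyponym of p2's noun and every other token matches exactly."""
    (toks1, i1), (toks2, i2) = p1, p2
    same_skeleton = (
        i1 == i2
        and len(toks1) == len(toks2)
        and all(a == b for k, (a, b) in enumerate(zip(toks1, toks2)) if k != i1)
    )
    return same_skeleton and TOY_ISA.get(toks1[i1]) == toks2[i2]

# 'is composer from' generalizes to 'is musician from' (same 'is N from' shape):
print(pattern_hypernym((("is", "composer", "from"), 1),
                       (("is", "musician", "from"), 1)))  # True
# Different skeletons are never compared, regardless of noun hypernymy:
print(pattern_hypernym((("is", "composer", "from"), 1),
                       (("is", "musician", "in"), 1)))    # False
```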
{
"text": "babelnet.org 5 According to the Wikipedia guidelines, an article should begin with a short declarative sentence, defining what (or who) is the subject and why it is notable.6 babelfy.org",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors gratefully acknowledge the support of the ERC Starting Grant MultiJEDI No. 259234. This research was also partially supported by Google through a Faculty Research Award granted in July 2012.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Freebase: A Collaboratively Created Graph Database For Structuring Human Knowledge",
"authors": [
{
"first": "Kurt",
"middle": [],
"last": "Bollacker",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Evans",
"suffix": ""
},
{
"first": "Praveen",
"middle": [],
"last": "Paritosh",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Sturge",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Taylor",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of SIGMOD",
"volume": "",
"issue": "",
"pages": "1247--1250",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: A Collab- oratively Created Graph Database For Structuring Hu- man Knowledge. In Proceedings of SIGMOD, pages 1247-1250.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Toward an Architecture for Never-Ending Language Learning",
"authors": [
{
"first": "",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "1306--1313",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell. 2010. Toward an Architecture for Never- Ending Language Learning. In Proceedings of AAAI, pages 1306-1313.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Widecoverage Efficient Statistical Parsing with CCG and Log-Linear Models",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "James",
"middle": [
"R"
],
"last": "Curran",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "4",
"pages": "493--552",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Clark and James R. Curran. 2007. Wide- coverage Efficient Statistical Parsing with CCG and Log-Linear Models. Computational Linguistics, 33(4):493-552.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Class-Based Probability Estimation Using a Semantic Hierarchy",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weir",
"suffix": ""
}
],
"year": 2002,
"venue": "Computational Linguistics",
"volume": "28",
"issue": "2",
"pages": "187--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Clark and David Weir. 2002. Class-Based Prob- ability Estimation Using a Semantic Hierarchy. Com- putational Linguistics, 28(2):187-206.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Automatic Gloss Finding for a Knowledge Base using Ontological Constraints",
"authors": [
{
"first": "Bhavana",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "Einat",
"middle": [],
"last": "Minkov",
"suffix": ""
},
{
"first": "Partha",
"middle": [
"P"
],
"last": "Talukdar",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of WSDM",
"volume": "",
"issue": "",
"pages": "369--378",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bhavana Dalvi, Einat Minkov, Partha P. Talukdar, and William W. Cohen. 2015. Automatic Gloss Finding for a Knowledge Base using Ontological Constraints. In Proceedings of WSDM, pages 369-378.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Knowledge Base Unification via Sense Embeddings and Disambiguation",
"authors": [
{
"first": "Claudio",
"middle": [
"Delli"
],
"last": "Bovi",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Espinosa Anke",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "726--736",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claudio Delli Bovi, Luis Espinosa Anke, and Roberto Navigli. 2015. Knowledge Base Unification via Sense Embeddings and Disambiguation. In Proceedings of EMNLP, pages 726-736.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Knowledge Vault: a Web-Scale Approach to Probabilistic Knowledge Fusion",
"authors": [
{
"first": "Xin",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Evgeniy",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "Geremy",
"middle": [],
"last": "Heitz",
"suffix": ""
},
{
"first": "Wilko",
"middle": [],
"last": "Horn",
"suffix": ""
},
{
"first": "Ni",
"middle": [],
"last": "Lao",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Strohmann",
"suffix": ""
},
{
"first": "Shaohua",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of KDD",
"volume": "",
"issue": "",
"pages": "601--610",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. 2014. Knowledge Vault: a Web-Scale Approach to Probabilistic Knowl- edge Fusion. In Proceedings of KDD, pages 601-610.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A Probabilistic Approach for Integrating Heterogeneous Knowledge Sources",
"authors": [
{
"first": "Arnab",
"middle": [],
"last": "Dutta",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Meilicke",
"suffix": ""
},
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Ponzetto",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ESWC",
"volume": "",
"issue": "",
"pages": "286--301",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arnab Dutta, Christian Meilicke, and Simone Paolo Ponzetto. 2014. A Probabilistic Approach for Inte- grating Heterogeneous Knowledge Sources. In Pro- ceedings of ESWC, pages 286-301.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A Simple, Similarity-based Model for Selectional Preferences",
"authors": [
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "216--223",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katrin Erk. 2007. A Simple, Similarity-based Model for Selectional Preferences. In Proceedings of ACL, page 216-223.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Open Information Extraction from the Web",
"authors": [
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "Michele",
"middle": [],
"last": "Banko",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2008,
"venue": "Commun. ACM",
"volume": "51",
"issue": "12",
"pages": "68--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oren Etzioni, Michele Banko, Stephen Soderland, and Daniel S. Weld. 2008. Open Information Extraction from the Web. Commun. ACM, 51(12):68-74.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Identifying Relations for Open Information Extraction",
"authors": [
{
"first": "Anthony",
"middle": [],
"last": "Fader",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1535--1545",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying Relations for Open Information Extraction. In Proceedings of EMNLP, pages 1535- 1545.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "WordNet: An Electronic Lexical Database",
"authors": [
{
"first": "Christiane",
"middle": [],
"last": "Fellbaum",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. Bradford Books.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Fast and Accurate Annotation of Short Texts with Wikipedia Pages",
"authors": [
{
"first": "Paolo",
"middle": [],
"last": "Ferragina",
"suffix": ""
},
{
"first": "Ugo",
"middle": [],
"last": "Scaiella",
"suffix": ""
}
],
"year": 2012,
"venue": "IEEE Software",
"volume": "29",
"issue": "1",
"pages": "70--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paolo Ferragina and Ugo Scaiella. 2012. Fast and Accu- rate Annotation of Short Texts with Wikipedia Pages. IEEE Software, 29(1):70-75.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "SPred: Largescale Harvesting of Semantic Predicates",
"authors": [
{
"first": "Tiziano",
"middle": [],
"last": "Flati",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "1222--1232",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tiziano Flati and Roberto Navigli. 2013. SPred: Large- scale Harvesting of Semantic Predicates. In Proceed- ings of ACL, pages 1222-1232.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Two Is Bigger (and Better) Than One: the Wikipedia Bitaxonomy Project",
"authors": [
{
"first": "Tiziano",
"middle": [],
"last": "Flati",
"suffix": ""
},
{
"first": "Daniele",
"middle": [],
"last": "Vannella",
"suffix": ""
},
{
"first": "Tommaso",
"middle": [],
"last": "Pasini",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "945--955",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tiziano Flati, Daniele Vannella, Tommaso Pasini, and Roberto Navigli. 2014. Two Is Bigger (and Better) Than One: the Wikipedia Bitaxonomy Project. In Pro- ceedings of ACL, pages 945-955.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Algorithm 97: Shortest Path",
"authors": [
{
"first": "Robert",
"middle": [
"W"
],
"last": "Floyd",
"suffix": ""
}
],
"year": 1962,
"venue": "Communications of the ACM",
"volume": "5",
"issue": "6",
"pages": "345--345",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert W. Floyd. 1962. Algorithm 97: Shortest Path. Communications of the ACM, 5(6):345-345.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "HARPY: Hypernyms and Alignment of Relational Paraphrases",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Grycner",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "2195--2204",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Grycner and Gerhard Weikum. 2014. HARPY: Hypernyms and Alignment of Relational Paraphrases. In Proceedings of COLING, pages 2195-2204.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Biperpedia: An Ontology for Search Applications",
"authors": [
{
"first": "Rahul",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Halevy",
"suffix": ""
},
{
"first": "Xuezhi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"Euijong"
],
"last": "Whang",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of VLDB",
"volume": "",
"issue": "",
"pages": "505--516",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rahul Gupta, Alon Halevy, Xuezhi Wang, Steven Eui- jong Whang, and Fei Wu. 2014. Biperpedia: An Ontology for Search Applications. In Proceedings of VLDB, pages 505-516.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Knowledgebased Weak Supervision for Information Extraction of Overlapping Relations",
"authors": [
{
"first": "Raphael",
"middle": [],
"last": "Hoffmann",
"suffix": ""
},
{
"first": "Congle",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of NAACL HLT",
"volume": "",
"issue": "",
"pages": "541--540",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S. Weld. 2011. Knowledge- based Weak Supervision for Information Extraction of Overlapping Relations. In Proceedings of NAACL HLT, pages 541-540.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Collaboratively built semi-structured content and Artificial Intelligence: The story so far",
"authors": [
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Ponzetto",
"suffix": ""
}
],
"year": 2013,
"venue": "Artificial Intelligence",
"volume": "194",
"issue": "",
"pages": "2--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eduard Hovy, Roberto Navigli, and Simone Paolo Ponzetto. 2013. Collaboratively built semi-structured content and Artificial Intelligence: The story so far. Artificial Intelligence, 194:2-27.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Combining Information Extraction and Human Computing for Crowdsourced Knowledge Acquisition",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Sarath Kumar Kondreddi",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Triantafillou",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ICDE",
"volume": "",
"issue": "",
"pages": "988--999",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarath Kumar Kondreddi, Peter Triantafillou, and Ger- hard Weikum. 2014. Combining Information Extrac- tion and Human Computing for Crowdsourced Knowl- edge Acquisition. In Proceedings of ICDE, pages 988-999.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Large-Scale Learning of Relation-Extraction Rules with Distant Supervision from the Web",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Krause",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hans",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Feiyu",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of ISWC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Krause, Hong Li, Hans Uszkoreit, and Feiyu Xu. 2012. Large-Scale Learning of Relation- Extraction Rules with Distant Supervision from the Web. In Proceedings of ISWC.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Reading the Web with Learned Syntactic-Semantic Inference Rules",
"authors": [
{
"first": "Ni",
"middle": [],
"last": "Lao",
"suffix": ""
},
{
"first": "Amarnag",
"middle": [],
"last": "Subramanya",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "1017--1026",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ni Lao, Amarnag Subramanya, Fernando Pereira, and William W. Cohen. 2012. Reading the Web with Learned Syntactic-Semantic Inference Rules. In Pro- ceedings of EMNLP-CoNLL, pages 1017-1026.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "DBpedia -A Large-scale, Multilingual Knowledge Base Extracted from Wikipedia",
"authors": [
{
"first": "Jens",
"middle": [],
"last": "Lehmann",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Isele",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Jakob",
"suffix": ""
},
{
"first": "Anja",
"middle": [],
"last": "Jentzsch",
"suffix": ""
},
{
"first": "Dimitris",
"middle": [],
"last": "Kontokostas",
"suffix": ""
},
{
"first": "Pablo",
"middle": [
"N"
],
"last": "Mendes",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Hellmann",
"suffix": ""
},
{
"first": "Mohamed",
"middle": [],
"last": "Morsey",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "van Kleef",
"suffix": ""
},
{
"first": "S\u00f6ren",
"middle": [],
"last": "Auer",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Bizer",
"suffix": ""
}
],
"year": 2014,
"venue": "Semantic Web Journal",
"volume": "",
"issue": "",
"pages": "1--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick van Kleef, S\u00f6ren Auer, and Christian Bizer. 2014. DBpedia -A Large-scale, Multilingual Knowledge Base Extracted from Wikipedia. Semantic Web Journal, pages 1-29.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "No Noun Phrase Left Behind: Detecting and Typing Unlinkable Entities",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Mausam",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "893--903",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Lin, Mausam, and Oren Etzioni. 2012. No Noun Phrase Left Behind: Detecting and Typing Un- linkable Entities. In Proceedings of EMNLP-CoNLL, pages 893-903.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "YAGO3: A Knowledge Base from Multilingual Wikipedias",
"authors": [
{
"first": "Farzaneh",
"middle": [],
"last": "Mahdisoltani",
"suffix": ""
},
{
"first": "Joanna",
"middle": [],
"last": "Biega",
"suffix": ""
},
{
"first": "Fabian",
"middle": [
"M"
],
"last": "Suchanek",
"suffix": ""
}
],
"year": 2015,
"venue": "CIDR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Farzaneh Mahdisoltani, Joanna Biega, and Fabian M. Suchanek. 2015. YAGO3: A Knowledge Base from Multilingual Wikipedias. In CIDR.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "DBPedia Spotlight: Shedding Light on the Web of Documents",
"authors": [
{
"first": "Pablo",
"middle": [
"N"
],
"last": "Mendes",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Jakob",
"suffix": ""
},
{
"first": "Andr\u00e9s",
"middle": [],
"last": "Garc\u00eda-Silva",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Bizer",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of I-Semantics",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pablo N. Mendes, Max Jakob, Andr\u00e9s Garc\u00eda-Silva, and Christian Bizer. 2011. DBPedia Spotlight: Shedding Light on the Web of Documents. In Proceedings of I-Semantics, pages 1-8.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Wikify!: Linking Documents to Encyclopedic Knowledge",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Andras",
"middle": [],
"last": "Csomai",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of CIKM",
"volume": "",
"issue": "",
"pages": "233--242",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rada Mihalcea and Andras Csomai. 2007. Wikify!: Linking Documents to Encyclopedic Knowledge. In Proceedings of CIKM, pages 233-242.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "An Open-Source Toolkit for Mining Wikipedia",
"authors": [
{
"first": "David",
"middle": [],
"last": "Milne",
"suffix": ""
},
{
"first": "Ian",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
}
],
"year": 2013,
"venue": "Artificial Intelligence",
"volume": "194",
"issue": "",
"pages": "222--239",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Milne and Ian H. Witten. 2013. An Open-Source Toolkit for Mining Wikipedia. Artificial Intelligence, 194:222-239.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Ensemble Semantics for Large-scale Unsupervised Relation Extraction",
"authors": [
{
"first": "Bonan",
"middle": [],
"last": "Min",
"suffix": ""
},
{
"first": "Shuming",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
},
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "1027--1037",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bonan Min, Shuming Shi, Ralph Grishman, and Chin- Yew Lin. 2012. Ensemble Semantics for Large-scale Unsupervised Relation Extraction. In Proceedings of EMNLP-CoNLL, pages 1027-1037.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Distant Supervision for Relation Extraction Without Labeled Data",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Mintz",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bills",
"suffix": ""
},
{
"first": "Rion",
"middle": [],
"last": "Snow",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of ACL-IJCNLP",
"volume": "",
"issue": "",
"pages": "1003--1011",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Dan Juraf- sky. 2009. Distant Supervision for Relation Extrac- tion Without Labeled Data. In Proceedings of ACL- IJCNLP, pages 1003-1011.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Reading the Web: A Breakthrough Goal for AI",
"authors": [
{
"first": "Tom",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom M. Mitchell. 2005. Reading the Web: A Break- through Goal for AI. AI Magazine.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "WiSeNet: Building a Wikipedia-based Semantic Network with Ontologized Relations",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Moro",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of CIKM",
"volume": "",
"issue": "",
"pages": "1672--1676",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrea Moro and Roberto Navigli. 2012. WiSeNet: Building a Wikipedia-based Semantic Network with Ontologized Relations. In Proceedings of CIKM, pages 1672-1676.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Integrating Syntactic and Semantic Analysis into the Open Information Extraction Paradigm",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Moro",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of IJCAI",
"volume": "",
"issue": "",
"pages": "2148--2154",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrea Moro and Roberto Navigli. 2013. Integrating Syntactic and Semantic Analysis into the Open Infor- mation Extraction Paradigm. In Proceedings of IJCAI, pages 2148-2154.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Semantic Rule Filtering for Web-Scale Relation Extraction",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Moro",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Krause",
"suffix": ""
},
{
"first": "Feiyu",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "Hans",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of ISWC",
"volume": "",
"issue": "",
"pages": "347--362",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrea Moro, Hong Li, Sebastian Krause, Feiyu Xu, Roberto Navigli, and Hans Uszkoreit. 2013. Semantic Rule Filtering for Web-Scale Relation Extraction. In Proceedings of ISWC, pages 347-362.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Entity Linking meets Word Sense Disambiguation: a Unified Approach. TACL",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Moro",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Raganato",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "2",
"issue": "",
"pages": "231--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrea Moro, Alessandro Raganato, and Roberto Nav- igli. 2014. Entity Linking meets Word Sense Disam- biguation: a Unified Approach. TACL, 2:231-244.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "KB-LDA: Jointly Learning a Knowledge Base of Hierarchy, Relations, and Facts",
"authors": [
{
"first": "Dana",
"middle": [],
"last": "Movshovitz-Attias",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dana Movshovitz-Attias and William W. Cohen. 2015. KB-LDA: Jointly Learning a Knowledge Base of Hi- erarchy, Relations, and Facts. In Proceedings of ACL.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "PATTY: A Taxonomy of Relational Patterns with Semantic Types",
"authors": [
{
"first": "Ndapandula",
"middle": [],
"last": "Nakashole",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
},
{
"first": "Fabian",
"middle": [
"M"
],
"last": "Suchanek",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "1135--1145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ndapandula Nakashole, Gerhard Weikum, and Fabian M. Suchanek. 2012. PATTY: A Taxonomy of Rela- tional Patterns with Semantic Types. In Proceedings of EMNLP-CoNLL, pages 1135-1145.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Transforming Wikipedia into a Large Scale Multilingual Concept Network",
"authors": [
{
"first": "Vivi",
"middle": [],
"last": "Nastase",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2013,
"venue": "Artificial Intelligence",
"volume": "194",
"issue": "",
"pages": "62--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vivi Nastase and Michael Strube. 2013. Transform- ing Wikipedia into a Large Scale Multilingual Concept Network. Artificial Intelligence, 194:62-85.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Ba-belNet: The Automatic Construction, Evaluation and Application of a Wide-Coverage Multilingual Semantic Network",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Ponzetto",
"suffix": ""
}
],
"year": 2012,
"venue": "Artificial Intelligence",
"volume": "193",
"issue": "",
"pages": "217--250",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto Navigli and Simone Paolo Ponzetto. 2012. Ba- belNet: The Automatic Construction, Evaluation and Application of a Wide-Coverage Multilingual Seman- tic Network. Artificial Intelligence, 193:217-250.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Learning Word-class Lattices for Definition and Hypernym Extraction",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "Paola",
"middle": [],
"last": "Velardi",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "1318--1327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto Navigli and Paola Velardi. 2010. Learning Word-class Lattices for Definition and Hypernym Ex- traction. In Proceedings of ACL, pages 1318-1327.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Infusion of Labeled Data into Distant Supervision for Relation Extraction",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Pershina",
"suffix": ""
},
{
"first": "Bonan",
"middle": [],
"last": "Min",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "732--738",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Pershina, Bonan Min, Wei Xu, and Ralph Grish- man. 2014. Infusion of Labeled Data into Distant Su- pervision for Relation Extraction. In Proceedings of ACL, pages 732-738.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "From TagME to WAT: a New Entity Annotator",
"authors": [
{
"first": "Francesco",
"middle": [],
"last": "Piccinno",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Ferragina",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ERD",
"volume": "",
"issue": "",
"pages": "55--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francesco Piccinno and Paolo Ferragina. 2014. From TagME to WAT: a New Entity Annotator. In Proceed- ings of ERD, pages 55-62.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Taxonomy Induction Based on a Collaboratively Built Knowledge Repository",
"authors": [
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Ponzetto",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2011,
"venue": "Artificial Intelligence",
"volume": "175",
"issue": "9",
"pages": "1737--1756",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simone Paolo Ponzetto and Michael Strube. 2011. Tax- onomy Induction Based on a Collaboratively Built Knowledge Repository. Artificial Intelligence, 175(9- 10):1737-1756.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Selectional Constraints: An Information-Theoretic Model and its Computational Realization",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 1996,
"venue": "Cognition",
"volume": "",
"issue": "",
"pages": "127--159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Resnik. 1996. Selectional Constraints: An Information-Theoretic Model and its Computational Realization. Cognition, 61(1-2):127-159.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "MindNet: Acquiring and Structuring Semantic Information from Text",
"authors": [
{
"first": "Stephen",
"middle": [
"D"
],
"last": "Richardson",
"suffix": ""
},
{
"first": "William",
"middle": [
"B"
],
"last": "Dolan",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vanderwende",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "1098--1102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen D. Richardson, William B. Dolan, and Lucy Van- derwende. 1998. MindNet: Acquiring and Structur- ing Semantic Information from Text. In Proceedings of ACL, pages 1098-1102.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Modeling Relations and Their Mentions without Labeled Text",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ECML-PKDD",
"volume": "",
"issue": "",
"pages": "148--163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling Relations and Their Mentions with- out Labeled Text. In Proceedings of ECML-PKDD, pages 148-163.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Relation Extraction with Matrix Factorization and Universal Schemas",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [
"M"
],
"last": "Marlin",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of NAACL HLT",
"volume": "",
"issue": "",
"pages": "74--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M. Marlin. 2013. Relation Extraction with Matrix Factorization and Universal Schemas. In Pro- ceedings of NAACL HLT, pages 74-84.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "A Latent Dirichlet Allocation Method for Selectional Preferences",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Mausam",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "424--434",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Ritter, Mausam, and Oren Etzioni. 2010. A La- tent Dirichlet Allocation Method for Selectional Pref- erences. In Proceedings of ACL, pages 424-434.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "The Syntactic Process",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Steedman. 2000. The Syntactic Process. MIT Press, Cambridge, MA, USA.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Multi-instance Multilabel Learning for Relation Extraction",
"authors": [
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Julie",
"middle": [],
"last": "Tibshirani",
"suffix": ""
},
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "455--465",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D. Manning. 2012. Multi-instance Multi- label Learning for Relation Extraction. In Proceedings of EMNLP-CoNLL, pages 455-465.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Wikidata: A New Platform for Collaborative Data Collection",
"authors": [
{
"first": "Denny",
"middle": [],
"last": "Vrande\u010di\u0107",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of WWW",
"volume": "",
"issue": "",
"pages": "1063--1064",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denny Vrande\u010di\u0107. 2012. Wikidata: A New Platform for Collaborative Data Collection. In Proceedings of WWW, pages 1063-1064.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Efficient Inference and Learning in a Large Knowledge Base -Reasoning with Extracted Information using a Locally Groundable First-Order Probabilistic Logic",
"authors": [
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Kathryn",
"middle": [],
"last": "Mazaitis",
"suffix": ""
},
{
"first": "Ni",
"middle": [],
"last": "Lao",
"suffix": ""
},
{
"first": "Tom",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2015,
"venue": "Machine Learning",
"volume": "100",
"issue": "",
"pages": "101--126",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Yang Wang, Kathryn Mazaitis, Ni Lao, Tom M. Mitchell, and William W. Cohen. 2015. Efficient In- ference and Learning in a Large Knowledge Base - Reasoning with Extracted Information using a Locally Groundable First-Order Probabilistic Logic. Machine Learning, 100(1):101-126.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Autonomously Semantifying Wikipedia",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of CIKM",
"volume": "",
"issue": "",
"pages": "41--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Wu and Daniel S. Weld. 2007. Autonomously Semantifying Wikipedia. In Proceedings of CIKM, pages 41-50.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "ReNoun: Fact Extraction for Nominal Attributes",
"authors": [
{
"first": "Mohamed",
"middle": [],
"last": "Yahya",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"Euijong"
],
"last": "Whang",
"suffix": ""
},
{
"first": "Rahul",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Halevy",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "325--335",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohamed Yahya, Steven Euijong Whang, Rahul Gupta, and Alon Halevy. 2014. ReNoun: Fact Extraction for Nominal Attributes. In Proceedings of EMNLP, pages 325-335.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Syntactic-semantic graph construction from a textual definition",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "Precision against score(R) (a) and H R (b)",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF2": {
"text": "Hypernym (a) and substring (b) generalizations",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF1": {
"num": null,
"text": "n) are all the distinct argument hypernyms over the domain and range of R and probabilities p(h i ) are estimated from the proportion of arguments covered in such sets. The lower H R , the better semantic types of R are defined. As a matter of fact, however, some valid but over-general relations (e.g. X is a Y, X is used for Y) have inherently high values of H R . To obtain a balanced score,",
"html": null,
"type_str": "table",
"content": "<table><tr><td>Pattern</td><td>Score</td><td>Entropy</td></tr><tr><td>X directed by Y</td><td>4 025.80</td><td>1.74</td></tr><tr><td>X known for Y</td><td>2 590.70</td><td>3.65</td></tr><tr><td>X is election district 1 bn of Y X is composer 1 bn from Y X is street 1 bn named after Y X is village 2 bn founded in 1912 in Y</td><td>110.49 39.92 1.91 0.91</td><td>0.83 2.08 2.24 0.18</td></tr></table>"
},
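The entropy H_R described in the caption above is a Shannon entropy over the distribution of argument hypernyms of a relation. A minimal sketch follows; the base-2 logarithm and the toy hypernym lists are assumptions, since the exact log base and the final balanced score are not recoverable from this fragment.

```python
import math
from collections import Counter

def semantic_type_entropy(argument_hypernyms):
    """H_R over the distinct hypernyms h_i observed for the arguments of a
    relation R; p(h_i) is estimated as the proportion of arguments that
    each hypernym covers, as described in the caption."""
    counts = Counter(argument_hypernyms)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Toy example: a tightly typed relation vs. an over-general one.
tight = ["film"] * 9 + ["play"]                          # almost one type -> low H_R
general = ["film", "person", "city", "song", "war"] * 2  # many types -> high H_R
print(round(semantic_type_entropy(tight), 2))    # 0.47
print(round(semantic_type_entropy(general), 2))  # 2.32
```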
"TABREF2": {
"num": null,
"text": "Examples of relation scores",
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF4": {
"num": null,
"text": "Comparative statistics on the relation extraction process",
"html": null,
"type_str": "table",
"content": "<table><tr><td>5 Experimental Setup</td></tr></table>"
},
"TABREF5": {
"num": null,
"text": "DEFIE 0.93 \u00b1 0.01 0.91 \u00b1 0.02 0.79 \u00b1 0.02 0.81 \u00b1 0.08 PATTY 0.93 \u00b1 0.05",
"html": null,
"type_str": "table",
"content": "<table><tr><td>Top 100</td><td>Top 250</td><td>Rand 100</td><td>Rand 250</td></tr><tr><td/><td>N/A</td><td>0.80 \u00b1 0.08</td><td>N/A</td></tr></table>"
},
"TABREF6": {
"num": null,
"text": "Precision of relation patternsNELL PATTY REVERB WISENET Freebase DBpedia",
"html": null,
"type_str": "table",
"content": "<table><tr><td>Top 100</td><td>.571</td><td>.238</td><td>.214</td><td>.155</td><td>.571</td><td>.461</td></tr><tr><td>Rand 100</td><td>.942</td><td>.711</td><td>.596</td><td>.635</td><td>.904</td><td>.880</td></tr></table>"
},
"TABREF7": {
"num": null,
"text": "Novelty of the extracted information",
"html": null,
"type_str": "table",
"content": "<table><tr><td>6 Experiments</td></tr><tr><td>6.1 Quality of Relations</td></tr></table>"
},
"TABREF9": {
"num": null,
"text": "Coverage of semantic relations of both our OIE competitors and human-contributed resources. For instance, given the relation X born in Y, NELL and REVERB have the equivalent relations personborninlocation and is born in, while Freebase and DBpedia have Place of birth and birthPlace respectively.",
"html": null,
"type_str": "table",
"content": "<table><tr><td>We then</td></tr></table>"
},
"TABREF10": {
"num": null,
"text": "Coverage of manually curated resources",
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td>PATTY</td><td>WISENET</td></tr><tr><td>Random 100</td><td>66%</td><td>69%</td></tr></table>"
},
"TABREF11": {
"num": null,
"text": "Coverage of individual relation instances",
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"2\">Hyp. Gen. Substr. Gen. PATTY (Top) PATTY (Rand)</td></tr><tr><td colspan=\"3\">Precision 0.87 \u00b1 0.03 0.90 \u00b1 0.02 0.85 \u00b1 0.07 # Edges 44 412 20 339 0.62 \u00b1 0.09</td></tr><tr><td>Density</td><td>1.89 \u00d7 10 \u22126</td><td>7.64 \u00d7 10 \u22129</td></tr></table>"
},
"TABREF12": {
"num": null,
"text": "",
"html": null,
"type_str": "table",
"content": "<table><tr><td>: Precision and coverage of the relation taxonomy</td></tr><tr><td>on human-defined semantic relations: we extracted</td></tr><tr><td>three random samples of 100 relations from Free-</td></tr><tr><td>base, DBpedia and NELL and looked for seman-</td></tr><tr><td>tically equivalent relations in our knowledge base.</td></tr><tr><td>As shown in</td></tr></table>"
},
"TABREF14": {
"num": null,
"text": "Coverage for different disambiguation systems",
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td>Relations</td><td>Relation instances</td></tr><tr><td>Babelfy</td><td>82.3%</td><td>76.6%</td></tr><tr><td>TagME 2.0</td><td>76.0%</td><td>62.0%</td></tr><tr><td>WAT</td><td>84.6%</td><td>72.6%</td></tr><tr><td>DBpedia Spotlight</td><td>70.5%</td><td>62.6%</td></tr><tr><td>Wikipedia Miner</td><td>71.7%</td><td>56.0%</td></tr></table>"
},
"TABREF15": {
"num": null,
"text": "",
"html": null,
"type_str": "table",
"content": "<table><tr><td>: Precision for different disambiguation systems</td></tr><tr><td>Section 6.1, both at the level of semantic relations</td></tr><tr><td>(on the top 150 relation patterns) and at the level</td></tr><tr><td>of individual relation instances (on a randomly ex-</td></tr><tr><td>tracted sample of 150 triples). Results are shown in</td></tr></table>"
},
"TABREF16": {
"num": null,
"text": "",
"html": null,
"type_str": "table",
"content": "<table><tr><td>, Babelfy outperforms all its</td></tr><tr><td>competitors in terms of coverage and, due to its</td></tr><tr><td>unified word sense disambiguation and entity link-</td></tr><tr><td>ing approach, extracts semantically richer patterns</td></tr></table>"
},
"TABREF17": {
"num": null,
"text": "Composition of the input corpus by source",
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"3\"># Relations # Relation instances Avg. Extractions</td></tr><tr><td>Wikipedia</td><td>251 954</td><td>19 455 992</td><td>77.58</td></tr><tr><td>Wikidata</td><td>5 414</td><td>1 033 732</td><td>191.01</td></tr><tr><td>WordNet</td><td>2 260</td><td>128 200</td><td>56.73</td></tr><tr><td>Wiktionary</td><td>2 863</td><td>143 990</td><td>50.52</td></tr><tr><td>OmegaWiki</td><td>1 168</td><td>45 818</td><td>39.45</td></tr></table>"
},
"TABREF19": {
"num": null,
"text": "Extraction results over non-definitional text",
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"3\"># Relation instances # Relations # Edges</td></tr><tr><td>PATTY (definitions)</td><td>3 212 065</td><td>41 593</td><td>4 785</td></tr><tr><td>PATTY (Wikipedia)</td><td colspan=\"3\">15 802 946 1 631 531 20 339</td></tr><tr><td>Our system</td><td>20 807 732</td><td colspan=\"2\">255 881 44 412</td></tr></table>"
},
"TABREF20": {
"num": null,
"text": "Performance of PATTY on definitional data",
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF22": {
"num": null,
"text": "Examples of augmented semantic edges",
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF23": {
"num": null,
"text": "",
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"3\"># Concept pairs # Unlabeled # Labeled</td></tr><tr><td>Type signatures</td><td>1 403</td><td>299</td><td>90</td></tr><tr><td>Relation instances</td><td>8 493 588</td><td>3 401 677</td><td>551 331</td></tr></table>"
},
"TABREF24": {
"num": null,
"text": "Concept pairs and associated edges in BabelNet",
"html": null,
"type_str": "table",
"content": "<table/>"
}
}
}
}