|
{ |
|
"paper_id": "D12-1035", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:24:47.737969Z" |
|
}, |
|
"title": "Natural Language Questions for the Web of Data", |
|
"authors": [ |
|
{ |
|
"first": "Mohamed", |
|
"middle": [], |
|
"last": "Yahya", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Klaus", |
|
"middle": [], |
|
"last": "Berberich", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Shady", |
|
"middle": [], |
|
"last": "Elbassuoni", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Maya", |
|
"middle": [], |
|
"last": "Ramanath", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "IIT-Delhi", |
|
"location": { |
|
"country": "India" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Volker", |
|
"middle": [], |
|
"last": "Tresp", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Siemens AG, Corporate Technology", |
|
"location": { |
|
"settlement": "Munich", |
|
"country": "Germany" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Gerhard", |
|
"middle": [], |
|
"last": "Weikum", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "The Linked Data initiative comprises structured databases in the Semantic-Web data model RDF. Exploring this heterogeneous data by structured query languages is tedious and error-prone even for skilled users. To ease the task, this paper presents a methodology for translating natural language questions into structured SPARQL queries over linked-data sources. Our method is based on an integer linear program to solve several disambiguation tasks jointly: the segmentation of questions into phrases; the mapping of phrases to semantic entities, classes, and relations; and the construction of SPARQL triple patterns. Our solution harnesses the rich type system provided by knowledge bases in the web of linked data, to constrain our semantic-coherence objective function. We present experiments on both the question translation and the resulting query answering.", |
|
"pdf_parse": { |
|
"paper_id": "D12-1035", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "The Linked Data initiative comprises structured databases in the Semantic-Web data model RDF. Exploring this heterogeneous data by structured query languages is tedious and error-prone even for skilled users. To ease the task, this paper presents a methodology for translating natural language questions into structured SPARQL queries over linked-data sources. Our method is based on an integer linear program to solve several disambiguation tasks jointly: the segmentation of questions into phrases; the mapping of phrases to semantic entities, classes, and relations; and the construction of SPARQL triple patterns. Our solution harnesses the rich type system provided by knowledge bases in the web of linked data, to constrain our semantic-coherence objective function. We present experiments on both the question translation and the resulting query answering.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Recently, very large, structured, and semantically rich knowledge bases have become available. Examples are Yago (Suchanek et al., 2007) , DBpedia (Auer et al., 2007) , and Freebase (Bollacker et al., 2008) . DBpedia forms the nucleus of the Web of Linked Data (Heath and Bizer, 2011) , which interconnects hundreds of RDF data sources with a total of 30 billion subject-property-object (SPO) triples.", |
|
"cite_spans": [ |
|
{ |
|
"start": 113, |
|
"end": 136, |
|
"text": "(Suchanek et al., 2007)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 147, |
|
"end": 166, |
|
"text": "(Auer et al., 2007)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 182, |
|
"end": 206, |
|
"text": "(Bollacker et al., 2008)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 261, |
|
"end": 284, |
|
"text": "(Heath and Bizer, 2011)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "1.1" |
|
}, |
|
{ |
|
"text": "The diversity of linked-data sources and their high heterogeneity make it difficult for humans to search and discover relevant information. As linked data is in RDF format, the standard approach would be to run structured queries in triple-pattern-based languages like SPARQL, but only expert programmers are able to precisely specify their information needs and cope with the high heterogeneity of the data (and absence or very high complexity of schema information). For less initiated users the only option to query this rich data is by keyword search (e.g., via services like sig.ma (Tummarello et al., 2010) ). None of these approaches is satisfactory. Instead, the by far most convenient approach would be to search in knowledge bases and the Web of linked data by means of natural-language questions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 587, |
|
"end": 612, |
|
"text": "(Tummarello et al., 2010)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "1.1" |
|
}, |
|
{ |
|
"text": "As an example, consider a quiz question like \"Which female actor played in Casablanca and is married to a writer who was born in Rome?\". The answer could be found by querying several linked data sources together, like the IMDBstyle LinkedMDB movie database and the DBpedia knowledge base, exploiting that there are entity-level sameAs links between these collections. One can think of different formulations of the example question, such as \"Which actress from Casablanca is married to a writer from Rome?\". A possible SPARQL formulation, assuming a user familiar with the schema of the underlying knowledge base(s), could consist of the following six triple patterns (joined by shared-variable bindings): ?x hasGender female, ?x isa actor, ?x actedIn Casablanca (film), ?x marriedTo ?w, ?w isa writer, ?w bornIn Rome. This complex query, which involves multiple joins, would yield good results, but it is difficult for the user to come up with the precise choices for relations, classes, and entities. This would require familiarity with the contents of the knowledge base, which no average user is expected to have. Our goal is to automatically create such structured queries by mapping the user's question into this representation. Keyword search is usually not a viable alternative when the information need involves joining multiple triples to construct the final result, notwithstanding good attempts like that of Pound et al. (2010) . In the example, the obvious keyword query \"female actress Casablanca married writer born Rome\" lacks a clear specification of the relations among the different entities.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1420, |
|
"end": 1439, |
|
"text": "Pound et al. (2010)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "1.1" |
|
}, |
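To make the target representation concrete, the six triple patterns above can be assembled into one conjunctive SPARQL query. The following is an illustrative sketch only: the identifiers are the paper's shorthand names written as relative IRIs, not the full Yago2/LinkedMDB URIs.

```python
# Illustrative sketch only: the paper's shorthand identifiers are written
# as relative IRIs; a real query would use full Yago2/LinkedMDB URIs.
QUERY = """
SELECT ?x WHERE {
  ?x <hasGender> <female> .
  ?x <isa> <actor> .
  ?x <actedIn> <Casablanca_(film)> .
  ?x <marriedTo> ?w .
  ?w <isa> <writer> .
  ?w <bornIn> <Rome> .
}
"""
print(QUERY)
```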
|
{ |
|
"text": "Given a natural language question q N L and a knowledge base KB, our goal is to translate q N L into a formal query q F L that captures the information need expressed by q N L .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem", |
|
"sec_num": "1.2" |
|
}, |
|
{ |
|
"text": "We focus on input questions that put the emphasis on entities, classes, and relations between them. We do not consider aggregations (counting, max/min, etc.) and negations. As a result, we generate structured queries of the form known as conjunctive queries or select-project-join queries in database terminology. Our target language is SPARQL 1.0, where the above focus leads to queries that consist of multiple triple patterns, that is, conjunctions of SPO search conditions. We do not use any pre-existing query templates, but generate queries from scratch as they involve a variable number of joins with apriori unknown join structure.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem", |
|
"sec_num": "1.2" |
|
}, |
|
{ |
|
"text": "A major challenge is in the ambiguity of the phrases occurring in a natural-language question. Phrases can denote entities (e.g., the city of Casablanca or the movie Casablanca), classes (e.g., actresses, movies, married people), or relations/properties (e.g., marriedTo between people, played between people and movies). A priori, we do not know if a phrase should be mapped to an entity, a class, or a relation. In fact, some phrases may denote any of these three kinds of targets. For example, a phrase like \"wrote score for\" in a question about film music composers, could map to the composerfilm relation wroteSoundtrackForFilm, to the class of movieSoundtracks (a subclass of music pieces), or to an entity like the movie \"The Score\". Depending on the choice, we may arrive at a structurally good query (with triple patterns that can actually be joined) or at a meaningless and non-executable query (with disconnected triple patterns). This generalized disambiguation problem is much more challenging than the more focused task of named entity disambiguation (NED). It is also different from general word sense disambiguation (WSD), which focuses on the meaning of individual words (e.g., mapping them to WordNet synsets).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem", |
|
"sec_num": "1.2" |
|
}, |
|
{ |
|
"text": "In our approach, we introduce new elements towards making translation of questions into SPARQL triple patterns more expressive and robust. Most importantly, we solve the disambiguation and mapping tasks jointly, by encoding them into a comprehensive integer linear program (ILP): the segmentation of questions into meaningful phrases, the mapping of phrases to semantic entities, classes, and relations, and the construction of SPARQL triple patterns. The ILP harnesses the richness of large knowledge bases like Yago2 (Hoffart et al., 2011b) , which has information not only about entities and relations, but also about surface names and textual patterns by which web sources refer to them. For example, Yago2 knows that \"Casablanca\" can refer to the city or the film, and \"played in\" is a pattern that can denote the actedIn relation. In addition, we can leverage the rich type system of semantic classes. For example, knowing that Casablanca is a film, for translating \"played in\" we can focus on relations with a type signature whose range includes films, as opposed to sports teams, for example. Such information is encoded in judiciously designed constraints for the ILP. Although we intensively harness Yago2, our approach does not depend on a specific choice of knowledge base or language resource for type information and phrase/name dictionaries. Other knowledge bases such as DBpedia can be easily plugged in.", |
|
"cite_spans": [ |
|
{ |
|
"start": 519, |
|
"end": 542, |
|
"text": "(Hoffart et al., 2011b)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contribution", |
|
"sec_num": "1.3" |
|
}, |
|
{ |
|
"text": "Based on these ideas, we have developed a framework and system, called DEANNA (DEep Answers for maNy Naturally Asked questions), that comprises a full suite of components for question decomposition, mapping constituents into the semantic concept space, generating alternative candidate mappings, and computing a coherent mapping of all constituents into a set of SPARQL triple patterns that can be directly executed on one or more linked data sources.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contribution", |
|
"sec_num": "1.3" |
|
}, |
|
{ |
|
"text": "We use the Yago2 knowledge base, with its rich type system, as a semantic backbone. Yago2 is composed of instances of binary relations derived from Wikipedia and WordNet. The instances, called facts, provide both ontological information and instance data. Figure 1 shows sample facts from Yago2. Each fact is composed of semantic items that can be divided into relations, entities, and classes. Entities and classes together are referred to as concepts. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 256, |
|
"end": 264, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Given a natural language question, Figure 2 shows the tasks DEANNA performs to translate a question into a structured query. The first three steps prepare the input for constructing a disambiguation graph for mapping the phrases in a question onto entities, classes, and relations, in a coherent manner. The fourth step formulates this generalized disambiguation problem as an ILP with complex constraints and computes the best solution using an ILP solver. Finally, the fifth and sixth step together use the disambiguated mapping to construct an executable SPARQL query.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 35, |
|
"end": 43, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Framework", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "A question sentence is a sequence of tokens, q N L = (t 0 , t 1 , ..., t n ). A phrase is a contiguous subsequence of tokens (", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Framework", |
|
"sec_num": "3" |
|
}, |
|
|
{ |
|
"text": "The input question is fed into the following pipeline of six steps: 1. Phrase detection. Phrases are detected that potentially correspond to semantic items such as 'Who', 'played in', 'movie' and 'Casablanca'. 2. Phrase mapping to semantic items. This includes finding that the phrase 'played in' can either refer to the semantic relation actedIn or to playedForTeam and that the phrase 'Casablanca' can potentially refer to Casablanca (film) or Casablanca, Morocco. This step merely constructs a candidate space for the mapping. The actual disambiguation is addressed by step 4, discussed below. 3. Q-unit generation. Intuitively, a q-unit is a triple composed of phrases. Their generation and role will be discussed in detail in the next section. 4. Joint disambiguation, where the ambiguities in the phrase-to-semantic-item mapping are resolved. This entails resolving the ambiguity in phrase borders, and above all, choosing the best fitting candidates from the semantic space of entities, classes, and relations. Here, we determine for our running example that 'played in' refers to the semantic relation actedIn and not to playedForTeam and the phrase 'Casablanca' refers to Casablanca (film) and not Casablanca, Morocco. 5. Semantic items grouping to form semantic triples. For example, we determine that the relation marriedTo connects person referred to by 'Who' and writer to form the semantic triple person marriedTo writer. This is done via q-units. 6. Query generation. For SPARQL queries, semantic triples such as person marriedTo writer have to be mapped to suitable triple patterns with appropriate join conditions expressed through common variables: ?x type person, ?x marriedTo ?w, and ?w type writer for the example.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Framework", |
|
"sec_num": "3" |
|
}, |
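The six steps can be read as a straightforward function pipeline. The skeleton below is only a schematic sketch of this control flow; every function name is a hypothetical stub for the component described in the corresponding subsection, not DEANNA's actual code.

```python
# Schematic sketch of DEANNA's six-step pipeline; all names are hypothetical stubs.

def detect_phrases(question):                  # step 1 (Sec. 3.1)
    return []

def map_phrases(phrases):                      # step 2 (Sec. 3.2)
    return {}

def generate_q_units(question, phrases):       # step 3 (Sec. 3.3)
    return []

def solve_ilp(phrases, candidates, q_units):   # step 4 (Sec. 4)
    return {}

def group_triples(mapping, q_units):           # step 5 (Sec. 3.5)
    return []

def build_sparql(triples):                     # step 6 (Sec. 3.5)
    return "SELECT * WHERE { }"

def translate(question):
    phrases = detect_phrases(question)
    candidates = map_phrases(phrases)
    q_units = generate_q_units(question, phrases)
    mapping = solve_ilp(phrases, candidates, q_units)
    triples = group_triples(mapping, q_units)
    return build_sparql(triples)

print(translate("Which actress from Casablanca is married to a writer from Rome?"))
```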
|
{ |
|
"text": "A detected phrase p is a pair < T oks, l > where T oks is a phrase and l is a label, l \u2208 {concept, relation}, indicating whether a phrase is a relation phrase or a concept phrase. P r is the set of all detected relation phrases and P c is the set of all detected concept phrases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Phrase Detection", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "One special type of detected relation phrase is the null phrase, where no relation is explicitly mentioned, but can be induced. The most prominent example of this is the case of adjectives, such as 'Australian movie', where we know there is a relation being expressed between 'Australia' and 'movie'.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Phrase Detection", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We use multiple detectors for detecting phrases of different types. For concept detection, we use a detector that works against a phrase-concept dictionary which looks as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Phrase Detection", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "{'Rome','eternal city'} \u2192 Rome {'Casablanca'} \u2192 Casablanca (film)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Phrase Detection", |
|
"sec_num": "3.1" |
|
}, |
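A dictionary-based concept detector of this kind amounts to a longest-match lookup over token subsequences. The sketch below uses a toy dictionary; DEANNA's dictionary comes from Yago2's means relation, and the matching details here are our own simplification.

```python
# Minimal sketch of dictionary-based concept-phrase detection
# (toy dictionary; DEANNA uses Yago2's "means" relation instead).
DICTIONARY = {
    "rome": "Rome",
    "eternal city": "Rome",
    "casablanca": "Casablanca (film)",
}

def detect_concept_phrases(tokens, max_len=3):
    detections = []
    for i in range(len(tokens)):
        for l in range(max_len, 0, -1):          # prefer longer matches
            phrase = " ".join(tokens[i:i + l]).lower()
            if phrase in DICTIONARY:
                detections.append((tokens[i:i + l], "concept"))
                break
    return detections

print(detect_concept_phrases("Which actress from Casablanca".split()))
```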
|
{ |
|
"text": "We experimented with using third-party named entity recognizers but the results were not satisfactory. This dictionary was mostly constructed as part of the knowledge base, independently of the questionto-query translation task in the form of instances of the means relation in Yago2, an example of which is shown in Figure 1 For relation detection, we experimented with various approaches. We mainly rely on a relation detector based on ReVerb (Fader et al., 2011) with additional POS tag patterns, in addition to our own which looks for patterns in dependency parses.", |
|
"cite_spans": [ |
|
{ |
|
"start": 445, |
|
"end": 465, |
|
"text": "(Fader et al., 2011)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 317, |
|
"end": 325, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Phrase Detection", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "After phrases are detected, each phrase is mapped to a set of semantic items. The mapping of concept phrases also relies on the phrase-concept dictionary.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Phrase Mapping", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "To map relation phrases, we rely on a corpus of textual patterns to relation mappings of the form:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Phrase Mapping", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "{'play','star in','act','leading role'} \u2192 actedIn {'married', 'spouse','wife'} \u2192 marriedTo", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Phrase Mapping", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Distinct phrase occurrences will map to different semantic item instances. We discuss why this is important when we discuss the construction of the disambiguation graph and variable assignment in the structured query.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Phrase Mapping", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Dependency parsing identifies triples of tokens, or triploids, t rel , t arg1 , t arg2 , where t rel , t arg1 , t arg2 \u2208 q N L are seeds for phrases, with the triploid acting as a seed for a potential SPARQL triple pattern. Here, t rel is the seed for the relation phrase, while t arg1 and t arg2 are seeds for the two arguments. At this point, there is no attempt to assign subject/object roles to the arguments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency Parsing & Q-Unit Generation", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Triploids are collected by looking for specific dependency patterns in dependency graphs (de Marneffe et al., 2006) . The most prominent pattern we look for is a verb and its arguments. Other patterns include adjectives and their arguments, prepositionally modified tokens and objects of prepositions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 93, |
|
"end": 115, |
|
"text": "Marneffe et al., 2006)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency Parsing & Q-Unit Generation", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "By combining triploids with detected phrases, we obtain q-units. A q-unit is a triple of sets of phrases, {p rel \u2208 P r }, {p arg1 \u2208 P c }, {p arg2 \u2208 P c } , where t rel \u2208 p rel and similarly for arg 1 and arg 2 . Conceptually, one can view a q-unit as a placeholder node with three sets of edges, each connecting the same q-node to a phrase that corresponds to a relation or concept phrase in the same q-unit. This notion of nodes and edges will be made more concrete when we present our disambiguation graph construction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency Parsing & Q-Unit Generation", |
|
"sec_num": "3.3" |
|
}, |
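Combining triploids with overlapping detected phrases can be sketched as follows. The tuple layout mirrors the q-unit definition above; the phrase representation (token-span triples) and all names are our own illustration.

```python
# Sketch: build a q-unit by collecting, for each triploid seed token,
# the detected phrases that contain it (names and layout are hypothetical).

def phrases_containing(seed_index, phrases):
    # each phrase is (start, end, label) over token positions, end exclusive
    return [p for p in phrases if p[0] <= seed_index < p[1]]

def build_q_unit(triploid, rel_phrases, concept_phrases):
    t_rel, t_arg1, t_arg2 = triploid  # token indices from dependency parsing
    return (
        phrases_containing(t_rel, rel_phrases),
        phrases_containing(t_arg1, concept_phrases),
        phrases_containing(t_arg2, concept_phrases),
    )

# "Which actress from Casablanca ..." with 'from' as the relation seed:
rel_phrases = [(2, 3, "relation")]       # 'from'
concept_phrases = [(1, 2, "concept"),    # 'actress'
                   (3, 4, "concept")]    # 'Casablanca'
print(build_q_unit((2, 1, 3), rel_phrases, concept_phrases))
```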
|
{ |
|
"text": "The core contribution of this paper is a framework for disambiguating phrases into semantic itemscovering relations, classes, and entities in a unified manner. This can be seen as a joint task combining named entity disambiguation for entities, word sense disambiguation for classes (common nouns), and relation extraction. The next section presents the disambiguation framework in detail.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Disambiguation of Phrase Mappings", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Once phrases are mapped to unique semantic items, we proceed to generate queries in two steps. First, semantic items are grouped into triples. This is done using the triploids generated earlier. The power of using a knowledge base is that we have a rich type system that allows us to tell if two semantic items are compatible or not. Each relation has a type signature and we check whether the candidate items are compatible with the signature.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Query Generation", |
|
"sec_num": "3.5" |
|
}, |
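A type-signature compatibility check can be sketched as a simple membership test against the class hierarchy. The signature and superclass tables below are toy data, not Yago2's actual type system.

```python
# Sketch: type-signature compatibility check (toy data, hypothetical names).
SIGNATURES = {"actedIn": ("person", "movie"),
              "marriedTo": ("person", "person")}
SUPERCLASSES = {"actor": {"person"}, "writer": {"person"},
                "Casablanca (film)": {"movie"}}

def compatible(relation, arg1, arg2):
    dom, rng = SIGNATURES[relation]
    # an item passes if the expected type is among its superclasses
    # (or equals the item itself, as a degenerate fallback)
    return (dom in SUPERCLASSES.get(arg1, {arg1}) and
            rng in SUPERCLASSES.get(arg2, {arg2}))

print(compatible("actedIn", "actor", "Casablanca (film)"))  # True
print(compatible("actedIn", "actor", "writer"))             # False
```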
|
{ |
|
"text": "We did not assign subject/object roles in triploids and q-units because a natural language relation phrase might express the inverse of a semantic relation, e.g., the natural language expression 'directed by' and the relation isDirectorOf with respect to the movies domain are inverses of each other. Therefore, we check which assignment of arg1 and arg2 is compatible with the semantic relation. If both arrangements are compatible, then we give preference to the assignment given by the dependency parsers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Query Generation", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "Once semantic items are grouped into triples, it is an easy task to expand them to SPARQL triple patterns. This is done by replacing each semantic class with a distinct type-constrained variable. Note that this is the reason why each distinct phrase maps to a distinct instance of a semantic class, to ensure correct variable assignment. This becomes clear when we consider the question \"Which singer is married to a singer?\", which requires two distinct variables each constrained to bind to an entity of type singer.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Query Generation", |
|
"sec_num": "3.5" |
|
}, |
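A minimal sketch of the variable-assignment point: distinct occurrences of the same class phrase, as in "Which singer is married to a singer?", must receive distinct fresh variables. The representation is ours, not DEANNA's.

```python
import itertools

def variables_for(class_occurrences):
    # one fresh variable per distinct occurrence of a class phrase
    counter = itertools.count()
    return {occ: f"?v{next(counter)}" for occ in class_occurrences}

# two distinct occurrences of the class 'singer', keyed by token position
occurrences = [("singer", 1), ("singer", 6)]
print(variables_for(occurrences))  # {('singer', 1): '?v0', ('singer', 6): '?v1'}
```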
|
{ |
|
"text": "The goal of the disambiguation step is to compute a partial mapping of phrases onto semantic items, such that each phrase is assigned to at most one semantic item. This step also resolves the phraseboundary ambiguity, by enforcing that only nonoverlapping phrases are mapped. As the result of disambiguating one phrase can influence the mapping of other phrases, we consider all phrases jointly in one big disambiguation task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Disambiguation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In the following, we construct a disambiguation graph that encodes all possible mappings. We impose a variety of complex constraints (mutual exclusion among overlapping phrases, type constraints among the selected semantic items, etc.), and define an objective function that aims to maximize the joint quality of the mapping. The graph construction itself may resemble similar models used in NED (e.g., (Milne and Witten, 2008; Kulkarni et al., 2009; Hoffart et al., 2011a) ). Recall, however, that our task is more complex because we jointly consider entities, classes, and relations in the candidate space of possible mappings. Because of this complication and to capture our complex constraints, we do not employ graph algorithms, but model the general disambiguation problem as an ILP.", |
|
"cite_spans": [ |
|
{ |
|
"start": 403, |
|
"end": 427, |
|
"text": "(Milne and Witten, 2008;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 428, |
|
"end": 450, |
|
"text": "Kulkarni et al., 2009;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 451, |
|
"end": 473, |
|
"text": "Hoffart et al., 2011a)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Disambiguation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Joint disambiguation takes place over a disambiguation graph DG = (", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Disambiguation Graph", |
|
"sec_num": "4.1" |
|
}, |
|
|
{ |
|
"text": "is a q-edge. Each such edge connects a placeholder q-node to a p-node with a specific role as a relation, or one of the two arguments. A q-unit, as presented earlier, can be seen as a qnode along with its outgoing q-edges. Figure 3 shows the disambiguation graph for our running example (excluding coherence edges between s-nodes).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 223, |
|
"end": 231, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Disambiguation Graph", |
|
"sec_num": "4.1" |
|
}, |
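The node and edge sets defined above translate directly into a data structure. A minimal sketch using Python dataclasses follows; the field names are our own, not DEANNA's.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class PNode:            # phrase node: token span plus label
    toks: tuple
    label: str          # 'concept' or 'relation'

@dataclass
class DisambiguationGraph:
    s_nodes: set = field(default_factory=set)   # semantic items
    p_nodes: set = field(default_factory=set)   # phrases
    q_nodes: set = field(default_factory=set)   # q-unit placeholders
    e_sim: dict = field(default_factory=dict)   # (p, s) -> similarity weight
    e_coh: dict = field(default_factory=dict)   # (s1, s2) -> coherence weight
    e_q: set = field(default_factory=set)       # (q, p, d), d in {rel, arg1, arg2}

g = DisambiguationGraph()
p = PNode(("played", "in"), "relation")
g.p_nodes.add(p)
g.s_nodes.add("actedIn")
g.e_sim[(p, "actedIn")] = 0.8
print(len(g.p_nodes), len(g.e_sim))
```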
|
{ |
|
"text": "We next describe how the weights on similarity edges and semantic coherence edges are defined. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Edge Weights", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Semantic coherence, Coh sem , captures to what extent two semantic items occur in the same context. This is different from semantic similarity (Sim sem ), which is usually evaluated using the distance between nodes in a taxonomy (Resnik, 1995) . While we expect Sim sem (George Bush, Woody Allen) to be higher than Sim sem (Woody Allen, Terminator) we would like Coh sem (Woody Allen, Terminator), both of which are from the entertainment domain, to be higher than Coh sem (George Bush, Woody Allen).", |
|
"cite_spans": [ |
|
{ |
|
"start": 229, |
|
"end": 243, |
|
"text": "(Resnik, 1995)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Coherence", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "For Yago2, we characterize an entity e by its inlinks InLinks(e): the set of Yago2 entities whose corresponding Wikipedia pages link to the entity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Coherence", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "To be able to compare semantic items of different semantic types (entities, relations, and classes), we need to extend this to classes and relations. For class c with entities e, its inlinks are defined as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Coherence", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "InLinks(c) = e\u2208c Inlinks(e)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Coherence", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "For relations, we only consider those that map entities to entities (e.g. actedIn, produced), for which we define the set of inlinks as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Coherence", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "InLinks(r) = (e 1 ,e 2 )\u2208r (InLinks(e1) \u2229 InLinks(e2))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Coherence", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "The intuition behind this is that when the two arguments of an instance of the relation co-occur, then the relation is being expressed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Coherence", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "We define the semantic coherence (Coh sem ) between two semantic items s 1 and s 2 as the Jaccard coefficient of their sets of inlinks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Coherence", |
|
"sec_num": "4.2.1" |
|
}, |
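The two InLinks extensions and the Jaccard-based coherence can be written down directly from the definitions above. The link sets in this sketch are made-up toy data.

```python
# Sketch of Coh_sem from the definitions above: InLinks lifted from entities
# to classes and relations, then the Jaccard coefficient of the inlink sets.

def inlinks_class(member_entities, inlinks):
    # InLinks(c) = union over entities e in c of InLinks(e)
    return set().union(*(inlinks[e] for e in member_entities))

def inlinks_relation(instance_pairs, inlinks):
    # InLinks(r) = union over (e1, e2) in r of InLinks(e1) & InLinks(e2)
    return set().union(*(inlinks[e1] & inlinks[e2] for e1, e2 in instance_pairs))

def coh_sem(links1, links2):
    # Jaccard coefficient of the two inlink sets
    union = links1 | links2
    return len(links1 & links2) / len(union) if union else 0.0

inlinks = {"Woody Allen": {"a", "b", "c"},   # toy inlink sets
           "Terminator": {"b", "c", "d"},
           "George Bush": {"e"}}
print(coh_sem(inlinks["Woody Allen"], inlinks["Terminator"]))   # 0.5
print(coh_sem(inlinks["Woody Allen"], inlinks["George Bush"]))  # 0.0
print(inlinks_class(["Woody Allen", "Terminator"], inlinks))
```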
|
{ |
|
"text": "Similarity weights are computed differently for entities, classes, and relations. For entities, we use a normalized prior score based on how often a phrase refers to a certain entity in Wikipedia. For classes, we use a normalized prior that reflects the number of members in a class. Finally, for relations, similarity reflects the maximum n-gram similarity between the phrase and any of the relation's surface forms. We use Lucene for indexing and searching the relation surface forms.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Similarity Weights", |
|
"sec_num": "4.2.2" |
|
}, |
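The paper does not spell out the exact n-gram similarity measure, and the Lucene index is elided here. As one plausible instantiation, the sketch below uses character-trigram Jaccard similarity against each surface form and takes the maximum.

```python
# Hedged sketch: character-trigram Jaccard as one possible n-gram similarity;
# the paper's exact measure and its Lucene-backed index are not reproduced.

def trigrams(s):
    s = f"  {s.lower()} "                     # pad so short strings still match
    return {s[i:i + 3] for i in range(len(s) - 2)}

def ngram_similarity(phrase, surface_forms):
    p = trigrams(phrase)
    return max(len(p & trigrams(f)) / len(p | trigrams(f)) for f in surface_forms)

print(ngram_similarity("played in", ["play", "star in", "act"]))
```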
|
{ |
|
"text": "The result of disambiguation is a subgraph of the disambiguation graph, yielding the most coherent mappings. We employ an ILP to this end. Before describing our ILP, we state some necessary definitions: Given the above definitions, our objective function is", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Disambiguation Graph Processing", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "\u2022 Triple dimensions: d \u2208 {rel, arg 1 , arg 2 } \u2022 Tokens: T = {t 0 , t 1 , ..., t n }. \u2022 Phrases: P = {p 0 , p 1 , ..., p k }. \u2022 Semantic items: S = {s 0 , s 1 , ..., s l }. \u2022 Token occurrences: P(t) = {p \u2208 P | t \u2208 p}. \u2022 X i \u2208 {0, 1} indicates if p-node i is selected. \u2022 Y ij \u2208 {0,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Disambiguation Graph Processing", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "maximize \u03b1 i,j w ij Y ij + \u03b2 k,l v kl Z kl + \u03b3 m,n,d Q mnd", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Disambiguation Graph Processing", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "subject to the following constraints: 1. A p-node can be assigned to one s-node at most:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Disambiguation Graph Processing", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "j Y ij \u2264 1, \u2200i 2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Disambiguation Graph Processing", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "If a p-s similarity edge is chosen, then the respective p-node must be chosen:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Disambiguation Graph Processing", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Y ij \u2264 X i , \u2200j", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Disambiguation Graph Processing", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "3. If s-nodes k and l are chosen (Z kl = 1), then there are p-nodes mapping to each of the s-nodes k and l ( Y ik = 1 for some i and Y jl = 1 for some j):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Disambiguation Graph Processing", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Z kl \u2264 i Y ik and Z kl \u2264 j Y jl 4", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Disambiguation Graph Processing", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": ". No token can appear as part of two phrases:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Disambiguation Graph Processing", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "i\u2208P(t) X i \u2264 1, \u2200t \u2208 T 5.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Disambiguation Graph Processing", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "At most one q-edge is selected for a dimension:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Disambiguation Graph Processing", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "n Q mnd \u2264 1, \u2200m, d", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Disambiguation Graph Processing", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "6. If the q-edge mnd is chosen (Q mnd = 1) then p-node n must be selected:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Disambiguation Graph Processing", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Q mnd \u2264 X n , \u2200m, d", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Disambiguation Graph Processing", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "7. Each semantic triple should include a relation:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Disambiguation Graph Processing", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "E r \u2265 Q mn d + X n + Y n r \u2212 2 \u2200m, n , r, d = rel", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Disambiguation Graph Processing", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "8. Each triple should have at least one class:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Disambiguation Graph Processing", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "C c 1 + C c 2 \u2265 Q mn d 1 + X n + Y n c 1 + Q mn d 2 + X n + Y n c 2 \u2212 5, \u2200m, n , n , r, c 1 , c 2 , d 1 = arg1, d 2 = arg2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Disambiguation Graph Processing", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "This is not invoked for existential questions that return Boolean answers and are translated to ASK queries in SPARQL. An example is the question \"Did Tom Cruise act in Top Gun?\", which can be translated to ASK {Tom Cruise actedIn Top Gun}. 9. Type constraints are respected (through q-edges):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Disambiguation Graph Processing", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "t rc 1 + t rc 2 \u2265 Q mn d 1 + X n + Y n r + Q mn d 2 + X n + Y n c 1 + Q mn d 3 + X n + Y n c 2 \u2212 7 \u2200m, n , n , n , r, c 1 , c 2 , d 1 = rel, d 2 = arg1, d3 = arg2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Disambiguation Graph Processing", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The above is a sophisticated ILP, and most likely NP-hard. However, even with ten thousands of variables it is within the regime of modern ILP solvers. In our experiments, we used Gurobi (Gur, 2011), and achieved run-times -typically of a few seconds. Figure 4 shows the resulting subgraph for the disambiguation graph of Figure 3 . Note how common p-nodes between q-units capture joins.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 252, |
|
"end": 260, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF4" |
|
}, |
|
{ |
|
"start": 322, |
|
"end": 330, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Disambiguation Graph Processing", |
|
"sec_num": "4.3" |
|
}, |
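A much-reduced sketch of the ILP, using PuLP with the bundled CBC solver rather than Gurobi: only the similarity term of the objective and constraints 1, 2, and 4 from the list above are encoded, and all weights are toy values.

```python
import pulp

# Toy instance: two overlapping phrase candidates over the token 'Casablanca'.
phrases = {0: {3}, 1: {3, 4}}   # p-node -> set of token positions it covers
sim = {(0, "Casablanca (film)"): 0.6,
       (0, "Casablanca, Morocco"): 0.4,
       (1, "Casablanca (film)"): 0.9}   # (p-node, s-node) -> weight w_ij

prob = pulp.LpProblem("deanna_sketch", pulp.LpMaximize)
X = {i: pulp.LpVariable(f"X_{i}", cat="Binary") for i in phrases}
Y = {ij: pulp.LpVariable(f"Y_{k}", cat="Binary") for k, ij in enumerate(sim)}

prob += pulp.lpSum(w * Y[ij] for ij, w in sim.items())  # alpha * sum w_ij Y_ij

for i in phrases:  # (1) a p-node is assigned to at most one s-node
    prob += pulp.lpSum(Y[ij] for ij in sim if ij[0] == i) <= 1
for ij in sim:     # (2) a chosen similarity edge implies its p-node is chosen
    prob += Y[ij] <= X[ij[0]]
for t in set().union(*phrases.values()):  # (4) no token in two selected phrases
    prob += pulp.lpSum(X[i] for i, toks in phrases.items() if t in toks) <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print([ij for ij in sim if Y[ij].value() == 1])  # [(1, 'Casablanca (film)')]
```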
|
{ |
|
"text": "Our experiments are based on two collections of questions: the QALD-1 task for question answering over linked data (QAL, 2011) and a collection of questions used in (Elbassuoni et al., 2011; Elbassuoni et al., 2009) in the context of the NAGA project, for informative ranking of SPARQL query answers (Elbassuoni et al. (2009) evaluated the SPARQL queries, but the underlying questions are formulated in natural language.) The NAGA collection is based on linking data from IMDB with the Yago2 knowledge base. This is an interesting linkeddata case: IMDB provides data about movies, actors, directors, and movie plots (in the form of descriptive keywords and phrases); Yago2 adds semantic types and relational facts for the participating entities. Yago2 provides nearly 3 million concepts and 100 relations, of which 41 lie within the scope of our framework.", |
|
"cite_spans": [ |
|
{ |
|
"start": 115, |
|
"end": 126, |
|
"text": "(QAL, 2011)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 165, |
|
"end": 190, |
|
"text": "(Elbassuoni et al., 2011;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 191, |
|
"end": 215, |
|
"text": "Elbassuoni et al., 2009)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 300, |
|
"end": 325, |
|
"text": "(Elbassuoni et al. (2009)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Typical example questions for these two collections are: \"Which software has been published by Mean Hamster Software?\" for QALD-1, and \"Which director has won the Academy Award for Best director and is married to an actress that has won the Academy Award for Best Actress?\" for NAGA. For both collections, some questions are out-of-scope for our setting, because they mention entities or relations that are not available in the underlying datasets, contain date or time comparisons, or involve aggregation such as counting. After re-moving these questions, our test set consists of 27 QALD-1 training questions out of a total of 50 and 44 NAGA questions, out of a total of 87. We used the 19 questions from the QALD-1 test set that are within the scope of our method for tuning the hyperparameters (\u03b1, \u03b2, \u03b3) in the ILP objective function.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We evaluated the output of DEANNA at three stages in the processing pipeline: a) after the disambiguation of phrases, b) after the generation of the SPARQL query, and c) after obtaining answers from the underlying linked-data sources. This way, we could obtain insights into our building blocks, in addition to assessing the end-to-end performance. In particular, we could assess the goodness of the question-to-query translation independently of the actual answer quality which may depend on particularities of the underlying datasets (e.g., slight mismatches between query terminology and the names in the data.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "At each of the three stages, the output was shown to two human assessors who judged whether an output item was good or not. If the two were in disagreement, then a third person resolved the judgment.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "For the disambiguation stage, the judges looked at each q-node/s-node pair, in the context of the question and the underlying data schemas, and determined whether the mapping was correct or not and whether any expected mappings were missing. For the query-generation stage, the judges looked at each triple pattern and determined whether the pattern was meaningful for the question or not and whether any expected triple pattern was missing. Note that, because our approach does not use any query templates, the same question may generate semantically equivalent queries that differ widely in terms of their structure. Hence, we rely on our evaluation metrics that are based on triple patterns, as there is no gold-standard query for a given question. For the query-answering stage, the judges were asked to identify if the result sets for the generated queries are satisfactory.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "With these assessments, we computed overall quality measures by both micro-averaging and macro-averaging. Micro-averaging aggregates over all assessed items (e.g., q-node/s-node pairs or triple patterns) regardless of the questions to which they belong. Macro-averaging first aggregates the items for the same question, and then averages the quality measure over all questions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "For a question q and item set s in one of the stages of evaluation, let correct(q, s) be the number of correct items in s, ideal(q) be the size of the ideal item set and retrieved(q, s) be the number of retrieved items, we define coverage and precision as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "cov(q, s) = correct(q, s)/ideal(q)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "prec(q, s) = correct(q, s)/retrieved(q, s). Table 1 shows the results for disambiguation in terms of macro and micro coverage and precision. For both datasets, coverage is high as few mappings are missing. We obtain perfect precision for QALD-1 as no mapping that we generate is incorrect, while for NAGA we generate few incorrect mappings. Table 2 shows the same metrics for the generated triple patterns. The results are similar to those for disambiguation. Missing or incorrect triple patterns can be attributed to (i) incorrect mappings in the disambiguation stage or (ii) incorrect detection of dependencies between phrases despite having the correct mappings. Table 3 shows the results for query answering. Here, we attempt to generate answers to questions by executing the generated queries over the datasets. The table shows the number of questions for which the system successfully generated SPARQL queries (#queries), and among those, how many resulted in satisfactory answers as judged by our evaluators (#satisfactory). Answers were considered unsatisfactory when: 1) the generated SPARQL query was wrong, 2) the result set was empty due to the incompleteness of the underlying knowledge base, or 3) a small fraction of the result set was relevant to the question. For both sets of questions, most of the queries that were perceived unsatisfactory were ones that returned no answers. Table 4 shows a set of example QALD questions, the corresponding SPARQL queries and sample answers. Table 4 : Example questions, the generated SPARQL queries and their answers", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 44, |
|
"end": 51, |
|
"text": "Table 1", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 341, |
|
"end": 348, |
|
"text": "Table 2", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 666, |
|
"end": 673, |
|
"text": "Table 3", |
|
"ref_id": "TABREF6" |
|
}, |
|
{ |
|
"start": 1396, |
|
"end": 1403, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1496, |
|
"end": 1503, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "5.2" |
|
}, |
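The coverage/precision definitions and the micro/macro aggregation can be written compactly; the sketch below uses toy per-question assessments.

```python
# Sketch of the evaluation metrics: per-question coverage and precision,
# plus micro- and macro-averaged aggregation over assessed items.

def cov(correct, ideal):
    return correct / ideal

def prec(correct, retrieved):
    return correct / retrieved

def micro_precision(per_question):   # [(correct, ideal, retrieved), ...]
    return sum(c for c, _, _ in per_question) / sum(r for _, _, r in per_question)

def macro_precision(per_question):
    return sum(prec(c, r) for c, _, r in per_question) / len(per_question)

data = [(3, 4, 3), (1, 2, 2)]        # toy assessments for two questions
print(micro_precision(data), macro_precision(data))  # 0.8  0.75
```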
|
{ |
|
"text": "Queries that produced no answers, such as the third query in Table 4 were further relaxed using an incarnation of the techniques described in (Elbassuoni et al., 2009) , by retaining the triple patterns expressing type constraints and relaxing all other triple patterns. Relaxing a triple pattern was done by replacing all entities with variables and casting entity mentions into keywords that are attached to the relaxed triple pattern. For example, the QALD question \"Which actors were born in Germany?\" was translated into the following SPARQL query: ?x type actor . ?x bornIn Germany which produced no answers when run over the Yago2 knowledge base since the relation bornIn relates people to cities and not countries in Yago2. The query was then relaxed into: ?x type actor . ?x bornIn ?z [Germany] . This relaxed (and keywordaugmented) triple-pattern query was then processed the same way as triple-pattern queries without any keywords. The results of such query were then ranked based on how well they match the keyword conditions specified in the relaxed query using the ranking model in (Elbassuoni et al., 2009) . Using this technique, the top ranked results for the relaxed query were all actors born in German cities as shown in Table 5 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 142, |
|
"end": 167, |
|
"text": "(Elbassuoni et al., 2009)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 794, |
|
"end": 803, |
|
"text": "[Germany]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1096, |
|
"end": 1121, |
|
"text": "(Elbassuoni et al., 2009)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 61, |
|
"end": 68, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1241, |
|
"end": 1248, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Question Answering", |
|
"sec_num": "5.3.3" |
|
}, |
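The relaxation step, keeping type constraints while replacing entities elsewhere with fresh variables and retaining the entity mentions as attached keywords, can be sketched as follows; the triple representation is ours.

```python
# Sketch of triple-pattern relaxation: keep type constraints, replace
# entities elsewhere with fresh variables, and retain them as keywords.
import itertools

def relax(triples):
    fresh = (f"?z{i}" for i in itertools.count())
    relaxed = []
    for s, p, o in triples:
        if p == "type":
            relaxed.append(((s, p, o), []))          # keep type constraints
        else:
            keywords = [t for t in (s, o) if not t.startswith("?")]
            s2 = s if s.startswith("?") else next(fresh)
            o2 = o if o.startswith("?") else next(fresh)
            relaxed.append(((s2, p, o2), keywords))  # keywords augment the pattern
    return relaxed

query = [("?x", "type", "actor"), ("?x", "bornIn", "Germany")]
print(relax(query))
# [(('?x', 'type', 'actor'), []), (('?x', 'bornIn', '?z0'), ['Germany'])]
```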
|
{ |
|
"text": "After relaxation, the judges again assessed the results of the relaxed queries and determined whether they were satisfactory or not. The number of additional queries that obtained satisfactory answers after relaxation are shown under #relaxed in Table 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 246, |
|
"end": 253, |
|
"text": "Table 3", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Question Answering", |
|
"sec_num": "5.3.3" |
|
}, |
|
{ |
|
"text": "The evaluation data, in addition to a demonstration of our system (Yahya et al., 2012) , can be found at http://mpi-inf.mpg.de/yago-naga/deanna/.", |
|
"cite_spans": [ |
|
{ |
|
"start": 66, |
|
"end": 86, |
|
"text": "(Yahya et al., 2012)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Question Answering", |
|
"sec_num": "5.3.3" |
|
}, |
|
{ |
|
"text": "Question answering has a long history in NLP and IR research. The Web and Wikipedia have proved to be a valuable resource for answering fact-oriented questions. State-of-the-art methods (Hirschman and Gaizauskas, 2001; Kwok et al., 2001; Zheng, 2002; Katz et al., 2007; Dang et al., 2007; Voorhees, 2003) cast the user's question into a keyword query to a Web search engine (perhaps with phrases for location and person names or other proper nouns). Key to finding good results is to retrieve and rank sentences or short passages that contain all or most keywords and are likely to yield good answers. Together with trained classifiers for the question type (and thus the desired answer type), this methodology performs fairly well for both factoid and list questions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 186, |
|
"end": 218, |
|
"text": "(Hirschman and Gaizauskas, 2001;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 219, |
|
"end": 237, |
|
"text": "Kwok et al., 2001;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 238, |
|
"end": 250, |
|
"text": "Zheng, 2002;", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 251, |
|
"end": 269, |
|
"text": "Katz et al., 2007;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 270, |
|
"end": 288, |
|
"text": "Dang et al., 2007;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 289, |
|
"end": 304, |
|
"text": "Voorhees, 2003)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "IBM's Watson project (Ferrucci et al., 2010 ) demonstrated a new kind of deep QA. A key element in Watson's approach is to decompose complex questions into several cues and sub-cues, with the aim of generating answers from matches for the various cues (tapping into the Web and Wikipedia). Knowledge bases like DBpedia (Auer et al., 2007) , Freebase (Bollacker et al., 2008) , and Yago (Suchanek et al., 2007) ) are used for both answering parts of questions that can be translated to structured form (Chu-Carroll et al., 2012) and typechecking possible answer candidates and thus filtering out spurious results (Kalyanpur et al., 2011) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 21, |
|
"end": 43, |
|
"text": "(Ferrucci et al., 2010", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 319, |
|
"end": 338, |
|
"text": "(Auer et al., 2007)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 350, |
|
"end": 374, |
|
"text": "(Bollacker et al., 2008)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 386, |
|
"end": 409, |
|
"text": "(Suchanek et al., 2007)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 501, |
|
"end": 527, |
|
"text": "(Chu-Carroll et al., 2012)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 612, |
|
"end": 636, |
|
"text": "(Kalyanpur et al., 2011)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The recent QALD-1 initiative (QAL, 2011) proposed a benchmark task to translate questions into SPARQL queries over linked-data sources like DBpedia and MusicBrainz. FREyA (Damljanovic et al., 2011) , the best performing system, relies on Table 5 : Top-4 results for the QALD question \"Which actors were born in Germany?\" after relaxation interaction with the user to interpret the question. Earlier work on mapping questions into structured queries includes the work by Frank et al. (2007) and Unger and Cimiano (2011) . Frank et al. (2007) used lexical-conceptual templates for query generation. However, this work did not address the crucial issue of disambiguating the constituents of the question. In Pythia, Unger and Cimiano (2011) relied on an ontology-driven grammar for the question language so that questions could be directly mapped onto the vocabulary of the underlying ontology. Such grammars are obviously hard to craft for very large, complex, and evolving knowledge bases. Nalix is an attempt to bring question answering to XML data (Li, Yang, and Jagadish, 2007) by mapping questions to XQuery expressions, relying on human interaction to resolve possible ambiguity. Very recently, Unger et al. (2012) developed a template-based approach based on Pythia, where questions are automatically mapped to structured queries in a two step process. First, a set of query templates are generated for a question, independent of the knowledge base, determining the structure of the query. After that, each template is instantiated with semantic items from the knowledge base. This performs reasonably well for the QALD-1 benchmark: out of 50 test questions, 34 could be mapped, and 19 were correctly answered.", |
|
"cite_spans": [ |
|
{ |
|
"start": 171, |
|
"end": 197, |
|
"text": "(Damljanovic et al., 2011)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 470, |
|
"end": 489, |
|
"text": "Frank et al. (2007)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 494, |
|
"end": 518, |
|
"text": "Unger and Cimiano (2011)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 521, |
|
"end": 540, |
|
"text": "Frank et al. (2007)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 1049, |
|
"end": 1079, |
|
"text": "(Li, Yang, and Jagadish, 2007)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 238, |
|
"end": 245, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Efforts on user-friendly exploration of structured data include keyword search over relational databases (Bhalotia et al., 2002) and structured keyword search (Pound et al., 2010) . The latter is a compromise between full natural language and structured queries, where the user provides the structure and the system takes care of the disambiguation of keyword phrases.", |
|
"cite_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 128, |
|
"text": "(Bhalotia et al., 2002)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 159, |
|
"end": 179, |
|
"text": "(Pound et al., 2010)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Our joint disambiguation method was inspired by recent work on NED (Milne and Witten, 2008; Kulkarni et al., 2009; Hoffart et al., 2011a) and WSD (Navigli, 2009) . In contrast to this prior work on related problems, our graph construction and constraints are more complex, as we address the joint mapping of arbitrary phrases onto entities, classes, or relations. Moreover, instead of graph algorithms or factor-graph learning, we use an ILP for solving the ambiguity problem. This way, we can accommodate expressive constraints, while being able to disambiguate all phrases in a few seconds.", |
|
"cite_spans": [ |
|
{ |
|
"start": 67, |
|
"end": 91, |
|
"text": "(Milne and Witten, 2008;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 92, |
|
"end": 114, |
|
"text": "Kulkarni et al., 2009;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 115, |
|
"end": 137, |
|
"text": "Hoffart et al., 2011a)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 146, |
|
"end": 161, |
|
"text": "(Navigli, 2009)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "DEANNA uses dictionaries of names and phrases for entities, classes, and relations. Spitkovsky and Chang (2012) recently released a huge dictionary of pairs of phrases and Wikipedia links, derived from Google's Web index. For relations, Nakashole et al. (2012) released PATTY, a large taxonomy of patterns with semantic types.", |
|
"cite_spans": [ |
|
{ |
|
"start": 84, |
|
"end": 111, |
|
"text": "Spitkovsky and Chang (2012)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 237, |
|
"end": 260, |
|
"text": "Nakashole et al. (2012)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We presented a method for translating naturallanguage questions into structured queries. The novelty of this method lies in modeling several mapping stages as a joint ILP problem. We harness type signatures and other information from large-scale knowledge bases. Although our model, in principle, leads to high combinatorial complexity, we observed that the Gurobi solver could handle our judiciously designed ILP very efficiently. Our experimental studies showed very high precision and good coverage of the query translation, and good results in the actual question answers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Future work includes relaxing some of the limitations that our current approach still has. First, questions with aggregations cannot be handled at this point. Second, queries sometimes return empty answers although they perfectly capture the original question, but the underlying data sources are incomplete or represent the relevant information in an unexpected manner. We plan to extend our approach of combining structured data with textual descriptions, and generate queries that combine structured search predicates with keyword or phrase matching.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "7" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "DBpedia: A Nucleus for a Web of Open Data", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Auer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Bizer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Kobilarov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Lehmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Cyganiak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Z", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Ives", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "ISWC/ASWC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Auer, S.; Bizer, C.; Kobilarov, G.; Lehmann, J.; Cyga- niak, R.; and Ives, Z. G. 2007. DBpedia: A Nucleus for a Web of Open Data. In ISWC/ASWC.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Keyword Searching and Browsing in Databases using BANKS", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Bhalotia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Hulgeri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Nakhe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Chakrabarti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Sudarshan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "ICDE", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bhalotia, G.; Hulgeri, A.; Nakhe, C.; Chakrabarti, S.; and Sudarshan, S. 2002. Keyword Searching and Brows- ing in Databases using BANKS. In ICDE.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Freebase: a Collaboratively Created Graph Database for Structuring Human Knowledge", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Bollacker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Evans", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Paritosh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Sturge", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Taylor", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "SIGMOD", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bollacker, K. D.; Evans, C.; Paritosh, P.; Sturge, T.; and Taylor, J. 2008. Freebase: a Collaboratively Created Graph Database for Structuring Human Knowledge. In SIGMOD.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Finding needles in the haystack: Search and candidate generation", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Chu-Carroll", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Boguraev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Carmel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Sheinwald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Welty", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "In IBM J. Res. & Dev", |
|
"volume": "56", |
|
"issue": "3/4", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chu-Carroll, J.; Fan, J.; Boguraev, B. K.; Carmel, D.; and Sheinwald, D.; Welty, C. 2012. Finding needles in the haystack: Search and candidate generation. In IBM J. Res. & Dev., vol 56, no.3/4.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "FREyA: an Interactive Way of Querying Linked Data using Natural Language", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Damljanovic", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Agatonovic", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Cunningham", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Damljanovic, D.; Agatonovic, M.; and Cunningham, H. 2011. FREyA: an Interactive Way of Querying Linked Data using Natural Language.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Overview of the trec 2007 question answering track", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Dang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Kelly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "TREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dang, H. T.; Kelly, D.; and Lin, J. J. 2007. Overview of the trec 2007 question answering track. In TREC.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Generating typed dependency parses from phrase structure parses", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "De Marneffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Maccartney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "de Marneffe, M. C.; Maccartney, B.; and Manning, C. D. 2006. Generating typed dependency parses from phrase structure parses. In LREC.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Language-model-based ranking for queries on rdf-graphs", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Elbassuoni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Ramanath", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Schenkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Sydow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Weikum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "CIKM", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elbassuoni, S.; Ramanath, M.; Schenkel, R.; Sydow, M.; and Weikum, G. 2009. Language-model-based rank- ing for queries on rdf-graphs. In CIKM.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Query relaxation for entity-relationship search", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Elbassuoni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Ramanath", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Weikum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "ESWC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elbassuoni, S.; Ramanath, M.; and Weikum, G. 2011. Query relaxation for entity-relationship search. In ESWC.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Identifying relations for open information extraction", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Fader", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Soderland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Etzioni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fader, A.; Soderland, S.; and Etzioni, O. 2011. Iden- tifying relations for open information extraction. In EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Building Watson: An Overview of the DeepQA Project", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Ferrucci", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Chu-Carroll", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Gondek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Kalyanpur", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Lally", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Murdock", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Nyberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Prager", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Schlaefer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Welty", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "AI Magazine", |
|
"volume": "", |
|
"issue": "3", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ferrucci, D. A.; Brown, E. W.; Chu-Carroll, J.; Fan, J.; Gondek, D.; Kalyanpur, A.; Lally, A.; Murdock, J. W.; Nyberg, E.; Prager, J. M.; Schlaefer, N.; and Welty, C. A. 2010. Building Watson: An Overview of the DeepQA Project. AI Magazine 31(3).", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Question Answering from Structured Knowledge Sources", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H.-U", |
|
"middle": [], |
|
"last": "Krieger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Crysmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "J\u00f6rg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "U", |
|
"middle": [], |
|
"last": "Sch\u00e4fer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "J. Applied Logic", |
|
"volume": "5", |
|
"issue": "1", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Frank, A.; Krieger, H.-U.; Xu, F.; Uszkoreit, H.; Crys- mann, B.; J\u00f6rg, B.; and Sch\u00e4fer, U. 2007. Question Answering from Structured Knowledge Sources. J. Applied Logic 5(1).", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Gurobi Optimizer Reference Manual", |
|
"authors": [], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gurobi Optimization, Inc. 2012. Gurobi Optimizer Ref- erence Manual. http://www.gurobi.com/.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Linked Data: Evolving the Web into a Global Data Space", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Heath", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Bizer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Heath, T., and Bizer, C. 2011. Linked Data: Evolving the Web into a Global Data Space. San Rafael, CA: Morgan & Claypool, 1 edition.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Natural Language Question Answering: The View from Here", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Hirschman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Gaizauskas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Nat. Lang. Eng", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hirschman, L., and Gaizauskas, R. 2001. Natural Lan- guage Question Answering: The View from Here. Nat. Lang. Eng. 7.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Robust Disambiguation of Named Entities in Text", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Hoffart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Mohamed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Bordino", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "F\u00fcrstenau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Pinkal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Spaniol", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Taneva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Thaterm", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Weikum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hoffart, J.; Mohamed, A. Y.; Bordino, I.; F\u00fcrstenau, H.; Pinkal, M.; Spaniol, M.; Taneva, B.; Thaterm S.; and Weikum, G. 2011. Robust Disambiguation of Named Entities in Text. In EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Yago2: exploring and querying world knowledge in time, space, context, and many languages", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Hoffart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Suchanek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Berberich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Lewis-Kelham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "De Melo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Weikum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hoffart, J.; Suchanek, F. M.; Berberich, K.; Lewis- Kelham, E.; de Melo, G.; and Weikum, G. 2011. Yago2: exploring and querying world knowledge in time, space, context, and many languages. In WWW (Companion Volume).", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Leveraging community-built knowledge for type coercion in question answering", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Kalyanpur", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Murdock", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Welty", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "International Semantic Web Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kalyanpur, A.; Murdock, J. W.; Fan, J.; and Welty, C. A. 2011. Leveraging community-built knowledge for type coercion in question answering. In International Semantic Web Conference.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "CSAIL at TREC 2007 Question Answering", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Katz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Felshin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Marton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Mora", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Zaccak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Ammar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Turgut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Westrick", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Katz, B.; Felshin, S.; Marton, G.; Mora, F.; Shen, Y. K.; Zaccak, G.; Ammar, A.; Eisner, E.; Turgut, A.; and Westrick, L. B. 2007. CSAIL at TREC 2007 Ques- tion Answering. In TREC.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Collective annotation of wikipedia entities in web text", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Kulkarni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Ramakrishnan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Chakrabarti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "KDD", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kulkarni, S.; Singh, A.; Ramakrishnan, G.; and Chakrabarti, S. 2009. Collective annotation of wikipedia entities in web text. In KDD.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Scaling Question Answering to the Web", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"C T" |
|
], |
|
"last": "Kwok", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Etzioni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Weld", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "WWW", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kwok, C. C. T.; Etzioni, O.; and Weld, D. S. 2001. Scal- ing Question Answering to the Web. In WWW.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "NaLIX: A Generic Natural Language Search Environment for XML Data", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Jagadish", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "ACM Trans. Database Syst", |
|
"volume": "32", |
|
"issue": "4", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Li, Y.; Yang, H.; and Jagadish, H. V. 2007. NaLIX: A Generic Natural Language Search Environment for XML Data. ACM Trans. Database Syst. 32(4).", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Learning to link with wikipedia", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Milne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Witten", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "CIKM", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Milne, D. N., and Witten, I. H. 2008. Learning to link with wikipedia. In CIKM.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "PATTY: A Taxonomy of Relational Patterns with Semantic Types", |
|
"authors": [ |
|
{ |
|
"first": "Ndapandula", |
|
"middle": [], |
|
"last": "Nakashole", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gerhard", |
|
"middle": [], |
|
"last": "Weikum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fabian", |
|
"middle": [], |
|
"last": "Suchanek", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ndapandula Nakashole, Gerhard Weikum and Fabian Suchanek 2012. PATTY: A Taxonomy of Relational Patterns with Semantic Types. In EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Word sense disambiguation: A survey", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Navigli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "ACM Comput. Surv", |
|
"volume": "41", |
|
"issue": "2", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Navigli, R. 2009. Word sense disambiguation: A survey. ACM Comput. Surv. 41(2).", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Expressive and Flexible Access to Web-extracted Data: A Keyword-based Structured Query Language", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Pound", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Ilyas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Weddell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "SIG-MOD. 2011. 1st Workshop on Question Answering over Linked Data (QALD-1)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pound, J.; Ilyas, I. F.; and Weddell, G. E. 2010. Ex- pressive and Flexible Access to Web-extracted Data: A Keyword-based Structured Query Language. In SIG- MOD. 2011. 1st Workshop on Question Answering over Linked Data (QALD-1). http://www.sc.cit-ec.uni- bielefeld.de/qald-1.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Using Information Content to Evaluate Semantic Similarity in a Taxonomy", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "IJCAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Resnik, P. 1995. Using Information Content to Evaluate Semantic Similarity in a Taxonomy. In IJCAI.", |
|
"links": null |
|
},

"BIBREF27": {

"ref_id": "b27",

"title": "1st Workshop on Question Answering over Linked Data (QALD-1)",

"authors": [],

"year": 2011,

"venue": "",

"volume": "",

"issue": "",

"pages": "",

"other_ids": {},

"num": null,

"urls": [],

"raw_text": "QALD-1. 2011. 1st Workshop on Question Answering over Linked Data (QALD-1). http://www.sc.cit-ec.uni-bielefeld.de/qald-1.",

"links": null

},
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "A Cross-Lingual Dictionary for English Wikipedia Concepts", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"X" |
|
], |
|
"last": "Chang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chang, A. X. ; 2012. A Cross-Lingual Dictionary for English Wikipedia Con- cepts. In LREC.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Yago: a core of semantic knowledge", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Suchanek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Kasneci", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Weikum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Suchanek, F. M.; Kasneci, G.; and Weikum, G. 2007. Yago: a core of semantic knowledge. In WWW.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Sig.ma: Live views on the web of data", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Tummarello", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Cyganiak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Catasta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Danielczyk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Delbru", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Decker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "J. Web Sem", |
|
"volume": "8", |
|
"issue": "4", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tummarello, G.; Cyganiak, R.; Catasta, M.; Danielczyk, S.; Delbru, R.; and Decker, S. 2010. Sig.ma: Live views on the web of data. J. Web Sem. 8(4).", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Pythia: Compositional Meaning Construction for Ontology-Based Question Answering on the Semantic Web", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Unger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Cimiano", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "NLDB", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Unger, C.; and Cimiano, P. 2011. Pythia: Compositional Meaning Construction for Ontology-Based Question Answering on the Semantic Web. In NLDB.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Templatebased question answering over RDF data", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Unger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "B\u00fchmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Lehmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A.-C", |
|
"middle": [], |
|
"last": "Ngonga Ngomo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Gerber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Cimiano", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "WWW", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Unger, C.; B\u00fchmann, L.; Lehmann, J.; Ngonga Ngomo, A.-C.; Gerber, D.; and Cimiano, P. 2012. Template- based question answering over RDF data. In WWW.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Overview of the trec 2003 question answering track", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Voorhees", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "TREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Voorhees, E. M. 2003. Overview of the trec 2003 ques- tion answering track. In TREC.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Deep answers for naturally asked questions on the web of data", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Yahya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Berberich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Elbassuoni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Ramanath", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Tresp", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Weikum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yahya, M.; Berberich, K.; Elbassuoni, S.; Ramanath, M.; Tresp, V.; and Weikum, G. 2012. Deep answers for naturally asked questions on the web of data. In WWW.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "AnswerBus Question Answering System", |
|
"authors": [ |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Zheng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zheng, Z. 2002. AnswerBus Question Answering Sys- tem. In HLT.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Architecture of DEANNA.", |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Disambiguation graph for the running example.", |
|
"num": null |
|
}, |
|
"FIGREF4": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Computed subgraph for the running example.", |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"type_str": "table", |
|
"text": "Each relation has a type signature: classes for the relation's domain and range. Classes, such as person and film group entities. Entities are represented in canonical form such as Ingrid Bergman and Casablanca (film). A special type of entities are literals, such as strings, numbers, and dates.", |
|
"num": null, |
|
"content": "<table><tr><td>Subject</td><td>Predicate</td><td>Object</td></tr><tr><td>film</td><td colspan=\"2\">subclassOf production</td></tr><tr><td colspan=\"2\">Casablanca (film) type</td><td>film</td></tr><tr><td>\"Casablanca\"</td><td>means</td><td>Casablanca (film)</td></tr><tr><td>\"Casablanca\"</td><td>means</td><td>Casablanca, Morocco</td></tr><tr><td>Ingrid Bergman</td><td>actedIn</td><td>Casablanca (film)</td></tr><tr><td colspan=\"3\">Figure 1: Sample knowledge base</td></tr><tr><td colspan=\"3\">Examples of relations are type, subclassOf, and</td></tr><tr><td>actedIn.</td><td/><td/></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"text": "1} indicates if p-node i maps to snode j. \u2022 Z kl \u2208 {0, 1} indicates if s-nodes k, l are both selected so that their coherence edge matters. \u2022 Q mnd \u2208 {0, 1} indicates if the q-edge between q-node m and p-node n for d is selected. \u2022 C j , E j and R j are {0, 1} constants indicating if s-node j is a class, entity, or relation, resp. \u2022 w ij is the weight for a p-s similarity edge. \u2022 v kl is the weight for an s-s semantic coherence edge. \u2022 t rc \u2208 {0, 1} indicates if the relation s-node r is type-compatible with the concept s-node c.", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"text": "", |
|
"num": null, |
|
"content": "<table><tr><td>Benchmark</td><td colspan=\"2\">QALD-1 NAGA</td></tr><tr><td>covmacro</td><td>0.975</td><td>0.894</td></tr><tr><td>precmacro</td><td>1.000</td><td>0.941</td></tr><tr><td>cov micro</td><td>0.956</td><td>0.847</td></tr><tr><td>prec micro</td><td>1.000</td><td>0.906</td></tr><tr><td>: Disambiguation</td><td/><td/></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"text": "Query generation", |
|
"num": null, |
|
"content": "<table><tr><td>Benchmark</td><td colspan=\"2\">QALD-1 NAGA</td></tr><tr><td>#questions</td><td>27</td><td>44</td></tr><tr><td>#queries</td><td>20</td><td>41</td></tr><tr><td>#satisfactory</td><td>10</td><td>15</td></tr><tr><td>#relaxed</td><td>+3</td><td>+3</td></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF6": { |
|
"type_str": "table", |
|
"text": "Query answering", |
|
"num": null, |
|
"content": "<table><tr><td>Question</td><td>Generated Query</td><td>Sample Answers</td></tr><tr><td>1. Who was the wife of President Lincoln?</td><td>?x marriedTo Abraham Lincoln . ?x type person</td><td>Mary Todd Lincoln</td></tr><tr><td>2. In which films did Julia Roberts</td><td>?x type movie . Richard Gere actedIn ?x .</td><td>Runaway Bride</td></tr><tr><td>as well as Richard Gere play?</td><td>Julia Roberts actedIn ?x</td><td>Pretty Woman</td></tr><tr><td>3. Which actors were born in Germany?</td><td>?x type actor . ?x bornIn Germany</td><td>NONE</td></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF7": { |
|
"type_str": "table", |
|
"text": "?x type actor . ?x wasBornIn ?z[Germany] Martin Lawrence type actor . Martin Lawrence wasBornIn Frankfurt am Main Robert Schwentke type actor . Robert Schwentke wasBornIn Stuttgart Willy Millowitsch type actor . Willy Millowitsch wasBornIn Cologne Jerry Zaks type actor . Jerry Zaks wasBornIn Stuttgart", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |