|
{ |
|
"paper_id": "Q15-1019", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:07:48.057823Z" |
|
}, |
|
"title": "Learning a Compositional Semantics for Freebase with an Open Predicate Vocabulary", |
|
"authors": [ |
|
{ |
|
"first": "Jayant", |
|
"middle": [], |
|
"last": "Krishnamurthy", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Carnegie Mellon University", |
|
"location": { |
|
"addrLine": "5000 Forbes Avenue Pittsburgh", |
|
"postCode": "15213", |
|
"region": "PA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Mitchell", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Carnegie Mellon University", |
|
"location": { |
|
"addrLine": "5000 Forbes Avenue Pittsburgh", |
|
"postCode": "15213", |
|
"region": "PA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We present an approach to learning a modeltheoretic semantics for natural language tied to Freebase. Crucially, our approach uses an open predicate vocabulary, enabling it to produce denotations for phrases such as \"Republican front-runner from Texas\" whose semantics cannot be represented using the Freebase schema. Our approach directly converts a sentence's syntactic CCG parse into a logical form containing predicates derived from the words in the sentence, assigning each word a consistent semantics across sentences. This logical form is evaluated against a learned probabilistic database that defines a distribution over denotations for each textual predicate. A training phase produces this probabilistic database using a corpus of entitylinked text and probabilistic matrix factorization with a novel ranking objective function. We evaluate our approach on a compositional question answering task where it outperforms several competitive baselines. We also compare our approach against manually annotated Freebase queries, finding that our open predicate vocabulary enables us to answer many questions that Freebase cannot.", |
|
"pdf_parse": { |
|
"paper_id": "Q15-1019", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We present an approach to learning a modeltheoretic semantics for natural language tied to Freebase. Crucially, our approach uses an open predicate vocabulary, enabling it to produce denotations for phrases such as \"Republican front-runner from Texas\" whose semantics cannot be represented using the Freebase schema. Our approach directly converts a sentence's syntactic CCG parse into a logical form containing predicates derived from the words in the sentence, assigning each word a consistent semantics across sentences. This logical form is evaluated against a learned probabilistic database that defines a distribution over denotations for each textual predicate. A training phase produces this probabilistic database using a corpus of entitylinked text and probabilistic matrix factorization with a novel ranking objective function. We evaluate our approach on a compositional question answering task where it outperforms several competitive baselines. We also compare our approach against manually annotated Freebase queries, finding that our open predicate vocabulary enables us to answer many questions that Freebase cannot.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Traditional knowledge representation assumes that world knowledge can be encoded using a closed vocabulary of formal predicates. In recent years, semantic parsing has enabled us to build compositional models of natural language semantics using such a closed predicate vocabulary (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 279, |
|
"end": 303, |
|
"text": "(Zelle and Mooney, 1996;", |
|
"ref_id": "BIBREF44" |
|
}, |
|
{ |
|
"start": 304, |
|
"end": 334, |
|
"text": "Zettlemoyer and Collins, 2005)", |
|
"ref_id": "BIBREF45" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "These semantic parsers map natural language statements to database queries, enabling applications such as answering questions using a large knowledge base (Yahya et al., 2012; Krishnamurthy and Mitchell, 2012; Cai and Yates, 2013; Kwiatkowski et al., 2013; Berant et al., 2013; Berant and Liang, 2014; Reddy et al., 2014) . Furthermore, the modeltheoretic semantics provided by such parsers have the potential to improve performance on other tasks, such as information extraction and coreference resolution.", |
|
"cite_spans": [ |
|
{ |
|
"start": 155, |
|
"end": 175, |
|
"text": "(Yahya et al., 2012;", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 176, |
|
"end": 209, |
|
"text": "Krishnamurthy and Mitchell, 2012;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 210, |
|
"end": 230, |
|
"text": "Cai and Yates, 2013;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 231, |
|
"end": 256, |
|
"text": "Kwiatkowski et al., 2013;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 257, |
|
"end": 277, |
|
"text": "Berant et al., 2013;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 278, |
|
"end": 301, |
|
"text": "Berant and Liang, 2014;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 302, |
|
"end": 321, |
|
"text": "Reddy et al., 2014)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "However, a closed predicate vocabulary has inherent limitations. First, its coverage will be limited, as such vocabularies are typically manually constructed. Second, it may abstract away potentially relevant semantic differences. For example, the semantics of \"Republican front-runner\" cannot be adequately encoded in the Freebase schema because it lacks the concept of a \"front-runner.\" We could choose to encode this concept as \"politician\" at the cost of abstracting away the distinction between the two. As this example illustrates, these two problems are prevalent in even the largest knowledge bases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "An alternative paradigm is an open predicate vocabulary, where each natural language word or phrase is given its own formal predicate. This paradigm is embodied in both open information extraction (Banko et al., 2007) and universal schema . Open predicate vocabularies have the potential to capture subtle semantic distinctions and achieve high coverage. However, we have yet to develop compelling approaches to compositional semantics within this paradigm.", |
|
"cite_spans": [ |
|
{ |
|
"start": 197, |
|
"end": 217, |
|
"text": "(Banko et al., 2007)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This paper takes a step toward compositional se- (REPUB., G. BUSH) ...", |
|
"cite_spans": [ |
|
{ |
|
"start": 49, |
|
"end": 66, |
|
"text": "(REPUB., G. BUSH)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
|
{ |
|
"text": "Figure 1: Overview of our approach. Top left: the text is converted to logical form by CCG syntactic parsing and a collection of manually-defined rules. Bottom: low-dimensional embeddings of each entity (entity pair) and category (relation) are learned from an entity-linked web corpus. These embeddings are used to construct a probabilistic database. The labels of these matrices are shortened for space reasons. Top right: evaluating the logical form on the probabilistic database computes the marginal probability that each entity is an element of the text's denotation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "F R O M L I V E S", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "mantics with an open predicate vocabulary. Our approach defines a distribution over denotations (sets of Freebase entities) given an input text. The model has two components, shown in Figure 1 . The first component is a rule-based semantic parser that uses a syntactic CCG parser and manually-defined rules to map entity-linked texts to logical forms containing predicates derived from the words in the text. The second component is a probabilistic database with a possible worlds semantics that defines a distribution over denotations for each textually-derived predicate. This database assigns independent probabilities to individual predicate instances, such as P (FRONT-RUNNER(/EN/GEORGE BUSH)) = 0.9. Together, these components define an exponentiallylarge distribution over denotations for an input text; to simplify this output, we compute the marginal probability, over all possible worlds, that each entity is an element of the text's denotation.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 184, |
|
"end": 192, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "F R O M L I V E S", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The learning problem in our approach is to train the probabilistic database to predict a denotation for each predicate. We pose this problem as probabilistic matrix factorization with a novel query/answer ranking objective. This factorization learns a lowdimensional embedding of each entity (entity pair) and category (relation) such that the denotation of a predicate is likely to contain entities or entity pairs with nearby vectors. To train the database, we first collect training data by analyzing entity-linked sentences in a large web corpus with the rule-based semantic parser. This process generates a collection of logical form queries with observed entity answers. The query/answer ranking objective, when optimized, trains the database to rank the observed answers for each query above unobserved answers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "F R O M L I V E S", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We evaluate our approach on a question answering task, finding that our approach outperforms several baselines and that our new training objective improves performance over a previously-proposed objective. We also evaluate the trade-offs between open and closed predicate vocabularies by comparing our approach to a manually-annotated Freebase query for each question. This comparison reveals that, when Freebase contains predicates that cover the question, it achieves higher precision and recall than our approach. However, our approach can correctly answer many questions not covered by Freebase.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "F R O M L I V E S", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The purpose of our system is to predict a denotation \u03b3 for a given natural language text s. The denotation \u03b3 is the set of Freebase entities that s refers to; for example, if s = \"president of the US,\" then \u03b3 = {/EN/OBAMA, /EN/BUSH, ...}. 1 Our system represents this prediction problem using the following probabilistic model:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Overview", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "P (\u03b3|s) = w P (\u03b3| , w)P (w)P ( |s)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Overview", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The first term in this factorization, P ( |s), is a distribution over logical forms given the text s. This term corresponds to the rule-based semantic parser (Section 3). This semantic parser is deterministic, so this term assigns probability 1 to a single logical form for each text. The second term, P (w), represents a distribution over possible worlds, where each world is an assignment of truth values to all possible predicate instances. The distribution over worlds is represented by a probabilistic database (Section 4). The final term, P (\u03b3| , w), deterministically evaluates the logical form on the world w to produce a denotation \u03b3. This term represents query evaluation against a fixed database, as in other work on semantic parsing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Overview", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Section 5 describes inference in our model. To produce a ranked list of entities ( Figure 1 , top right) from P (\u03b3|s), our system computes the marginal probability that each entity is an element of the denotation \u03b3. This problem corresponds to query evaluation in a probabilistic database, which is known to be tractable in many cases (Suciu et al., 2011) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 335, |
|
"end": 355, |
|
"text": "(Suciu et al., 2011)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 83, |
|
"end": 91, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "System Overview", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Section 6 describes training, which estimates parameters for the probabilistic database P (w). This step first automatically generates training data using the rule-based semantic parser. This data is used to formulate a matrix factorization problem that is optimized to estimate the database parameters.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Overview", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The first part of our compositional semantics system is a rule-based system that deterministically computes a logical form for a text s. This component is used during inference to analyze the logical structure of text, and during training to generate training data (see Section 6.1). Several input/output pairs for this system are shown in Figure 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 340, |
|
"end": 348, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Rule-Based Semantic Parser", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The conversion to logical form has 3 phases:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rule-Based Semantic Parser", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "1. CCG syntactic parsing parses the text and applies several deterministic syntactic transformations to facilitate semantic analysis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rule-Based Semantic Parser", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "2. Entity linking marks known Freebase entities in the text.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rule-Based Semantic Parser", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "3. Semantic analysis assigns a logical form to each word, then composes them to produce a logical form for the complete text.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rule-Based Semantic Parser", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The first step in our analysis is to syntactically parse the text. We use the ASP-SYN parser (Krishnamurthy and Mitchell, 2014) trained on CCG-Bank (Hockenmaier and Steedman, 2002) . We then automatically transform the resulting syntactic parse to make the syntactic structure more amenable to semantic analysis. This step marks NP s in conjunctions by replacing their syntactic category with NP [conj]. This transformation allows semantic analysis to distinguish between appositives and comma-separated lists. It also transforms all verb arguments to core arguments, i.e., using the category PP /NP as opposed to ((S \\NP )\\(S \\NP ))/NP . This step simplifies the semantic analysis of verbs with prepositional phrase arguments. The final transformation adds a word feature to each PP category, e.g., mapping PP to PP [by]. These features are used to generate verb-preposition relation predicates, such as DIRECTED BY.", |
|
"cite_spans": [ |
|
{ |
|
"start": 148, |
|
"end": 180, |
|
"text": "(Hockenmaier and Steedman, 2002)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Syntactic Parsing", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The second step is to identify mentions of Freebase entities in the text. This step could be performed by an off-the-shelf entity linking system (Ratinov et al., 2011; Milne and Witten, 2008) or string matching. However, our training and test data is derived from Clueweb 2009, so we rely on the entity linking for this corpus provided by Gabrilovich et. al (2013) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 145, |
|
"end": 167, |
|
"text": "(Ratinov et al., 2011;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 168, |
|
"end": 191, |
|
"text": "Milne and Witten, 2008)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 339, |
|
"end": 364, |
|
"text": "Gabrilovich et. al (2013)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entity Linking", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Our system incorporates the provided entity links into the syntactic parse provided that they are consistent with the parse structure. Specifically, we require that each mention is either (1) a constituent in the parse tree with syntactic category N or N P or (2) a collection of N/N or N P/N P modifiers with a single head word. The first case covers noun and noun phrase mentions, while the second case covers noun compounds. In both cases, we substitute a single multi-word terminal into the parse tree spanning the mention and invoke special semantic rules for mentions described in the next section. Figure 2 : Example input/output pairs for our semantic analysis system. Mentions of Freebase entities in the text are indicated by underlines.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 605, |
|
"end": 613, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Entity Linking", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2203x, y, z.x = /EN/TOM CRUISE \u2227 PLAYS(x, y) \u2227 y = /EN/MAVERICK (CHARACTER) \u2227 PLAYS IN(x, z) \u2227 z = /EN/TOP GUN", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entity Linking", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The final step uses the syntactic parse and entity links to produce a logical form for the text. The system induces a logical form for every word in the text based on its syntactic CCG category. Composing these logical forms according to the syntactic parse produces a logical form for the entire text.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic analysis", |
|
"sec_num": "3.3" |
|
}, |
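
{

"text": "For intuition, the per-word logical forms keyed by syntactic category look roughly like the following schematic rendering (ours, not the system's actual rule file, which is linked later in this section):\n\ndef word_logical_form(word, category):\n    # Predicate names are the lowercased words, with no lemmatization.\n    pred = word.lower()\n    if category in ('N', 'NP'):        # nouns: unary predicates\n        return f'lambda x. {pred}(x)'\n    if category == 'N/N':              # adjectives: intersective modifiers\n        return f'lambda f. lambda x. f(x) and {pred}(x)'\n    if category == '(S\\\\NP)/NP':       # transitive verbs: binary predicates\n        return f'lambda y. lambda x. {pred}(x, y)'\n    return None                        # other categories: additional rules",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Semantic analysis",

"sec_num": "3.3"

},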
|
{ |
|
"text": "Our semantic analyses are based on a relatively na\u00efve model-theoretic semantics. We focus on language whose semantics can be represented with existentially-quantified conjunctions of unary and binary predicates, ignoring, for example, temporal scope and superlatives. Generally, our system models nouns and adjectives as unary predicates, and verbs and prepositions as binary predicates. Special multi-word predicates are generated for verb-preposition combinations. Entity mentions are mapped to the mentioned entity in the logical form. We also created special rules for analyzing conjunctions, appositives, and relativizing conjunctions. The complete list of rules used to produce these logical forms is available online. 2 We made several notable choices in designing this component. First, multi-argument verbs are analyzed using pairwise relations, as in the third example in Figure 2 . This analysis allows us to avoid reasoning about entity triples (quadruples, etc.), which are challenging for the matrix factorization due to sparsity. Second, noun-preposition combinations are analyzed as a category and relation, as in the first example in Figure 2 . We empirically found that combining the noun and preposition in such instances resulted in worse performance, as it dramatically increased the sparsity of training instances for the combined relations. Third, entity mentions with the N /N category are analyzed using a special noun-noun relation, as in the second example in Figure 2 . Our intuition is that this relation shares instances with other relations (e.g., \"city in Texas\" implies \"Texan city\"). Finally, we lowercased each word to create its predicate name, but performed no lemmatization or other normalization.", |
|
"cite_spans": [ |
|
{ |
|
"start": 725, |
|
"end": 726, |
|
"text": "2", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 882, |
|
"end": 890, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1151, |
|
"end": 1159, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1487, |
|
"end": 1495, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Semantic analysis", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The scope of our semantic analysis system is somewhat limited relative to other similar systems (Bos, 2008; Lewis and Steedman, 2013) as it only outputs existentially-quantified conjunctions of predicates. Our goal in building this system was to analyze noun phrases and simple sentences, for which this representation generally suffices. The reason for this focus is twofold. First, this subset of language is sufficient to capture much of the language surrounding Freebase entities. Second, for various technical reasons, this restricted semantic representation is easier to use (and more informative) for training the probabilistic database (see Section 6.3).", |
|
"cite_spans": [ |
|
{ |
|
"start": 96, |
|
"end": 107, |
|
"text": "(Bos, 2008;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 108, |
|
"end": 133, |
|
"text": "Lewis and Steedman, 2013)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Note that this system can be straightforwardly extended to model additional linguistic phenomena, such as additional logical operators and generalized quantifiers, by writing additional rules. The semantics of logical forms including these operations are well-defined in our model, and the system does not even need to be re-trained to incorporate these additions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "The second part of our compositional semantics system is a probabilistic database. This database represents a distribution over possible worlds, where each world is an assignment of truth values to every predicate instance. Equivalently, the probabilistic database can be viewed as a distribution over databases or knowledge bases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probabilistic Database", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Formally, a probabilistic database is a collection of random variables, each of which represents the truth value of a single predicate instance. Given entities e \u2208 E, categories c \u2208 C, and relations r \u2208 R the probabilistic database contains boolean random variables c(e) and r(e 1 , e 2 ) for each category and relation instance, respectively. All of these random variables are assumed to be independent. Let a world w represent an assignment of truth values to all of these random variables, where c(e) = w c,e and r(e 1 , e 2 ) = w r,e 1 ,e 2 . By independence, the probability of a world can be written as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probabilistic Database", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "P (w) = e\u2208E c\u2208C P (c(e) = w c,e ) \u00d7 e 1 \u2208E e 2 \u2208E r\u2208R P (r(e 1 , e 2 ) = w r,e 1 ,e 2 )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probabilistic Database", |
|
"sec_num": "4" |
|
}, |
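
{

"text": "As an illustration of this independence structure, the following Python sketch (ours, not part of the paper's system) computes log P(w) for a world from per-instance truth probabilities:\n\nfrom math import log\n\ndef world_log_prob(instance_probs, world):\n    # instance_probs: dict from predicate instances, e.g. ('general', 'e1')\n    # or ('lives_in', ('e1', 'e2')), to P(instance = TRUE).\n    # world: dict from the same keys to boolean truth values w_{c,e}.\n    # By independence, log P(w) decomposes into a sum over instances.\n    logp = 0.0\n    for instance, value in world.items():\n        p = instance_probs[instance]\n        logp += log(p if value else 1.0 - p)\n    return logp",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Probabilistic Database",

"sec_num": "4"

},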
|
{ |
|
"text": "The next section discusses how probabilistic matrix factorization is used to model the probabilities of these predicate instances.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probabilistic Database", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The probabilistic matrix factorization model treats the truth of each predicate instance as an independent boolean random variable that is true with probability:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Matrix Factorization Model", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "P (c(e) = TRUE) = \u03c3(\u03b8 T c \u03c6 e ) P (r(e 1 , e 2 ) = TRUE) = \u03c3(\u03b8 T r \u03c6 (e 1 ,e 2 ) ) Above, \u03c3(x) = e x 1+e", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Matrix Factorization Model", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "x is the logistic function. In these equations, \u03b8 c and \u03b8 r represent k-dimensional vectors of per-predicate parameters, while \u03c6 e and \u03c6 (e 1 ,e 2 ) represent k-dimensional vector embeddings of each entity and entity pair. This model contains a low-dimensional embedding of each predicate and entity such that each predicate's denotation has a high probability of containing entities with nearby vectors. The probability that each variable is false is simply 1 minus the probability that it is true.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Matrix Factorization Model", |
|
"sec_num": "4.1" |
|
}, |
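
{

"text": "To make these probabilities concrete, here is a minimal NumPy sketch, assuming dense embeddings theta (per predicate) and phi (per entity or entity pair); it illustrates the equations above rather than the authors' implementation:\n\nimport numpy as np\n\ndef sigmoid(x):\n    return 1.0 / (1.0 + np.exp(-x))\n\ndef category_prob(theta_c, phi_e):\n    # P(c(e) = TRUE) = sigma(theta_c^T phi_e)\n    return sigmoid(theta_c @ phi_e)\n\ndef relation_prob(theta_r, phi_pair):\n    # P(r(e1, e2) = TRUE) = sigma(theta_r^T phi_(e1,e2))\n    return sigmoid(theta_r @ phi_pair)\n\n# k = 300-dimensional embeddings, matching Section 7.3.\nrng = np.random.default_rng(0)\nprint(category_prob(rng.normal(size=300), rng.normal(size=300)))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Matrix Factorization Model",

"sec_num": "4.1"

},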
|
{ |
|
"text": "This model can be viewed as matrix factorization, as depicted in Figure 1 . The category and relation instance probabilities can be arranged in a pair of matrices of dimension |E| \u00d7 |C| and |E| 2 \u00d7 |R|. Each row of these matrices represents an entity or entity pair, each column represents a category or relation, and each value is between 0 and 1 and represents a truth probability (Figure 1 , bottom right). These two matrices are factored into matrices of size |E| \u00d7 k and k \u00d7 |C|, and |E| 2 \u00d7 k and k \u00d7 |R|, respectively, containing k-dimensional embeddings of each entity, category, entity pair and relation ( Figure 1 , bottom left). These low-dimensional embeddings are represented by the parameters \u03c6 and \u03b8.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 65, |
|
"end": 73, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 383, |
|
"end": 392, |
|
"text": "(Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 615, |
|
"end": 624, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Matrix Factorization Model", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Inference computes the marginal probability, over all possible worlds, that each entity is an element of a text's denotation. In many cases -depending on the text -these marginal probabilities can be computed exactly in polynomial time. The inference problem is to calculate P (e \u2208 \u03b3|s) for each entity e. Because both the semantic parser P ( |s) and query evaluation P (\u03b3| , w) are deterministic, this problem can be rewritten as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference: Computing Marginal Probabilities", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "P (e \u2208 \u03b3|s) = \u03b3 1(e \u2208 \u03b3)P (\u03b3|s) = w 1(e \u2208 w )P (w)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference: Computing Marginal Probabilities", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Above, represents the logical form for the text s produced by the rule-based semantic parser, and 1 represents the indicator function. The notation w represents denotation produced by (deterministically) evaluating the logical form on world w. This inference problem corresponds to query evaluation in a probabilistic database, which is #P-hard in general. Intuitively, this problem can be difficult because P (\u03b3|s) is a joint distribution over sets of entities that can be exponentially large in the number of entities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference: Computing Marginal Probabilities", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "However, a large subset of probabilistic database queries, known as safe queries, permit polynomial time evaluation (Dalvi and Suciu, 2007) . Safe queries can be evaluated extensionally using a probabilistic notion of a denotation that treats each entity as independent. Let P denote a probabilistic denotation, which is a function from entities (or entity pairs) to probabilities, i.e., P (e) \u2208 [0, 1]. The denotation of a logical form is then computed recursively, in the same manner as a non-probabilistic denotation, using probabilistic extensions of the typical rules, such as: c P (e) = w P (w)1(w c,e ) r P (e 1 , e 2 ) = w P (w)1(w r,e 1 ,e 2 ) c 1 (x) \u2227 c 2 (x) P (e) = c 1 P (e) \u00d7 c 2 P (e) \u2203y.r(x, y) P (e) = 1 \u2212 y\u2208E (1 \u2212 r P (e, y))", |
|
"cite_spans": [ |
|
{ |
|
"start": 116, |
|
"end": 139, |
|
"text": "(Dalvi and Suciu, 2007)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference: Computing Marginal Probabilities", |
|
"sec_num": "5" |
|
}, |
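
{

"text": "These rules translate directly into code. The sketch below (illustrative only; the entity set and probability lookups are assumed given) evaluates the safe query \u03bbx.c(x) \u2227 \u2203y.r(x, y) extensionally, using the product rule for conjunction and the noisy-OR rule for the existential:\n\ndef query_denotation(entities, cat_prob, rel_prob):\n    # cat_prob(e) = [[c]]_P(e); rel_prob(e, y) = [[r]]_P(e, y).\n    denotation = {}\n    for e in entities:\n        # [[exists y. r(x, y)]]_P(e) = 1 - prod_y (1 - [[r]]_P(e, y))\n        prob_no_y = 1.0\n        for y in entities:\n            prob_no_y *= 1.0 - rel_prob(e, y)\n        # A conjunction of independent variables multiplies probabilities.\n        denotation[e] = cat_prob(e) * (1.0 - prob_no_y)\n    return denotation",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Inference: Computing Marginal Probabilities",

"sec_num": "5"

},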
|
{ |
|
"text": "The first two rules are base cases that simply retrieve predicate probabilities from the probabilistic database. The remaining rules compute the probabilistic denotation of a logical form from the denotations of its parts. 3 The formula for the probabilistic computation on the right of each of these rules is a straightforward consequence of the (assumed) independence of entities. For example, the last rule computes the probability of an OR of a set of independent random variables (indexed by y) using the identity A \u2228 B = \u00ac(\u00acA \u2227 \u00acB). For safe queries, P (e) = P (e \u2208 \u03b3|s), that is, the probabilistic denotation computed according to the above rules is equal to the marginal probability distribution. In practice, all of the queries in the experiments are safe, because they contain only one query variable and do not contain repeated predicates. For more information on query evaluation in probabilistic databases, we refer the reader to Suciu et al. (2011) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 223, |
|
"end": 224, |
|
"text": "3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 943, |
|
"end": 962, |
|
"text": "Suciu et al. (2011)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference: Computing Marginal Probabilities", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Note that inference does not compute the most probable denotation, max \u03b3 P (\u03b3|s). In some sense, the most probable denotation is the correct output for a model-theoretic semantics. However, it is highly sensitive to the probabilities in the database, and in many cases it is empty (because a conjunction of independent boolean random variables is unlikely to be true). Producing a ranked list of entities is also useful for evaluation purposes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference: Computing Marginal Probabilities", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The training problem in our approach is to learn parameters \u03b8 and \u03c6 for the probabilistic database. We consider two different objective functions for learning these parameters that use slightly different forms of training data. In both cases, training has two phases. First, we generate training data, in the form of observed assertions or query-answer pairs, by applying the rule-based semantic parser to a corpus of entity-linked web text. Second, we optimize the parameters of the probabilistic database to rank observed assertions or answers above unobserved assertions or answers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Training data is generated by applying the process illustrated in Figure 3 to each sentence in an entitylinked web corpus. First, we apply our rule-based semantic parser to the sentence to produce a logical form. Next, we extract portions of this logical form where every variable is bound to a particular Freebase entity, resulting in a simplified logical form. Because the logical forms are existentiallyquantified conjunctions of predicates, this step simply discards any conjuncts in the logical form containing a variable that is not bound to a Freebase entity. From this simplified logical form, we generate two types of training data: (1) predicate instances, and (2) queries with known answers (see Figure 3 ). In both cases, the corpus consists entirely of assumed-to-be-true statements, making obtaining negative examples a major challenge for training. 4", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 66, |
|
"end": 74, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 707, |
|
"end": 715, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Training Data", |
|
"sec_num": "6.1" |
|
}, |
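
{

"text": "A rough sketch of this extraction step, assuming the simplified logical form is represented as a list of (predicate, argument-tuple) conjuncts over entity constants (our representation, not the paper's code):\n\ndef make_training_data(conjuncts):\n    # conjuncts: e.g. [('general', ('/EN/POWELL',)),\n    #                  ('appearing_on', ('/EN/POWELL', '/EN/LATE'))],\n    # where every argument is bound to a Freebase entity.\n    instances = list(conjuncts)\n    queries = []\n    entities = {e for _, args in conjuncts for e in args}\n    for answer in entities:\n        # Replace one entity with a variable; keep conjuncts mentioning it.\n        body = [(pred, tuple('x' if a == answer else a for a in args))\n                for pred, args in conjuncts if answer in args]\n        queries.append((body, answer))\n    return instances, queries",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Training Data",

"sec_num": "6.1"

},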
|
{ |
|
"text": "Riedel et al. 2013introduced a ranking objective to work around the lack of negative examples in a similar matrix factorization problem. Their objective is a modified version of Bayesian Personalized Ranking (Rendle et al., 2009) that aims to rank observed predicate instances above unobserved instances.", |
|
"cite_spans": [ |
|
{ |
|
"start": 208, |
|
"end": 229, |
|
"text": "(Rendle et al., 2009)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Predicate Ranking Objective", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "This objective function uses observed predicate instances (Figure 3 , bottom left) as training data. This data consists of two collections, {(c i , e i )} n i=1 and {(r j , t j )} m j=1 , of observed category and relation instances. We use t j to denote a tuple of entities, t j = (e j,1 , e j,2 ), to simplify notation. The predicate ranking objective is:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 58, |
|
"end": 67, |
|
"text": "(Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Predicate Ranking Objective", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "O P (\u03b8, \u03c6) = n i=1 log \u03c3(\u03b8 T c i (\u03c6 e i \u2212 \u03c6 e i )) + m j=1 log \u03c3(\u03b8 T r j (\u03c6 t j \u2212 \u03c6 t j ))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Predicate Ranking Objective", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "where e i is a randomly sampled entity such that (c i , e i ) does not occur in the training data. Similarly, t j is a random entity tuple such that (r j , t j ) does not occur. Maximizing this function attempts", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Predicate Ranking Objective", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Original sentence and logical form General Powell, appearing Sunday on CNN 's Late Edition, said ... Figure 3 : Illustration of training data generation applied to a single sentence. We generate two types of training data, predicate instances and queries with observed answers, by semantically parsing the sentence and extracting portions of the generated logical form with observed entity arguments. The predicate instances are extracted from the conjuncts in the simplified logical form, and the queries are created by removing a single entity from the simplified logical form.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 101, |
|
"end": 109, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Predicate Ranking Objective", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "\u2203w, x, y, z. w = /EN/POWELL \u2227 GENERAL(w) \u2227 APPEARING(w, x) \u2227 SUNDAY(x) \u2227 APPEARING ON(w, y) \u2227 y = /EN/LATE \u2227 'S(z, y) \u2227 z = /EN/CNN \u2227 SAID(w, ...) Simplified logical form \u2203w, y, z. w = /EN/POWELL \u2227 GENERAL(w) \u2227 APPEARING ON(w, y) \u2227 y = /EN/LATE \u2227 'S(z, y) \u2227 z = /EN/CNN Instances Queries Answers GENERAL(/EN/POWELL) \u03bbw.GENERAL(w) \u2227 APPEARING ON(w, /EN/LATE) /EN/POWELL APPEARING ON(/EN/POWELL, /EN/LATE) \u03bby.APPEARING ON(/EN/POWELL, y) \u2227 'S(/EN/CNN, y) /EN/LATE 'S(/EN/CNN, /EN/LATE) \u03bbz.'S(z, /EN/LATE) /EN/CNN", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Predicate Ranking Objective", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "to find \u03b8 c i , \u03c6 e i and \u03c6 e i such that P (c i (e i )) is larger than P (c i (e i )) (and similarly for relations). During training, e i and t j are resampled on each pass over the data set according to each entity or tuple's empirical frequency.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Predicate Ranking Objective", |
|
"sec_num": "6.2" |
|
}, |
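
{

"text": "A sketch of one term of $O_P$ (illustrative; resampling negatives by empirical frequency on each pass is omitted):\n\nimport numpy as np\n\ndef sigmoid(x):\n    return 1.0 / (1.0 + np.exp(-x))\n\ndef predicate_ranking_term(theta_c, phi_pos, phi_neg):\n    # log sigma(theta_c^T (phi_e - phi_e')) for an observed instance c(e)\n    # and a sampled entity e' such that (c, e') is unobserved.\n    return np.log(sigmoid(theta_c @ (phi_pos - phi_neg)))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Predicate Ranking Objective",

"sec_num": "6.2"

},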
|
{ |
|
"text": "The previous objective aims to rank the entities within each predicate well. However, such withinpredicate rankings are insufficient to produce correct answers for queries containing multiple predicatesthe scores for each predicate must further be calibrated to work well with each other given the independence assumptions of the probabilistic database. We introduce a new training objective that encourages good rankings for entire queries instead of single predicates. The data for this objective consists of tuples, {( i , e i )} n i=1 , of a query i with an observed answer e i (Figure 3, bottom right) . Each i is a function with exactly one entity argument, and i (e) is a conjunction of predicate instances. For example, the last query in Figure 3 is a function of one argument z, and (e) is a single predicate instance, 'S(e, /EN/LATE). The new objective aims to rank the observed entity answer above unobserved entities for each query:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 582, |
|
"end": 606, |
|
"text": "(Figure 3, bottom right)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 746, |
|
"end": 754, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Query Ranking Objective", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "O Q (\u03b8, \u03c6) = n i=1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Query Ranking Objective", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "log P rank ( i , e i , e i ) P rank generalizes the approximate ranking probability defined by the predicate ranking objec-tive to more general queries. The expression \u03c3(\u03b8 T c (\u03c6 e \u2212 \u03c6 e )) in the predicate ranking objective can be viewed as an approximation of the probability that e is ranked above e in category c. P rank uses this approximation for each individual predicate in the query. For example, given the query = \u03bbx.c(x) \u2227 r(x, y) and entities (e, e ), P rank ( , e, e ) = \u03c3(\u03b8 c (\u03c6 e \u2212 \u03c6 e )) \u00d7 \u03c3(\u03b8 r (\u03c6 (e,y) \u2212 \u03c6 (e ,y) )). For this objective, we sample e i such that ( i , e i ) does not occur in the training data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Query Ranking Objective", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "When 's body consists of a conjunction of predicates, the query ranking objective simplifies considerably. In this case, can be described as three sets of one-argument functions: categories C( ) = {\u03bbx.c(x)}, left arguments of relations R L ( ) = {\u03bbx.r(x, y)}, and right arguments of relations R R ( ) = {\u03bbx.r(y, x)}. Furthermore, P rank is a product so we can distribute the log:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Query Ranking Objective", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "O Q (\u03b8, \u03c6) = n i=1 \u03bbx.c(x)\u2208C( i ) log \u03c3(\u03b8 c (\u03c6 e i \u2212 \u03c6 e i )) + \u03bbx.r(x,y)\u2208R L ( i ) log \u03c3(\u03b8 r (\u03c6 (e i ,y) \u2212 \u03c6 (e i ,y) )) + \u03bbx.r(y,x)\u2208R R ( i ) log \u03c3(\u03b8 r (\u03c6 (y,e i ) \u2212 \u03c6 (y,e i ) ))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Query Ranking Objective", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "This simplification reveals that the main difference between O Q and O P is the sampling of the unobserved entities e and tuples t . O P samples them in an unconstrained fashion from their empirical distributions for every predicate. O Q considers the larger context in which each predicate occurs, with two major effects. First, more negative examples are generated for categories because the logical forms are more specific. For example, both \"president of Sprint\" and \"president of the US\" generate instances of the PRESIDENT predicate; O Q will use entities that only occur with one of these as negative examples for the other. Second, the relation parameters are trained to rank tuples with a shared argument, as opposed to tuples in general.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Query Ranking Objective", |
|
"sec_num": "6.3" |
|
}, |
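
{

"text": "A corresponding sketch for one term of $O_Q$ makes the contrast explicit: the same sampled negative entity is shared by every conjunct of the query (illustrative; theta and phi are dicts from predicate and entity/pair keys to vectors):\n\nimport numpy as np\n\ndef sigmoid(x):\n    return 1.0 / (1.0 + np.exp(-x))\n\ndef query_ranking_term(conjuncts, e_pos, e_neg, theta, phi):\n    # conjuncts: list of (predicate_id, make_key) pairs, where make_key(e)\n    # builds the row key for argument e: e itself for a category, (e, y)\n    # for a relation r(x, y), and (y, e) for r(y, x).\n    total = 0.0\n    for pred_id, make_key in conjuncts:\n        diff = phi[make_key(e_pos)] - phi[make_key(e_neg)]\n        total += np.log(sigmoid(theta[pred_id] @ diff))\n    return total",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Query Ranking Objective",

"sec_num": "6.3"

},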
|
{ |
|
"text": "Note that, although P rank generalizes to more complex logical forms than existentially-quantified conjunctions, training with these logical forms is more difficult because P rank is no longer a product. In these cases, it becomes necessary to perform inference within the gradient computation, which can be expensive. The restriction to conjunctions makes inference trivial, enabling the factorization above.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Query Ranking Objective", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "We evaluate our approach to compositional semantics on a question answering task. Each test example is a (compositional) natural language question whose answer is a set of Freebase entities. We compare our open domain approach to several baselines based on prior work, as well as a human-annotated Freebase query for each example.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "We used Clueweb09 web corpus 5 with the corresponding Google FACC entity linking (Gabrilovich et al., 2013) to create the training and test data for our experiments. The training data is derived from 3 million webpages, and contains 2.1m predicate instances, 1.1m queries, 172k entities and 181k entity pairs. Predicates that appeared fewer than 6 times in the training data were replaced with the predicate UNK, resulting in 25k categories and 2.2k relations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 81, |
|
"end": 107, |
|
"text": "(Gabrilovich et al., 2013)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "Our test data consists of fill-in-the-blank natural language questions such as \"Incan emperor \" or \"Cunningham directed Auchtre's second music video .\" These questions were created by applying the training data generation process (Section 6.1) to a collection of held-out webpages. Each natural language question has a corresponding logical form query containing at least one category and relation. We chose not to use existing data sets for semantic parsing into Freebase as our goal is to model the semantics of language that cannot necessarily be modelled using the Freebase schema. Existing data sets, such as Free917 (Cai and Yates, 2013) and We-bQuestions (Berant et al., 2013) , would not allow us to evaluate performance on this subset of language. Consequently, we evaluate our system on a new data set with unconstrained language. However, we do compare our approach against manually-annotated Freebase queries on our new data set (Section 7.5).", |
|
"cite_spans": [ |
|
{ |
|
"start": 622, |
|
"end": 643, |
|
"text": "(Cai and Yates, 2013)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 662, |
|
"end": 683, |
|
"text": "(Berant et al., 2013)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "All of the data for our experiments is available at http://rtw.ml.cmu.edu/tacl2015_csf.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "Our evaluation methodology is inspired by information retrieval evaluations (Manning et al., 2008) . Each system predicts a ranked list of 100 answers for each test question. We then pool the top 30 answers of each system and manually judge their correctness. The correct answers from the pool are then used to evaluate the precision and recall of each system. In particular, we compute average precision (AP) for each question and report the mean average precision (MAP) across all questions. We also report a weighted version of MAP, where each question's AP is weighted by its number of annotated correct answers. Average precision is computed as", |
|
"cite_spans": [ |
|
{ |
|
"start": 76, |
|
"end": 98, |
|
"text": "(Manning et al., 2008)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "1 m m k=1 Prec(k) \u00d7 Correct(k),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "where Prec(k) is the precision at rank k, Correct(k) is an indicator function for whether the kth answer is correct, and m is the number of returned answers (at most 100).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "7.2" |
|
}, |
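
{

"text": "Concretely, average precision under this definition can be computed as follows (a generic implementation of the stated formula, not the paper's evaluation scripts):\n\ndef average_precision(ranked_answers, correct_set):\n    # ranked_answers: up to 100 predicted entities, best first.\n    # correct_set: pooled, human-judged correct answers.\n    # AP = (1/m) sum_k Prec(k) * Correct(k), with m = len(ranked_answers).\n    hits, precision_sum = 0, 0.0\n    for k, answer in enumerate(ranked_answers, start=1):\n        if answer in correct_set:\n            hits += 1\n            precision_sum += hits / k  # Prec(k) at each correct rank\n    return precision_sum / len(ranked_answers) if ranked_answers else 0.0",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Methodology",

"sec_num": "7.2"

},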
|
{ |
|
"text": "Statistics of the annotated test set are shown in Table 1 . A consequence of our unconstrained data generation approach is that some test questions are difficult to answer: of the 220 queries, at least one system was able to produce a correct answer for 116. because they reference rare entities unseen in the training data.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 50, |
|
"end": 57, |
|
"text": "Table 1", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "We implemented two baseline models based on existing techniques. The CORPUSLOOKUP baseline answers test questions by directly using the predicate instances in the training data as its knowledge base. For example, given the query \u03bbx.CEO(x) \u2227 OF(x, /EN/SPRINT), this model will return the set of entities e such that CEO(e) and OF(e, /EN/SPRINT) both appear in the training data. All answers found in this fashion are assigned probability 1. The CLUSTERING baseline first clusters the predicates in the training corpus, then answers questions using the clustered predicates. The clustering aggregates predicates with similar denotations, ideally identifying synonyms to smooth over sparsity in the training data. Our approach is closely based on Lewis and Steedman (2013) , though is also conceptually related to approaches such as DIRT (Lin and Pantel, 2001 ) and USP (Poon and Domingos, 2009) . We use the Chinese Whispers clustering algorithm (Biemann, 2006) and calculate the similarity between predicates as the cosine similarity of their TF-IDF weighted entity count vectors. The denotation of each cluster is the union of the denotations of the clustered predicates, and each entity in the denotation is assigned probability 1.", |
|
"cite_spans": [ |
|
{ |
|
"start": 744, |
|
"end": 769, |
|
"text": "Lewis and Steedman (2013)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 835, |
|
"end": 856, |
|
"text": "(Lin and Pantel, 2001", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 867, |
|
"end": 892, |
|
"text": "(Poon and Domingos, 2009)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 944, |
|
"end": 959, |
|
"text": "(Biemann, 2006)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models and Baselines", |
|
"sec_num": "7.3" |
|
}, |
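
{

"text": "A sketch of the predicate similarity used by the CLUSTERING baseline, assuming a predicate-by-entity count matrix (illustrative; the Chinese Whispers clustering itself is omitted):\n\nimport numpy as np\n\ndef predicate_similarities(counts):\n    # counts: |predicates| x |entities| matrix of co-occurrence counts.\n    n_predicates = counts.shape[0]\n    df = np.count_nonzero(counts, axis=0)       # entity document frequency\n    idf = np.log(n_predicates / np.maximum(df, 1))\n    tfidf = counts * idf                        # TF-IDF weighted count vectors\n    norms = np.linalg.norm(tfidf, axis=1, keepdims=True)\n    unit = tfidf / np.maximum(norms, 1e-12)\n    return unit @ unit.T                        # pairwise cosine similarities",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Models and Baselines",

"sec_num": "7.3"

},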
|
{ |
|
"text": "We also trained two probabilistic database models, FACTORIZATION (O P ) and FACTORIZATION (O Q ), using the two objective functions described in Sections 6.2 and 6.3, respectively. We optimized both objectives by performing 100 passes over the training data with AdaGrad (Duchi et al., 2011) using an L2 regularization parameter of \u03bb = 10 \u22124 . The predicate and entity embeddings have 300 dimensions. These parameters were selected on the basis of preliminary experiments with a small validation set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 271, |
|
"end": 291, |
|
"text": "(Duchi et al., 2011)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models and Baselines", |
|
"sec_num": "7.3" |
|
}, |
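
{

"text": "A minimal AdaGrad step with L2 regularization, matching the stated hyperparameters where reported ($\\lambda = 10^{-4}$, 300 dimensions, 100 passes); the learning rate here is an illustrative assumption:\n\nimport numpy as np\n\ndef adagrad_update(param, grad, sq_grad_hist, lr=0.1, l2=1e-4, eps=1e-8):\n    # One update on a single embedding vector (entity, pair, or predicate).\n    grad = grad + l2 * param             # gradient of the L2 penalty\n    sq_grad_hist += grad ** 2            # running sum of squared gradients\n    param -= lr * grad / (np.sqrt(sq_grad_hist) + eps)\n    return param, sq_grad_hist",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Models and Baselines",

"sec_num": "7.3"

},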
|
{ |
|
"text": "Finally, we observed that CORPUSLOOKUP has high precision but low recall, while both matrix factorization models have high recall with somewhat lower precision. This observation suggested that an ensemble of CORPUSLOOKUP and FACTORIZA-TION could outperform either model individually. We created two ensembles, ENSEMBLE (O P ) and ENSEMBLE (O Q ), by calculating the probability of each predicate as a 50/50 mixture of each model's predicted probability. Table 2 shows the results of our MAP evaluation, and Figure 4 shows a precision/recall curve for each model. The MAP numbers are somewhat low because almost half of the test questions have no correct answers and all models get an average precision of 0 on these questions. The upper bound on MAP is the fraction of questions with at least 1 correct answer. Note that the models perform well on the answerable questions, as reflected by the ratio of the achieved MAP to the upper bound. The weighted MAP metric also corrects for these unanswerable questions, as they are assigned 0 weight in the weighted average.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 454, |
|
"end": 461, |
|
"text": "Table 2", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 507, |
|
"end": 515, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Models and Baselines", |
|
"sec_num": "7.3" |
|
}, |
|
{ |
|
"text": "These results demonstrate several findings. First, we find that both FACTORIZATION models outperform the baselines in both MAP and weighted MAP. The performance improvement seems to be most significant in the high recall regime (right side of Figure 4 ). Second, we find that the query ranking objective O Q improves performance over the predicate ranking objective O P by 2-4% on the answerable queries. The precision/recall curves show that this improvement is concentrated in the low recall regime. Finally, the ensemble models are considerably better than their component models; however, even in the ensembled models, we find that O Q outperforms O P by a few percent.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 243, |
|
"end": 251, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "7.4" |
|
}, |
|
{ |
|
"text": "A natural question is whether our open vocabulary approach outperforms a closed approach for the same problem, such as semantic parsing to Freebase (e.g., Reddy et al. (2014) ). In order to answer this question, we compared our best performing model to a manually-annotated Freebase query for each test question. This comparison allows us to understand the relative advantages of open and closed predicate vocabularies. The first author manually annotated a Freebase MQL query for each natural language question in the test data set. This annotation is somewhat subjective, as many of the questions can only be inexactly mapped on to the Freebase schema. We used the following guidelines in performing the mapping: (1) all relations in the text must be mapped to one or more Freebase relations, (2) all entities mentioned in the text must be included in the query, (3) adjective modifiers can be ignored and (4) entities not mentioned in the text may be included in the query. The fourth condition is necessary because many one-place predicates, such as MAYOR(x), are represented in Freebase using a binary relation to a particular entity, such as GOVERNMENT We compared our best performing model, EN-SEMBLE (O Q ), to the manually annotated Freebase queries using the same pooled evaluation methodology. The set of correct answers contains the correct predictions of ENSEMBLE (O Q ) from the previous evaluation along with all answers from Freebase.", |
|
"cite_spans": [ |
|
{ |
|
"start": 155, |
|
"end": 174, |
|
"text": "Reddy et al. (2014)", |
|
"ref_id": "BIBREF29" |
|
}
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison to Semantic Parsing to Freebase", |
|
"sec_num": "7.5" |
|
}, |
|
{ |
|
"text": "Results from this evaluation are shown in Table 4 . 6 In terms of overall MAP, Freebase outperforms our approach by a fair margin. However, this initial impression belies a more complex reality, which is shown in Table 5 . This table compares both approaches by their relative performance on each test question. On approximately one-third of the questions, Freebase has a higher AP than our approach. On another third, our approach has a higher AP than Freebase. On the final third, both approaches perform equally well -these are typically questions where neither approach returns any correct answers (67 of the 75). Freebase outperforms in the overall MAP evaluation because it tends to return more correct answers to each question.", |
|
"cite_spans": [ |
|
{ |
|
"start": 53, |
|
"end": 54, |
|
"text": "6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 42, |
|
"end": 50, |
|
"text": "Table 4", |
|
"ref_id": "TABREF9" |
|
}, |
|
{ |
|
"start": 214, |
|
"end": 221, |
|
"text": "Table 5", |
|
"ref_id": "TABREF10" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparison to Semantic Parsing to Freebase", |
|
"sec_num": "7.5" |
|
}, |
|
{ |
|
"text": "Note that the annotated Freebase queries have several advantages in this evaluation. First, Freebase contains significantly more predicate instances than our training data, which allows it to produce more complete answers. Second, the Freebase queries # of queries Freebase higher AP 75 (34%) equal AP 75 (34%) ENSEMBLE (O Q ) higher AP 70 (31%) correspond to the performance of a perfect semantic parser, while current semantic parsers achieve accuracies around 68% (Berant and Liang, 2014) . The results from this experiment suggest that closed and open predicate vocabularies are complementary. Freebase produces high quality answers when it covers a question. However, many of the remaining questions can be answered correctly using an open vocabulary approach like ours. This evaluation also suggests that recall is a limiting factor of our approach; in the future, recall can be improved by using a larger corpus or including Freebase instances during training.", |
|
"cite_spans": [ |
|
{ |
|
"start": 467, |
|
"end": 491, |
|
"text": "(Berant and Liang, 2014)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison to Semantic Parsing to Freebase", |
|
"sec_num": "7.5" |
|
}, |
|
{ |
|
"text": "Open Predicate Vocabularies There has been considerable work on generating semantic representations with an open predicate vocabulary. Much of the work is non-compositional, focusing on identifying similar predicates and entities. DIRT (Lin and Pantel, 2001) , Resolver (Yates and Etzioni, 2007) and other systems (Yao et al., 2012) cluster synonymous expressions in a corpus of relation triples. Matrix factorization is an alternative approach to clustering that has been used for relation extraction and finding analogies (Turney, 2008; Speer et al., 2008) . All of this work is closely related to distributional semantics, which uses distributional information to identify semantically similar words and phrases (Turney and Pantel, 2010; Griffiths et al., 2007) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 236, |
|
"end": 258, |
|
"text": "(Lin and Pantel, 2001)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 270, |
|
"end": 295, |
|
"text": "(Yates and Etzioni, 2007)", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 314, |
|
"end": 332, |
|
"text": "(Yao et al., 2012)", |
|
"ref_id": "BIBREF41" |
|
}, |
|
{ |
|
"start": 524, |
|
"end": 538, |
|
"text": "(Turney, 2008;", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 539, |
|
"end": 558, |
|
"text": "Speer et al., 2008)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 715, |
|
"end": 740, |
|
"text": "(Turney and Pantel, 2010;", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 741, |
|
"end": 764, |
|
"text": "Griffiths et al., 2007)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Some work has considered the problem of compositional semantics with an open predicate vocabulary. Unsupervised semantic parsing (Poon and Domingos, 2009; Titov and Klementiev, 2011 ) is a clustering-based approach that incorporates com-position using a generative model for each sentence that factors according to its parse tree. Lewis and Steedman (2013) also present a clustering-based approach that uses CCG to perform semantic composition. This approach is similar to ours, except that we use matrix factorization and Freebase entities.", |
|
"cite_spans": [ |
|
{ |
|
"start": 129, |
|
"end": 154, |
|
"text": "(Poon and Domingos, 2009;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 155, |
|
"end": 181, |
|
"text": "Titov and Klementiev, 2011", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 331, |
|
"end": 356, |
|
"text": "Lewis and Steedman (2013)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Finally, some work has focused on the problem of textual inference within this paradigm. Fader et al. (2013) present a question answering system that learns to paraphrase a question so that it can be answered using a corpus of Open IE triples (Fader et al., 2011) . Distributional similarity has also been used to learn weighted logical inference rules that can be used for recognizing textual entailment or identifying semantically similar text (Garrette et al., 2011; Beltagy et al., 2013) . This line of work focuses on performing inference between texts, whereas our work computes a text's denotation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 89, |
|
"end": 108, |
|
"text": "Fader et al. (2013)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 243, |
|
"end": 263, |
|
"text": "(Fader et al., 2011)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 446, |
|
"end": 469, |
|
"text": "(Garrette et al., 2011;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 470, |
|
"end": 491, |
|
"text": "Beltagy et al., 2013)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "A significant difference between our work and most of the related work above is that our work computes denotations containing Freebase entities. Using these entities has two advantages: (1) it enables us to use entity linking to disambiguate textual mentions, and (2) it facilitates a comparison against alternative approaches that rely on a closed predicate vocabulary. Disambiguating textual mentions is a major challenge for previous approaches, so an entity-linked corpus is a much cleaner source of data. However, our approach could also work with automatically constructed entities, for example, created by clustering mentions in an unsupervised fashion (Singh et al., 2011) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 660, |
|
"end": 680, |
|
"text": "(Singh et al., 2011)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Several semantic parsers have been developed for Freebase (Cai and Yates, 2013; Kwiatkowski et al., 2013; Berant et al., 2013; Berant and Liang, 2014 ). Our approach is most similar to that of Reddy et al. (2014) , which uses fixed syntactic parses of unlabeled text to train a Freebase semantic parser. Like our approach, this system automatically-generates query/answer pairs for training. However, this system, like all Freebase semantic parsers, uses a closed predicate vocabulary consisting of only Freebase predicates. In contrast, our approach uses an open predicate vocabulary and can learn denotations for words whose semantics cannot be represented using Freebase predicates. Consequently, our approach can answer many questions that these Freebase semantic parsers cannot (see Section 7.5).", |
|
"cite_spans": [ |
|
{ |
|
"start": 58, |
|
"end": 79, |
|
"text": "(Cai and Yates, 2013;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 80, |
|
"end": 105, |
|
"text": "Kwiatkowski et al., 2013;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 106, |
|
"end": 126, |
|
"text": "Berant et al., 2013;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 127, |
|
"end": 149, |
|
"text": "Berant and Liang, 2014", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 193, |
|
"end": 212, |
|
"text": "Reddy et al. (2014)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Parsing", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The rule-based semantic parser used in this paper is very similar to several other rule-based systems that produce logical forms from syntactic CCG parses (Bos, 2008; Lewis and Steedman, 2013) . We developed our own system in order to have control over the particulars of the analysis; however, our approach is compatible with these systems as well.", |
|
"cite_spans": [ |
|
{ |
|
"start": 155, |
|
"end": 166, |
|
"text": "(Bos, 2008;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 167, |
|
"end": 192, |
|
"text": "Lewis and Steedman, 2013)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Parsing", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our system assigns a model-theoretic semantics to statements in natural language (Dowty et al., 1981 ) using a learned distribution over possible worlds. This distribution is concisely represented in a probabilistic database, which can be viewed as a simple Markov Logic Network (Richardson and Domingos, 2006) where all of the random variables are independent. This independence simplifies query evaluation: probabilistic databases permit efficient exact inference for safe queries (Suciu et al., 2011) , and approximate inference for the remainder (Gatterbauer et al., 2010; Gatterbauer and Suciu, 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 81, |
|
"end": 100, |
|
"text": "(Dowty et al., 1981", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 279, |
|
"end": 310, |
|
"text": "(Richardson and Domingos, 2006)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 483, |
|
"end": 503, |
|
"text": "(Suciu et al., 2011)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 550, |
|
"end": 576, |
|
"text": "(Gatterbauer et al., 2010;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 577, |
|
"end": 605, |
|
"text": "Gatterbauer and Suciu, 2015)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probabilistic Databases", |
|
"sec_num": null |
|
}, |
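As a concrete illustration of why tuple independence simplifies query evaluation, the following minimal Python sketch performs exact inference for a safe, existentially quantified conjunctive query over a toy probabilistic database. The predicates, facts, and probabilities are hypothetical, and this simplified model is meant only to illustrate the computation, not to reproduce the paper's implementation.

```python
# Exact inference for the safe query \x. exists y. CEO(x) and OF(x, y)
# over a probabilistic database whose tuples are independent random
# variables. All facts and probabilities below are hypothetical.

CEO = {"/en/dan_hesse": 0.9, "/en/tom_cruise": 0.1}      # P(CEO(x) is true)
OF = {("/en/dan_hesse", "/en/sprint"): 0.8,
      ("/en/dan_hesse", "/en/kodak"): 0.3}               # P(OF(x, y) is true)

def denotation_probabilities(candidates):
    """P(x is in the query's denotation), computed per candidate entity."""
    result = {}
    for x in candidates:
        # P(exists y. OF(x, y)) = 1 - prod_y (1 - P(OF(x, y))),
        # which is valid only because the OF tuples are independent.
        miss = 1.0
        for (subj, _obj), p in OF.items():
            if subj == x:
                miss *= 1.0 - p
        result[x] = CEO.get(x, 0.0) * (1.0 - miss)
    return result

print(denotation_probabilities(["/en/dan_hesse", "/en/tom_cruise"]))
# {'/en/dan_hesse': 0.774, '/en/tom_cruise': 0.0}
```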
|
{ |
|
"text": "This paper presents an approach for compositional semantics with an open predicate vocabulary. Our approach defines a probabilistic model over denotations (sets of Freebase entities) conditioned on an input text. The model has two components: a rulebased semantic parser that produces a logical form for the text, and a probabilistic database that defines a distribution over denotations for each predicate. A training phase learns the probabilistic database by applying probabilistic matrix factorization with a query/answer ranking objective to logical forms derived from a large, entity-linked web corpus. An experimental analysis demonstrates that this approach outperforms several baselines and can answer many questions that cannot be answered by semantic parsing into Freebase.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "9" |
|
}, |
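To illustrate the general training recipe, i.e., learning the probabilistic database by matrix factorization with a ranking objective, the sketch below implements a BPR-style objective (Rendle et al., 2009) over toy predicate/entity observations. The embedding dimension, learning rate, regularizer, and data are hypothetical, and the paper's own ranking objective differs in its details; this is a sketch of the family of methods, not the exact method.

```python
# BPR-style matrix factorization sketch: learn predicate vectors (theta)
# and entity vectors (phi) so that observed predicate/entity pairs score
# higher than unobserved ones. All hyperparameters and data are toy values.
import numpy as np

rng = np.random.default_rng(0)
dim, n_preds, n_ents = 10, 3, 5
theta = rng.normal(scale=0.1, size=(n_preds, dim))  # predicate embeddings
phi = rng.normal(scale=0.1, size=(n_ents, dim))     # entity embeddings

# Observed (predicate, entity) instances extracted from text,
# e.g. CEO(/en/dan_hesse) might be encoded as (0, 1).
observed = [(0, 1), (1, 2), (2, 3)]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, reg = 0.1, 0.01
for _ in range(200):
    for pred, pos in observed:
        neg = int(rng.integers(n_ents))       # sampled lower-ranked entity
        if (pred, neg) in observed:
            continue
        # Maximize ln sigmoid(score(pred, pos) - score(pred, neg)).
        diff = theta[pred] @ (phi[pos] - phi[neg])
        g = 1.0 - sigmoid(diff)               # gradient weight of this pair
        t = theta[pred].copy()
        theta[pred] += lr * (g * (phi[pos] - phi[neg]) - reg * t)
        phi[pos] += lr * (g * t - reg * phi[pos])
        phi[neg] += lr * (-g * t - reg * phi[neg])

# sigmoid(theta[p] @ phi[e]) can then be stored in the probabilistic
# database as the probability that predicate p holds of entity e.
print(sigmoid(theta[0] @ phi[1]), sigmoid(theta[0] @ phi[4]))
```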
|
{ |
|
"text": "Our approach learns a model-theoretic semantics for natural language text tied to Freebase, as do some semantic parsers, except with an open predicate vocabulary. This difference influences several other aspects of the system's design. First, because no knowledge base with the necessary knowledge exists, the system is forced to learn its knowledge base (in the form of a probabilistic database). Second, the system can directly map syntactic CCG parses to logical forms, as it is no longer necessary to map words to a closed vocabulary of knowledge base predicates. In some sense, our approach is the exact opposite of the typical semantic parsing approach: usually, the semantic parser is learned and the knowledge base is fixed; here, the knowledge base is learned and the semantic parser is fixed. From a machine learning perspective, training a probabilistic database via matrix factorization is easier than training a semantic parser, as there are no difficult inference problems. However, it remains to be seen whether a learned knowledge base can achieve similar recall as a fixed knowledge base on the subset of language it covers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "There are two limitations of this work. The most obvious limitation is the restriction to existentially quantified conjunctions of predicates. This limitation is not inherent to the approach, however, and can be removed in future work by using a system like Boxer (Bos, 2008) for semantic parsing. A more serious limitation is the restriction to one-and two-argument predicates, which prevents our system from representing events and n-ary relations. Conceptually, a similar matrix factorization approach could be used to learn embeddings for n-ary entity tuples; however, in practice, the sparsity of these tuples makes learning challenging. Developing methods for learning n-ary relations is an important problem for future work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 264, |
|
"end": 275, |
|
"text": "(Bos, 2008)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "A direction for future work is scaling up the size of the training corpus to improve recall. Low recall is the main limitation of our current system as demonstrated by the experimental analysis. Both stages of training, the data generation and matrix factorization, can be parallelized using a cluster. All of the relation instances in Freebase can also be added to the training corpus. It should be feasible to increase the quantity of training data by a factor of 10-100, for example, to train on all of ClueWeb. Scaling up the training data may allow a semantic parser with an open predicate vocabulary to outperform comparable closed vocabulary systems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "This paper uses a simple model-theoretic semantics where the denotation of a noun phrase is a set of entities and the denotation of a sentence is either true or false. However, for notational convenience, denotations \u03b3 will be treated as sets of entities throughout.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://rtw.ml.cmu.edu/tacl2015_csf", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This listing of rules is partial as it does not include, e.g., negation or joins between one-argument and two-argument logical forms. However, the corresponding rules are easy to derive.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A seemingly simple solution to this problem is to randomly generate negative examples; however, we empirically found that this approach performs considerably worse than both of the proposed ranking objectives.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://www.lemurproject.org/clueweb09. php", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The numbers in this table are not comparable to the numbers inTable 2as the correct answers for each question are different.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This research was supported in part by DARPA under contract number FA8750-13-2-0005, and by a generous grant from Google. We additionally thank Matt Gardner, Ndapa Nakashole, Amos Azaria and the anonymous reviewers for their helpful comments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Open information extraction from the web", |
|
"authors": [ |
|
{ |
|
"first": "Michele", |
|
"middle": [], |
|
"last": "Banko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Cafarella", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Soderland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Broadhead", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Etzioni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 20th International Joint Conference on Artifical Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michele Banko, Michael J. Cafarella, Stephen Soderland, Matt Broadhead, and Oren Etzioni. 2007. Open infor- mation extraction from the web. In Proceedings of the 20th International Joint Conference on Artifical Intel- ligence.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Montague meets markov: Deep semantics with probabilistic logical form", |
|
"authors": [ |
|
{ |
|
"first": "Islam", |
|
"middle": [], |
|
"last": "Beltagy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cuong", |
|
"middle": [], |
|
"last": "Chau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gemma", |
|
"middle": [], |
|
"last": "Boleda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Garrette", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katrin", |
|
"middle": [], |
|
"last": "Erk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": [], |
|
"last": "Mooney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Islam Beltagy, Cuong Chau, Gemma Boleda, Dan Gar- rette, Katrin Erk, and Raymond Mooney. 2013. Mon- tague meets markov: Deep semantics with probabilis- tic logical form. In Second Joint Conference on Lexi- cal and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Semantic parsing via paraphrasing", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Berant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan Berant and Percy Liang. 2014. Semantic pars- ing via paraphrasing. In Proceedings of the 52nd An- nual Meeting of the Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Semantic parsing on Freebase from question-answer pairs", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Berant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Chou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roy", |
|
"middle": [], |
|
"last": "Frostig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Lan- guage Processing.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Chinese whispers: An efficient graph clustering algorithm and its application to natural language processing problems", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Biemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the First Workshop on Graph Based Methods for Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Biemann. 2006. Chinese whispers: An efficient graph clustering algorithm and its application to natu- ral language processing problems. In Proceedings of the First Workshop on Graph Based Methods for Nat- ural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Wide-coverage semantic analysis with boxer", |
|
"authors": [ |
|
{ |
|
"first": "Johan", |
|
"middle": [], |
|
"last": "Bos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 2008 Conference on Semantics in Text Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Johan Bos. 2008. Wide-coverage semantic analysis with boxer. In Proceedings of the 2008 Conference on Se- mantics in Text Processing.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Large-scale Semantic Parsing via Schema Matching and Lexicon Extension", |
|
"authors": [ |
|
{ |
|
"first": "Qingqing", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Yates", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Qingqing Cai and Alexander Yates. 2013. Large-scale Semantic Parsing via Schema Matching and Lexicon Extension. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL).", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Efficient query evaluation on probabilistic databases", |
|
"authors": [ |
|
{ |
|
"first": "Nilesh", |
|
"middle": [], |
|
"last": "Dalvi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Suciu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "The VLDB Journal", |
|
"volume": "16", |
|
"issue": "4", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nilesh Dalvi and Dan Suciu. 2007. Efficient query eval- uation on probabilistic databases. The VLDB Journal, 16(4), October.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Adaptive subgradient methods for online learning and stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Duchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elad", |
|
"middle": [], |
|
"last": "Hazan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoram", |
|
"middle": [], |
|
"last": "Singer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "2121--2159", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121-2159, July.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Identifying relations for open information extraction", |
|
"authors": [ |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Fader", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Soderland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Etzioni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information ex- traction. In Proceedings of the Conference on Empiri- cal Methods in Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Paraphrase-driven learning for open question answering", |
|
"authors": [ |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Fader", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Etzioni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2013. Paraphrase-driven learning for open question answering. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "FACC1: Freebase annotation of ClueWeb corpora", |
|
"authors": [ |
|
{ |
|
"first": "Evgeniy", |
|
"middle": [], |
|
"last": "Gabrilovich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Ringgaard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amarnag", |
|
"middle": [], |
|
"last": "Subramanya", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Evgeniy Gabrilovich, Michael Ringgaard, and Amar- nag Subramanya. 2013. FACC1: Freebase anno- tation of ClueWeb corpora, Version 1 (Release date 2013-06-26, Format version 1, Correction level 0). http://lemurproject.org/clueweb09/.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Integrating logical representations with probabilistic information using markov logic", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Garrette", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katrin", |
|
"middle": [], |
|
"last": "Erk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": [], |
|
"last": "Mooney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the International Conference on Computational Semantics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Garrette, Katrin Erk, and Raymond Mooney. 2011. Integrating logical representations with probabilistic information using markov logic. In Proceedings of the International Conference on Computational Seman- tics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "A formal approach to linking logical form and vector-space lexical semantics", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Garrette", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katrin", |
|
"middle": [], |
|
"last": "Erk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Mooney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Computing Meaning", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "27--48", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Garrette, Katrin Erk, and Raymond J. Mooney. 2013. A formal approach to linking logical form and vector-space lexical semantics. In Harry Bunt, Johan Bos, and Stephen Pulman, editors, Computing Mean- ing, volume 4, pages 27-48.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Approximate lifted inference with probabilistic databases", |
|
"authors": [ |
|
{ |
|
"first": "Wolfgang", |
|
"middle": [], |
|
"last": "Gatterbauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Suciu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the VLDB Endowment", |
|
"volume": "8", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wolfgang Gatterbauer and Dan Suciu. 2015. Approx- imate lifted inference with probabilistic databases. Proceedings of the VLDB Endowment, 8(5), January.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Dissociation and propagation for efficient query evaluation over probabilistic databases", |
|
"authors": [ |
|
{ |
|
"first": "Wolfgang", |
|
"middle": [], |
|
"last": "Gatterbauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abhay", |
|
"middle": [], |
|
"last": "Kumar Jha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Suciu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the Fourth International VLDB workshop on Management of Uncertain Data (MUD 2010) in conjunction with VLDB 2010", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wolfgang Gatterbauer, Abhay Kumar Jha, and Dan Su- ciu. 2010. Dissociation and propagation for efficient query evaluation over probabilistic databases. In Pro- ceedings of the Fourth International VLDB workshop on Management of Uncertain Data (MUD 2010) in conjunction with VLDB 2010, Singapore, September 13, 2010.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Topics in semantic representation", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Griffiths", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joshua", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Psychological Review", |
|
"volume": "114", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas L. Griffiths, Joshua B. Tenenbaum, and Mark Steyvers. 2007. Topics in semantic representation. Psychological Review 114.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Acquiring compact lexicalized grammars from a cleaner treebank", |
|
"authors": [ |
|
{ |
|
"first": "Julia", |
|
"middle": [], |
|
"last": "Hockenmaier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Steedman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of Third International Conference on Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Julia Hockenmaier and Mark Steedman. 2002. Acquir- ing compact lexicalized grammars from a cleaner tree- bank. In Proceedings of Third International Confer- ence on Language Resources and Evaluation.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Weakly supervised training of semantic parsers", |
|
"authors": [ |
|
{ |
|
"first": "Jayant", |
|
"middle": [], |
|
"last": "Krishnamurthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Mitchell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jayant Krishnamurthy and Tom M. Mitchell. 2012. Weakly supervised training of semantic parsers. In Proceedings of the 2012 Joint Conference on Empir- ical Methods in Natural Language Processing and Computational Natural Language Learning.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Joint syntactic and semantic parsing with combinatory categorial grammar", |
|
"authors": [ |
|
{ |
|
"first": "Jayant", |
|
"middle": [], |
|
"last": "Krishnamurthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Mitchell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jayant Krishnamurthy and Tom M. Mitchell. 2014. Joint syntactic and semantic parsing with combinatory cat- egorial grammar. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Scaling semantic parsers with on-the-fly ontology matching", |
|
"authors": [ |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Kwiatkowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eunsol", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Artzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Combined distributional and logical semantics", |
|
"authors": [ |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Steedman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "179--192", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mike Lewis and Mark Steedman. 2013. Combined distributional and logical semantics. Transactions of the Association for Computational Linguistics, 1:179- 192.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "DIRT -discovery of inference rules from text", |
|
"authors": [ |
|
{ |
|
"first": "Dekang", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Pantel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dekang Lin and Patrick Pantel. 2001. DIRT -discov- ery of inference rules from text. In Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Introduction to Information Retrieval", |
|
"authors": [ |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Prabhakar", |
|
"middle": [], |
|
"last": "Raghavan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hinrich", |
|
"middle": [], |
|
"last": "Sch\u00fctze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher D. Manning, Prabhakar Raghavan, and Hin- rich Sch\u00fctze. 2008. Introduction to Information Re- trieval. Cambridge University Press, New York, NY, USA.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Learning to link with wikipedia", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Milne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ian", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Witten", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 17th ACM Conference on Information and Knowledge Management", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Milne and Ian H. Witten. 2008. Learning to link with wikipedia. In Proceedings of the 17th ACM Con- ference on Information and Knowledge Management.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Unsupervised semantic parsing", |
|
"authors": [ |
|
{ |
|
"first": "Hoifung", |
|
"middle": [], |
|
"last": "Poon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pedro", |
|
"middle": [], |
|
"last": "Domingos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hoifung Poon and Pedro Domingos. 2009. Unsuper- vised semantic parsing. In Proceedings of the 2009 Conference on Empirical Methods in Natural Lan- guage Processing.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Local and global algorithms for disambiguation to wikipedia", |
|
"authors": [ |
|
{ |
|
"first": "Lev", |
|
"middle": [], |
|
"last": "Ratinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lev Ratinov, Dan Roth, Doug Downey, and Mike An- derson. 2011. Local and global algorithms for dis- ambiguation to wikipedia. In Proceedings of the 49th", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Annual Meeting of the Association for Computational Linguistics: Human Language Technologies.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Large-scale semantic parsing without question-answer pairs", |
|
"authors": [ |
|
{ |
|
"first": "Siva", |
|
"middle": [], |
|
"last": "Reddy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Steedman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Transactions of the Association of Computational Linguistics", |
|
"volume": "2", |
|
"issue": "1", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Siva Reddy, Mirella Lapata, and Mark Steedman. 2014. Large-scale semantic parsing without question-answer pairs. Transactions of the Association of Computa- tional Linguistics -Volume 2, Issue 1.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "BPR: Bayesian personalized ranking from implicit feedback", |
|
"authors": [ |
|
{ |
|
"first": "Steffen", |
|
"middle": [], |
|
"last": "Rendle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christoph", |
|
"middle": [], |
|
"last": "Freudenthaler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zeno", |
|
"middle": [], |
|
"last": "Gantner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lars", |
|
"middle": [], |
|
"last": "Schmidt-Thieme", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. 2009. BPR: Bayesian per- sonalized ranking from implicit feedback. In Proceed- ings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Markov logic networks", |
|
"authors": [], |
|
"year": null, |
|
"venue": "Machine Learning", |
|
"volume": "62", |
|
"issue": "", |
|
"pages": "107--136", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Markov logic networks. Machine Learning, 62(1- 2):107-136, February.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Relation extraction with matrix factorization and universal schemas", |
|
"authors": [ |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Riedel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Limin", |
|
"middle": [], |
|
"last": "Yao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Marlin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Joint Human Language Technology Conference/Annual Meeting of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M. Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In Joint Human Language Technology Conference/Annual Meeting of the North American Chapter of the Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Large-scale crossdocument coreference using distributed inference and hierarchical models", |
|
"authors": [ |
|
{ |
|
"first": "Sameer", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amarnag", |
|
"middle": [], |
|
"last": "Subramanya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Association for Computational Linguistics: Human Language Technologies (ACL HLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sameer Singh, Amarnag Subramanya, Fernando Pereira, and Andrew McCallum. 2011. Large-scale cross- document coreference using distributed inference and hierarchical models. In Association for Computa- tional Linguistics: Human Language Technologies (ACL HLT).", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "AnalogySpace: Reducing the dimensionality of common sense knowledge", |
|
"authors": [ |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Speer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Catherine", |
|
"middle": [], |
|
"last": "Havasi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Henry", |
|
"middle": [], |
|
"last": "Lieberman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "AAAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robert Speer, Catherine Havasi, and Henry Lieberman. 2008. AnalogySpace: Reducing the dimensionality of common sense knowledge. In AAAI.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Probabilistic databases", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Suciu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Olteanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "R\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christoph", |
|
"middle": [], |
|
"last": "Koch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Synthesis Lectures on Data Management", |
|
"volume": "3", |
|
"issue": "2", |
|
"pages": "1--180", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Suciu, Dan Olteanu, Christopher R\u00e9, and Christoph Koch. 2011. Probabilistic databases. Synthesis Lec- tures on Data Management, 3(2):1-180.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "A bayesian model for unsupervised semantic parsing", |
|
"authors": [ |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Titov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Klementiev", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ivan Titov and Alexandre Klementiev. 2011. A bayesian model for unsupervised semantic parsing. In Proceed- ings of the 49th Annual Meeting of the Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "From frequency to meaning: vector space models of semantics", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Peter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Turney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Pantel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Journal of Artificial Intelligence Research", |
|
"volume": "37", |
|
"issue": "1", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter D. Turney and Patrick Pantel. 2010. From fre- quency to meaning: vector space models of semantics. Journal of Artificial Intelligence Research, 37(1), Jan- uary.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "The latent relation mapping engine: Algorithm and experiments", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Peter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Turney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Journal of Artificial Intelligence Research", |
|
"volume": "33", |
|
"issue": "1", |
|
"pages": "615--655", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter D. Turney. 2008. The latent relation mapping en- gine: Algorithm and experiments. Journal of Artificial Intelligence Research, 33(1):615-655, December.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Natural language questions for the web of data", |
|
"authors": [ |
|
{ |
|
"first": "Mohamed", |
|
"middle": [], |
|
"last": "Yahya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Klaus", |
|
"middle": [], |
|
"last": "Berberich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shady", |
|
"middle": [], |
|
"last": "Elbassuoni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maya", |
|
"middle": [], |
|
"last": "Ramanath", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gerhard", |
|
"middle": [], |
|
"last": "Volker Tresp", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Weikum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mohamed Yahya, Klaus Berberich, Shady Elbassuoni, Maya Ramanath, Volker Tresp, and Gerhard Weikum. 2012. Natural language questions for the web of data. In Proceedings of the 2012 Joint Conference on Em- pirical Methods in Natural Language Processing and Computational Natural Language Learning.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Unsupervised relation discovery with sense disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Limin", |
|
"middle": [], |
|
"last": "Yao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Riedel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Limin Yao, Sebastian Riedel, and Andrew McCallum. 2012. Unsupervised relation discovery with sense dis- ambiguation. In Proceedings of the 50th Annual Meet- ing of the Association for Computational Linguistics: Long Papers -Volume 1.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Universal schema for entity type prediction", |
|
"authors": [ |
|
{ |
|
"first": "Limin", |
|
"middle": [], |
|
"last": "Yao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Riedel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Workshop on Automated Knowledge Base Construction", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Limin Yao, Sebastian Riedel, and Andrew McCallum. 2013. Universal schema for entity type prediction. In Proceedings of the 2013 Workshop on Automated Knowledge Base Construction.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Unsupervised resolution of objects and relations on the web", |
|
"authors": [ |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Yates", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Etzioni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 2007 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexander Yates and Oren Etzioni. 2007. Unsupervised resolution of objects and relations on the web. In Pro- ceedings of the 2007 Annual Conference of the North American Chapter of the Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "Learning to parse database queries using inductive logic programming", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "John", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Zelle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mooney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of the thirteenth national conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John M. Zelle and Raymond J. Mooney. 1996. Learning to parse database queries using inductive logic pro- gramming. In Proceedings of the thirteenth national conference on Artificial Intelligence.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Learning to map sentences to logical form: structured classification with probabilistic categorial grammars", |
|
"authors": [ |
|
{ |
|
"first": "Luke", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "UAI '05, Proceedings of the 21st Conference in Uncertainty in Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Luke S. Zettlemoyer and Michael Collins. 2005. Learn- ing to map sentences to logical form: structured clas- sification with probabilistic categorial grammars. In UAI '05, Proceedings of the 21st Conference in Un- certainty in Artificial Intelligence.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"text": "Averaged 11-point precision/recall curves for the 116 answerable test questions.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"content": "<table><tr><td colspan=\"3\">Input Text Republican front-runner from Texas</td><td>\u2192</td><td/><td/><td/><td/><td/><td/><td colspan=\"2\">Entity /EN/GEORGE BUSH 0.57 Prob. /EN/RICK PERRY 0.45 ...</td></tr><tr><td/><td/><td/><td/><td/><td>I N</td><td/><td/><td/><td/><td/><td/></tr><tr><td/><td/><td>S T A T E . . .</td><td/><td/><td>F R O M L I V E S N N . . .</td><td/><td/><td/><td/><td>S T A T E . . .</td><td/></tr><tr><td>TEXAS REPUB. G. BUSH ...</td><td>\u03c6</td><td>\u03b8</td><td>(REPUB., G. BUSH) ... (G. BUSH, TEXAS) (G. BUSH, REPUB.)</td><td>\u03c6</td><td>\u03b8</td><td>\u2192</td><td>... G. BUSH TEXAS REPUB.</td><td>.9 .1 .1</td><td>.9 .1 .1</td><td>.1 .8 .1</td><td>(G. BUSH, TEXAS) (G. BUSH, REPUB.)</td></tr><tr><td colspan=\"4\">Entity/Predicate Embeddings</td><td/><td/><td/><td colspan=\"5\">Probabilistic Database</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Logical Form\u03bbx.\u2203y, z.FRONT-RUNNER(x)\u2227y = /EN/REPUBLICAN\u2227NN(y, x)\u2227z = /EN/TEXAS \u2227 FROM(x, z) \u2192 . -R U N . P O L . . -R U N . P O L .", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Dan Hesse, CEO of Sprint \u03bbx.\u2203y.x = /EN/DAN HESSE \u2227 CEO(x) \u2227 OF(x, y) \u2227 y = /EN/SPRINT Yankees pitcher \u03bbx.\u2203y.PITCHER(x) \u2227 NN(y, x) \u2227 y = /EN/YANKEES Tom Cruise plays Maverick in the movie Top Gun.", |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Statistics of the test data set.", |
|
"num": null |
|
}, |
|
"TABREF5": { |
|
"content": "<table><tr><td>: Mean average precision for our question</td></tr><tr><td>answering task. The difference in MAP between</td></tr><tr><td>each pair of adjacent models is statistically signifi-</td></tr><tr><td>cant (p < .05) via the sign test.</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "", |
|
"num": null |
|
}, |
|
"TABREF7": { |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Statistics of the Freebase MQL queries annotated for the test data set.", |
|
"num": null |
|
}, |
|
"TABREF9": { |
|
"content": "<table><tr><td>Table 3. Coverage is reasonably high: we were able</td></tr><tr><td>to annotate a Freebase query for 142 questions (65%</td></tr><tr><td>of the test set). The remaining unannotatable ques-</td></tr><tr><td>tions are due to missing predicates in Freebase, such</td></tr><tr><td>as a relation defining the emperor of the Incan em-</td></tr><tr><td>pire. Of the 142 annotated Freebase queries, 95 of</td></tr><tr><td>them return at least one entity answer. The queries</td></tr><tr><td>with no answers typically reference uncommon en-</td></tr><tr><td>tities which have few or no known relation instances</td></tr><tr><td>in Freebase. The annotated queries contain an aver-</td></tr><tr><td>age of 2.62 Freebase predicates.</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Mean average precision of our best performing model compared to a manually annotated Freebase query for each test question.", |
|
"num": null |
|
}, |
|
"TABREF10": { |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Question-by-question comparison of model performance. Each test question is placed into one of the three buckets above, depending on whether Freebase or ENSEMBLE (O Q ) achieves a better average precision (AP) for the question.", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |