{
"paper_id": "R15-1048",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:57:34.536376Z"
},
"title": "A Supervised Semantic Parsing with Lexical Extension and Syntactic Constraint",
"authors": [
{
"first": "Zhihua",
"middle": [],
"last": "Liao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Foreign Studies College Hunan Normal University",
"location": {
"settlement": "Changsha",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Qixian",
"middle": [],
"last": "Zeng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Hunan Normal University",
"location": {
"settlement": "Changsha",
"country": "China"
}
},
"email": ""
},
{
"first": "Qiyun",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Hunan Normal University",
"location": {
"settlement": "Changsha",
"country": "China"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Existing semantic parsing research has steadily improved accuracy on a few domains and their corresponding meaning representations. In this paper, we present a novel supervised semantic parsing algorithm, which includes the lexicon extension and the syntactic supervision. This algorithm adopts a large-scale knowledge base from the open-domain Freebase to construct efficient, rich Combinatory Categorial Grammar (CCG) lexicon in order to supplement the inadequacy of its manually-annotated training dataset in the small closed-domain while allows for the syntactic supervision from the dependency-parsed sentences to penalize the ungrammatical semantic parses. Evaluations on both benchmark closed-domain datasets demonstrate that this approach learns highly accurate parser, whose parsing performance benefits greatly from the open-domain CCG lexicon and syntactic constraint.",
"pdf_parse": {
"paper_id": "R15-1048",
"_pdf_hash": "",
"abstract": [
{
"text": "Existing semantic parsing research has steadily improved accuracy on a few domains and their corresponding meaning representations. In this paper, we present a novel supervised semantic parsing algorithm, which includes the lexicon extension and the syntactic supervision. This algorithm adopts a large-scale knowledge base from the open-domain Freebase to construct efficient, rich Combinatory Categorial Grammar (CCG) lexicon in order to supplement the inadequacy of its manually-annotated training dataset in the small closed-domain while allows for the syntactic supervision from the dependency-parsed sentences to penalize the ungrammatical semantic parses. Evaluations on both benchmark closed-domain datasets demonstrate that this approach learns highly accurate parser, whose parsing performance benefits greatly from the open-domain CCG lexicon and syntactic constraint.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Semantic parsers convert natural language sentences to logical forms through a meaning representation language. Recent research has focused on learning such parsers directly from corpora made up of sentences paired with logical meaning representations (Artzi and Zettlemoyer, 2011; Lu et al., 2008; Lu and Tou, 2011; Liao and Zhang, 2013; Kwiatkowski et al., 2010; Kwiatkowski et al., 2011; Kwiatkowski et al., 2012; Zettlemoyer and Collins, 2005; Zettlemoyer and Collins, 2007; Zettlemoyer and Collins, 2009; Zettlemoyer and Collins, 2012) . And its goal is to learn a grammar that can map new, unseen sentences onto their corresponding meanings, or logical expressions.",
"cite_spans": [
{
"start": 252,
"end": 281,
"text": "(Artzi and Zettlemoyer, 2011;",
"ref_id": "BIBREF31"
},
{
"start": 282,
"end": 298,
"text": "Lu et al., 2008;",
"ref_id": "BIBREF29"
},
{
"start": 299,
"end": 316,
"text": "Lu and Tou, 2011;",
"ref_id": "BIBREF30"
},
{
"start": 317,
"end": 338,
"text": "Liao and Zhang, 2013;",
"ref_id": "BIBREF35"
},
{
"start": 339,
"end": 364,
"text": "Kwiatkowski et al., 2010;",
"ref_id": "BIBREF26"
},
{
"start": 365,
"end": 390,
"text": "Kwiatkowski et al., 2011;",
"ref_id": "BIBREF27"
},
{
"start": 391,
"end": 416,
"text": "Kwiatkowski et al., 2012;",
"ref_id": "BIBREF28"
},
{
"start": 417,
"end": 447,
"text": "Zettlemoyer and Collins, 2005;",
"ref_id": "BIBREF12"
},
{
"start": 448,
"end": 478,
"text": "Zettlemoyer and Collins, 2007;",
"ref_id": "BIBREF13"
},
{
"start": 479,
"end": 509,
"text": "Zettlemoyer and Collins, 2009;",
"ref_id": "BIBREF14"
},
{
"start": 510,
"end": 540,
"text": "Zettlemoyer and Collins, 2012)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For decades there have been many algorithms that learn probabilistic CCG grammars. These grammars are well suited to the semantic parsing because of the close linking with syntactic and semantic information. Thus, they are used to model a wide range of complex linguistic phenomena and are strongly lexicalized, which store all languagespecific grammatical information directly with the words and the CCG lexicon. This CCG lexicon is useful for learning parser. However, it often suffers from the sparsity and the diversity in the training and testing datasets. Consequently, we hold that a large-scale knowledge base should play a key role in the semantic parsing. That is, it might be quite favorable in training such parser and resolving these syntactic ambiguities. Using the knowledge base which contains rich semantic information from the open-domain such as Freebase, can improve efficiently the parser's ability to solve complex syntactic parsing problem and benefit the accuracy. Besides, many previous approaches do not involve the syntactic constraint to penalize the ungrammatical parses when semantic parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper presents a supervised approach to learn semantic parsing task using a large-scale open-domain knowledge base and syntactic constraint. The semantic parser is trained to learn parsing via a large-scale open-domain CCG lexicon while simultaneously producing parses that syntactically agree with their dependency parses. Combining these two elements allows us to train a more accurate semantic parser. In particular, it also contains a factored CCG lexicon from the closed-domain GeoQuery and ATIS. Therefore, our approach not only includes two traditional CCG lexicons from the closed-domain GeoQuery and ATIS, and from the open-domain Freebase, but also includes the factored lexicon from the closed-domain GeoQuery and ATIS. This joint of such different lexicons does well in dealing with the sparsity and the diversity of the dataset where some words or phrases have been never appeared during the training and testing procedures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper is structured as follows. We first provide some background information about Freebase dataset, Combinatory Categorial Grammar, probabilistic CCG (PCCG) and syntactic constraint function in Section 2. Section 3 describes how we use FUBL algorithm to construct a semantic parser FUBLLESC, and Section 4 presents our experiments and reports the results. Section 5 describes the related work. Finally, we make the conclusion and give the future work in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Freebase is a free, online, user-contributed, relational database covering many different domains of knowledge (Cai and Yates, 2013; Cai and Yates, 2014; Reddy et al., 2014) . The full schema and contents are available for download 1 . One main motivation we adopt Freebase is that it provides a much rich knowledge base to build a large-scale CCG lexicon for semantic parsing than traditional benchmark database like GeoQuery. The Geo-Query database contains only a single geography domain, 7 relations, and 698 total instances. However, the \"Freebase Commons\" subset of Freebase consists of 86 domains, an average of 25 relations per domain (total of 2134 relations), and 615000 known instances per domain (53 million instances total). The total dataset can be divided into 11 different subsets in terms of the domain types.",
"cite_spans": [
{
"start": 111,
"end": 132,
"text": "(Cai and Yates, 2013;",
"ref_id": "BIBREF19"
},
{
"start": 133,
"end": 153,
"text": "Cai and Yates, 2014;",
"ref_id": "BIBREF20"
},
{
"start": 154,
"end": 173,
"text": "Reddy et al., 2014)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Freebase Dataset",
"sec_num": "2"
},
{
"text": "CCG is a linguistic formalism that tightly couples syntax and semantic (Steedman, 1996; Steedman, 2000) . It can be used to model a wide range of language phenomena. A traditional CCG grammar includes a lexicon \u039b with entries like the following:",
"cite_spans": [
{
"start": 71,
"end": 87,
"text": "(Steedman, 1996;",
"ref_id": "BIBREF24"
},
{
"start": 88,
"end": 103,
"text": "Steedman, 2000)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Combinatory Categorial Grammar",
"sec_num": "2.2"
},
{
"text": "f lights N : \u03bbx.f light(x) to (N \\N )/N P : \u03bby.\u03bbf.\u03bbx.f (x) \u2227 to(x, y) Boston N P : bos",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combinatory Categorial Grammar",
"sec_num": "2.2"
},
{
"text": "where each lexical item w X : h has words w, a syntactic category X, and a logical form h. For the first example, these are flights, N , and \u03bbx.f light(x). Furthermore, we also introduce the factored lexicon as (lexeme,template) pairs, as described in Subsection 3.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combinatory Categorial Grammar",
"sec_num": "2.2"
},
{
"text": "CCG syntactic categories may be atomic (such as S or N P ) or complex (such as (N \\N )/N P ) where the slash combinators encode word order information. CCG uses a small set of combinatory rules to build syntactic parses and semantic representations concurrently. It includes forward (>) and backward (<) application rules, and forward (>B) and backward (<B) composition rules as well as coordination rule. Except for the standard forward and backward slashes of CCG we also include a vertical slash for which the direction of application is underspecified.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combinatory Categorial Grammar",
"sec_num": "2.2"
},
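The application combinators above can be sketched concretely. The following is an illustrative toy encoding (nested tuples for categories, Python functions for logical forms), not the paper's implementation:

```python
# Illustrative toy encoding of CCG application (not the paper's code):
# complex categories such as (N\N)/NP are nested tuples, and logical forms
# are Python functions that build lambda-calculus strings.

def forward_apply(fn_cat, fn_sem, arg_cat, arg_sem):
    """Forward application (>): X/Y : f  with  Y : a  gives  X : f(a)."""
    result, slash, expected = fn_cat
    if slash == '/' and expected == arg_cat:
        return result, fn_sem(arg_sem)
    return None

def backward_apply(arg_cat, arg_sem, fn_cat, fn_sem):
    """Backward application (<): Y : a  with  X\\Y : f  gives  X : f(a)."""
    result, slash, expected = fn_cat
    if slash == '\\' and expected == arg_cat:
        return result, fn_sem(arg_sem)
    return None

# "to" (N\N)/NP : λy.λf.λx.f(x) ∧ to(x, y) applied forward to "Boston" NP : bos
to_cat = (('N', '\\', 'N'), '/', 'NP')
to_sem = lambda y: lambda f: lambda x: f"{f(x)} ∧ to({x}, {y})"
cat1, sem1 = forward_apply(to_cat, to_sem, 'NP', 'bos')
# then "flights" N : λx.flight(x) combines backward with the result N\N
flights_sem = lambda x: f"flight({x})"
cat2, sem2 = backward_apply('N', flights_sem, cat1, sem1)
```

Applying the two rules in sequence derives the category N with meaning λx.flight(x) ∧ to(x, bos) for "flights to Boston", mirroring the lexicon entries in Section 2.2.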
{
"text": "Due to the ambiguity in both the CCG lexicon and the order in which combinators are applied, there will be many parses for each sentence. We discriminate between competing parses using a loglinear model which has a syntactic constraint function \u03a6 that will be described in the next Subsection 2.4, a feature vector \u03c6, and a parameter vector \u03b8. The probability of a parse y that returns logical form z i , i = 1 . . . n, given a sentence x i , i = 1 . . . n and a weak supervision variable \u00b5 is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic CCG",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (y, zi, \u00b5|xi; \u03b8, \u039b) = \u03a6(xi, y, \u00b5)e \u03b8\u2022\u03c6(x i ,y,z i ,\u00b5) \u03a3 y ,z ,\u00b5 \u03a6(xi, y , \u00b5 )e \u03b8\u2022\u03c6(x i ,y ,z ,\u00b5 )",
"eq_num": "(1)"
}
],
"section": "Probabilistic CCG",
"sec_num": "2.3"
},
{
"text": "Subsection 4.3 fully defines the set of features used in the system presented. The most important of these control the generation of lexical items from (lexeme,template)pairs. Each (lexeme, template) pair used in a parse fires three lexical features as we will see in more details in Subsection 4.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic CCG",
"sec_num": "2.3"
},
{
"text": "The parsing or inference problem done at the testing step requires us to find the most likely logical form z given a sentence x i and a weak supervision variable \u00b5 to encourage the agreement between the semantic parses and syntactic-based dependency ones, assuming that the parameters \u03b8 and lexicon \u039b are known:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic CCG",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f (xi) = arg max z p(z|xi; \u03b8, \u039b)",
"eq_num": "(2)"
}
],
"section": "Probabilistic CCG",
"sec_num": "2.3"
},
{
"text": "where the probability of the logical form is found by summing over all parses that produce it:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic CCG",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(z|xi; \u03b8, \u039b) = \u03a3y\u2208Y st.\u00b5=1p(y, z, \u00b5|xi; \u03b8, \u039b)",
"eq_num": "(3)"
}
],
"section": "Probabilistic CCG",
"sec_num": "2.3"
},
{
"text": "In this approach the distribution over parse trees y is modeled as a hidden variable. Thereby, the parse tree y must agree with a dependency parse of the same sentence x i . That is, it must guarantee the weak supervision variable \u00b5 value to be 1. For each sentence x i , we perform a beam search to produce all possible semantic parse y, then check the value of the syntactic constraint function \u03a6 for each generated parse and eliminate parses which are not consistent with their dependency parses. The sum over parses can be calculated efficiently using the inside-outside algorithm with a CKYstyle parsing algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic CCG",
"sec_num": "2.3"
},
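The interaction of Eqs. (1)-(3) can be sketched on toy data: the constraint Φ acts as a hard filter on candidate parses, and p(z|x) sums the normalized weights of the surviving parses that yield z. All parses and feature names below are invented for illustration:

```python
import math

# Hedged sketch of the constrained log-linear model (Eqs. 1-3). Each candidate
# parse carries a feature dict, its logical form z, and an agreement flag µ;
# Φ(x, y, µ) = 1 only for parses agreeing with the dependency parse, so
# disagreeing parses contribute nothing to the distribution.

def parse_distribution(parses, theta):
    """parses: list of (features, logical_form, mu); returns p(z | x)."""
    def weight(feats):
        return math.exp(sum(theta.get(f, 0.0) * v for f, v in feats.items()))
    survivors = [(z, weight(f)) for f, z, mu in parses if mu == 1]  # Φ filter
    total = sum(w for _, w in survivors)
    probs = {}
    for z, w in survivors:  # Eq. 3: sum over all parses yielding the same z
        probs[z] = probs.get(z, 0.0) + w / total
    return probs

theta = {"lex:flight": 1.0, "lex:to": 0.5}
parses = [
    ({"lex:flight": 1.0, "lex:to": 1.0}, "λx.flight(x)∧to(x,bos)", 1),
    ({"lex:flight": 1.0}, "λx.flight(x)", 1),
    ({"lex:to": 1.0}, "λx.to(x,bos)", 0),  # violates the syntactic constraint
]
dist = parse_distribution(parses, theta)
best = max(dist, key=dist.get)  # Eq. 2: argmax_z p(z | x)
```

The parse with µ = 0 is excluded from both the numerator and the partition sum, exactly as the beam-search filtering step above describes.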
{
"text": "To estimate the parameters themselves, we use stochastic gradient updates. Given a set of n sentence-meaning pairs (x i , z i ) : i = 1 . . . n, we update the parameters \u03b8 iteratively, for each example i, by following the local gradient of the conditional log-likelihood objective O i = log P (z i |x i ; \u03b8, \u039b). The local gradient of the individual parameter \u03b8 j associated with feature \u03c6 j and training instance (x i , z i ) is given by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic CCG",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2202Oi \u2202\u03b8j = E p(y,\u00b5|x i ,z i ;\u03b8,\u039b) [\u03c6j(xi, y, zi, \u00b5)] \u2212 E p(y,z,\u00b5|x i ;\u03b8,\u039b) [\u03c6j(xi, y, z, \u00b5)]",
"eq_num": "(4)"
}
],
"section": "Probabilistic CCG",
"sec_num": "2.3"
},
{
"text": "All of the expectations in above equation are calculated through the use of the inside-outside algorithm on a pruned parse chart. For a sentence of length m, each parse chart span is pruned using a beam width proportional to m 2 3 , to allow larger beams for shorter sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic CCG",
"sec_num": "2.3"
},
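The gradient step in Eq. (4) can be illustrated on toy data: the update moves θ toward the feature expectation over parses of the gold logical form and away from the expectation over all parses. The parse lists and feature names below are invented for the sketch, and the expectations are computed by enumeration rather than by the inside-outside algorithm:

```python
import math

# Illustrative stochastic gradient step for Eq. (4): for example (x_i, z_i),
# the gradient is E[φ | gold z_i] − E[φ | all parses].

def expected_features(parses, theta):
    scores = [math.exp(sum(theta.get(f, 0.0) * v for f, v in feats.items()))
              for feats, _ in parses]
    total = sum(scores)
    exp_f = {}
    for (feats, _), s in zip(parses, scores):
        for f, v in feats.items():
            exp_f[f] = exp_f.get(f, 0.0) + v * s / total
    return exp_f

def sgd_step(theta, all_parses, gold_z, lr=0.1):
    """all_parses: list of (feature dict, logical form) surviving Φ."""
    gold = [(f, z) for f, z in all_parses if z == gold_z]
    e_gold = expected_features(gold, theta)       # first term of Eq. (4)
    e_all = expected_features(all_parses, theta)  # second term of Eq. (4)
    for f in set(e_gold) | set(e_all):
        theta[f] = theta.get(f, 0.0) + lr * (e_gold.get(f, 0.0) - e_all.get(f, 0.0))
    return theta

theta = sgd_step({}, [({"a": 1.0}, "z1"), ({"b": 1.0}, "z2")], gold_z="z1")
```

After one step, features firing only on the gold parse gain weight while features firing only on competing parses lose weight.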
{
"text": "A main problem within the above semantic parsing is that it admits a large number of ungrammatical parses. This may result in the waste of time for searching the parse space. Our motivation using the syntactic constraint is that it can shrink the space of searching parse tree and reduce the time of finding the correct parse. Thus, it will enhance the efficiency of semantic parsing. The syntactic constraint function penalizes ungrammatical parses by encouraging the semantic parser to produce parse trees that agree with a dependency parse of the same sentence (Krishnamurthy and Mitchell, 2012; Krishnamurthy and Mitchell, 2013; Krishnamurthy and Mitchell, 2015) . Specifically, the syntactic constraint requires the predicate-argument structure of the CCG parse to agree with the predicate-argument structure of the dependency parse.",
"cite_spans": [
{
"start": 599,
"end": 632,
"text": "Krishnamurthy and Mitchell, 2013;",
"ref_id": "BIBREF5"
},
{
"start": 633,
"end": 666,
"text": "Krishnamurthy and Mitchell, 2015)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Constraint Function \u03a6",
"sec_num": "2.4"
},
{
"text": "Therefore, the agreement can be defined as a function of each CCG rule application in y. In the parse tree y, each rule application combines two subtrees, y h and y c , into a single tree spanning a larger portion of the sentence x i . A rule application AGREE(y,t) is consistent with a dependency parse t if the head words of y h and y c have a dependency edge between them in t. Here, the weak supervision variable \u00b5 is defined as AGREE(y,t). Therefore, the syntactic Constraint function \u03a6(\u00b5, y, x i ) is true if and only if every rule application AGREE(y,t) in y is consistent with t.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Constraint Function \u03a6",
"sec_num": "2.4"
},
{
"text": "\u03a6(\u00b5, y, xi) = 1 if \u00b5 = AGREE(y, DEPPARSE(xi)) 0 otherwise (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Constraint Function \u03a6",
"sec_num": "2.4"
},
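The agreement check of Eq. (5) reduces to a simple edge test, which can be sketched as follows. The sentence, head words and dependency edges below are invented for the example:

```python
# Hedged sketch of the agreement check: a rule application that combines
# subtrees with head words h and c agrees with dependency parse t iff t
# links those head words by an edge (checked in either direction here).

def agree(applications, dep_edges):
    """applications: [(head_of_y_h, head_of_y_c), ...] for every rule use."""
    edges = dep_edges | {(d, h) for h, d in dep_edges}
    return all((h, c) in edges for h, c in applications)

def phi(mu, applications, dep_edges):
    # Eq. (5): Φ = 1 iff µ equals the agreement of the parse with t.
    return 1 if mu == agree(applications, dep_edges) else 0

dep = {("flights", "to"), ("to", "Boston")}   # DEPPARSE(x_i), invented
good = [("to", "Boston"), ("flights", "to")]  # every application agrees
bad = [("flights", "Boston")]                 # no such dependency edge
```

A parse containing even one application whose head words are not linked in the dependency parse receives Φ = 0 and is eliminated from the search.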
{
"text": "3 Learning Factored PCCGs with Lexicon Extension and Syntactic Constraint",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Constraint Function \u03a6",
"sec_num": "2.4"
},
{
"text": "Our factored unification based learning method with lexicon extension and syntactic constraint (FUBLLESC) extends the factored unification based learning (FUBL) algorithm (Kwiatkowski et al., 2011) to induce an open-domain lexicon, while also simultaneously adding dependencybased syntactic constraint to permit semantic parsing. In this section, we first define knowledge base K -Freebase and construct the open-domain CCG lexicon \u039b O , then provide the factored lexicon \u039b F from the closed-domain GeoGuery and ATIS, and finally present our FUBLLESC algorithm.",
"cite_spans": [
{
"start": 171,
"end": 197,
"text": "(Kwiatkowski et al., 2011)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Constraint Function \u03a6",
"sec_num": "2.4"
},
{
"text": "The main input in our system is a propositional knowledge base K = (E, , C, \u2206) (Hoffmann et al., 2011) . It contains entities E, categories C, relations , and relation instances \u2206. The categories and relations are predicates which operate on the entities and return truth values; the categories c \u2208 C are one-place predicates and the relations r \u2208 are two-place predicates. The entity e \u2208 E represents a real-world entity and has a set of known text names. Examples of such knowledge base come from the open-domain Freebase. This knowledge base influences the semantic parser by two ways. Firstly, CCG logical forms are constructed by combining the categories, relations and entities from the knowledge base with logical connectives; hence, the predicates in the knowledge base determine the expressivity of the parser's semantic representation. Secondly, the known relation instances r(e 1 , e 2 ) \u2208 \u2206 are used to train the semantic parser.",
"cite_spans": [
{
"start": 79,
"end": 102,
"text": "(Hoffmann et al., 2011)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Base K -Freebase",
"sec_num": "3.1"
},
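The structure of K can be sketched as a small data type. The entity, category and relation names below are invented placeholders, not Freebase's actual schema:

```python
from dataclasses import dataclass, field

# Minimal data-structure sketch of K = (E, R, C, Δ): one-place category
# predicates, two-place relation predicates, and known relation instances.

@dataclass
class KnowledgeBase:
    entities: set                                # E: real-world entities
    relations: set                               # R: two-place predicates
    categories: set                              # C: one-place predicates
    instances: set = field(default_factory=set)  # Δ: known facts r(e1, e2)

    def holds(self, r, e1, e2):
        """True iff the relation instance r(e1, e2) is in Δ."""
        return (r, e1, e2) in self.instances

kb = KnowledgeBase(
    entities={"jfk", "bos"},
    relations={"to"},
    categories={"airport", "city"},
    instances={("to", "jfk", "bos")},
)
```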
{
"text": "-domain CCG Lexicon \u039b O",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Construct the Open",
"sec_num": "3.2"
},
{
"text": "The first step in constructing the semantic parser is to define a open-domain CCG lexicon \u039b O . We construct \u039b O by applying simple dependencyparse-based heuristics to sentences in the training corpus (i.e., NYT-Freebase 2 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Construct the Open",
"sec_num": "3.2"
},
{
"text": "Here we adopt MALTPARSER (Nivre et al., 2006) as the dependency-parser. The resulting lexicon \u039b 0 captures a variety of linguistic phenomena, including verbs, common nouns, noun compounds and prepositional modifiers. Next, we use the mention identification procedure to identify all mentions of entities in the sentence set x i , i = 1 . . . n.",
"cite_spans": [
{
"start": 25,
"end": 45,
"text": "(Nivre et al., 2006)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Construct the Open",
"sec_num": "3.2"
},
{
"text": "Here we adopt sentential relation extractor MUL-TIR (Hoffmann et al., 2011) , which is a stateof-the-art weakly supervised relation extractor for multi-instance learning with overlapping relation that combines a sentence-level extraction model with a simple, corpus-level component for aggregating the individual facts. This process results in (e 1 , e 2 , x i ) triple, consisting of sentences with two entity mentions. The dependency path between e 1 and e 2 in x i is then matched against the dependency parse patterns in Table 1 . Each matched pattern adds one or more lexical entries to \u039b O .",
"cite_spans": [
{
"start": 52,
"end": 75,
"text": "(Hoffmann et al., 2011)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 525,
"end": 532,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Construct the Open",
"sec_num": "3.2"
},
{
"text": "Each pattern in Table 1 has a corresponding lexical category template, which is a CCG lexical category containing parameters e, c and r that are chosen at initialization time. Given the triple (e 1 , e 2 , x i ), the relations r are chosen such that r(e 1 , e 2 ) \u2208 \u2206, and the categories c are chosen such that c(e 1 ) \u2208 \u2206 or c(e 2 ) \u2208 \u2206. The template is then instantiated with every combination of these e, c and r values.",
"cite_spans": [],
"ref_spans": [
{
"start": 16,
"end": 23,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Construct the Open",
"sec_num": "3.2"
},
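This instantiation step can be sketched as a pair of filtered loops over the knowledge base. The template, entities and facts below are invented for illustration:

```python
# Hedged sketch of instantiating a lexical-category template for a matched
# (e1, e2, sentence) triple: r ranges over relations with r(e1, e2) ∈ Δ and
# c over categories that hold of e1 or e2.

def instantiate(template, e1, e2, relations, rel_instances, cat_instances):
    """template: callable (e, c, r) -> lexical entry string (illustrative)."""
    rs = [r for r in relations if (r, e1, e2) in rel_instances]
    cs = [c for (c, e) in cat_instances if e in (e1, e2)]
    return [template(e1, c, r) for r in rs for c in cs]

# A prepositional-modifier style template, parameterized by the relation r.
template = lambda e, c, r: f"(N\\N)/NP : λy.λf.λx.f(x) ∧ {r}(x, y)"
entries = instantiate(template, "jfk", "bos", {"to", "from"},
                      {("to", "jfk", "bos")},
                      {("airport", "jfk"), ("city", "bos")})
```

Only relations confirmed by known instances survive the filter, so the toy "from" relation produces no entries while "to" is instantiated once per matching category.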
{
"text": "A factored lexicon includes a set L of lexemes and a set T of lexical templates (Kwiatkowski et al., 2011) . ",
"cite_spans": [
{
"start": 80,
"end": 106,
"text": "(Kwiatkowski et al., 2011)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Factored Lexicon \u039b F",
"sec_num": "3.3"
},
{
"text": "\u03bb(\u03c9, v).[\u03c9 X : h v ],",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Factored Lexicon \u039b F",
"sec_num": "3.3"
},
{
"text": "where h v is a logical expression that contains variables from the list v. Applying this template to input lexeme (w, c) gives the full lexical item w X : h where the variable \u03c9 has been replaced with the wordspan w and the logical form h has been created by replacing each of the variables in v with the counterpart constants from c. Then the lexical items are constructed from the specific lexemes and templates. Figure 1 shows the FUBLLESC learning algorithm. We assume training data {(x i , z i ) : i = 1 . . . n} where each example is a sentence x i paired with a logical form z i . The algorithm induces a factored PCCG with lexicon extension and syntactic constraint, including traditional CCG lexicon \u039b T from the closed-domain GeoQuery and ATIS, the CCG lexicon \u039b O from the opendomain Freebase, the lexeme L, templates T , the factored lexicon \u039b F from the closed-domain Geo-Query and ATIS, and parameter \u03b8.",
"cite_spans": [],
"ref_spans": [
{
"start": 415,
"end": 423,
"text": "Figure 1",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Factored Lexicon \u039b F",
"sec_num": "3.3"
},
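The template-application step can be sketched as simple substitution. The placeholder names v0, v1, ... are our own convention for the sketch, not the paper's notation:

```python
# Hedged sketch of the factored lexicon: a lexeme pairs a wordspan with an
# ordered list of constants, and a template maps (ω, v) to a full lexical
# item by substituting those constants for placeholder variables in h_v.

def apply_template(template, lexeme):
    wordspan, constants = lexeme
    syntax, h_v = template
    h = h_v
    for i, const in enumerate(constants):  # replace each variable in v
        h = h.replace(f"v{i}", const)
    return wordspan, syntax, h             # the full lexical item w X : h

lexeme = ("flights to", ("flight", "to"))
template = ("N/NP", "λy.λx.v0(x) ∧ v1(x, y)")
item = apply_template(template, lexeme)
```

Factoring in the opposite direction (MAX-FAC) recovers the unique maximal (lexeme, template) pair from a lexical item, as used in the algorithm below.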
{
"text": "This algorithm is online, repeatedly performing both lexical expansion (Step 1) and parameter update ( Step 2) procedures for each training example. The overall approach is closely related to the FUBL algorithm (Kwiatkowski et al., 2011) , but includes a large-scale CCG lexicon from the opendomain Freebase knowledge base and the syntactic constraint function from the dependency parser. Definitions: NEW-LEX(y) returns a set of new lexical items from a parse y. MAX-FAC(l) generates a (lexeme, template) pair from a lexical item l \u2208 lF \u222a lT \u222a lO. PART-FAC(y) generates a set of templates from parse y. The distributions p(y, \u00b5|x, z; \u03b8, \u039bF ) and p(y, z, \u00b5|x; \u03b8, \u039bF ) are defined by the log-linear model. e 2 ), and c represents a category where c(e 1 ). Each template may be instantiated with multiple values for the variables e, c, r.",
"cite_spans": [
{
"start": 101,
"end": 102,
"text": "(",
"ref_id": null
},
{
"start": 211,
"end": 237,
"text": "(Kwiatkowski et al., 2011)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 705,
"end": 706,
"text": "e",
"ref_id": null
}
],
"eq_spans": [],
"section": "The FUBLLESC Algorithm",
"sec_num": "3.4"
},
{
"text": "\u2022 For i = 1 \u2022 \u2022 \u2022 n. * (\u03a8, \u03c0) = MAX-FAC (xi S : zi) * L = L \u222a \u03a8, T = T \u222a \u03c0 \u2022 set L = L \u222a Le.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization: Let",
"sec_num": null
},
{
"text": "\u2022 set \u039bF = (L, T ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization: Let",
"sec_num": null
},
{
"text": "\u2022 set \u039bF = \u039bF \u222a \u039bT \u222a \u039bO.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization: Let",
"sec_num": null
},
{
"text": "\u2022 Initialize \u03b8 using coocurrence statistics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization: Let",
"sec_num": null
},
{
"text": "Algorithm:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization: Let",
"sec_num": null
},
{
"text": "For t = 1 \u2022 \u2022 \u2022 J, i = 1 \u2022 \u2022 \u2022 n:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization: Let",
"sec_num": null
},
{
"text": "Step 1: Add Lexemes and Templates",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization: Let",
"sec_num": null
},
{
"text": "\u2022 Let y * = arg maxy,\u00b5 i p(y, \u00b5i|xi, zi; \u03b8, \u039bF ) \u2022 For l \u2208 NEW-LEX(y * ) * (\u03a8, \u03c0) = MAX-FAC(l) * L = L \u222a \u03a8, T = T \u222a \u03c0, \u039bF = \u039bF \u222a (\u03a8, \u03c0) \u2022 \u03a0 = PART-FAC (y * ), T = T \u222a \u03a0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization: Let",
"sec_num": null
},
{
"text": "Step 2: Update Parameters with Syntactic Constraint",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization: Let",
"sec_num": null
},
{
"text": "\u2022 Let \u03b3 = \u03b1 0 1+c\u00d7k where k = i + t \u00d7 n. \u2022 Let \u00b5i = AGREE(y, DEPPARSE(xi)). \u2022 Let \u2206 = E p(y,\u00b5 i |x i ,z i ;\u03b8,\u039b F ) [\u03c6(xi, y, zi, \u00b5i)] \u2212 E p(y,z,\u00b5 i |x i ;\u03b8,\u039b F ) [\u03c6(xi, y, z, \u00b5i)] \u2022 Set \u03b8 = \u03b8 + \u03b3\u2206",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization: Let",
"sec_num": null
},
{
"text": "Output: Lexeme L, template T , factored lexicon \u039bF , and parameters \u03b8. Initialization This model is initialized with two traditional CCG lexicons and a factored lexicon as follow. Firstly, a traditional CCG lexicon \u039b T is built from the closed-domain GeoQuery and ATIS whereas another CCG lexicon \u039b O is constructed from the open-domain Freebase. Secondly, we start to build the factored lexicon \u039b F from the closed-domain GeoQuery and ATIS. MAX-FAC is a function that takes a lexical item l and returns the maximal factoring of it, that is the unique, maximal (lexeme,template) pair that can be combined to construct l. We apply MAX-FAC to each of the training examples (x i , z i ), creating a single way of producing the desired meaning z from a lexeme containing all of the words in x i . The lexemes and templates created in this way provide the initial factored lexicon \u039b F . Finally, we combine the initial factored lexicon \u039b F with these two traditional CCG lexicons \u039b T and \u039b O to create a new larger factored lexicon \u039b F .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization: Let",
"sec_num": null
},
{
"text": "Step 1: The first step of the learning algorithm adds lexemes and templates to the factored model given by performing manipulations on the highest scoring correct parse y * of the current training example (x i , z i ). NEW-LEX function generates lexical items by splitting and merging nodes in the best parse tree of each training example. The splitting procedure is a three-step process that first splits the logical form h, then splits the CCG syntactic category X and finally splits the string w. The merging procedure is to recreate the original parse tree X : h spanning w by recombining two new lexical items with CCG combinators (application or composition). First, the NEW-LEX procedure is run on y * to generate new lexical items. We then use the function MAX-FAC to create the maximal factoring of each of these new lexical items and these are added to the factored representation of the lexicon \u039b F . New templates can also be introduced through partial factoring of in-ternal parse nodes. These templates are generated by using the function PART-FAC to abstract over the wordspan and a subset of the constants contained in the internal parse nodes of y * . This step allows for templates that introduce new semantic content to model elliptical language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization: Let",
"sec_num": null
},
{
"text": "Step 2: The second step does a stochastic gradient descent update on the parameter \u03b8 used in the parsing model. In particular, this update first computes the weak supervision variable \u00b5 i value for each parse tree y through the syntactic constraint function \u03a6 and then judges whether the punishment need to be done. More details about this update are described in Subsection 2.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization: Let",
"sec_num": null
},
{
"text": "This section describes our experimental setup and comparisons of the result. We follow the setup of Zettlemoyer and Collins (2007; and Kwiatkowski et al. (2010; , including datasets, features, evaluation metrics, and initialization as well as systems, as reviewed below. Finally, we report the experimental results.",
"cite_spans": [
{
"start": 100,
"end": 130,
"text": "Zettlemoyer and Collins (2007;",
"ref_id": "BIBREF13"
},
{
"start": 135,
"end": 160,
"text": "Kwiatkowski et al. (2010;",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "We evaluate on two benchmark closed-domain datasets. GeoQuery is made up of natural language queries to a database of geographical information, while ATIS contains natural language queries to a flight booking system (Zettlemoyer and Collins, 2007; Zettlemoyer and Collins, 2009; Zettlemoyer and Collins, 2012; Kwiatkowski et al., 2010; Kwiatkowski et al., 2011) . The Geo880 dataset has 880(English sentence, logical form) pairs split into a training set of 600 pairs and a test set of 280 ones. The Geo250 dataset is a subset of the Geo880, and is used 10-fold cross validation experiments with the same splits of this subset. The ATIS dataset contains 5410 (English sentence, logical form) pairs split into a 5000 example development set and a 450 example test set.",
"cite_spans": [
{
"start": 216,
"end": 247,
"text": "(Zettlemoyer and Collins, 2007;",
"ref_id": "BIBREF13"
},
{
"start": 248,
"end": 278,
"text": "Zettlemoyer and Collins, 2009;",
"ref_id": "BIBREF14"
},
{
"start": 279,
"end": 309,
"text": "Zettlemoyer and Collins, 2012;",
"ref_id": "BIBREF15"
},
{
"start": 310,
"end": 335,
"text": "Kwiatkowski et al., 2010;",
"ref_id": "BIBREF26"
},
{
"start": 336,
"end": 361,
"text": "Kwiatkowski et al., 2011)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
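The splits described above can be reproduced schematically. The pairs below are placeholders, since the actual (sentence, logical form) pairs come from the GeoQuery corpus.

```python
# Placeholder (sentence, logical form) pairs standing in for the real corpus.
geo880 = [("sentence %d" % i, "logical form %d" % i) for i in range(880)]

# Geo880: 600 training pairs, 280 test pairs.
train, test = geo880[:600], geo880[600:]
assert len(train) == 600 and len(test) == 280

# Geo250: a 250-pair subset evaluated by 10-fold cross-validation,
# i.e. ten folds of 25 pairs each, each fold held out once.
geo250 = geo880[:250]
folds = [geo250[i * 25:(i + 1) * 25] for i in range(10)]
cv_splits = [(geo250[:i * 25] + geo250[(i + 1) * 25:], folds[i]) for i in range(10)]
```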
{
"text": "We report exact math Recall, Precision and F1. Recall is the percentage of sentences for which the correct logical form was returned, Precision is the percentage of returned logical forms that are correct, and F1 is the harmonic mean of Precision and Recall. For ATIS we also report partial match Recall, Precision and F1. Partial match Recall is the percentage of correct literals returned. Partial match Precision is the percentage of returned literals that are correct.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.2"
},
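With invented counts (not the paper's results), the three exact-match metrics compose as follows:

```python
# Worked example of the metric definitions above, on invented counts.
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)

total_sentences = 280   # test sentences
returned = 260          # sentences for which a logical form was returned
correct = 240           # returned forms that exactly match the gold form

recall = correct / total_sentences   # fraction of all sentences answered correctly
precision = correct / returned       # fraction of returned forms that are correct
```

With these counts, Recall is 240/280 ≈ 0.857, Precision is 240/260 ≈ 0.923, and F1 = 2PR/(P+R) ≈ 0.889; the partial-match variants substitute literal counts for whole-sentence counts.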
{
"text": "We introduce two types of features to discriminate among parses: lexical features and logical-form features. First, for each lexical item L \u2208 \u039b T \u222a \u039b O from the closed-domain CCG lexicon \u039b T and the open-domain CCG lexicon \u039b O , we include a feature \u03c6 L that fires when L was used. Second, For each (lexeme, template) pair used to create another lexical item (l, t) \u2208 \u039b F about the factored lexicon \u039b F we have indicator features \u03c6 l for the lexeme used, \u03c6 t for the template used, and \u03c6 l,t for the pair that was used. Thereby, the lexical feature includes \u03c6 L and \u03c6 l,t . We assign the features on the lexical templates a weight of 0.1 to prevent them from swamping the far less frequent but equally informative lexeme features. For each logical-form feature, it is computed on the lambda-calculus expression z returned at the root of the parse. Each time a predicate p in the output logical expression z takes a argument a with type T (a) in position i, it triggers two binary indicator features: \u03c6 (p,a,i) for the predicate-argument relation and \u03c6 (p,T (a),i) for the predicate argumenttype relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.3"
},
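The feature scheme above can be sketched as a sparse feature map. The inputs and feature names here are hypothetical; only the structure — per-lexical-item indicators, lexeme/template/pair indicators with the 0.1 template weight, and the two predicate-argument indicators — follows the text.

```python
# Illustrative sketch of the lexical and logical-form feature templates.
# Keys are tuples naming the feature family; values are feature weights.
def extract_features(used_lexical_items, used_factored_pairs, predicate_args):
    feats = {}
    for L in used_lexical_items:                      # phi_L indicators
        key = ("lex", L)
        feats[key] = feats.get(key, 0.0) + 1.0
    for lexeme, template in used_factored_pairs:
        # Template features get weight 0.1 so they do not swamp the rarer
        # but equally informative lexeme features.
        for key, weight in ((("lexeme", lexeme), 1.0),
                            (("template", template), 0.1),
                            (("pair", lexeme, template), 1.0)):
            feats[key] = feats.get(key, 0.0) + weight
    for p, a, a_type, i in predicate_args:            # logical-form features
        feats[("pred-arg", p, a, i)] = 1.0            # predicate-argument
        feats[("pred-argtype", p, a_type, i)] = 1.0   # predicate argument-type
    return feats
```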
{
"text": "The weights for lexeme features are initialized according to coocurrance statistics between words and logical constants. They are estimated with the GIZA++ implementation of IBM Model 1 (Och and Ney, 2003; Och and Ney, 2004) . The weights of the seed lexical entries from the closed-domain CCG lexicon \u039b T and the open-domain CCG lexicon \u039b O are set to 10 that can be equivalent to the highest possible coocurrence score. The initial weights for templates are set by adding \u22120.1 for each slash in the syntactic category and \u22122 if the template contains logical constants. Features on (lexeme, template) pairs and all parse features are initialized to zero. We use the learning rate \u03b1 0 = 1.0 and cooling rate c = 10 \u22125 in all training, and run the algorithm for J = 20 iterations.",
"cite_spans": [
{
"start": 186,
"end": 205,
"text": "(Och and Ney, 2003;",
"ref_id": "BIBREF1"
},
{
"start": 206,
"end": 224,
"text": "Och and Ney, 2004)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization",
"sec_num": "4.4"
},
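A minimal sketch of this initialization, assuming a standard cooled schedule \u03b1_t = \u03b1_0 / (1 + c\u00b7t) (the paper gives only \u03b1_0 = 1.0 and c = 10\u207b\u2075, so the schedule form is an assumption), with seed lexical entries at weight 10 and template weights built from slash counts and logical constants:

```python
# Constants stated in the text; the schedule form is an assumption.
ALPHA_0, C, J = 1.0, 1e-5, 20

def learning_rate(t):
    """Cooled learning rate after t updates (assumed schedule)."""
    return ALPHA_0 / (1.0 + C * t)

def init_weights(seed_entries, templates):
    """Seed closed-/open-domain lexical entries at 10 (the highest
    cooccurrence score); templates get -0.1 per slash and -2 if they
    contain logical constants. Everything else starts at zero."""
    theta = {}
    for entry in seed_entries:
        theta[("lex", entry)] = 10.0
    for template, n_slashes, has_constants in templates:
        theta[("template", template)] = -0.1 * n_slashes + (-2.0 if has_constants else 0.0)
    return theta
```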
{
"text": "We compare this performance to those recentlypublished and directly-comparable results. For GeoQuery, they include the ZC07 (Zettlemoyer and Collins, 2007) , \u03bb-WASP (Wong and Mooney, 2007) , UBL (Kwiatkowski et al., 2010) FUBL (Kwiatkowski et al., 2011) . For ATIS, we report results from ZC07 (Zettlemoyer and Collins, 2007) , UBL (Kwiatkowski et al., 2010) and FUBL (Kwiatkowski et al., 2011) .",
"cite_spans": [
{
"start": 124,
"end": 155,
"text": "(Zettlemoyer and Collins, 2007)",
"ref_id": "BIBREF13"
},
{
"start": 165,
"end": 188,
"text": "(Wong and Mooney, 2007)",
"ref_id": "BIBREF34"
},
{
"start": 195,
"end": 221,
"text": "(Kwiatkowski et al., 2010)",
"ref_id": "BIBREF26"
},
{
"start": 227,
"end": 253,
"text": "(Kwiatkowski et al., 2011)",
"ref_id": "BIBREF27"
},
{
"start": 294,
"end": 325,
"text": "(Zettlemoyer and Collins, 2007)",
"ref_id": "BIBREF13"
},
{
"start": 332,
"end": 358,
"text": "(Kwiatkowski et al., 2010)",
"ref_id": "BIBREF26"
},
{
"start": 368,
"end": 394,
"text": "(Kwiatkowski et al., 2011)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Systems",
"sec_num": "4.5"
},
{
"text": "Tables 2-4 present all the results on the GeoGuery and ATIS domains. In all cases, FUBLLESC achieves at state-of-the-art recall and precision when compared to directly comparable systems and it significantly outperforms FUBL and ZC07. Most importantly, it is obvious that on precision our FUBLLESC remarkably exceeds other systems because of the joint effect about the addition of an open-domain CCG lexicon and the usage of syntactic constraint. As shown in Table 2 , on Geo250 FUBLLESC achieves the highest recall 86.2% and precision 92.0%, whereas on Geo880 the only higher recall and precision (90.8% and 95.6%) are also achieved by FUBLLESC. On the ATIS development set, FUBLLESC outperforms FUBL by 3.3% of recall and by 10.7% of precision, which is shown in Table 3 . Table 4 indicates that on the ATIS test set FUBLLESC significantly outperforms FBUL by 10% of precision on Exact Match and 5% of precision on Partial Match, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 459,
"end": 466,
"text": "Table 2",
"ref_id": null
},
{
"start": 765,
"end": 772,
"text": "Table 3",
"ref_id": null
},
{
"start": 775,
"end": 782,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.6"
},
{
"text": "Semantic parsers have been thought of mapping sentences to logical representations of their underlying meanings. There has been significant work on supervised learning for inducing semantic parsers. Various techniques were applied to this problem including machine translation (Wong and Mooney, 2006; Wong and Mooney, 2007) , using CCG to building meaning representations (Zettlemoyer and Collins, 2005; Zettlemoyer and Collins, 2007; Zettlemoyer and Collins, 2009; Zettlemoyer and Collins, 2012) , higher-order unification (Kwiatkowski et al., 2010; Kwiatkowski et al., 2011) , model-ing child language acquisition (Kwiatkowski et al., 2012) ,generative model (Ruifang and Mooney, 2006; Lu et al., 2008) , inductive logic programming (Zelle and Mooney, 1996; Thompson and Mooney, 2003; Tang and Mooney, 2000) , probabilistic forest to string model for language generation (Lu and Tou, 2011) , and the extension from English to Chinese (Liao and Zhang, 2013) . The algorithm we develop in this paper builds on some previous work on the supervised learning CCG parsers (Kwiatkowski et al., 2010; Kwiatkowski et al., 2011) , as described in Section 3.4.",
"cite_spans": [
{
"start": 277,
"end": 300,
"text": "(Wong and Mooney, 2006;",
"ref_id": "BIBREF33"
},
{
"start": 301,
"end": 323,
"text": "Wong and Mooney, 2007)",
"ref_id": "BIBREF34"
},
{
"start": 372,
"end": 403,
"text": "(Zettlemoyer and Collins, 2005;",
"ref_id": "BIBREF12"
},
{
"start": 404,
"end": 434,
"text": "Zettlemoyer and Collins, 2007;",
"ref_id": "BIBREF13"
},
{
"start": 435,
"end": 465,
"text": "Zettlemoyer and Collins, 2009;",
"ref_id": "BIBREF14"
},
{
"start": 466,
"end": 496,
"text": "Zettlemoyer and Collins, 2012)",
"ref_id": "BIBREF15"
},
{
"start": 524,
"end": 550,
"text": "(Kwiatkowski et al., 2010;",
"ref_id": "BIBREF26"
},
{
"start": 551,
"end": 576,
"text": "Kwiatkowski et al., 2011)",
"ref_id": "BIBREF27"
},
{
"start": 616,
"end": 642,
"text": "(Kwiatkowski et al., 2012)",
"ref_id": "BIBREF28"
},
{
"start": 661,
"end": 687,
"text": "(Ruifang and Mooney, 2006;",
"ref_id": "BIBREF3"
},
{
"start": 688,
"end": 704,
"text": "Lu et al., 2008)",
"ref_id": "BIBREF29"
},
{
"start": 735,
"end": 759,
"text": "(Zelle and Mooney, 1996;",
"ref_id": "BIBREF10"
},
{
"start": 760,
"end": 786,
"text": "Thompson and Mooney, 2003;",
"ref_id": "BIBREF0"
},
{
"start": 787,
"end": 809,
"text": "Tang and Mooney, 2000)",
"ref_id": "BIBREF11"
},
{
"start": 873,
"end": 891,
"text": "(Lu and Tou, 2011)",
"ref_id": "BIBREF30"
},
{
"start": 936,
"end": 958,
"text": "(Liao and Zhang, 2013)",
"ref_id": "BIBREF35"
},
{
"start": 1068,
"end": 1094,
"text": "(Kwiatkowski et al., 2010;",
"ref_id": "BIBREF26"
},
{
"start": 1095,
"end": 1120,
"text": "Kwiatkowski et al., 2011)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Recent research in this field has focused on learning for various forms of relatively weak but easily gathered supervision. This includes unannotated text (Poon and Domingos, 2009; Poon and Domingos, 2010) , learning from question-answer pairs (Liang et al., 2011; Berant et al., 2013) , via paraphrase model (Berant and Liang, 2014) , from conversational logs (Artzi and Zettlemoyer, 2011) , with distant supervision (Krishnamurthy and Mitchell, 2012; Krishnamurthy and Mitchell, 2013; Krishnamurthy and Mitchell, 2015; Cai and Yates, 2013; Cai and Yates, 2014) , and from sentences paired with system behaviors (Artzi and Zettlemoyer, 2013) as well as via semantic graphs (Reddy et al., 2014) .",
"cite_spans": [
{
"start": 155,
"end": 180,
"text": "(Poon and Domingos, 2009;",
"ref_id": "BIBREF17"
},
{
"start": 181,
"end": 205,
"text": "Poon and Domingos, 2010)",
"ref_id": "BIBREF18"
},
{
"start": 244,
"end": 264,
"text": "(Liang et al., 2011;",
"ref_id": "BIBREF16"
},
{
"start": 265,
"end": 285,
"text": "Berant et al., 2013)",
"ref_id": "BIBREF8"
},
{
"start": 309,
"end": 333,
"text": "(Berant and Liang, 2014)",
"ref_id": "BIBREF9"
},
{
"start": 361,
"end": 390,
"text": "(Artzi and Zettlemoyer, 2011)",
"ref_id": "BIBREF31"
},
{
"start": 418,
"end": 452,
"text": "(Krishnamurthy and Mitchell, 2012;",
"ref_id": "BIBREF4"
},
{
"start": 453,
"end": 486,
"text": "Krishnamurthy and Mitchell, 2013;",
"ref_id": "BIBREF5"
},
{
"start": 487,
"end": 520,
"text": "Krishnamurthy and Mitchell, 2015;",
"ref_id": "BIBREF6"
},
{
"start": 521,
"end": 541,
"text": "Cai and Yates, 2013;",
"ref_id": "BIBREF19"
},
{
"start": 542,
"end": 562,
"text": "Cai and Yates, 2014)",
"ref_id": "BIBREF20"
},
{
"start": 613,
"end": 642,
"text": "(Artzi and Zettlemoyer, 2013)",
"ref_id": "BIBREF32"
},
{
"start": 674,
"end": 694,
"text": "(Reddy et al., 2014)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Our approach builds on a number of existing algorithm ideas which include adopting PCCG to building the meaning representation (Kwiatkowski et al., 2010; Kwiatkowski et al., 2011) , using the weakly supervised parameter leaning with the syntactic constraint (Krishnamurthy and Mitchell, 2012; Krishnamurthy and Mitchell, 2013) , and employing the opendomain Freebase to semantic parsing (Cai and Yates, 2013) .",
"cite_spans": [
{
"start": 127,
"end": 153,
"text": "(Kwiatkowski et al., 2010;",
"ref_id": "BIBREF26"
},
{
"start": 154,
"end": 179,
"text": "Kwiatkowski et al., 2011)",
"ref_id": "BIBREF27"
},
{
"start": 293,
"end": 326,
"text": "Krishnamurthy and Mitchell, 2013)",
"ref_id": "BIBREF5"
},
{
"start": 387,
"end": 408,
"text": "(Cai and Yates, 2013)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "This paper presents a novel supervised method for semantic parsing which induces PCCG from sentences paired with logical forms. This approach contains an open-domain Freebase lexicon and syntactic constraint which employs dependency parser to penalize uncorrect CCG parsing tree. The experiments on both benchmark datasets (i.e., GeoQuery and ATIS) show that our method achieves higher performances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "In the future work, we are interested in exploring morphological model and containing more open-domain lexicons as well as more syntactic information. Besides, it will also be important to better model some variations within the existing lexemes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "http://www.freebase.com",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://iesl.cs.umass.edu/riedel/data-univSchema/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We are grateful to the anonymous reviewers for their valuable feedback on an earlier version of this paper. This research was supported in part by Undergraduate Innovative Research Training project of Hunan Normal University (grant no.201401021) and the Social Science Foundation (grant no.14YBA260) in Hunan Province, China.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Acquiring Word-Meaning Mappings for Natural Language Interfaces",
"authors": [
{
"first": "Cynthia",
"middle": [
"A"
],
"last": "Thompson",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Artificial Intelligence Research",
"volume": "18",
"issue": "",
"pages": "1--44",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cynthia A. Thompson and Raymond J. Mooney. 2003. Acquiring Word-Meaning Mappings for Natural Language Interfaces. Journal of Artificial Intelli- gence Research, 18:1-44.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A Systematic Comparison of Various Statistical Alignment Models",
"authors": [
{
"first": "Franz",
"middle": [
"Joseph"
],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "1",
"pages": "19--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Joseph Och and Hermann Ney. 2003. A Sys- tematic Comparison of Various Statistical Alignment Models. Computational Linguistics, 29(1):19-51.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The Alignment Template Approach to Statistical Machine Translation",
"authors": [
{
"first": "Franz",
"middle": [
"Joseph"
],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2004,
"venue": "Computational Linguistics",
"volume": "30",
"issue": "",
"pages": "417--449",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Joseph Och and Hermann Ney. 2004. The Align- ment Template Approach to Statistical Machine Translation. Computational Linguistics, 30:417- 449.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Discriminative Reranking for Semantic Parsing",
"authors": [
{
"first": "Ruifang",
"middle": [],
"last": "Ge",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Conference of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruifang Ge and Raymond J. Mooney. 2006. Discrimi- native Reranking for Semantic Parsing. In Proceed- ings of the Conference of the Association for Com- putational Linguistics (ACL).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Weakly Supervised Training of Semantic Parsers",
"authors": [
{
"first": "Jayant",
"middle": [],
"last": "Krishnamurthy",
"suffix": ""
},
{
"first": "Tom",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computatioanl Natural Language Learning (EMNLP-CoNLL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jayant Krishnamurthy and Tom M. Mitchell. 2012. Weakly Supervised Training of Semantic Parsers. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Process- ing and Computatioanl Natural Language Learning (EMNLP-CoNLL).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Joint Syntactic and Semantic Parsing with Combinatory Categorial Grammar",
"authors": [
{
"first": "Jayant",
"middle": [],
"last": "Krishnamurthy",
"suffix": ""
},
{
"first": "Tom",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 52th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jayant Krishnamurthy and Tom M. Mitchell. 2013. Joint Syntactic and Semantic Parsing with Combi- natory Categorial Grammar. In Proceedings of the 52th Annual Meeting of the Association for Compu- tational Linguistics (ACL).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Learning a Compositional Semantics for Freebase with an Open Predicate Vocabulary",
"authors": [
{
"first": "Jayant",
"middle": [],
"last": "Krishnamurthy",
"suffix": ""
},
{
"first": "Tom",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jayant Krishnamurthy and Tom M. Mitchell. 2015. Learning a Compositional Semantics for Freebase with an Open Predicate Vocabulary. Transactions of the Association for Computational Linguistics (TACL).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Maltparser: A Data-driven Parser-denerator for Dependency Parsing",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Nilsson",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computatonal Lianguistics and 44th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Johan Hall, and Jens Nilsson. 2006. Maltparser: A Data-driven Parser-denerator for Dependency Parsing. In Proceedings of the 21st In- ternational Conference on Computatonal Lianguis- tics and 44th Annual Meeting of the Association for Computational Linguistics (ACL).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Semantic Parsing on Freebase from Question-Answer Pairs",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Chou",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Frostig",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic Parsing on Freebase from Question-Answer Pairs. In Proceedings of EMNLP.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Semantic Parsing via Paraphrasing",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Berant and Percy Liang. 2014. Semantic Parsing via Paraphrasing. In Proceedings of ACL.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Learning to Parse Database Queries using Inductive Logic Programming",
"authors": [
{
"first": "John",
"middle": [
"M"
],
"last": "Zelle",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the National Conference on Artificial Intelligence (AAAI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John M. Zelle and Raymond J. Mooney. 1996. Learn- ing to Parse Database Queries using Inductive Logic Programming. In Proceedings of the National Con- ference on Artificial Intelligence (AAAI).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Automated Construction of Database Interfaces: Integrating Statistical and Relational Learning for Semantic Parsing",
"authors": [
{
"first": "Lappoon",
"middle": [
"R"
],
"last": "Tang",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Joint Conference of Empirical Methods in Natural Language Procesing and Very Large Corpora (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lappoon R. Tang and Raymond J. Mooney. 2000. Au- tomated Construction of Database Interfaces: Inte- grating Statistical and Relational Learning for Se- mantic Parsing. In Proceedings of the Joint Con- ference of Empirical Methods in Natural Language Procesing and Very Large Corpora (EMNLP).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Learning to Map Sentences to Logical Form: Structured Classification with Probabilistic Categorial Grammars",
"authors": [
{
"first": "Luke",
"middle": [
"S"
],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of UAI",
"volume": "",
"issue": "",
"pages": "658--666",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luke S. Zettlemoyer and Michael Collins. 2005. Learning to Map Sentences to Logical Form: Struc- tured Classification with Probabilistic Categorial Grammars. In Proceedings of UAI, pages 658-666.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Online Learning of Relaxed CCG Grammars for Parsing to Logical Form",
"authors": [
{
"first": "Luke",
"middle": [
"S"
],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "678--687",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luke S. Zettlemoyer and Michael Collins. 2007. On- line Learning of Relaxed CCG Grammars for Pars- ing to Logical Form. In Proceedings of EMNLP- CoNLL, pages 678-687.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Learning Context-Dependent Mappings from Sentences to Logical Form",
"authors": [
{
"first": "Luke",
"middle": [
"S"
],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of ACL-IJCNLP",
"volume": "",
"issue": "",
"pages": "976--984",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luke S. Zettlemoyer and Michael Collins. 2009. Learning Context-Dependent Mappings from Sen- tences to Logical Form. In Proceedings of ACL- IJCNLP, pages 976-984.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Learning to Map Sentences to Logical Form: Structured Classification with Probabilistic Categorial Grammars",
"authors": [
{
"first": "Luke",
"middle": [
"S"
],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luke S. Zettlemoyer and Michael Collins. 2012. Learning to Map Sentences to Logical Form: Struc- tured Classification with Probabilistic Categorial Grammars. CoRR abs.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Learning Dependency-based Compositional Semantics",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Percy Liang, Michael I. Jordan, and Dan Klein. 2011. Learning Dependency-based Compositional Seman- tics. In Proceedings of the Conference of the Asso- ciation for Computational Linguistics (ACL).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Unsupervised Semantic Parsing",
"authors": [
{
"first": "Poon",
"middle": [],
"last": "Hoifung",
"suffix": ""
},
{
"first": "Pedro",
"middle": [],
"last": "Domingos",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Conference on Empiricial Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Poon Hoifung and Pedro Domingos. 2009. Unsuper- vised Semantic Parsing. In Proceedings of the Con- ference on Empiricial Methods in Natural Language Processing (EMNLP).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Unsupervised Ontology Induction from Text",
"authors": [
{
"first": "Poon",
"middle": [],
"last": "Hoifung",
"suffix": ""
},
{
"first": "Pedro",
"middle": [],
"last": "Domingos",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Conference of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Poon Hoifung and Pedro Domingos. 2010. Unsuper- vised Ontology Induction from Text. In Proceedings of the Conference of the Association for Computa- tional Linguistics (ACL).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Semantic Parsing Freebase: Towards Open-domain Semantic Parsing",
"authors": [
{
"first": "Qingqing",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Yates",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Second Joint Conference on Lexical and Computational Semantics (SEM)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qingqing Cai and Alexander Yates. 2013. Semantic Parsing Freebase: Towards Open-domain Seman- tic Parsing. In Proceedings of the Second Joint Conference on Lexical and Computational Seman- tics (SEM).",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Large-scale Semantic Parsing via Schema Matching and Lexicon Extension",
"authors": [
{
"first": "Qingqing",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Yates",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qingqing Cai and Alexander Yates. 2014. Large-scale Semantic Parsing via Schema Matching and Lexi- con Extension. In Proceedings of the Annual Meet- ing of the Association for Computational Linguistics (ACL).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Knowledge-based Weak Supervision for Information Extraction of Overlapping Relations",
"authors": [],
"year": null,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Knowledge-based Weak Supervision for Information Extraction of Overlapping Relations. In Proceed- ings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL).",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Large-scale Semantic Parsing without Question-Answer Pairs",
"authors": [
{
"first": "Siva",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2014,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siva Reddy, Mirella Lapata, Mark Steedman. 2014. Large-scale Semantic Parsing without Question- Answer Pairs. Transactions of the Association for Computational Linguistics (TACL).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Surface Structure and Interpretation",
"authors": [
{
"first": "Steedman",
"middle": [],
"last": "Mark",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steedman Mark. 1996. Surface Structure and Inter- pretation. The MIT Press.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The Syntactic Process",
"authors": [
{
"first": "Steedman",
"middle": [],
"last": "Mark",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steedman Mark. 2000. The Syntactic Process. The MIT Press.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Inducing Probabilistic CCG Grammars from Logical Form with Higher-order Unification",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwa- ter, and Mark Steedman. 2010. Inducing Prob- abilistic CCG Grammars from Logical Form with Higher-order Unification. In Proceedings of the Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), Cambridge, MA.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Lexical Generalization in CCG Grammar Induction for Semantic Parsing",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Kwiatkowski, Luke Zettlemoyer, Sharon Gold- water, and Mark Steedman. 2011. Lexical Gen- eralization in CCG Grammar Induction for Seman- tic Parsing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Edinburgh, UK.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A Probabilistic Model of Syntactic and Semantic Acquisition from Child-Directed Utterances and their Meanings",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the European Chapter of the Association for Computational Linguistics (EACL), Avignon",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2012. A Probabilistic Model of Syntactic and Semantic Acquisition from Child-Directed Utterances and their Meanings. In Proceedings of the European Chapter of the Association for Computational Linguistics (EACL), Avignon, France.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A Generative Model for Parsing Natural Language to Meaning Representations",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Wee Sun",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [
"S"
],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of The Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "783--792",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Lu, Hwee Tou Ng, Wee Sun Lee, and Luke S. Zettlemoyer. 2008. A Generative Model for Parsing Natural Language to Meaning Representations. In Proceedings of The Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 783-792.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A Probabilistic Forest-to-String Model for Language Generation from Typed Lambda Calculus Expressions",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of The Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1611--1622",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Lu and Hwee Tou Ng. 2011. A Probabilistic Forest-to-String Model for Language Generation from Typed Lambda Calculus Expressions. In Proceedings of The Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1611-1622.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Bootstrapping Semantic Parsers from Conversations",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Artzi and Luke Zettlemoyer. 2011. Bootstrapping Semantic Parsers from Conversations. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Weakly Supervised Learning of Semantic Parsers for Mapping Instructions to Actions",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2013,
"venue": "Transactions of the Association for Computational Linguistics (TACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Artzi and Luke Zettlemoyer. 2013. Weakly Supervised Learning of Semantic Parsers for Mapping Instructions to Actions. Transactions of the Association for Computational Linguistics (TACL).",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Learning for Semantic Parsing with Statistical Machine Translation",
"authors": [
{
"first": "Yuk",
"middle": [
"Wah"
],
"last": "Wong",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Human Language Technology Conference of the North American Association for Computational Linguistics (NAACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuk Wah Wong and Raymond J. Mooney. 2006. Learning for Semantic Parsing with Statistical Machine Translation. In Proceedings of the Human Language Technology Conference of the North American Association for Computational Linguistics (NAACL).",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Learning Synchronous Grammars for Semantic Parsing with Lambda Calculus",
"authors": [
{
"first": "Yuk",
"middle": [
"Wah"
],
"last": "Wong",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Conference of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuk Wah Wong and Raymond J. Mooney. 2007. Learning Synchronous Grammars for Semantic Parsing with Lambda Calculus. In Proceedings of the Conference of the Association for Computational Linguistics (ACL).",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Learning to Map Chinese Sentences to Logical Forms",
"authors": [
{
"first": "Zhihua",
"middle": [],
"last": "Liao",
"suffix": ""
},
{
"first": "Zili",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 7th International Conference on Knowledge Science, Engineering and Management (KSEM)",
"volume": "",
"issue": "",
"pages": "463--472",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhihua Liao and Zili Zhang. 2013. Learning to Map Chinese Sentences to Logical Forms. In Proceedings of the 7th International Conference on Knowledge Science, Engineering and Management (KSEM), pages 463-472.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "A lexeme (w, c) pairs a word sequence with an ordered list of logical constants c = [c 1 . . . c m ]. For example, a lexeme can contain a single constant, as in (flight, [flight]), or multiple constants, as in (cheapest, [arg max, cost]). A lexical template takes a lexeme and produces a lexical item. Templates have the general form",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "Training set {(xi, zi) : i = 1 \u2022 \u2022 \u2022 n} where each example is a sentence xi paired with a logical form zi. Set of entity name lexemes Le. Number of iterations J. Learning rate parameter \u03b10 and cooling rate parameter c. Empty lexeme set L. Empty template set T . Set of NP lexical items lF from the factored lexicon \u039bF . Set of NP lexical items lT from the closed-domain CCG lexicon \u039bT . Set of NP lexical items lO from the open-domain CCG lexicon \u039bO.",
"num": null
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"text": "Modifier (type change): N : \u03bbx.c(x) to N |N : \u03bbf.\u03bbx.\u2203y.c(x) \u2227 f (y) \u2227 r(x, y). Example: Sacramento, California; N : \u03bbx.City(x) to N |N : \u03bbf.\u03bbx.\u2203y.City(x) \u2227 f (y) \u2227 LocatedIn(x, y). Preposition: (N \\N )/N : \u03bbf.\u03bbg.\u03bbx.\u2203y.f (y) \u2227 g(x) \u2227 r(x, y). Example: Sacramento in California; in := (N \\N )/N : \u03bbf.\u03bbg.\u03bbx.\u2203y.f (y) \u2227 g(x) \u2227 LocatedIn(x, y). Preposition: P P/N : \u03bbf.\u03bbx.f (x). Example: Sacramento is located in California; in := P P/N : \u03bbf.\u03bbx.f (x). Verb: e1 SBJ =\u21d2 w * OBJ \u21d0= e2; w * := (S\\N )/N : \u03bbf.\u03bbg.\u2203x, y.f (y) \u2227 g(x) \u2227 r(x, y). Example: Sacramento governs California; governs := (S\\N )/N : \u03bbf.\u03bbg.\u2203x, y.f (y) \u2227 g(x) \u2227 LocatedIn(x, y). Verb: e1 SBJ =\u21d2 w * ADV \u21d0= [IN, T O] OBJ \u21d0= e2; w * := (S\\N )/P P : \u03bbf.\u03bbg.\u2203x, y.f (y) \u2227 g(x) \u2227 r(x, y). Example: Sacramento is located in California; is located := (S\\N )/P P : \u03bbf.\u03bbg.\u2203x, y.f (y) \u2227 g(x) \u2227 LocatedIn(x, y). Verb: e1 N M OD =\u21d2 w * ADV \u21d0= [IN, T O] OBJ \u21d0= e2; w * := (S\\N )/P P : \u03bbf.\u03bbg.\u03bby.f (y) \u2227 g(x) \u2227 r(x, y). Example: Sacramento located in California; located := (S\\N )/P P : \u03bbf.\u03bbg.\u03bby.f (y) \u2227 g(x) \u2227 LocatedIn(x, y). Forms of \"to be\": (none); w * := (S\\N )/N : \u03bbf.\u03bbg.\u2203x.g(x) \u2227 f (x)",
"num": null
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"text": "The FUBLLESC algorithm.",
"num": null
},
"TABREF0": {
"html": null,
"num": null,
"type_str": "table",
"text": "Dependency parse patterns used to instantiate lexical categories for the semantic parser lexicon \u039b O . Each pattern is followed by an example phrase that instantiates it. An * indicates a position that may be filled by multiple consecutive words in the sentence. e 1 and e 2 are the entities identified in the sentence, and r represents a relation where r(e 1 , e 2 ) holds.",
"content": "<table/>"
},
"TABREF2": {
"html": null,
"num": null,
"type_str": "table",
"text": "Performance of Exact Match on the different GeoQuery test sets. Performance of Exact and Partial Matches on the ATIS test set.",
"content": "<table><tr><td colspan=\"2\">(a) The Geo250 test set</td><td colspan=\"2\">(b) The Geo880 test set</td></tr><tr><td>system</td><td>Rec. Pre. F1</td><td>system</td><td>Rec. Pre. F1</td></tr><tr><td>\u03bb-WASP</td><td>75.6 91.8 82.9</td><td>ZC07</td><td>86.1 91.6 88.8</td></tr><tr><td>UBL</td><td>81.8 83.5 82.6</td><td>UBL</td><td>87.9 88.5 88.2</td></tr><tr><td>FUBL</td><td>83.7 83.7 83.7</td><td>FUBL</td><td>88.6 88.6 88.6</td></tr><tr><td colspan=\"2\">FUBLLESC 86.2 92.0 89.0</td><td colspan=\"2\">FUBLLESC 90.8 95.6 93.1</td></tr><tr><td/><td>(a) Exact Match</td><td/><td>(b) Partial Match</td></tr><tr><td>system</td><td>Rec. Pre. F1</td><td>system</td><td>Rec. Pre. F1</td></tr><tr><td>ZC07</td><td>84.6 85.8 85.2</td><td>ZC07</td><td>96.7 95.1 95.9</td></tr><tr><td>UBL</td><td>71.4 72.1 71.7</td><td>UBL</td><td>78.2 98.2 87.1</td></tr><tr><td>FUBL</td><td>82.8 82.8 82.8</td><td>FUBL</td><td>95.2 93.6 94.6</td></tr><tr><td colspan=\"2\">FUBLLESC 86.4 92.8 89.5</td><td colspan=\"2\">FUBLLESC 97.2 98.6 97.9</td></tr></table>"
}
}
}
}