|
{ |
|
"paper_id": "W16-0105", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T03:57:14.036336Z" |
|
}, |
|
"title": "Neural Enquirer: Learning to Query Tables in Natural Language", |
|
"authors": [ |
|
{ |
|
"first": "Pengcheng", |
|
"middle": [], |
|
"last": "Yin", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Zhengdong", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Hang", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Kao", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We propose NEURAL ENQUIRER-a neural network architecture for answering natural language (NL) questions given a knowledge base (KB) table. Unlike previous work on end-to-end training of semantic parsers, NEU-RAL ENQUIRER is fully \"neuralized\": it gives distributed representations of queries and KB tables, and executes queries through a series of differentiable operations. The model can be trained with gradient descent using both endto-end and step-by-step supervision. During training the representations of queries and the KB table are jointly optimized with the query execution logic. Our experiments show that the model can learn to execute complex NL queries on KB tables with rich structures. 2 Model Given an NL query Q and a KB table T , NEU-RAL ENQUIRER executes Q against T and outputs a ranked list of answers. The execution is done by first 29 Which city hosted the longest Olympic Games before the Games in Beijing? query \u0de8 Query Encoder Executor-1 Memory Layer-1 Executor-2 Memory Layer-2 Executor-3 Memory Layer-3 Executor-4 Memory Layer-4 Executor-5 Athens (probability distribution over table entries) year host_city #_duration #_medals", |
|
"pdf_parse": { |
|
"paper_id": "W16-0105", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We propose NEURAL ENQUIRER-a neural network architecture for answering natural language (NL) questions given a knowledge base (KB) table. Unlike previous work on end-to-end training of semantic parsers, NEU-RAL ENQUIRER is fully \"neuralized\": it gives distributed representations of queries and KB tables, and executes queries through a series of differentiable operations. The model can be trained with gradient descent using both endto-end and step-by-step supervision. During training the representations of queries and the KB table are jointly optimized with the query execution logic. Our experiments show that the model can learn to execute complex NL queries on KB tables with rich structures. 2 Model Given an NL query Q and a KB table T , NEU-RAL ENQUIRER executes Q against T and outputs a ranked list of answers. The execution is done by first 29 Which city hosted the longest Olympic Games before the Games in Beijing? query \u0de8 Query Encoder Executor-1 Memory Layer-1 Executor-2 Memory Layer-2 Executor-3 Memory Layer-3 Executor-4 Memory Layer-4 Executor-5 Athens (probability distribution over table entries) year host_city #_duration #_medals", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Natural language dialogue and question answering often involve querying a knowledge base (Wen et al., 2015; Berant et al., 2013) . The traditional approach involves two steps: First, a given queryQ is semantically parsed into an \"executable\" representation, which is often expressed in certain logical formZ (e.g., SQL-like queries). Second, the representation is executed against a KB from which an answer is obtained. For queries that involve complex semantics and logic (e.g., \"Which city hosted the longest Olympic Games before the Games in Beijing?\"), semantic parsing and query execution become extremely complex. For example, carefully hand-crafted features and rules are needed to correctly parse a complex query into its logical form (see example in the lower-left corner of Figure 1) . To partially overcome this complexity, recent works (Clarke et al., 2010; Liang et al., 2011; Pasupat and Liang, 2015) attempt to \"backpropagate\" query execution results to revise the semantic representation of a query. This approach, however, is greatly hindered by the fact that traditional semantic parsing mostly involves rule-based features and symbolic manipulation, and is subject to intractable search space incurred by the great flexibility of natural language.", |
|
"cite_spans": [ |
|
{ |
|
"start": 89, |
|
"end": 107, |
|
"text": "(Wen et al., 2015;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 108, |
|
"end": 128, |
|
"text": "Berant et al., 2013)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 784, |
|
"end": 793, |
|
"text": "Figure 1)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 848, |
|
"end": 869, |
|
"text": "(Clarke et al., 2010;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 870, |
|
"end": 889, |
|
"text": "Liang et al., 2011;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 890, |
|
"end": 914, |
|
"text": "Pasupat and Liang, 2015)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper we propose NEURAL ENQUIRER -a neural network system that learns to understand NL queries and execute them on a KB table from examples of queries and answers. Unlike similar efforts along this line of research (Neelakantan et al., 2015) , NEURAL ENQUIRER is a fully neuralized, end-to-end differentiable network that jointly models semantic parsing and query execution. It encodes queries and KB tables into distributed representations, and executes compositional queries against the KB through a series of differentiable operations. The model is trained using queryanswer pairs, where the distributed representations of queries and the KB are optimized together with the query execution logic in an end-to-end fashion. We demonstrate using a synthetic QA task that NEURAL ENQUIRER is capable of learning to execute complex compositional NL questions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 223, |
|
"end": 249, |
|
"text": "(Neelakantan et al., 2015)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "query embedding table embedding Table Encoder where year < (select year, where host_city = Beijing), argmax(host_city, #_duration)", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 32, |
|
"end": 47, |
|
"text": "Table Encoder", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Find row r1 where host_city=Beijing Select year of r1 as a Find row sets R where year < a Find r2 in R with max(#_duration)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Select host_city of r2 logical form \u0de8 Figure 1 : An overview of NEURAL ENQUIRER with five executors using Encoders to encode the query and table into distributed representations, which are then sent to a cascaded pipeline of Executors to derive the answer. Figure 1 gives an illustrative example. It consists of the following components:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 38, |
|
"end": 46, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 257, |
|
"end": 265, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Query Encoder abstracts the semantics of an NL query Q and encodes it into a query embedding", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Query Encoder", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "q \u2208 R d Q . Let {x 1 , x 2 , .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Query Encoder", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": ". . , x T } be the embeddings of the words in Q, where x t \u2208 R d W is from an embedding matrix L. We employ a bidirectional Gated Recurrent Unit (GRU) (Bahdanau et al., 2015) to summarize the sequence of word embeddings in forward and reverse orders. q is formed by concatenating the last hidden states in the two directions. We remark that Query Encoder can find the representation of a rather general class of symbol sequences, agnostic to the actual representation of the query (e.g., natural language, SQL, etc). The model is able to learn the semantics of input queries through end-to-end training, making it a generic model for query understanding and query execution.", |
|
"cite_spans": [ |
|
{ |
|
"start": 151, |
|
"end": 174, |
|
"text": "(Bahdanau et al., 2015)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Query Encoder", |
|
"sec_num": "2.1" |
|
}, |
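
{

"text": "To make the encoder concrete, the following is a minimal Python/PyTorch sketch of the bidirectional-GRU Query Encoder described above. It is an illustrative reconstruction rather than the authors' implementation; the class name and the sizes d_W = 20 and d_Q = 150 are assumptions.\n\nimport torch\nimport torch.nn as nn\n\nclass QueryEncoder(nn.Module):\n    # Bidirectional GRU over word embeddings; q concatenates the last hidden\n    # states of the forward and backward passes.\n    def __init__(self, vocab_size, d_w=20, d_q=150):\n        super().__init__()\n        self.embed = nn.Embedding(vocab_size, d_w)   # embedding matrix L\n        self.gru = nn.GRU(d_w, d_q // 2, batch_first=True, bidirectional=True)\n\n    def forward(self, word_ids):                     # word_ids: (batch, T)\n        x = self.embed(word_ids)                     # (batch, T, d_w)\n        _, h_n = self.gru(x)                         # h_n: (2, batch, d_q // 2)\n        return torch.cat([h_n[0], h_n[1]], dim=-1)   # q: (batch, d_q)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Query Encoder",

"sec_num": "2.1"

},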
|
{ |
|
"text": "Table Encoder converts a KB table T into a distributed representation, which is used as an input to executors. Suppose T has M rows and N columns. In our model, the n-th column is associated with a field name (e.g., host city). Each cell value is a word (e.g., Beijing) in the vocabulary. We use w mn to denote the cell value in row m column n, and w mn to denote its embedding. Let f n be the embedding of the field name for column n. For each entry (cell) w mn , Table Encoder computes a field, value composite embedding e mn \u2208 R d E by fusing f n and w mn through a non-linear transformation:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Table Encoder", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "e mn = tanh(W \u2022 [f n ; w mn ] + b), where [\u2022; \u2022] denotes vector concatenation. The out- put of Table Encoder is an M \u00d7 N \u00d7 d E tensor that consists of M \u00d7 N embeddings, each of length d E .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Table Encoder", |
|
"sec_num": "2.2" |
|
}, |
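
{

"text": "The composite embedding above amounts to a single affine map followed by tanh. Below is a minimal illustrative sketch (not the authors' code); it assumes the field-name and cell-value embeddings are precomputed and that d_E = d_W = 20.\n\nimport torch\nimport torch.nn as nn\n\nclass TableEncoder(nn.Module):\n    # e_mn = tanh(W [f_n ; w_mn] + b): fuse field-name and cell-value embeddings.\n    def __init__(self, d_w=20, d_e=20):\n        super().__init__()\n        self.proj = nn.Linear(2 * d_w, d_e)                      # holds W and b\n\n    def forward(self, field_emb, cell_emb):\n        # field_emb: (N, d_w) field-name embeddings f_n\n        # cell_emb:  (M, N, d_w) cell-value embeddings w_mn\n        M = cell_emb.size(0)\n        f = field_emb.unsqueeze(0).expand(M, -1, -1)             # (M, N, d_w)\n        return torch.tanh(self.proj(torch.cat([f, cell_emb], dim=-1)))  # (M, N, d_e)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Table Encoder",

"sec_num": "2.2"

},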
|
{ |
|
"text": "NEURAL ENQUIRER executes an input query on a KB table through layers of execution. Each layer consists of an executor that, after learning, performs certain operation (e.g., select, max) relevant to the input query. An executor outputs intermediate execution results, referred to as annotations, which are saved in the external memory of the executor. A query is executed sequentially through a stack of executors. Such a cascaded architecture enables the model to answer complex, compositional queries. An example is given in Figure 1 in which descriptions of the operation each executor is assumed to perform for the queryQ are shown. We will demonstrate in Section 4 that the model is capable of learning the operation logic of each executor via end-toend training.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 527, |
|
"end": 535, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Executor", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "As illustrated in Figure 2 , an executor at Layer-(denoted as Executor-) consists of two major neural network components: a Reader and an Annotator. The executor processes a ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 18, |
|
"end": 26, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Executor", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "As illustrated in Figure 3 , for the m-th row with N field, value composite embeddings R m = {e m1 , e m2 , . . . , e mN }, the Reader fetches a read vector r m from R m via an attentive reading operation:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 18, |
|
"end": 26, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Reader", |
|
"sec_num": "2.3.1" |
|
}, |
|
{ |
|
"text": "r m = f R (R m , F T , q, M \u22121 ) = N n=1\u03c9 (f n , q, g \u22121 )e mn", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reader", |
|
"sec_num": "2.3.1" |
|
}, |
|
{ |
|
"text": "where M \u22121 denotes the content of memory Layer-( \u22121), and", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reader", |
|
"sec_num": "2.3.1" |
|
}, |
|
{ |
|
"text": "F T = {f 1 , f 2 , . . . , f N }", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reader", |
|
"sec_num": "2.3.1" |
|
}, |
|
{ |
|
"text": "is the set of field name embeddings.\u03c9(\u2022) is the normalized attention weights given by:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reader", |
|
"sec_num": "2.3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03c9(f n , q, g \u22121 )= exp(\u03c9(f n , q, g \u22121 )) N n =1 exp(\u03c9(f n , q, g \u22121 ))", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Reader", |
|
"sec_num": "2.3.1" |
|
}, |
|
{ |
|
"text": "where \u03c9(\u2022) is modeled as a Deep Neural Network (denoted as DNN ( ) 1 ). Since each executor models a specific type of computation, it should only attend to a subset of entries that are pertinent to its execution. This is modeled by the Reader. Our approach is related to the content-based addressing of Neural Turing Machines (Graves et al., 2014) and the attention mechanism in neural machine translation models (Bahdanau et al., 2015).", |
|
"cite_spans": [ |
|
{ |
|
"start": 326, |
|
"end": 347, |
|
"text": "(Graves et al., 2014)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reader", |
|
"sec_num": "2.3.1" |
|
}, |
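
{

"text": "A minimal illustrative sketch of the attentive reading step follows; the small scoring network stands in for DNN_1^(\u2113), and the layer sizes and names are assumptions rather than the authors' choices.\n\nimport torch\nimport torch.nn as nn\n\nclass Reader(nn.Module):\n    # Attention over the N fields: the weight of each field depends only on its\n    # field-name embedding, the query embedding q, and the previous table\n    # annotation g; the read vector r_m is the weighted sum of the row's e_mn.\n    def __init__(self, d_f, d_q, d_g, hidden=50):\n        super().__init__()\n        self.score = nn.Sequential(\n            nn.Linear(d_f + d_q + d_g, hidden), nn.Tanh(), nn.Linear(hidden, 1))\n\n    def forward(self, e, field_emb, q, g_prev):\n        # e: (M, N, d_e), field_emb: (N, d_f), q: (d_q,), g_prev: (d_g,)\n        N = field_emb.size(0)\n        ctx = torch.cat([field_emb,\n                         q.unsqueeze(0).expand(N, -1),\n                         g_prev.unsqueeze(0).expand(N, -1)], dim=-1)\n        w = torch.softmax(self.score(ctx).squeeze(-1), dim=0)    # (N,) weights\n        r = torch.einsum('n,mnd->md', w, e)                      # (M, d_e)\n        return r, w",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Reader",

"sec_num": "2.3.1"

},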
|
{ |
|
"text": "The Annotator of Executor-computes row and table annotations based on read vectors fetched by the Reader. The results are stored in the -th memory layer M accessible to Executor-( +1). The last executor is the only exception, which outputs the final answer.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotator", |
|
"sec_num": "2.3.2" |
|
}, |
|
{ |
|
"text": "[Row annotations] Capturing row-wise execution result, the annotation a m for row m in Executoris given by", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotator", |
|
"sec_num": "2.3.2" |
|
}, |
|
{ |
|
"text": "a m =f A (r m , q, M \u22121 )=DNN ( ) 2 ([r m ; q; a \u22121 m ; g \u22121 ]). DNN ( )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotator", |
|
"sec_num": "2.3.2" |
|
}, |
|
{ |
|
"text": "2 fuses the corresponding read vector r m , the results saved in the previous memory layer (row and table annotations a \u22121 m , g \u22121 ), and the query embedding q. Specifically,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotator", |
|
"sec_num": "2.3.2" |
|
}, |
|
{ |
|
"text": "\u2022 row annotation a \u22121 m represents the local status of execution before Layer-;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotator", |
|
"sec_num": "2.3.2" |
|
}, |
|
{ |
|
"text": "\u2022 table annotation g \u22121 summarizes the global status of execution before Layer-;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotator", |
|
"sec_num": "2.3.2" |
|
}, |
|
{ |
|
"text": "\u2022 read vector r m stores the value of attentive reading;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotator", |
|
"sec_num": "2.3.2" |
|
}, |
|
{ |
|
"text": "\u2022 query embedding q encodes the overall execution agenda.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotator", |
|
"sec_num": "2.3.2" |
|
}, |
|
{ |
|
"text": "All the above values are combined through DNN ( ) 2 to form the annotation of row m in the current layer.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotator", |
|
"sec_num": "2.3.2" |
|
}, |
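
{

"text": "A matching illustrative sketch of the row-annotation step (the two-layer MLP stands in for DNN_2^(\u2113); dimensions and names are assumptions):\n\nimport torch\nimport torch.nn as nn\n\nclass RowAnnotator(nn.Module):\n    # a_m = DNN_2([r_m ; q ; a_m_prev ; g_prev]): fuse the read vector, the query\n    # embedding, and the previous row/table annotations into a new row annotation.\n    def __init__(self, d_e, d_q, d_a, d_g, hidden=50):\n        super().__init__()\n        self.mlp = nn.Sequential(\n            nn.Linear(d_e + d_q + d_a + d_g, hidden), nn.Tanh(),\n            nn.Linear(hidden, d_a), nn.Tanh())\n\n    def forward(self, r, q, a_prev, g_prev):\n        # r: (M, d_e), q: (d_q,), a_prev: (M, d_a), g_prev: (d_g,)\n        M = r.size(0)\n        inp = torch.cat([r,\n                         q.unsqueeze(0).expand(M, -1),\n                         a_prev,\n                         g_prev.unsqueeze(0).expand(M, -1)], dim=-1)\n        return self.mlp(inp)                                     # (M, d_a)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Annotator",

"sec_num": "2.3.2"

},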
|
{ |
|
"text": "[ Table annotations ] Capturing global execution state, a table annotation summarizes all row annotations via a global max pooling operation:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 2, |
|
"end": 19, |
|
"text": "Table annotations", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Annotator", |
|
"sec_num": "2.3.2" |
|
}, |
|
{ |
|
"text": "g = f MAXPOOL (a 1 , a 2 , . . . , a M ) = [g 1 , g 2 , . . . , g d G ]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotator", |
|
"sec_num": "2.3.2" |
|
}, |
|
{ |
|
"text": "year host city # participants # medals # duration # audience host country GDP country size population Which country hosted the longest game before the game in Athens? How big is the country which hosted the shortest game? How many people watched the earliest game that lasts for more days than the game in 1956? ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotator", |
|
"sec_num": "2.3.2" |
|
}, |
|
{ |
|
"text": "= max({a 1 (k), a 2 (k), . . . , a M (k)})", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotator", |
|
"sec_num": "2.3.2" |
|
}, |
|
{ |
|
"text": "is the maximum value among the k-th elements of all row annotations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotator", |
|
"sec_num": "2.3.2" |
|
}, |
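
{

"text": "The pooling is an element-wise maximum over rows; an illustrative one-liner (the function name is an assumption):\n\nimport torch\n\ndef table_annotation(row_annotations):\n    # row_annotations: (M, d_G) stack of a_1 ... a_M; g(k) = max_m a_m(k).\n    return torch.max(row_annotations, dim=0).values              # (d_G,)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Annotator",

"sec_num": "2.3.2"

},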
|
{ |
|
"text": "Instead of computing annotations based on read vectors, the last executor in NEURAL ENQUIRER directly outputs the probability of an entry w mn in table T being the answer a:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Last Layer Executor", |
|
"sec_num": "2.3.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p(a = w mn |Q, T ) = exp(f ANS (e mn , q, a \u22121 m , g \u22121 )) M,N m =1,n =1 exp(f ANS (e m n , q, a \u22121 m , g \u22121 ))", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Last Layer Executor", |
|
"sec_num": "2.3.3" |
|
}, |
|
{ |
|
"text": "where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Last Layer Executor", |
|
"sec_num": "2.3.3" |
|
}, |
|
{ |
|
"text": "f ANS (\u2022) is modeled as a DNN (DNN ( )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Last Layer Executor", |
|
"sec_num": "2.3.3" |
|
}, |
|
{ |
|
"text": "3 ). Note that the last executor, which is devoted to returning answers, could still carry out execution in DNN ( ) 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Last Layer Executor", |
|
"sec_num": "2.3.3" |
|
}, |
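
{

"text": "An illustrative sketch of the last executor's output layer, where a three-layer MLP stands in for DNN_3^(\u2113) and the softmax is taken jointly over all M \u00d7 N entries as in Eq. (2); names and sizes are assumptions:\n\nimport torch\nimport torch.nn as nn\n\nclass AnswerLayer(nn.Module):\n    # Scores every table entry e_mn and normalizes over all M * N entries,\n    # giving p(a = w_mn | Q, T).\n    def __init__(self, d_e, d_q, d_a, d_g, hidden=50):\n        super().__init__()\n        self.score = nn.Sequential(\n            nn.Linear(d_e + d_q + d_a + d_g, hidden), nn.Tanh(),\n            nn.Linear(hidden, hidden), nn.Tanh(),\n            nn.Linear(hidden, 1))\n\n    def forward(self, e, q, a_prev, g_prev):\n        # e: (M, N, d_e), q: (d_q,), a_prev: (M, d_a), g_prev: (d_g,)\n        M, N, _ = e.shape\n        inp = torch.cat([e,\n                         q.expand(M, N, -1),\n                         a_prev.unsqueeze(1).expand(-1, N, -1),\n                         g_prev.expand(M, N, -1)], dim=-1)\n        logits = self.score(inp).squeeze(-1)                     # (M, N)\n        return torch.softmax(logits.view(-1), dim=0).view(M, N)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Last Layer Executor",

"sec_num": "2.3.3"

},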
|
{ |
|
"text": "NEURAL ENQUIRER can be trained in an end-toend (N2N) fashion. Given a set of N D query-tableanswer triples D = {(Q (i) , T (i) , y (i) )}, the model is optimized by maximizing the log-likelihood of goldstandard answers:", |
|
"cite_spans": [ |
|
{ |
|
"start": 131, |
|
"end": 134, |
|
"text": "(i)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L N2N (D) = N D i=1 log p(a = y (i) |Q (i) , T (i) )", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Learning", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The training can also be carried out with stronger guidance, i.e., step-by-step (SbS) supervision, by softly guiding the learning process via controlling the attention weightsw(\u2022) in Eq. (1). As an example, for Executor-1 in Figure 1 , by biasing the attention weight of the host city field towards 1.0, only the value of host city will be fetched and sent to the Annotator. In this way we can \"force\" the executor to learn the where operation to find the row whose host city is Beijing. Formally, this is done by introducing additional supervision signal to Eq. 3:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 225, |
|
"end": 233, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "L SbS (D) = N D i=1 [log p(a = y (i) |Q (i) , T (i) ) + \u03b1 L\u22121 =1 logw(f i, , \u2022, \u2022)] (4)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "where \u03b1 is a tuning weight, and L is the number of executors. f i, is the embedding of the field known a priori to be used by Executor-in answering the i-th example.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "3" |
|
}, |
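
{

"text": "The two objectives can be written compactly; the sketch below is illustrative (gold_index, gold_fields and the 1e-12 numerical guard are assumptions, while alpha = 0.2 follows Section 4):\n\nimport torch\n\ndef n2n_loss(answer_probs, gold_index):\n    # answer_probs: (M * N,) flattened output of the last executor.\n    # End-to-end objective (Eq. (3)): negative log-likelihood of the gold answer.\n    return -torch.log(answer_probs[gold_index] + 1e-12)\n\ndef sbs_loss(answer_probs, gold_index, attention_weights, gold_fields, alpha=0.2):\n    # Step-by-step objective (Eq. (4)): additionally reward each intermediate\n    # executor for attending to the field known a priori to be relevant.\n    # attention_weights: one (N,) weight vector per intermediate executor;\n    # gold_fields: the matching field indices.\n    loss = n2n_loss(answer_probs, gold_index)\n    for w, f in zip(attention_weights, gold_fields):\n        loss = loss - alpha * torch.log(w[f] + 1e-12)\n    return loss",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Learning",

"sec_num": "3"

},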
|
{ |
|
"text": "In this section we evaluate NEURAL ENQUIRER on synthetic QA tasks with NL queries of varying compositional depths.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We present a synthetic QA task with a large number of QA examples at various levels of complexity to evaluate the performance of NEURAL EN-QUIRER. Starting with \"artificial\" tasks accelerates the development of novel deep models (Weston et al., 2015) , and has gained increasing popularity in recent research on modeling symbolic computation using DNNs (Graves et al., 2014; Zaremba and Sutskever, 2014) . Our synthetic dataset consists of query-tableanswer triples {(Q (i) , T (i) , y (i) )}. To generate a triple, we first randomly sample a table T (i) of size 10\u00d710 from a synthetic schema of Olympic Games. The cell values of T (i) are drawn from a vocabulary of 120 location names and 120 numbers. Figure 4 gives an example table. Next, we sample a query Q (i) generated using NL templates, and obtain its gold-standard answer y (i) on T (i) . Our task consists of four types of NL queries, with examples given in Table 1 . We also give the logical form template for each type of query. The templates define the semantics and compositionality of queries. We generate queries at various compositional depths, ranging from simple SELECT WHERE queries to more complex NEST ones. This makes the dataset have similar complexity as a real-world one, except for the relatively small vocabulary. The queries are flexible enough to involve complex matching between NL phrases and logical constituents, which makes query understanding nontrivial: (1) the same field is described by different NL phrases (e.g., \"How big is the country ...\" and \"What is the size of the country ...\" for the country size field);", |
|
"cite_spans": [ |
|
{ |
|
"start": 229, |
|
"end": 250, |
|
"text": "(Weston et al., 2015)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 353, |
|
"end": 374, |
|
"text": "(Graves et al., 2014;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 375, |
|
"end": 403, |
|
"text": "Zaremba and Sutskever, 2014)", |
|
"ref_id": null |
|
}
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 703, |
|
"end": 711, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 919, |
|
"end": 926, |
|
"text": "Table 1", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Synthetic QA Task", |
|
"sec_num": "4.1" |
|
}, |
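
{

"text": "The generation procedure can be sketched as follows. This is illustrative only: the field list matches Figure 4, but the value vocabularies, template wording, and function names are assumptions, and the real generator uses richer NL templates and all four query types.\n\nimport random\n\nFIELDS = ['year', 'host_city', '#_participants', '#_medals', '#_duration',\n          '#_audience', 'host_country', 'GDP', 'country_size', 'population']\n\ndef sample_table(locations, numbers, m=10):\n    # One synthetic 10 x 10 table: location-valued fields draw from a small\n    # vocabulary of location names, the rest from a small vocabulary of numbers.\n    loc_fields = {'host_city', 'host_country'}\n    return [{f: random.choice(locations if f in loc_fields else numbers)\n             for f in FIELDS} for _ in range(m)]\n\ndef sample_select_where(table):\n    # A SELECT WHERE example, select F_a where F_b = w_b, with its gold answer.\n    row = random.choice(table)\n    f_a, f_b = random.sample(FIELDS, 2)\n    query = 'What is the ' + f_a + ' of the game whose ' + f_b + ' is ' + str(row[f_b]) + '?'\n    return query, row[f_a]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Synthetic QA Task",

"sec_num": "4.1"

},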
|
{ |
|
"text": "(2) different fields may be referred to by the same NL pattern (e.g, \"in China\" for host country and \"in Beijing\" for host city);", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Synthetic QA Task", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "(3) simple NL constituents may be grounded to complex logical operations (e.g., \"after the game in Beijing\" implies comparing between the values of year fields).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Synthetic QA Task", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "To simulate the read-world scenario where queries of various types are issued to the model, we construct two MIXED datasets, with 25K and 100K training examples respectively, where four types of queries are sampled with the ratio 1 : 1 : 1 : 2. Both datasets share the same testing set of 20K examples, 5K for each type of query. We enforce that no tables and queries are shared between training/testing sets. 3 are 2, 3, 3. The length of word embeddings and annotations is 20. \u03b1 is 0.2. We train the model using ADADELTA (Zeiler, 2012) on a Tesla K40 GPU. The training converges fast within 2 hours.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Synthetic QA Task", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "[Metric] We evaluate in terms of accuracy, defined as the fraction of correctly answered queries.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setup", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "[Models] We compare the results of the following settings:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setup", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "\u2022 Sempre (Pasupat and Liang, 2015) is a state-ofthe-art semantic parser and serves as the baseline;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setup", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "\u2022 N2N, NEURAL ENQUIRER model trained using end-to-end setting (Sec 4.3);", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setup", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "\u2022 SbS, NEURAL ENQUIRER model trained using step-by-step setting (Sec 4.4). Table 2 summarizes the results of SEMPRE and NEURAL ENQUIRER under different settings. We show both the individual performance for each query type and the overall accuracy. We evaluate SEM-PRE only on MIXED-25K because of its long training time even on this small dataset (about 3 days).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 75, |
|
"end": 82, |
|
"text": "Table 2", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Setup", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In this section we discuss the results under endto-end (N2N) training setting. On MIXED-25K, the relatively low performance of SEMPRE indicates that our QA task, although synthetic, is highly nontrivial. Surprisingly, our model outperforms SEMPRE on all types of queries, with a marginal gain on simple queries (SELECT WHERE, SU-PERLATIVE), and significant improvement on complex queries (WHERE SUPERLATIVE, NEST). On MIXED-100K, our model achieves a decent overall accuracy of 90.6%. These results show that in our QA task, NEURAL ENQUIRER is very effective in answering compositional NL queries, especially those with complex semantic structures compared with the state-of-the-art system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "End-to-End Evaluation", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "To further understand why our model is capable of answering compositional queries, we study the attention weightsw(\u2022) of Readers (Eq. 1) for executors in intermediate layers, and the answer probability (Eq. 2) the last executor outputs for each entry in the table. Those statistics are obtained on MIXED-100K. We sample two queries (Q 1 and Q 2 ) in the testing set that our model answers correctly and visualize their corresponding values in Figure 5 . To Q 1 : How long was the game with the most medals that had fewer than 3,000 participants?", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 443, |
|
"end": 451, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "End-to-End Evaluation", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Z 1 : where # participants < 3,000, argmax(# duration, # medals) y e a r h o s t _ c it y # _ p a r t ic ip a n t s # Executor-5", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "End-to-End Evaluation", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Q 2 : Which country hosted the longest game before the game in Athens? Z 2 : where year < (select year,where host city=Athens), argmax(host country, # duration) better understand the query execution process, we also give the logical forms (Z 1 and Z 2 ) of the two queries. Note that the logical forms are just for reference purpose and unknown by the model. We find that each executor actually learns its execution logic from just the correct answers in N2N training, which is in accordance with our assumption. The model executes Q 1 in three steps, with each of the last three executors performs a specific type of operation. For each row, Executor-3 takes the value of the # participants field as input, while Executor-4 attends to the # medals field. Finally, Executor-5 outputs a high probability for the # duration field in the 3-rd row. The attention weights for Executor-1 and Executor-2 appear to be meaningless because Q 1 requires only three steps of execution, and the model learns to defer the meaningful execution to the last three executors. Compared with the logical form Z 1 of Q 1 , we can deduce that Executor-3 \"executes\" the where clause in Z 1 to find row sets R satisfying the condition, and Executor-4 performs the first part of argmax to find the row r \u2208 R with the maximum value of # medals, while Executor-5 outputs the value of # duration in row r.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "End-to-End Evaluation", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Compared with Q 1 , Q 2 is more complicated. According to Z 2 , Q 2 involves an additional nest subquery to be solved by two extra executors, and requires a total of five steps of execution. The last three executors function similarly as in answering Q 1 , yet the execution logic for the first two executors (devoted to solving the sub-query) is a bit obscure, since their attention weights are scattered in-stead of being perfectly centered on the ideal fields as highlighted in red dashed rectangles. We posit that this is because during the end-to-end training, the supervision signal propagated from the top layer has decayed along the long path down to the first two executors, which causes vanishing gradients.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "End-to-End Evaluation", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "To alleviate the vanishing gradient problem when training on complex queries like Q 2 , we train the model using step-by-step (SbS) setting (Eq. 4), where we encourage each intermediate executor to attend to the field that is known a priori to be relevant to its execution logic. Results are shown in Table 2 (column SbS). With stronger supervision signal, the model significantly outperforms the N2N setting, and achieves perfect accuracy on MIXED-100K. This shows that NEURAL ENQUIRER is capable of leveraging the additional supervision signal given to intermediate layers in SbS training. Let us revisit the query Q 2 in SbS setting. In contrast to the result in N2N setting ( Figure 5) where the attention weights for the first two executors are obscure, now the weights are perfectly skewed towards each relevant field with a value of 1.0, which corresponds with the highlighted ideal weights.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 680, |
|
"end": 689, |
|
"text": "Figure 5)", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "With Additional Step-by-Step Supervision", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "We propose NEURAL ENQUIRER, a fully neural, end-to-end differentiable network that learns to execute compositional natural language queries on knowledge-base tables.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Neural machine translation by jointly learning to align and translate", |
|
"authors": [ |
|
{

"first": "Dzmitry",

"middle": [],

"last": "Bahdanau",

"suffix": ""

},

{

"first": "Kyunghyun",

"middle": [],

"last": "Cho",

"suffix": ""

},

{

"first": "Yoshua",

"middle": [],

"last": "Bengio",

"suffix": ""

}
|
], |
|
"year": 2015, |
|
"venue": "ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "References [Bahdanau et al.2015] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Semantic parsing on freebase from question-answer pairs", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Berant", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1533--1544", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Berant et al.2013] Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In EMNLP, pages 1533-1544.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Driving semantic parsing from the world's response", |
|
"authors": [ |
|
{ |
|
"first": "[", |
|
"middle": [], |
|
"last": "Clarke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "CoNLL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "18--27", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Clarke et al.2010] James Clarke, Dan Goldwasser, Ming- Wei Chang, and Dan Roth. 2010. Driving semantic parsing from the world's response. In CoNLL, pages 18-27.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Alex Graves, Greg Wayne, and Ivo Danihelka", |
|
"authors": [ |
|
{ |
|
"first": "[", |
|
"middle": [], |
|
"last": "Graves", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Neural turing machines. CoRR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "[Graves et al.2014] Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. CoRR, abs/1410.5401.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Learning dependency-based compositional semantics", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "ACL (1)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "590--599", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Liang et al.2011] Percy Liang, Michael I. Jordan, and Dan Klein. 2011. Learning dependency-based com- positional semantics. In ACL (1), pages 590-599. [Neelakantan et al.2015] Arvind Neelakantan, Quoc V.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Semantically conditioned lstm-based natural language generation for spoken dialogue systems", |
|
"authors": [ |
|
{ |
|
"first": "Le", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever ; Panupong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Pasupat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": ";", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Wen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Neural programmer: Inducing latent programs with gradient descent. CoRR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1711--1721", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Le, and Ilya Sutskever. 2015. Neural programmer: In- ducing latent programs with gradient descent. CoRR, abs/1511.04834. [Pasupat and Liang2015] Panupong Pasupat and Percy Liang. 2015. Compositional semantic parsing on semi-structured tables. In ACL (1), pages 1470-1480. [Wen et al.2015] Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Pei hao Su, David Vandyke, and Steve J. Young. 2015. Semantically conditioned lstm-based natural language generation for spoken dialogue sys- tems. In EMNLP, pages 1711-1721. [Weston et al.2015] Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. 2015. Towards ai- complete question answering: A set of prerequisite toy tasks. CoRR, abs/1502.05698. [Zaremba and Sutskever2014] Wojciech Zaremba and Ilya Sutskever. 2014. Learning to execute. CoRR, abs/1410.4615.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "ADADELTA: an adaptive learning rate method", |
|
"authors": [ |
|
{

"first": "Matthew",

"middle": ["D."],

"last": "Zeiler",

"suffix": ""

}
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew D. Zeiler. 2012. ADADELTA: an adaptive learning rate method. CoRR, abs/1212.5701.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Illustration of the Reader in Executor-.", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"text": "Tuning] We adopt a model with five executors. The lengths of hidden states for GRU and DNNs are 150, 50. The numbers of layers for DNN", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"text": "Weights visualization of queries Q 1 and Q 2", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"text": "table row-by-row. The Reader reads in data from each row m in the form of a read vector r m , which is then sent to the Annotator to perform the actual execution. The output of the Annotator is a row annotation a", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table><tr><td>query embedding</td><td>Memory Layer-\u2113</td></tr><tr><td>table embedding</td><td>row annotations</td></tr><tr><td>read vectors</td><td/></tr><tr><td>Reader</td><td>Annotator</td></tr><tr><td/><td>pooling</td></tr><tr><td>Memory Layer-(\u2113-1)</td><td>table annotation</td></tr><tr><td colspan=\"2\">Figure 2: Overview of an Executor-</td></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF2": { |
|
"text": "An example table in the synthetic QA task (only one row shown)1 SELECT WHERE [select Fa, where Fb = wb] 3 WHERE SUPERLATIVE [where Fa >|< wa, argmax/min(Fb, Fc)] How many people participated in the game in Beijing?How long was the game with the most medals that had fewer than 3,000 participants? In which city was the game hosted in 2012? How many medals were in the first game after 2008?", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table><tr><td>2008 Beijing</td><td>4,200</td><td>2,500</td><td>30</td><td>67,000</td><td>China</td><td>2,300</td><td>960</td><td>130</td></tr><tr><td colspan=\"4\">Figure 4: 2 SUPERLATIVE [argmax/min(Fa, Fb)] 4 NEST</td><td colspan=\"5\">[where Fa >|< (select Fa,where Fb=wb), argmax/min(Fc, Fd)]</td></tr><tr><td>When was the latest game hosted?</td><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF3": { |
|
"text": "Example queries for each query type, with annotated SQL-like logical form templates where g k", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null |
|
}, |
|
"TABREF5": { |
|
"text": "Accuracies on MIXED datasets", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |