{
"paper_id": "W16-0106",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:49:45.995511Z"
},
"title": "Neural Generative Question Answering",
"authors": [
{
"first": "Jun",
"middle": [],
"last": "Yin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Peking University",
"location": {}
},
"email": ""
},
{
"first": "Xin",
"middle": [],
"last": "Jiang",
"suffix": "",
"affiliation": {
"laboratory": "Noah's Ark Lab",
"institution": "Huawei Technologies",
"location": {}
},
"email": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": "",
"affiliation": {
"laboratory": "Noah's Ark Lab",
"institution": "Huawei Technologies",
"location": {}
},
"email": ""
},
{
"first": "Lifeng",
"middle": [],
"last": "Shang",
"suffix": "",
"affiliation": {
"laboratory": "Noah's Ark Lab",
"institution": "Huawei Technologies",
"location": {}
},
"email": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Xiaoming",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Peking University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents an end-to-end neural network model, named Neural Generative Question Answering (GENQA), that can generate answers to simple factoid questions, based on the facts in a knowledge-base. More specifically, the model is built on the encoderdecoder framework for sequence-to-sequence learning, while equipped with the ability to enquire the knowledge-base, and is trained on a corpus of question-answer pairs, with their associated triples in the knowledge-base. Empirical study shows the proposed model can effectively deal with the variations of questions and answers, and generate right and natural answers by referring to the facts in the knowledge-base. The experiment on question answering demonstrates that the proposed model can outperform an embedding-based QA model as well as a neural dialogue model trained on the same data.",
"pdf_parse": {
"paper_id": "W16-0106",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents an end-to-end neural network model, named Neural Generative Question Answering (GENQA), that can generate answers to simple factoid questions, based on the facts in a knowledge-base. More specifically, the model is built on the encoderdecoder framework for sequence-to-sequence learning, while equipped with the ability to enquire the knowledge-base, and is trained on a corpus of question-answer pairs, with their associated triples in the knowledge-base. Empirical study shows the proposed model can effectively deal with the variations of questions and answers, and generate right and natural answers by referring to the facts in the knowledge-base. The experiment on question answering demonstrates that the proposed model can outperform an embedding-based QA model as well as a neural dialogue model trained on the same data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Question answering (QA) can be viewed as a special case of single-turn dialogue: QA aims at providing correct answers to the questions in natural language, while dialogue emphasizes on generating relevant and fluent responses to the messages also in natural language (Shang et al., 2015; Vinyals and Le, 2015) . Recent progress in deep learning has raised the possibility of realizing generation-based QA in a purely neutralized way. That is, the answer is generated by a neural network (e.g., recurrent neural network, or RNN) based on the question, which is able to handle the flexibility and diversity of language. More importantly, the model is trained in an end-to-end fashion, and thus there is no need in building the system using linguistic knowledge, e.g., creating a semantic parser.",
"cite_spans": [
{
"start": 267,
"end": 287,
"text": "(Shang et al., 2015;",
"ref_id": null
},
{
"start": 288,
"end": 309,
"text": "Vinyals and Le, 2015)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There is however one serious limitation of this generation-based approach to QA. It is practically impossible to store all the knowledge in a neural network to achieve a desired precision and coverage in real world QA. This is a fundamental difficulty, rooting deeply in the way in which knowledge is acquired, represented and stored. The neural network, and more generally the fully distributed way of representation, is good at representing smooth and shared patterns, i.e., modeling the flexibility and diversity of language, but improper for representing discrete and isolated concepts, i.e., depicting the lexicon of language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "On the other hand, the recent success of memorybased neural network models has greatly extended the ways of storing and accessing text information, in both short-term memory (e.g., in (Bahdanau et al., 2015) ) and long-term memory (e.g., in (Weston et al., 2015)). It is hence a natural choice to connect a neural model for QA with a neural model of knowledge-base on an external memory, which is also related to the traditional approach of templatebased QA from knowledge-base.",
"cite_spans": [
{
"start": 184,
"end": 207,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we report our exploration in this direction, with a proposed model called Neural Generative Question Answering (GENQA).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Learning Task: We formalize generative question answering (GENQA) as a supervised learning task or more specifically a sequence-to-sequence learning task. A GENQA system takes a sequence of words as input question and generates another sequence of words as answer. In order to provide right answers, the system is connected with a knowledge-base (K-B), which contains facts. During the process of answering, the system queries the KB, retrieves a set of candidate facts and generates a correct answer to the question using the right fact. The generated answer may contain two types of \"words\": one is common words for composing the answer (referred to as common word) and the other is specialized words in the KB denoting the answer (referred to as KB-word).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To learn a GENQA model, we assume that each training instance consists of a question-answer pair with the KB-word specified in the answer. In this paper, we only consider the case of simple factoid question, which means each question-answer pair is associated with a single fact (i.e., one triple) of the KB. Without loss of generality, we mainly focus on forward relation QA, where the question is on subject and predicate and the answer points to object. Tables 1 shows some examples of the training instances. Dataset: To facilitate research on the task of generative QA, we create a new dataset by collecting data from the web. We first build a knowledge-base by mining from three Chinese encyclopedia web sites 1 . Specifically we extract entities and associated triples (subject, predicate, object) from the structured parts (e.g. HTML tables) of the web pages. Then the extracted data is normalized and aggregated to form a knowledge-base. In this paper we sometimes refer to the items of a triple as a constituent of knowledgebase. Second, we collect question-answer pairs by extracting from two Chinese community QA sites 2 . Table 2 shows the statistics of the knowledge-base and QA-pairs. We construct the training and test data for GEN-QA by \"grounding\" the QA pairs with the triples in knowledge-base. Specifically, for each QA pair, a list of candidate triples with the subject fields appearing in the question, is retrieved by using the Aho-Corasick string searching algorithm. The triples in the candidate list are then judged by a series of rules for relevance to the QA pair. The basic requirement for relevance is that the answer contains the object of the triple, which specifies the KB-word in the answer. Besides, we use additional scoring and filtering rules, attempting to find out the triple that truly matches the QA pair, if there is any. As the result of processing, 720K instances (tuples of question, answer, triple) are finally obtained with an estimated 80% of instances being truly positive. The data are publicly available online 3 .",
"cite_spans": [
{
"start": 2064,
"end": 2065,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1135,
"end": 1142,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
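The grounding procedure above can be sketched in a few lines. The snippet below is a simplified, hypothetical stand-in: it replaces the Aho-Corasick automaton with a plain substring scan and keeps only the basic relevance rule (the answer must contain the triple's object); the toy knowledge-base and the function name are illustrative, not from the released data or code.

```python
# Simplified grounding sketch: retrieve candidate triples whose subject appears
# in the question, then keep those whose object appears in the answer.
# A real implementation would use Aho-Corasick matching over millions of
# subjects; this plain substring scan is only illustrative.

def ground_qa_pair(question, answer, triples):
    """Return (question, answer, triple) tuples judged relevant to the QA pair."""
    candidates = [t for t in triples if t[0] and t[0] in question]   # subject appears in question
    relevant = [t for t in candidates if t[2] and t[2] in answer]    # answer contains the object
    return [(question, answer, t) for t in relevant]

if __name__ == "__main__":
    kb = [
        ("Yao Ming", "height", "2.29m"),
        ("Yao Ming", "team", "Houston Rockets"),
    ]
    instances = ground_qa_pair("How tall is Yao Ming?", "He is 2.29m.", kb)
    print(instances)  # [('How tall is Yao Ming?', 'He is 2.29m.', ('Yao Ming', 'height', '2.29m'))]
```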
{
"text": "In order to test the generalization ability of the GENQA model, the data is randomly partitioned into training dataset and test dataset by using triple as the partition key. In that way, all the questions in the test data are regarding to the unseen facts (triples) in the training data. Table 3 shows some statistics of the datasets. By comparing the numbers of triples in Table 2 and Table 3 , we can see that a large portion of facts in the knowledge-base are not present in the training and test data, which demonstrates the necessity for the model to generalize to unseen facts. ",
"cite_spans": [],
"ref_spans": [
{
"start": 288,
"end": 295,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 374,
"end": 393,
"text": "Table 2 and Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Let Q = (x 1 , . . . , x T Q )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "and Y = (y 1 , . . . , y T Y ) denote the natural language question and answer respectively. The knowledge-base is organized as a set of triples (subject, predicate, object), each denoted as \u03c4 = (\u03c4 s , \u03c4 p , \u03c4 o ). Inspired by the work on the encoder-decoder framework for neural machine translation (Cho et al., 2014; Sutskever et al., 2014; and neural natural language dialogue (Shang et al., 2015; Vinyals and Le, 2015; Serban et al., 2015) , and the work on question answering with knowledge-base embedding (Bordes et al., 2014b; Bordes et al., 2014a; Bordes et al., 2015) , we propose an end-to-end neural network model for GENQA, which is illustrated in Figure 1 . The GENQA model consists of Interpreter, Enquirer, Answerer, and an external knowledge-base. Answerer further consists of Attention Model and Generator. Basically, Interpreter transforms the natural language question Q into a representation H Q and saves it in the short-term memory. Enquirer takes H Q as input to interact with the knowledgebase in the long-term memory, retrieves relevant facts (triples) from the knowledge-base, and summarizes the result in a vector r Q . The Answerer feeds on the question representation H Q (through the Attention Model) as well as the vector r Q and generates the answer with Generator. We elaborate each component hereafter.",
"cite_spans": [
{
"start": 300,
"end": 318,
"text": "(Cho et al., 2014;",
"ref_id": "BIBREF2"
},
{
"start": 319,
"end": 342,
"text": "Sutskever et al., 2014;",
"ref_id": null
},
{
"start": 380,
"end": 400,
"text": "(Shang et al., 2015;",
"ref_id": null
},
{
"start": 401,
"end": 422,
"text": "Vinyals and Le, 2015;",
"ref_id": null
},
{
"start": 423,
"end": 443,
"text": "Serban et al., 2015)",
"ref_id": "BIBREF3"
},
{
"start": 511,
"end": 533,
"text": "(Bordes et al., 2014b;",
"ref_id": "BIBREF1"
},
{
"start": 534,
"end": 555,
"text": "Bordes et al., 2014a;",
"ref_id": "BIBREF1"
},
{
"start": 556,
"end": 576,
"text": "Bordes et al., 2015)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 660,
"end": 668,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Interpreter: Given the question represented as word sequence Q = (x 1 , . . . , x T Q ), Interpreter encodes it to the array of vector representations. In our implementation, we adopt a bi-directional RN-N as in (Bahdanau et al., 2015), which processes the sequence in forward and reverse order by using two independent RNNs (here we use gated recurrent unit (GRU) (Chung et al., 2014)). By concatenating the hidden states (denoted as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(h 1 , \u2022 \u2022 \u2022 , h T Q )), the embeddings of the words (de- noted as (x 1 , \u2022 \u2022 \u2022 , x T Q ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": ", and the original one-hot representations of the words, we obtain an array of vectors",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "H Q = (h 1 , \u2022 \u2022 \u2022 ,h T Q ), whereh t = [h t ; x t ; x t ].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This array of vectors is saved in the short-term memory, allowing for further processing by Enquirer and Answerer for different purposes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
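As a rough illustration of the Interpreter described above, the PyTorch sketch below runs a bidirectional GRU over the question and concatenates, at each position, the hidden state, the word embedding, and the one-hot vector. The vocabulary size and dimensions are assumptions for the example, not the paper's settings.

```python
# Minimal sketch of the Interpreter: a bidirectional GRU over the question,
# with each position represented as [hidden state; word embedding; one-hot].
import torch
import torch.nn as nn
import torch.nn.functional as F

class Interpreter(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=32, hidden_dim=64):
        super().__init__()
        self.vocab_size = vocab_size
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.birnn = nn.GRU(emb_dim, hidden_dim, batch_first=True, bidirectional=True)

    def forward(self, token_ids):                 # token_ids: (batch, T_Q)
        emb = self.embedding(token_ids)           # (batch, T_Q, emb_dim)
        h, _ = self.birnn(emb)                    # (batch, T_Q, 2*hidden_dim)
        one_hot = F.one_hot(token_ids, self.vocab_size).float()
        return torch.cat([h, emb, one_hot], dim=-1)   # H_Q, kept in short-term memory

interpreter = Interpreter()
H_Q = interpreter(torch.randint(0, 1000, (1, 6)))     # toy question of 6 tokens
print(H_Q.shape)                                      # torch.Size([1, 6, 1160])
```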
{
"text": "Enquirer: Enquirer \"fetches\" the relevant facts from the knowledge-base with Q and H Q (as illustrated by Figure 2 ). Enquirer first performs termlevel matching to retrieve a list of relevant candidate triples, denoted as",
"cite_spans": [],
"ref_spans": [
{
"start": 106,
"end": 114,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "T Q = {\u03c4 k } K Q k=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": ". K Q is the number of candidate triples, which is usually less than several hundreds in our data. This first round filtering, although fairly simple, is important in making the following step of differentiable operations (e.g., the weighting on the candidate set and the answer generation) and optimization feasible. After obtaining T Q , the task reduces to evaluating the relevance of each candidate triple with the question in the embedded space (Bordes et al., 2014b; Bordes et al., 2014a) .",
"cite_spans": [
{
"start": 450,
"end": 472,
"text": "(Bordes et al., 2014b;",
"ref_id": "BIBREF1"
},
{
"start": 473,
"end": 494,
"text": "Bordes et al., 2014a)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "More specifically Enquirer calculates the matching scores between the question and the K Q triples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For question Q, the scores are represented in a K Qdimensional vector r Q where the k th element of r Q is defined as the probability",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "r Q k = e S(Q,\u03c4 k ) K Q k =1 e S(Q,\u03c4 k )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": ",",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "where S(Q, \u03c4 k ) denotes the matching score between question Q and triple \u03c4 k . The probability in r Q will be further taken into the probabilistic model in Answerer for generating a particular answering sentence. Since r Q is of modest size, after the filtering step, and differentiable with respect to its parameters, it can be effectively optimized by the supervision signal in recovering the original answers through back-propagation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
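A minimal sketch of this normalization step, assuming the matching scores S(Q, τ_k) have already been computed for the K_Q candidate triples; the scores below are made-up numbers for illustration.

```python
# Enquirer turns matching scores over the K_Q candidate triples into the
# probability vector r_Q via a softmax, as in the equation above.
import numpy as np

def candidate_probabilities(scores):
    """Softmax over matching scores: r_Q[k] = exp(S_k) / sum_k' exp(S_k')."""
    scores = np.asarray(scores, dtype=float)
    exp = np.exp(scores - scores.max())   # shift for numerical stability
    return exp / exp.sum()

r_Q = candidate_probabilities([2.1, 0.3, -1.0])   # three candidate triples
print(r_Q, r_Q.sum())                             # probabilities summing to 1
```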
{
"text": "In this work, we provide two implementations for Enquirer to calculate the matching scores between question and triples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Bilinear Model: The first implementation simply takes the average of the word embedding vectors in H Q as the representation of the question (with the result denoted asx Q ) . For each triple \u03c4 in the knowledge-base, it takes the mean of the embeddings of its subject and predicate as the representation of the triple (denoted as u \u03c4 ). Then we define the matching score as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "S(Q, \u03c4 ) =x Q Mu \u03c4 ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "where M is the matrix parameterizing the matching between the question and the triple.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
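The bilinear scorer can be sketched directly from these definitions. The snippet below uses random vectors in place of learned embeddings and an assumed embedding dimension; it only illustrates the computation x̄_Q^T M u_τ.

```python
# Bilinear matching sketch: average question word embeddings, mean of subject
# and predicate embeddings for the triple, scored through a learned matrix M.
import numpy as np

rng = np.random.default_rng(0)
emb_dim = 8
M = rng.normal(size=(emb_dim, emb_dim))              # learned matching matrix (random here)

question_word_embs = rng.normal(size=(5, emb_dim))   # embeddings of 5 question words
subject_emb = rng.normal(size=emb_dim)
predicate_emb = rng.normal(size=emb_dim)

x_Q = question_word_embs.mean(axis=0)                # question representation x̄_Q
u_tau = (subject_emb + predicate_emb) / 2.0          # triple representation u_τ
score = x_Q @ M @ u_tau                              # bilinear score S(Q, τ)
print(float(score))
```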
{
"text": "The second implementation employs the convolutional neural network (CNN) for modeling the matching score between question and triple, as in (Hu et al., 2014) and (Shen et al., 2014) . Specifically, the question is fed to a convolutional layer followed by a maxpooling layer, and summarized as a fixed-length vector, denoted as\u0125 Q . Then\u0125 Q and u \u03c4 (again as the mean of the embedding of the corresponding subject and predicate) are concatenated as input to a multilayer perceptron (MLP) to produce their matching score\u015c",
"cite_spans": [
{
"start": 140,
"end": 157,
"text": "(Hu et al., 2014)",
"ref_id": null
},
{
"start": 162,
"end": 181,
"text": "(Shen et al., 2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CNN-based Matching Model:",
"sec_num": null
},
{
"text": "(Q, \u03c4 ) = f MLP ([\u0125 Q ; u \u03c4 ]).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CNN-based Matching Model:",
"sec_num": null
},
{
"text": "For this model the parameters consist of both the C-NN for question representation and the MLP for the final matching decision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CNN-based Matching Model:",
"sec_num": null
},
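A hedged PyTorch sketch of this CNN-based matcher is given below: a single convolution plus max-pooling summarizes the question, and an MLP scores the concatenation with u_τ. Filter counts, kernel size, and activations are assumptions, since the paper does not specify them here.

```python
# CNN-based matching sketch: convolution + max-pooling over question
# embeddings, then an MLP over [h_Q; u_tau] produces the matching score.
import torch
import torch.nn as nn

class CNNMatcher(nn.Module):
    def __init__(self, emb_dim=32, num_filters=64, hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(emb_dim, num_filters, kernel_size=3, padding=1)
        self.mlp = nn.Sequential(
            nn.Linear(num_filters + emb_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, question_embs, u_tau):
        # question_embs: (batch, T_Q, emb_dim); u_tau: (batch, emb_dim)
        h = torch.relu(self.conv(question_embs.transpose(1, 2)))      # (batch, filters, T_Q)
        h_Q = h.max(dim=2).values                                     # max-pooling over positions
        return self.mlp(torch.cat([h_Q, u_tau], dim=-1)).squeeze(-1)  # scalar score per pair

matcher = CNNMatcher()
score = matcher(torch.randn(1, 7, 32), torch.randn(1, 32))
print(score.shape)   # torch.Size([1])
```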
{
"text": "Answerer: Answerer uses an RNN to generate the answer sentence based on the information of question saved in the short-term memory (represented by H Q ) and the relevant knowledge retrieved from the long-term memory (indexed by r Q ), as illustrated in Figure 3 . The probability of generating the answer sentence Y = (y 1 , y 2 , . . . , y T Y ) is defined as",
"cite_spans": [],
"ref_spans": [
{
"start": 253,
"end": 261,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "CNN-based Matching Model:",
"sec_num": null
},
{
"text": "p(y 1 , \u2022 \u2022 \u2022 , y T Y |H Q , r Q ; \u03b8) = p(y 1 |H Q , r Q ; \u03b8) T Y t=2 p(y t |y 1 , . . . , y t\u22121 , H Q , r Q ; \u03b8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CNN-based Matching Model:",
"sec_num": null
},
{
"text": "where \u03b8 represents the parameters in the GEN-QA model. The conditional probability in the RNN model (with hidden state",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CNN-based Matching Model:",
"sec_num": null
},
{
"text": "s 1 , \u2022 \u2022 \u2022 , s T Y ) is specified by p(y t |y 1 , . . . , y t\u22121 , H Q , r Q ; \u03b8) = p(y t |y t\u22121 , s t , H Q , r Q ; \u03b8).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CNN-based Matching Model:",
"sec_num": null
},
{
"text": "In generating the t th word y t in the answer sentence, the probability is given by the following mixture model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CNN-based Matching Model:",
"sec_num": null
},
{
"text": "p(y t |y t\u22121 , s t , H Q , r Q ; \u03b8) = p(z t = 0|s t ; \u03b8)p(y t |y t\u22121 , s t , H Q , z t = 0; \u03b8)+ p(z t = 1|s t ; \u03b8)p(y t |r Q , z t = 1; \u03b8),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CNN-based Matching Model:",
"sec_num": null
},
{
"text": "which sums the contributions from the \"language\" part and the \"knowledge\" part, with the coefficient p(z t |s t ; \u03b8) being realized by a logistic regression model with s t as input. Here the latent variable z t indicates whether the t th word is generated from a common vocabulary (for z t = 0) or a KB vocabulary (z t = 1). In this work, the KB vocabulary contains all the objects of the candidate triples associated with the particular question. For any word y that is only in the KB vocabulary, e.g., \"2.29m\", we have p(y t |y t\u22121 , s t , H Q , z t = 0; \u03b8) = 0, while for y that does not appear in KB, e.g., \"and\", we have p(y t |r Q , z t = 1; \u03b8) = 0. There are some words (e.g., \"Shanghai\") that appear in both common vocabulary and KB vocabulary, for which the probability contains nontrivial contributions of both bodies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CNN-based Matching Model:",
"sec_num": null
},
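The mixture can be illustrated with a tiny numeric example. The sketch below combines an assumed gate value p(z_t = 1 | s_t) with toy common-vocabulary probabilities and the Enquirer's r_Q over candidate objects; all numbers and words are made up.

```python
# One generation step of the mixture: a gate weights the common-vocabulary
# distribution against the KB distribution r_Q over candidate objects.

def mixture_step(p_gate_kb, p_common, kb_objects, r_Q):
    """Return a dict of word -> probability for one generation step."""
    probs = {w: (1.0 - p_gate_kb) * p for w, p in p_common.items()}
    for obj, p in zip(kb_objects, r_Q):
        probs[obj] = probs.get(obj, 0.0) + p_gate_kb * p   # words in both vocabularies accumulate
    return probs

p_common = {"he": 0.5, "is": 0.3, "tall": 0.2}          # common-vocabulary distribution (toy)
kb_objects = ["2.29m", "Houston Rockets"]               # objects of the candidate triples
r_Q = [0.9, 0.1]                                        # Enquirer's probabilities (toy)
step = mixture_step(p_gate_kb=0.7, p_common=p_common, kb_objects=kb_objects, r_Q=r_Q)
print(step, sum(step.values()))                         # mixture still sums to 1
```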
{
"text": "In generating common words, Answerer acts in the same way as the decoder RNN in with information from H Q selected by the attention model. Specifically, the hidden state at t step is computed as where c t is the context vector computed as weighted sum of the hidden states stored in the short-term memory H Q .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CNN-based Matching Model:",
"sec_num": null
},
{
"text": "s t = f s (y t\u22121 , s t\u22121 , c t ) and p(y t |y t\u22121 , s t , H Q , z t = 0; \u03b8) = f y (y t\u22121 , s t , c t ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CNN-based Matching Model:",
"sec_num": null
},
{
"text": "In generating KB-words via p(y t |r Q , z t = 1; \u03b8), Answerer simply employs the model p(y t = k|r Q , z t = 1; \u03b8) = r Q k . The better a triple matched with the question, the more likely the object of the triple is selected. Training: The parameters to be learned include the weights in the RNNs for Interpreter and Answerer, parameters in Enquirer, and the wordembeddings which are shared by the Interpreter RN-N and the knowledge-base. GENQA, although essentially containing a retrieval operation, can be trained in an end-to-end fashion by maximizing the likelihood of observed data, since the mixture form of probability in Answerer provides a unified way to generate words from common vocabulary and (dynamic) KB vocabulary. In practice the model is trained on machines with GPUs by using stochastic gradient-descent with mini-batch.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CNN-based Matching Model:",
"sec_num": null
},
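To make the end-to-end training signal concrete, the sketch below computes the negative log-likelihood of an observed answer under per-step mixture distributions; in the real model these distributions come from Answerer, and gradients would flow back through the gate, the decoder, and r_Q into Enquirer. The toy distributions here are illustrative only.

```python
# Negative log-likelihood of an answer under per-step word distributions,
# i.e., the quantity whose sum over the training set is maximized (negated).
import math

def answer_nll(step_distributions, answer_words):
    """Negative log-likelihood of an answer under per-step word distributions."""
    nll = 0.0
    for dist, word in zip(step_distributions, answer_words):
        nll -= math.log(dist.get(word, 1e-12))   # tiny floor avoids log(0)
    return nll

# Toy example: a two-word answer "is 2.29m" scored by two mixture steps.
steps = [
    {"is": 0.6, "he": 0.4},
    {"2.29m": 0.8, "tall": 0.2},
]
print(answer_nll(steps, ["is", "2.29m"]))
```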
{
"text": "To our best knowledge there is no previous work on generative QA, we choose three baseline methods: a neural dialogue model, a retrieval-based QA model and the embedding based QA model, respectively corresponding to the generative aspect and the KB-retrieval aspect of GENQA:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison Models",
"sec_num": "3.1"
},
{
"text": "Neural Responding Machine (NRM): NR-M (Shang et al., 2015) is a neural network based generative model specially designed for short-text conversation. We train the NRM model on the question-answer pairs in the training data with the same vocabulary as the vocabulary of GENQA. Since NRM does not access the knowledge-base during training and test, it actually remembers all the knowledge from the QA pairs in the weights of the model. Retrieval-based QA: the knowledge-base is indexed by an information retrieval system (we use Apache Solr), in which each triple is deemed as a document. At test phase, a question is used as the query and the top-retrieved triple is returned as the answer. Note that in general this method cannot generate natural language answers. Embedding-based QA: as proposed by (Bordes et al., 2014a; Bordes et al., 2014b) , the model is learnt from the question-triple pairs in the training data. The model learns to map questions and knowledgebase constituents into the same embedding space, where the similarities between questions and triples are computed as the inner product of the two embedding vectors.",
"cite_spans": [
{
"start": 800,
"end": 822,
"text": "(Bordes et al., 2014a;",
"ref_id": "BIBREF1"
},
{
"start": 823,
"end": 844,
"text": "Bordes et al., 2014b)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison Models",
"sec_num": "3.1"
},
{
"text": "Since we have two implementations of matching score in Enquirer of the GENQA model, we denote the one using the bilinear model as GENQA and the other using CNN and MLP as GENQA CNN .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison Models",
"sec_num": "3.1"
},
{
"text": "We evaluate the performance of the models in terms of 1) accuracy, i.e., the ratio of correctly answered questions, and 2) the fluency of answers. In order to ensure an accurate evaluation, we randomly select 300 questions from the test set, and manually remove the nearly duplicate cases and filter out the mistaken cases (e.g., non-factoid questions).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.2"
},
{
"text": "Accuracy: Table 4 shows the accuracies of the models in the test set. NRM has the lowest accuracy, showing the lack of ability to remember the answers accurately and generalize to questions unseen in the training data. For example, to question \"Which country does Xavi play for as a midfielder?\" (Translated from Chinese), NRM gives the wrong answer \"He plays for France\" (Translated from Chinese), since the ",
"cite_spans": [],
"ref_spans": [
{
"start": 10,
"end": 17,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.2"
},
{
"text": "athlete actually plays for Spain. The retrieval-based method achieves a moderate accuracy, but like most string-matching methods it suffers from word mismatch between the question and the triples in K-B. The embedding-based QA model achieves higher accuracy on test set, thanks to its generalization ability from distributed representations. GENQA and GENQA CNN are both better than the competitors, showing that GENQA can further benefit from the end-to-end training for sequence-to-sequence learning. For example, as we conjecture, the task of generating the appropriate answer may help the learning of word-embeddings of the question. Among the two GENQA variants, GENQA CNN achieves the best accuracy, getting over half of the questions right. An explanation for that is that the convolution layer helps to capture salient features in matching. The experiment results demonstrate the ability of GEN-QA models to find the right answer from KB even with regard to new facts. For example, to the example question mentioned above, GENQA gives the correct answer \"He plays for Spain\". Fluency: We make some empirical comparisons and find no significant differences between NRM and GENQA in terms of the fluency of answers. In general, all the three models based on sequence generation yield correct patterns in most of the time. Figure 4 gives some examples of generated answers to the questions in the test set by our GENQA models, with the underlined words generated from K-B. Clearly it can smoothly blend the KB-words and common words in the sentence, thanks to the unified neural model that can learn to determine the right time to place a KB-word or a common word. We notice that most of the generated answers are short sentences, for which there are two possible reasons: 1) many answers to the factoid questions on the Community QA sites are usually short, and 2) we select the answers by beam-searching the sequence with maximum log-likelihood normalized by its length, which generally prefers short answers. Examples 1 to 4 show the correctly generated answers, where the model not only matches the right triples (and thus generate the right KB-words), but also generates suitable common words surrounding them. However, in some cases like examples 5 and 6 even the right triples are found, the surrounding common words are improper or incorrect from the knowledge-base point of view (e.g., in example 6 the author \"Jonathan Swift\" is from Ireland rather than France). By investigating the correctly generated answers on test data, we find roughly 8% of them having improper surrounding words. In some other cases, the model fails to match the correct triples with the questions, which produces completely wrong answers. For the instance in example 7, the question is about the release date of a movie, while the model finds its distributor and generates an answer incorrect both in terms of fact and language.",
"cite_spans": [],
"ref_spans": [
{
"start": 1328,
"end": 1336,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "52%",
"sec_num": null
},
{
"text": "https://github.com/jxfeb/Generative_QA",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "[",
"middle": [],
"last": "References",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bahdanau",
"suffix": ""
}
],
"year": 2015,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "References [Bahdanau et al.2015] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Open question answering with weakly supervised embedding models",
"authors": [
{
"first": "",
"middle": [],
"last": "Bordes",
"suffix": ""
}
],
"year": 2014,
"venue": "ECML PKDD",
"volume": "",
"issue": "",
"pages": "165--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Bordes et al.2014a] Antoine Bordes, Jason Weston, and Sumit Chopra. 2014a. Question answering with sub- graph embeddings. EMNLP. [Bordes et al.2014b] Antoine Bordes, Jason Weston, and Nicolas Usunier. 2014b. Open question answering with weakly supervised embedding models. In ECML PKDD, pages 165-180.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation",
"authors": [
{
"first": "",
"middle": [],
"last": "Bordes",
"suffix": ""
}
],
"year": 2014,
"venue": "Large-scale simple question answering with memory networks",
"volume": "",
"issue": "",
"pages": "2042--2050",
"other_ids": {
"arXiv": [
"arXiv:1506.02075"
]
},
"num": null,
"urls": [],
"raw_text": "[Bordes et al.2015] Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075. [Cho et al.2014] Kyunghyun Cho, Bart Van Merri\u00ebnboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. EMNLP. [Chung et al.2014] Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empir- ical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555. [Hu et al.2014] Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network architectures for matching natural language sentences. In Advances in Neural Information Processing System- s, pages 2042-2050.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Building end-to-end dialogue systems using generative hierarchical neural network models",
"authors": [
{
"first": "",
"middle": [],
"last": "Serban",
"suffix": ""
}
],
"year": 2015,
"venue": "Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for shorttext conversation",
"volume": "",
"issue": "",
"pages": "1577--1586",
"other_ids": {
"arXiv": [
"arXiv:1507.04808"
]
},
"num": null,
"urls": [],
"raw_text": "[Serban et al.2015] Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2015. Building end-to-end dialogue systems using generative hierarchical neural network models. arX- iv preprint arXiv:1507.04808. [Shang et al.2015] Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short- text conversation. In Association for Computational Linguistics (ACL), pages 1577-1586.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning semantic representations using convolutional neural networks for web search",
"authors": [
{
"first": "[",
"middle": [],
"last": "Shen",
"suffix": ""
}
],
"year": 2014,
"venue": "Sumit Chopra, and Antoine Bordes. 2015. Memory networks. In International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "[Shen et al.2014] Yelong Shen, Xiaodong He, Jianfeng Gao, Li Deng, and Gr\u00e9goire Mesnil. 2014. Learn- ing semantic representations using convolutional neu- ral networks for web search. In Proceedings of the companion publication of the 23rd international con- ference on World wide web companion, pages 373- 374. International World Wide Web Conferences S- teering Committee. [Sutskever et al.2014] Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. 2014. Sequence to sequence learning with neural networks. In NIPS, pages 3104-3112. [Vinyals and Le2015] Oriol Vinyals and Quoc Le. 2015. A neural conversational model. arXiv preprint arX- iv:1506.05869. [Weston et al.2015] Jason Weston, Sumit Chopra, and Antoine Bordes. 2015. Memory networks. In In- ternational Conference on Learning Representations (ICLR).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "The diagram for GENQA.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "The Enquirer of GENQA.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF2": {
"text": "The Answerer of GENQA.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF3": {
"text": "Examples of the generated answers by GENQA.",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF0": {
"type_str": "table",
"html": null,
"text": "Examples of training instances for GENQA. The KB-words in the training instances are underlined in the examples. How tall is Yao Ming? A: He is 2.29m and is visible from space.(Yao Ming, height, 2.29m) Q: Which country was Beethoven from? A: He was born in what is now Germany.",
"num": null,
"content": "<table><tr><td>Question &amp; Answer</td><td>Triple (subject, predicate, object)</td></tr><tr><td colspan=\"2\">Q: (Ludwig van Beethoven, place</td></tr><tr><td/><td>of birth, Germany)</td></tr><tr><td>Q: Which club does Messi play for?</td><td>(Lionel Messi, team, FC</td></tr><tr><td>A: Lionel Messi currently plays for FC Barcelona in the Spanish</td><td>Barcelon)</td></tr><tr><td>Primera Liga.</td><td/></tr></table>"
},
"TABREF2": {
"type_str": "table",
"html": null,
"text": "Statistics of the QA data and the knowledge-base.",
"num": null,
"content": "<table><tr><td>Community QA</td><td colspan=\"2\">Knowledge-base</td></tr><tr><td>#QA pairs</td><td>#entities</td><td>#triples</td></tr><tr><td>235,171,463</td><td colspan=\"2\">8,935,028 11,020,656</td></tr></table>"
},
"TABREF3": {
"type_str": "table",
"html": null,
"text": "Statistics of the training and test dataset for GENQA",
"num": null,
"content": "<table><tr><td colspan=\"2\">Training Data</td><td colspan=\"2\">Test Data</td></tr><tr><td colspan=\"4\">#QA pairs #triples #QA pairs #triples</td></tr><tr><td>696,306</td><td>58,019</td><td>23,364</td><td>1,974</td></tr><tr><td colspan=\"2\">2 The Neural Model</td><td/><td/></tr></table>"
},
"TABREF4": {
"type_str": "table",
"html": null,
"text": "Test accuracies",
"num": null,
"content": "<table><tr><td>Models</td><td>Test</td></tr><tr><td>Retrieval-based QA</td><td>36%</td></tr><tr><td>NRM</td><td>19%</td></tr><tr><td colspan=\"2\">Embedding-based QA 45%</td></tr><tr><td>GENQA</td><td>47%</td></tr><tr><td>GENQA CNN</td><td/></tr></table>"
}
}
}
}