|
{ |
|
"paper_id": "E17-1036", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T10:53:06.193394Z" |
|
}, |
|
"title": "Generating Natural Language Question-Answer Pairs from a Knowledge Graph Using a RNN Based Question Generation Model", |
|
"authors": [ |
|
{ |
|
"first": "Sathish", |
|
"middle": [], |
|
"last": "Indurthi", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Dinesh", |
|
"middle": [], |
|
"last": "Raghu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "IBM Research", |
|
"location": { |
|
"country": "India" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Mitesh", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Khapra", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Indian Institute of Technology Madras", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Sachindra", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "IBM Research", |
|
"location": { |
|
"country": "India" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "", |
|
"pdf_parse": { |
|
"paper_id": "E17-1036", |
|
"_pdf_hash": "", |
|
"abstract": [], |
|
"body_text": [ |
|
{ |
|
"text": "In recent years, knowledge graphs such as Freebase that capture facts about entities and relationships between them have been used actively for answering factoid questions. In this paper, we explore the problem of automatically generating question answer pairs from a given knowledge graph. The generated question answer (QA) pairs can be used in several downstream applications. For example, they could be used for training better QA systems. To generate such QA pairs, we first extract a set of keywords from entities and relationships expressed in a triple stored in the knowledge graph. From each such set, we use a subset of keywords to generate a natural language question that has a unique answer. We treat this subset of keywords as a sequence and propose a sequence to sequence model using RNN to generate a natural language question from it. Our RNN based model generates QA pairs with an accuracy of 33.61 percent and performs 110.47 percent (relative) better than a state-of-the-art template based method for generating natural language question from keywords. We also do an extrinsic evaluation by using the generated QA pairs to train a QA system and observe that the F1-score of the QA system improves by 5.5 percent (relative) when using automatically generated QA pairs in addition to manually generated QA pairs available for training.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Knowledge graphs store information about millions of things (or entities) and relationships between them. Freebase 1 is one such knowledge graph that describes and organizes more than 3 billion facts in a consistent ontology. Knowledge graphs usually capture relationships between different things that can be viewed as triples (for example, CEO(Sundar Pichai, Google)). Such triples are often referred to as facts and can be used for answering factoid questions. For example, the above triple can be used to answer the question \"Who is the CEO of Google ?\". It is not surprising that knowledge graphs are increasingly used for building Question Answering systems (Ferrucci, 2012; Yahya et al., 2013; Zou et al., 2014) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 664, |
|
"end": 680, |
|
"text": "(Ferrucci, 2012;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 681, |
|
"end": 700, |
|
"text": "Yahya et al., 2013;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 701, |
|
"end": 718, |
|
"text": "Zou et al., 2014)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we focus on exploiting knowledge graphs for a related but different purpose. We propose that such triples or facts can be used for automatically generating Question Answer (QA) pairs. The generated QA pairs can then be used in certain downstream applications. For example, if some domain-specific knowledge graphs are available (such as History, Geography) then such QA pairs generated from them can be used for developing quiz systems for educational purposes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We now formally define the problem and then illustrate it with the help of an example. Consider a triple consisting of a subject, predicate and object. Typically, the predicate has a domain (subject type) and a range (object type) associated with it. The predicate may have zero or more parents in the knowledge graph. For the sake of simplicity let us assume that the predicate has a single parent. We define a set consisting of the subject, predicate, object, domain, range and predicate parent. We propose an approach to generate natural language factoid questions using a subset of this set such that the answer to the question also lies in the set. Given the set of keywords, as shown in Table 1 , we could generate the following QA pairs (keywords are italicized):", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 693, |
|
"end": 700, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Q: What is the designation of Sundar Pichai at Google? A: CEO Q: Which organization is Sundar Pichai the CEO of? A: Google", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The above problem is similar to the problem of generating questions from Web queries (instead of entities and relations) which was first suggested by Lin (Lin, 2008) . However, unlike existing works on query-to-questions which mainly rely on template based approaches, we formulate this as a sequence to sequence generation problem wherein the ordered set of keywords is an input sequence and the natural language question is the output sequence. We use a Recurrent Neural Network (RNN) (Werbos, 1990; Rumelhart et al., 1988) based model with Long Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) units to generate questions from the given set of keywords.", |
|
"cite_spans": [ |
|
{ |
|
"start": 154, |
|
"end": 165, |
|
"text": "(Lin, 2008)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 487, |
|
"end": 501, |
|
"text": "(Werbos, 1990;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 502, |
|
"end": 525, |
|
"text": "Rumelhart et al., 1988)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 573, |
|
"end": 607, |
|
"text": "(Hochreiter and Schmidhuber, 1997)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The input to our question generation model is a set of keywords extracted from triples in a knowledge graph. For this, it is important to first select a subset of triples from which interesting and meaningful questions can be constructed. For example, no interesting questions can be constructed from the triple wikipage page ID(Google, 57570) and hence we should eliminate such triples. Further, even for an interesting triple, it may be possible to use only certain subsets of keywords to construct a meaningful question. For example, for the set of keywords shown in Table 1 , it is not possible to use the subset {person, designation} to form an interesting question. Hence, we need to automatically identify the right set of keywords that should be used to form the question such that the answer also lies in the set. In addition to the question generation model, we also propose a method for extracting a meaningful subset of keywords from the triples represented in the knowledge graph.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 570, |
|
"end": 577, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "While our goal in this paper is to generate a set of question answer pairs for a given entity in a knowledge graph, we train the RNN model for generating natural language questions from a sequence of keywords using an open domain Community Question Answering (CQA) data. This ensures that the same trained RNN can be used with different knowledge graphs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The main contributions of our work can be summarized as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We propose a method for extracting triples and keywords from a knowledge graph for constructing question keywords and answer pairs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We formulate the problem of generating natural language questions from keywords as a sequence to sequence learning problem that performs 110.47 % (relative) better than existing template based approaches.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We train our model using 1M questions from WikiAnswers thereby ensuring that it is not tied to any specific knowledge graph.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 Finally, we show that appending the automatically generated QA pairs to existing training data for training a state of the art QA system (Jonathan Berant, 2014) improves the performance of the QA system by 5.5 percent (relative).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The remainder of this paper is organized as follows. In next section, we describe related work, followed by a description of our overall approach for extracting keywords from triples and generating natural language question answer pairs from them. We then describe the experiments performed to evaluate our system and then end with concluding remarks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "There is only very recent work around generation of question answer pairs from knowledge graph (Seyler et al., 2015) . On the other hand, there are several works around question generation that have been proposed in past with different motivations. We first present a brief overview of the question generation techniques proposed in the literature along with their limitations and then discuss the work around generation of questions answer pairs from knowledge graph.", |
|
"cite_spans": [ |
|
{ |
|
"start": 95, |
|
"end": 116, |
|
"text": "(Seyler et al., 2015)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "A number of papers have looked at the problem of generating vocabulary questions using Word-Net (Miller et al., 1990) and distributional similarity techniques (Brown et al., 2005; Heilman and Eskenazi, 2007) . There are numerous works in automatic question generation from text. Many proposed methods are syntax based methods that use the parse structure of sentences, identify key phrases and apply some known transformation rules to create questions (Ali et al., 2010; Kalady et al., 2010; Varga, 2010) . Mannem et al. (2010) further use semantic role labeling for transformation rules. There are also template based methods proposed where a question template is a predefined text with placeholder variables to be replaced with content from source text. Cai et al. (2006) propose an XML markup language that is used to manually create question templates. This is sensitive to the performance of syntactic and semantic parsing. Heilman and Smith (2010) use a rule based approach to transform a declarative sentence into several candidate questions and then rank them using a logistic regression model. These approaches involve creating templates manually and thus require huge manual work and have low recall.", |
|
"cite_spans": [ |
|
{ |
|
"start": 87, |
|
"end": 117, |
|
"text": "Word-Net (Miller et al., 1990)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 159, |
|
"end": 179, |
|
"text": "(Brown et al., 2005;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 180, |
|
"end": 207, |
|
"text": "Heilman and Eskenazi, 2007)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 452, |
|
"end": 470, |
|
"text": "(Ali et al., 2010;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 471, |
|
"end": 491, |
|
"text": "Kalady et al., 2010;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 492, |
|
"end": 504, |
|
"text": "Varga, 2010)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 507, |
|
"end": 527, |
|
"text": "Mannem et al. (2010)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 756, |
|
"end": 773, |
|
"text": "Cai et al. (2006)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "A problem that has been studied recently and is similar to our problem of generating questions using knowledge graph is that of generating questions from Web queries. The motivation here is to automatically generate questions from queries for community-based question answering services such as Yahoo! Answers and WikiAnswers. The idea was first suggested by (Lin, 2008) and further developed by (Zhao et al., 2011) and (Zheng et al., 2011) . Both of these approaches are template based approaches where the templates are learnt using a huge question corpus along with query logs. Dror et al. (2013) further proposed a learning to rank based method to obtain grammatically correct and diverse questions from a given query where the candidate questions are generated using the approach proposed by (Zhao et al., 2011) . These approaches use millions of query question pairs to learn question templates and thus have better generalization performance compared to earlier methods where templates were learnt manually.", |
|
"cite_spans": [ |
|
{ |
|
"start": 359, |
|
"end": 370, |
|
"text": "(Lin, 2008)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 396, |
|
"end": 415, |
|
"text": "(Zhao et al., 2011)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 420, |
|
"end": 440, |
|
"text": "(Zheng et al., 2011)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 581, |
|
"end": 599, |
|
"text": "Dror et al. (2013)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 797, |
|
"end": 816, |
|
"text": "(Zhao et al., 2011)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Recently, Seyler et al. 2015proposed a method to generate natural language questions from knowledge graphs given a topic of interest. They also provide a method to estimate difficulty of generated questions. The generation of question is done by manually created template patterns and therefore is limited in application. In contrast we propose an RNN based method to learn generation of natural language questions from a set of keywords. The model can be trained using a dataset containing open domain keywords and question pairs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In this section we propose an approach to generate Question Answer (QA) pairs for a given entity E. Let KG be the knowledge graph which contains information about various entities in the form of triples. A triple consists of a subject, a predicate and an object. Subjects and objects are nodes in the KG, which could represent a person, a place, an abstract concept or any physical entity. Predicates are edges in the KG. They define type of relationship between the subject and the object.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The framework to generate QA pairs consists two major modules. The first module, Question Keywords and Answer Extractor, is language independent and extracts required knowledge about the entity E from the KG. The second module is a language dependent RNN based Natural Language Question Generator. When fed with the information extracted from the first part it generates natural language QA pairs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Question keywords are keywords necessary to generate a question in natural language, or it could also be viewed as a concise representation of a natural language question. For example, to generate a QA pair for the entity London. We can generate a natural language question like What is the capital city of United Kingdom? with the keywords {Capital, City, United Kingdom}. In order to retrieve information about the given entity E, we need to first identify the node n that represents the entity E in the KG. One way to identify node n is to leverage the label (e.g, rdfs:label) property.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Question Keywords and Answer Extractor", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Also \u01b5 \u0110 \u016c \u015d \u0176 \u0150 \u015a \u0102 \u0175 W \u0102 \u016f \u0102 \u0110 \u011e \u0102 \u0189 \u015d \u019a \u0102 \u016f \u015d \u018c \u019a \u015a W \u016f \u0102 \u0110 \u011e \u017d \u0176 \u019a \u0102 \u015d \u0176 \u0190 > \u017d \u0176 \u011a \u017d \u0176 h \u0176 \u015d \u019a \u011e \u011a < \u015d \u0176 \u0150 \u011a \u017d \u0175 ^ \u019a \u011e \u0189 \u015a \u011e \u0176 t \u017d \u016f \u0128 \u018c \u0102 \u0175", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Question Keywords and Answer Extractor", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The next step is to retrieve all the neighbours of n. Let m i be a neighbour of n in KG, connected by a predicate p i . Here i is the index over all predicates whose subject or object is n. Figure 1 shows the entity London with three neighbours United Kingdom, Stephen Wolfram and Buckingham Palace. Each of these neighbours are related to London by a predicate. For example, Stephen Wolfram is related to London as it is his Birth Place.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 190, |
|
"end": 199, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Question Keywords and Answer Extractor", |
|
"sec_num": "3.1" |
|
}, |
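
{

"text": "To make these two steps concrete, here is a minimal sketch using rdflib (our choice of library; the paper does not prescribe one, and the graph file and language tag are illustrative):\n\nfrom rdflib import Graph, Literal\nfrom rdflib.namespace import RDFS\n\ng = Graph()\ng.parse('kg.ttl', format='turtle')  # hypothetical dump of the knowledge graph KG\n\ndef find_node(graph, entity_label):\n    # Identify the node n whose rdfs:label matches the entity E.\n    # May need adjusting if labels carry no language tag.\n    for n in graph.subjects(RDFS.label, Literal(entity_label, lang='en')):\n        return n\n    return None\n\ndef neighbours(graph, n):\n    # Yield (p_i, m_i) pairs where n is the subject or the object of p_i.\n    for p, m in graph.predicate_objects(n):\n        yield p, m\n    for m, p in graph.subject_predicates(n):\n        yield p, m\n\nn = find_node(g, 'London')\nfor p_i, m_i in neighbours(g, n):\n    print(p_i, m_i)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Question Keywords and Answer Extractor",

"sec_num": "3.1"

},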
|
{ |
|
"text": "Given a predicate p i , let sub(p i ) be the subject of p i and obj(p i ) be the object of p i . A predicate is usually defined with a domain (subject type) and a range (object type) to provide better semantics. The domain and range defines the entity types that can be used as the subject and object of the predicate respectively. Let domain(p i ) and range(p i ) be the domain and range of p i respectively. Let", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Question Keywords and Answer Extractor", |
|
"sec_num": "3.1" |
|
}, |
|
|
{ |
|
"text": "We now describe how QKA pairs are extracted from 5-tuple. Let Q k be the question keywords set and A k be the answer to the question to be generated using Q k . (Q k , A k ) together will form a QKA pair. In this work, we consider only a single 5-tuple to generate a QKA pair. For example, we can generate QKA pair like ({Capital, City, United Kingdom}, London) using Column A of Table 2 . But we will not generate QKA pair like ({Capital, City, United Kingdom, Birth Place, Stephen Wol-fram}, London) using both Column A & B of Table 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 380, |
|
"end": 387, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 529, |
|
"end": 537, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Question Keywords and Answer Extractor", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We use the following rules to generate QKA pairs from 5-tuples.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Question Keywords and Answer Extractor", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": ": If p i is unique for sub(p i ) in KG, then Q k will include sub(p i ), p i and range(p i ). A k will be obj(p i ). If p i is not unique for sub(p i )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unique Forward Relation", |
|
"sec_num": "1." |
|
}, |
|
|
{ |
|
"text": ": If p i is unique for obj(p i ) in KG, then Q k will include obj(p i ), p i and domain(p i ). A k is sub(p i ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unique Reverse Relation", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Similar to unique forward relation, this rule can be applied to Column A of Table 2 and cannot be applied to Column B & C.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 76, |
|
"end": 83, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Unique Reverse Relation", |
|
"sec_num": "2." |
|
}, |
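
{

"text": "To make the two rules concrete, here is a minimal sketch over an in-memory list of 5-tuples (the tuple layout and the example rows are our own illustration in the spirit of Table 2, not code from the paper):\n\nfrom collections import Counter\n\n# Each 5-tuple: (sub(p), domain(p), p, obj(p), range(p))\ntuples = [\n    ('United Kingdom', 'Country', 'Capital', 'London', 'City'),\n    ('Stephen Wolfram', 'Person', 'Birth Place', 'London', 'City'),\n    ('Ada Lovelace', 'Person', 'Birth Place', 'London', 'City'),\n]\n\nfwd = Counter((s, p) for s, _, p, _, _ in tuples)  # multiplicity of p per subject\nrev = Counter((o, p) for _, _, p, o, _ in tuples)  # multiplicity of p per object\n\ndef qka_pairs(t):\n    s, dom, p, o, rng = t\n    # Unique Forward Relation: p is unique for its subject, so the object is the answer.\n    if fwd[(s, p)] == 1:\n        yield ({s, p, rng}, o)\n    # Unique Reverse Relation: p is unique for its object, so the subject is the answer.\n    if rev[(o, p)] == 1:\n        yield ({o, p, dom}, s)\n\nfor t in tuples:\n    for qk, ans in qka_pairs(t):\n        print(qk, '->', ans)\n\nWith these rows, Birth Place is not unique for London, so no reverse pair is generated for it, mirroring the discussion above.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Unique Reverse Relation",

"sec_num": null

},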
|
{ |
|
"text": "In the previous sub-section, we proposed an approach for creating question keywords and answer pairs. Now we propose a model for generating natural language questions from a given set of question keywords. We treat the keywords, We propose a Natural Language Question Generation (NLQG) model that first encodes the input sequence using some distributed representation and then decodes the output sequence from this encoded representation. Specifically, we use a RNN based encoder and decoder recently proposed for language processing tasks by number of groups (Cho et al., 2014; Sutskever et al., 2014) . We now formally define the encoder and decoder models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 560, |
|
"end": 578, |
|
"text": "(Cho et al., 2014;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 579, |
|
"end": 602, |
|
"text": "Sutskever et al., 2014)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RNN based Natural Language Question Generator", |
|
"sec_num": "3.2" |
|
}, |
|
|
{ |
|
"text": "Let m be the number of keywords in the input sequence. We represent each keyword using a fixed size vector x i \u2208 n . The function of the encoder is to map this sequence of x i 's to a fixed size encoding. We use a RNN to compute h m using the following recursive equation:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RNN based Natural Language Question Generator", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "h i = \u03a6(h i\u22121 , x i ),", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "RNN based Natural Language Question Generator", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where, h i \u2208 n is the hidden representation at position i. h m is the final encoded hidden state vector for this sequence. We use LSTM units (Hochreiter and Schmidhuber, 1997) as \u03a6 for our implementation based on its recent success in language processing tasks (Bahdanau et al., 2015) . The function of the decoder is to compute the probability of the output sequence Q = {q 1 , \u2022 \u2022 \u2022 , q l } given the encoded vector h m . Note that l is the length of the output sequence and may be different from m. This joint conditional probability of Q is decomposed into l conditional probabilities:", |
|
"cite_spans": [ |
|
{ |
|
"start": 141, |
|
"end": 175, |
|
"text": "(Hochreiter and Schmidhuber, 1997)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 261, |
|
"end": 284, |
|
"text": "(Bahdanau et al., 2015)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RNN based Natural Language Question Generator", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "p(q 1 , \u2022 \u2022 \u2022 , q l |h m ) = l j=1 p(q j |{q 1 , \u2022 \u2022 \u2022 , q j\u22121 }, h m ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RNN based Natural Language Question Generator", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "(2) Now we model p(q j |q <j , h m ) at each position j by using a RNN decoder as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RNN based Natural Language Question Generator", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p(q j |q <j , h m ) = \u0398(q j\u22121 , g j , h m ),", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "RNN based Natural Language Question Generator", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where \u0398 is a non-linear function, that outputs the probability of q j , and g j is the hidden state of the decoder RNN.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RNN based Natural Language Question Generator", |
|
"sec_num": "3.2" |
|
}, |
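
{

"text": "A compact sketch of this encoder-decoder in PyTorch (our choice of framework; a minimal single-direction, single-layer variant, not the exact configuration of Section 4.2):\n\nimport torch.nn as nn\n\nclass NLQG(nn.Module):\n    # Encoder maps the keyword sequence to h_m; decoder models p(q_j | q_<j, h_m).\n    def __init__(self, vocab_size, dim=1000):\n        super().__init__()\n        self.emb = nn.Embedding(vocab_size, dim)\n        self.enc = nn.LSTM(dim, dim, batch_first=True)  # \u03a6 in Equation (1)\n        self.dec = nn.LSTM(dim, dim, batch_first=True)\n        self.out = nn.Linear(dim, vocab_size)  # feeds the softmax inside \u0398, Equation (3)\n\n    def forward(self, keywords, question):\n        _, state = self.enc(self.emb(keywords))  # state carries h_m (plus the LSTM cell state)\n        g, _ = self.dec(self.emb(question), state)  # g_j at every output position j\n        return self.out(g)  # unnormalised p(q_j | q_<j, h_m)\n\nTraining would then maximise the log of Equation (2), e.g. with a cross-entropy loss over these outputs.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "RNN based Natural Language Question Generator",

"sec_num": "3.2"

},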
|
{ |
|
"text": "To train this RNN model, we use a keyword sequence and question pairs generated from an open domain Community Question Answering website. We provide more details on how the data is created and used for training in the experiments section.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RNN based Natural Language Question Generator", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "At runtime, every permutation of the question keywords QK extracted is fed as input to the trained RNN. We pick the question Q with the highest probability of generation across all permutations, as the question generated from the question keywords QK.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RNN based Natural Language Question Generator", |
|
"sec_num": "3.2" |
|
}, |
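
{

"text": "A sketch of this permutation search (score_question is a hypothetical stand-in for decoding with the trained RNN and returning the generated question with its log-probability):\n\nfrom itertools import permutations\n\ndef best_question(keywords, score_question):\n    # Feed every ordering of QK to the model and keep the question\n    # generated with the highest probability across all permutations.\n    best, best_score = None, float('-inf')\n    for perm in permutations(keywords):\n        question, score = score_question(perm)  # hypothetical model call\n        if score > best_score:\n            best, best_score = question, score\n    return best",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "RNN based Natural Language Question Generator",

"sec_num": "3.2"

},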
|
{ |
|
"text": "In this section we perform experiments to demonstrate how the proposed approach outperforms the existing template based approach for generating questions from the keywords. We also evaluate the quality of the QA pairs generated from knowledge graph.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For training the K2Q-RNN model we require a set of keywords and question pairs. We use a large collection of open-domain questions available from WikiAnswers dataset 2 . This dataset has around 20M questions. We randomly selected 1M questions from this corpus for training and 5k questions for testing (the maximum length of a question was restricted to 50 words). We extract keywords from the selected questions by retaining only Nouns, Verbs and Adjectives in the question. The parts of speech tags were identified using Stanford Tagger (Toutanova et al., 2003) . We form an ordered sequence of keywords by retaining these extracted words in the same order in which they appear in the original question. This sequence of keywords along with the original question forms one input-output sequence pair for training.", |
|
"cite_spans": [ |
|
{ |
|
"start": 539, |
|
"end": 563, |
|
"text": "(Toutanova et al., 2003)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "4.1" |
|
}, |
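
{

"text": "A sketch of this keyword extraction, shown with NLTK's off-the-shelf tagger for brevity (the paper used the Stanford Tagger, so the exact output may differ):\n\nimport nltk  # assumes the tokenizer and POS-tagger models are downloaded\n\ndef extract_keywords(question):\n    # Retain only nouns, verbs and adjectives, in their original order.\n    tagged = nltk.pos_tag(nltk.word_tokenize(question))\n    return [w for w, t in tagged if t.startswith(('NN', 'VB', 'JJ'))]\n\nprint(extract_keywords('What is the capital city of United Kingdom?'))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Datasets",

"sec_num": "4.1"

},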
|
{ |
|
"text": "We evaluate and compare the following methods: K2Q-RNN: This is our approach proposed in the paper. For the encoder we use a bi-directional RNN containing one hidden layer of 1000 units. Each word in the input vocabulary is represented using a word vector which is randomly initialized and learnt during the training process. The decoder also contains one hidden layer comprising of 1000 units. At the output layer of the decoder a softmax function gives the distribution over the entire target vocabulary. We use the top 30,000 most frequent words in the 1M training questions as the target vocabulary. If any sequence contains a word not belonging to this list then that word is mapped to a special token ([UNK] ) that is also considered a part of the output vocabulary. We use a mini batch stochastic gradient descent algorithm together with Adadelta (Zeiler, 2012) to train our model. We used a mini-batch size of 50 and trained the model for 10 epochs. We used the beam search with the beam size to 12 to generate the question that approximately maximizes conditional probability defined in Equation 2.", |
|
"cite_spans": [ |
|
{ |
|
"start": 707, |
|
"end": 713, |
|
"text": "([UNK]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "4.2" |
|
}, |
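
{

"text": "A minimal sketch of the beam search used at decoding time (step is a hypothetical stand-in that returns (token, log-probability) candidates for the next position given a prefix):\n\ndef beam_search(step, bos, eos, beam_size=12, max_len=50):\n    # Each hypothesis is a (log-probability, token sequence) pair.\n    beams = [(0.0, [bos])]\n    for _ in range(max_len):\n        candidates = []\n        for logp, seq in beams:\n            if seq[-1] == eos:  # finished hypotheses are carried over unchanged\n                candidates.append((logp, seq))\n                continue\n            for tok, tok_logp in step(seq):\n                candidates.append((logp + tok_logp, seq + [tok]))\n        beams = sorted(candidates, key=lambda c: c[0], reverse=True)[:beam_size]\n    return beams[0][1]  # approximately maximises the probability in Equation 2",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Methods",

"sec_num": "4.2"

},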
|
{ |
|
"text": "K2Q-PBSMT: As mentioned earlier, we treat the problem of generating questions from keywords as a sequence to sequence translation problem. A Phrase Based Machine Translation System (PBSMT) can also be employed for this task by considering that the keyword sequences belong to a source language and the question sequences belong to a target language. We compare our approach with a standard phrase-based MT system, MOSES (Koehn et al., 2007) trained using the same 1M sequence pairs constructed from the WikiAnswers dataset. We used a 5-gram language model trained on the 1M target question sequences and tuned the parameters of the decoder using 1000 held-out sequence (these were held out from the 1M training pairs).", |
|
"cite_spans": [ |
|
{ |
|
"start": 420, |
|
"end": 440, |
|
"text": "(Koehn et al., 2007)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "K2Q-Template: For template based approach we use the method proposed by (Zhao et al., 2011) along with the Word2Vec (Mikolov et al., 2015) ranking as proposed by (Raghu et al., 2015) . The Word2Vec ranking provides better generalization than the ranking proposed by (Zhao et al., 2011) . We learn the templates using the same 1M training pairs extracted from WikiAnswers.", |
|
"cite_spans": [ |
|
{ |
|
"start": 72, |
|
"end": 91, |
|
"text": "(Zhao et al., 2011)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 116, |
|
"end": 138, |
|
"text": "(Mikolov et al., 2015)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 162, |
|
"end": 182, |
|
"text": "(Raghu et al., 2015)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 266, |
|
"end": 285, |
|
"text": "(Zhao et al., 2011)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We evaluate the performance of K2Q RNN with other baselines to compare the K2Q approaches, we use BLEU score (Papineni et al., 2002) between the generated question and the reference question. BLEU score is typically used in evaluating the performance of MT systems and captures the average n-gram overlap between the generated sequence and the reference sequence. We consider n-grams upto length 4. BLEU score does not capture the true performance of the system. For example, if the trained model simply reproduces all keywords in the generated question then also the unigram overlap will be high resulting in a higher Table 3 : Automatic Evaluation (column 2): The BLEU scores of generated questions for the test set. Human Evaluation (Column 3): Percentage of perfect questions generated by K2Q-Template, K2Q-PBSMT, and K2Q-RNN BLEU score. Further, we had only one reference question (ground truth) per test instance which is not sufficient to capture the different ways of expressing the question. In this case, BLEU score will be unnecessarily harsh on the model even if it generates a valid paraphrase of the reference question. To account for this we also perform a manual evaluation. We show the generated output to four human annotator and ask him/her to assign following ratings to the generated question, Rating 4 : Perfect without error, Rating 3 : Good with one error, missing/addition of article or preposition, but still meaningful, Rating 2 : Many errors, Rating 1 : Failure.", |
|
"cite_spans": [ |
|
{ |
|
"start": 109, |
|
"end": 132, |
|
"text": "(Papineni et al., 2002)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 619, |
|
"end": 626, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation metrics", |
|
"sec_num": "4.3" |
|
}, |
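
{

"text": "For reference, BLEU with n-grams up to length 4 can be computed as in this sketch using NLTK (our choice of implementation; the paper does not name one, and the sentences are illustrative):\n\nfrom nltk.translate.bleu_score import corpus_bleu\n\n# One reference question per test instance, uniform weights over 1- to 4-grams.\nreferences = [[['what', 'is', 'the', 'capital', 'of', 'united', 'kingdom']]]\nhypotheses = [['what', 'is', 'the', 'capital', 'city', 'of', 'united', 'kingdom']]\nprint(corpus_bleu(references, hypotheses, weights=(0.25, 0.25, 0.25, 0.25)))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluation metrics",

"sec_num": "4.3"

},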
|
{ |
|
"text": "We first evaluate the performance of K2Q approaches using 5000 test instances from the WikiAnswers dataset. We extract the keyword sequence from these test questions using the same method described above. We compute the BLEU score by comparing the generated question with the original question. The results are presented in Table 3 . Both K2Q-RNN and K2Q-PBSMT clearly outperform the template based method which shows that there is merit in formulating this problem as a sequence to sequence learning problem. To be sure that the results are not misleading due to some of the drawbacks of BLEU score as described earlier, we also do a manual evaluation. For this, we randomly selected 700 questions from the test set. We showed the questions generated by the three methods to different human annotators and asked them to assign a score of 1 to 4 to each question (based on the guidelines described earlier). The evaluators had no knowledge about the method used to generate each question shown to them. We only consider questions with rating 4 (perfect without any errors) for each method and calculate the accuracy, shown in Table 3. Figure 2 shows the distribution of ratings assigned by the annotators. Once gain K2Q-RNN and K2Q-PBSMT outperform the template based approach. Further, the human evaluation shows K2Q-RNN performs better than K2Q-PBSMT. Table 4 shows example questions that may have a high BLEU score for K2Q-PBSMT, however the K2Q-RNN has a better human judgement.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 324, |
|
"end": 331, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1135, |
|
"end": 1143, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 1354, |
|
"end": 1361, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "RNN based Natural Language Question Generator", |
|
"sec_num": "4.4.1" |
|
}, |
|
{ |
|
"text": "Next, we also compare the performance of these methods for input keyword sequences of different lengths. For this, we consider all test instances having k keywords and mark the generated question as correct if it was given a rating of 4 by the human annotator. The results of this experiment are plotted in Figure 3 where the x-axis represents number of keywords and y-axis represents the percentage of test instances for which correct (rating 4) questions were generated. Once again we see that K2Q-RNN clearly outperforms the other methods at all input sequence sizes. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 307, |
|
"end": 315, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "RNN based Natural Language Question Generator", |
|
"sec_num": "4.4.1" |
|
}, |
|
{ |
|
"text": "In this section we describe the performance of K2Q-RNN for generating QA pairs from a Knowledge Graph. For our evaluation purpose, we use Freebase as the Knowledge Graph. We randomly picked 27 Freebase entities of various types (person, location, organization, etc) and extracted all 5-tuples containing them. To create a diverse QA pairs we retained only two instances (5-tuples) for each predicate or relation type. Some predicates (like summary, quotations) have long text as their objects, some predicates (like Daylife Topic ID, Hero image ID) are difficult for annotator to validate. So, we filtered the list further by removing above mentioned predicates and generated a total of 485 QKA pairs. We manually evaluated these generated QA pairs and marked them as correct only if generated question along with the answer together convey the information represented in the 5-tuple. A few QA pairs were marked correct by the annotators, even though the question was not grammatically correct but convey the right intent. Some examples of such questions are melting point of propyl alcohol?, stanford university student radio station?. Overall, 33.61% of the QA pairs generated by our method were annotated correct. Table 5 shows some correct and incorrect QA pairs generated by our method.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1217, |
|
"end": 1224, |
|
"text": "Table 5", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Generating Question-Answers pairs from Freebase", |
|
"sec_num": "4.4.2" |
|
}, |
|
{ |
|
"text": "As an extrinsic evaluation of the quality of our QA generation model, we use QA pairs generated by our model to improve the performance of a state of the art QA system called PARASEM-PRE (Jonathan Berant, 2014). PARASEMPRE is a semantic parser, which maps natural language questions to intermediate logical forms which in turn are used to answer the question. The standard training set used for training PARASEMPRE is a part of the WebQuestions and contains 3778 QA pairs. We appended this train with 7556 automatically generated QA pairs (resulting in tripling of the training set). Table 6 then compares the same system trained on the following different training sets: (i) Only Web Questions (WQ) dataset (ii) WQ + Generated Question Answers (GQA) and (iii) WQ + Ground Truth (GT) QA pairs. The GT QA pairs were obtained from the SimpleQuestions (Bordes et al., 2015) test data and have a one-toone correspondence to the GQA data (hence the results are comparable). We see a relative improvement of 5.5% in the F1-score of the system by adding GQA. Further, the performance gains are comparable to those obtained by using GT QA pairs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 849, |
|
"end": 870, |
|
"text": "(Bordes et al., 2015)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 584, |
|
"end": 591, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Extrinsic Evaluation", |
|
"sec_num": "4.4.3" |
|
}, |
|
{ |
|
"text": "We inspected all the QA pairs generated by our method to identify some common mistakes. We found that most errors corresponded to (i) con-fusion between is/are and do/does (ii) incorrect use of determiners (missing articles, confusion between a/the and addition of extra articles). Another problem occurs when the extracted keyword sequence contains a stop word. This happens when dealing with triples such as ({also known as, Andre Agassi}, Agassi). Since, during training we retain only content words (nouns, adjectives, verbs) in the input sequence, the model fails to deal with such stop words at test time and simply produces unknown token (UNK) in the output. Another set of errors corresponds to mismatch between the subject type and question type. For example, we observed that in a few cases, the model incorrectly generates a what question instead of a who question when the answer type is a person.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error analysis", |
|
"sec_num": "4.4.4" |
|
}, |
|
{ |
|
"text": "In this paper we propose a method for generating QA pairs for an given entity using a knowledge graph. We also propose an RNN based approach for generating natural language questions from an input keyword sequence. The proposed method performs significantly better than previously proposed template based method. We also do an extrinsic evaluation to show that the generated QA pairs help in improving the performance of a downstream QA system. In future, we plan to extend this work to support predicates with stop words and support predicates in various tenses.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "https://www.freebase.com/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Available at http://knowitall.cs.washington.edu/oqa/data/wikianswers/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Automation of question generation from sentences", |
|
"authors": [ |
|
{ |
|
"first": "Husam", |
|
"middle": [], |
|
"last": "Ali", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yllias", |
|
"middle": [], |
|
"last": "Chali", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sadid A", |
|
"middle": [], |
|
"last": "Hasan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of QG2010: The Third Workshop on Question Generation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "58--67", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Husam Ali, Yllias Chali, and Sadid A Hasan. 2010. Automation of question generation from sentences. In Proceedings of QG2010: The Third Workshop on Question Generation, pages 58-67.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Neural machine translation by jointly learning to align and translate", |
|
"authors": [ |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Large-scale simple question answering with memory networks", |
|
"authors": [ |
|
{ |
|
"first": "Antoine", |
|
"middle": [], |
|
"last": "Bordes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicolas", |
|
"middle": [], |
|
"last": "Usunier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sumit", |
|
"middle": [], |
|
"last": "Chopra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple ques- tion answering with memory networks. CoRR, abs/1506.02075.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Automatic question generation for vocabulary assessment", |
|
"authors": [ |
|
{

"first": "Jonathan",

"middle": [

"C"

],

"last": "Brown",

"suffix": ""

},

{

"first": "Gwen",

"middle": [

"A"

],

"last": "Frishkoff",

"suffix": ""

},

{

"first": "Maxine",

"middle": [],

"last": "Eskenazi",

"suffix": ""

}
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "819--826", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan C Brown, Gwen A Frishkoff, and Maxine Es- kenazi. 2005. Automatic question generation for vocabulary assessment. In Proceedings of the con- ference on Human Language Technology and Em- pirical Methods in Natural Language Processing, pages 819-826. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Nlgml: A markup language for question generation", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Graesser", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2747--2752", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Graesser. 2006. Nlgml: A markup language for question generation. In World Conference on E- Learning in Corporate, Government, Healthcare, and Higher Education, volume 2006, pages 2747- 2752.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "On the properties of neural machine translation: Encoder-decoder approaches", |
|
"authors": [ |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bart", |
|
"middle": [], |
|
"last": "Van Merrienboer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "KyungHyun Cho, Bart van Merrienboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder ap- proaches. In Proceedings of SSST-8, Eighth Work- shop on Syntax, Semantics and Structure in Statisti- cal Translation.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "From query to question in one click: suggesting synthetic questions to searchers", |
|
"authors": [ |
|
{ |
|
"first": "Gideon", |
|
"middle": [], |
|
"last": "Dror", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoelle", |
|
"middle": [], |
|
"last": "Maarek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Avihai", |
|
"middle": [], |
|
"last": "Mejer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Idan", |
|
"middle": [], |
|
"last": "Szpektor", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 22nd international conference on World Wide Web", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "391--402", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gideon Dror, Yoelle Maarek, Avihai Mejer, and Idan Szpektor. 2013. From query to question in one click: suggesting synthetic questions to searchers. In Proceedings of the 22nd international conference on World Wide Web, pages 391-402. International World Wide Web Conferences Steering Committee.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Introduction to this is watson", |
|
"authors": [ |
|
{

"first": "David",

"middle": [

"A"

],

"last": "Ferrucci",

"suffix": ""

}
|
], |
|
"year": 2012, |
|
"venue": "IBM Journal of Research and Development", |
|
"volume": "56", |
|
"issue": "3.4", |
|
"pages": "1--1", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David A Ferrucci. 2012. Introduction to this is wat- son. IBM Journal of Research and Development, 56(3.4):1-1.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Question answering over linked data using first-order logic", |
|
"authors": [ |
|
{ |
|
"first": "Shizhu", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuanzhe", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liheng", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shizhu He, Kang Liu, Yuanzhe Zhang, Liheng Xu, and Jun Zhao. 2014. Question answering over linked data using first-order logic. In Proceedings of Em- pirical Methods in Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Application of automatic thesaurus extraction for computer generation of vocabulary questions", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Heilman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maxine", |
|
"middle": [], |
|
"last": "Eskenazi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "SLaTE", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "65--68", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Heilman and Maxine Eskenazi. 2007. Ap- plication of automatic thesaurus extraction for com- puter generation of vocabulary questions. In SLaTE, pages 65-68.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Good question! statistical ranking for question generation", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Heilman", |
|
"suffix": "" |
|
}, |
|
{

"first": "Noah",

"middle": [

"A"

],

"last": "Smith",

"suffix": ""

}
|
], |
|
"year": 2010, |
|
"venue": "Human Language Technologies: The", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Heilman and Noah A Smith. 2010. Good question! statistical ranking for question genera- tion. In Human Language Technologies: The 2010", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "609--617", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Annual Conference of the North American Chap- ter of the Association for Computational Linguistics, pages 609-617. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Long short-term memory", |
|
"authors": [ |
|
{ |
|
"first": "Sepp", |
|
"middle": [], |
|
"last": "Hochreiter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00fcrgen", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Neural Computation", |
|
"volume": "9", |
|
"issue": "8", |
|
"pages": "1735--1780", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Semantic parsing via paraphrasing", |
|
"authors": [ |
|
{

"first": "Jonathan",

"middle": [],

"last": "Berant",

"suffix": ""

},

{

"first": "Percy",

"middle": [],

"last": "Liang",

"suffix": ""

}
|
], |
|
"year": 2014, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Percy Liang Jonathan Berant. 2014. Semantic parsing via paraphrasing. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Natural language question generation using syntax and keywords", |
|
"authors": [ |
|
{ |
|
"first": "Saidalavi", |
|
"middle": [], |
|
"last": "Kalady", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ajeesh", |
|
"middle": [], |
|
"last": "Elikkottil", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rajarshi", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of QG2010: The Third Workshop on Question Generation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--10", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Saidalavi Kalady, Ajeesh Elikkottil, and Rajarshi Das. 2010. Natural language question generation using syntax and keywords. In Proceedings of QG2010: The Third Workshop on Question Generation, pages 1-10. questiongeneration. org.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Moses: Open source toolkit for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hieu", |
|
"middle": [], |
|
"last": "Hoang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcello", |
|
"middle": [], |
|
"last": "Federico", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicola", |
|
"middle": [], |
|
"last": "Bertoldi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brooke", |
|
"middle": [], |
|
"last": "Cowan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wade", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christine", |
|
"middle": [], |
|
"last": "Moran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Zens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ond\u0159ej", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Constantin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Evan", |
|
"middle": [], |
|
"last": "Herbst", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, ACL '07", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "177--180", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond\u0159ej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, ACL '07, pages 177-180, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Automatic question generation from queries", |
|
"authors": [ |
|
{ |
|
"first": "Chin-Yew", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Workshop on the Question Generation Shared Task", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "156--164", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chin-Yew Lin. 2008. Automatic question generation from queries. In Workshop on the Question Genera- tion Shared Task, pages 156-164.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Question generation from paragraphs at upenn: Qgstec system description", |
|
"authors": [ |
|
{ |
|
"first": "Prashanth", |
|
"middle": [], |
|
"last": "Mannem", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rashmi", |
|
"middle": [], |
|
"last": "Prasad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aravind", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of QG2010: The Third Workshop on Question Generation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "84--91", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Prashanth Mannem, Rashmi Prasad, and Aravind Joshi. 2010. Question generation from paragraphs at upenn: Qgstec system description. In Proceedings of QG2010: The Third Workshop on Question Gen- eration, pages 84-91.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Efficient estimation of word representations in vector space", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2015. Efficient estimation of word represen- tations in vector space. In ICLR.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Introduction to wordnet: An on-line lexical database*", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "George", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Miller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christiane", |
|
"middle": [], |
|
"last": "Beckwith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Derek", |
|
"middle": [], |
|
"last": "Fellbaum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katherine", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Gross", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Miller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "International journal of lexicography", |
|
"volume": "3", |
|
"issue": "4", |
|
"pages": "235--244", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "George A Miller, Richard Beckwith, Christiane Fell- baum, Derek Gross, and Katherine J Miller. 1990. Introduction to wordnet: An on-line lexical database*. International journal of lexicography, 3(4):235-244.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Bleu: a method for automatic evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kishore", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Todd", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Jing", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th annual meeting on association for computational linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "311--318", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for compu- tational linguistics, pages 311-318. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "A statistical approach for non-sentential utterance resolution for interactive qa system", |
|
"authors": [ |
|
{ |
|
"first": "Dinesh", |
|
"middle": [], |
|
"last": "Raghu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sathish", |
|
"middle": [], |
|
"last": "Indurthi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jitendra", |
|
"middle": [], |
|
"last": "Ajmera", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sachindra", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "16th Annual Meeting of the Special Interest Group on Discourse and Dialogue", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dinesh Raghu, Sathish Indurthi, Jitendra Ajmera, and Sachindra Joshi. 2015. A statistical approach for non-sentential utterance resolution for interactive qa system. In 16th Annual Meeting of the Special In- terest Group on Discourse and Dialogue, page 335.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Neurocomputing: Foundations of research", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Rumelhart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Hinton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ronald", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Williams", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "696--699", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. 1988. Neurocomputing: Foundations of research. pages 696-699.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Generating quiz questions from knowledge graphs", |
|
"authors": [ |
|
{ |
|
"first": "Dominic", |
|
"middle": [], |
|
"last": "Seyler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohamed", |
|
"middle": [], |
|
"last": "Yahya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Klaus", |
|
"middle": [], |
|
"last": "Berberich", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 24th International Conference on World Wide Web Companion", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "113--114", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dominic Seyler, Mohamed Yahya, and Klaus Berberich. 2015. Generating quiz questions from knowledge graphs. In Proceedings of the 24th International Conference on World Wide Web Companion, pages 113-114. International World Wide Web Conferences Steering Committee.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Sequence to sequence learning with neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3104--3112", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural net- works. In Advances in Neural Information Process- ing Systems 27: Annual Conference on Neural In- formation Processing Systems 2014, December 8- 13 2014, Montreal, Quebec, Canada, pages 3104- 3112.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Feature-rich part-ofspeech tagging with a cyclic dependency network", |
|
"authors": [ |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoram", |
|
"middle": [], |
|
"last": "Singer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "HLT-NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kristina Toutanova, Dan Klein, Christopher D. Man- ning, and Yoram Singer. 2003. Feature-rich part-of- speech tagging with a cyclic dependency network. In HLT-NAACL.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Le an ha 2010 wlv: A question generation system for the qgstec 2010 task b", |
|
"authors": [ |
|
{ |
|
"first": "Andrea", |
|
"middle": [], |
|
"last": "Varga", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of QG2010: The Third Workshop on Question Generation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "80--83", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrea Varga. 2010. Le an ha 2010 wlv: A ques- tion generation system for the qgstec 2010 task b. In Proceedings of QG2010: The Third Workshop on Question Generation, pages 80-83.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Backpropagation through time: what it does and how to do it", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Werbos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Proceedings of the IEEE", |
|
"volume": "78", |
|
"issue": "10", |
|
"pages": "1550--1560", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P.J. Werbos. 1990. Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, 78(10):1550-1560, Oct.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Robust question answering over the web of linked data", |
|
"authors": [ |
|
{ |
|
"first": "Mohamed", |
|
"middle": [], |
|
"last": "Yahya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Klaus", |
|
"middle": [], |
|
"last": "Berberich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shady", |
|
"middle": [], |
|
"last": "Elbassuoni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gerhard", |
|
"middle": [], |
|
"last": "Weikum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 22nd ACM international conference on Conference on information & knowledge management", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1107--1116", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mohamed Yahya, Klaus Berberich, Shady Elbassuoni, and Gerhard Weikum. 2013. Robust question an- swering over the web of linked data. In Proceedings of the 22nd ACM international conference on Con- ference on information & knowledge management, pages 1107-1116. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "ADADELTA: an adaptive learning rate method", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Zeiler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew D. Zeiler. 2012. ADADELTA: an adaptive learning rate method. arxiv:, 1212.5701.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Automatically generating questions from queries for community-based question answering", |
|
"authors": [ |
|
{ |
|
"first": "Shiqi", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haifeng", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chao", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ting", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yi", |
|
"middle": [], |
|
"last": "Guan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "IJCNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "929--937", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shiqi Zhao, Haifeng Wang, Chao Li, Ting Liu, and Yi Guan. 2011. Automatically generating questions from queries for community-based question answer- ing. In IJCNLP, pages 929-937.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "K2q: Generating natural language questions from keywords with user refinements", |
|
"authors": [ |
|
{ |
|
"first": "Zhicheng", |
|
"middle": [], |
|
"last": "Zheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiance", |
|
"middle": [], |
|
"last": "Si", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaoyan", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "IJCNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "947--955", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhicheng Zheng, Xiance Si, Edward Y Chang, and Xi- aoyan Zhu. 2011. K2q: Generating natural lan- guage questions from keywords with user refine- ments. In IJCNLP, pages 947-955.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Natural language question answering over rdf: a graph data driven approach", |
|
"authors": [ |
|
{ |
|
"first": "Lei", |
|
"middle": [], |
|
"last": "Zou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruizhe", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haixun", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffer", |
|
"middle": [], |
|
"last": "Xu Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wenqiang", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dongyan", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 ACM SIGMOD international conference on Management of data", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "313--324", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lei Zou, Ruizhe Huang, Haixun Wang, Jeffer Xu Yu, Wenqiang He, and Dongyan Zhao. 2014. Natural language question answering over rdf: a graph data driven approach. In Proceedings of the 2014 ACM SIGMOD international conference on Management of data, pages 313-324. ACM.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"text": "Ratings given by human judges for generated questions for K2Q-Template, K2Q-PBSMT and K2Q-RNN ( Best viewed in color).", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"text": "The plot shows the performance of 3 methods K2Q-Template, K2Q-PBSMT, and K2Q-RNN as a functions of number of keywords.", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"num": null, |
|
"content": "<table/>", |
|
"text": "An example set of keywords constructed from the triple CEO(Sundar Pichai, Google)", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"4\">Figure 1: Triples with the entity London in a</td></tr><tr><td colspan=\"2\">knowledge graph</td><td/><td/></tr><tr><td colspan=\"4\">since London is the answer to the above ques-</td></tr><tr><td colspan=\"4\">tion, ({Capital, City, United Kingdom}, London)</td></tr><tr><td colspan=\"4\">together will form a Question Keyword and An-</td></tr><tr><td colspan=\"4\">swer (QKA) pair. One important note is that Cap-</td></tr><tr><td colspan=\"2\">ital, City, Column A</td><td>Column B</td><td>Column C</td></tr><tr><td>Subject</td><td colspan=\"3\">United Kingdom Stephen Wolfram London</td></tr><tr><td>Domain</td><td>Country</td><td>Person</td><td>Location</td></tr><tr><td colspan=\"2\">Predicate Capital</td><td>Birth Place</td><td>Contains</td></tr><tr><td>Object</td><td>London</td><td>London</td><td>Buckingham Palace</td></tr><tr><td>Range</td><td>City</td><td>Location</td><td>Location</td></tr></table>", |
|
"text": "United Kingdom and London are the English labels of the node that represent these entities in the KG.", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"content": "<table/>", |
|
"text": "Examples of 5-tuples (subject, domain, predicate, object, range) for the entity London", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"content": "<table/>", |
|
"text": "generate different semantically valid questions from the same set K based on the order of the words in the set. For example, given the question keywords, QK = {King, Sweden} we can generate two semantically valid questions by changing the order of King and Sweden: (i) Who is the King of Sweden? and (ii) Does Sweden have a King?", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"num": null, |
|
"content": "<table><tr><td>Ground truth</td><td>K2Q-PBSMT</td><td>K2Q-RNN</td></tr><tr><td>pitching in baseball ?</td><td>pitching in baseball ?</td><td>what is pitching in baseball ?</td></tr><tr><td>difference between mergeracqs and amalgamation ?</td><td>what is the</td><td/></tr></table>", |
|
"text": "difference between mergeracqs amalgamation ? what is the difference between mergeracqs and amalgamation ? did great britain control iraq ? great britain control in iraq ? how did the great britain control the iraq ? what is the critical analysis of the poem a river ? critical analysis of the poem a river ? what is the most critical analysis of the poem a river ? global warming affect population growth ? global warming affect the population growth ? can global warming affect the population growth ?", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"content": "<table><tr><td>Entity</td><td>Keyword Query</td><td>Generated Question</td><td>Answer</td></tr><tr><td/><td>birth place alan turing</td><td>where is the birth place of alan turing ? ( )</td><td>maida vale ( )</td></tr><tr><td>Alan Turing</td><td>inventor lu decomposition tv episodes alan turing</td><td>who was the inventor of lu decomposition ? ( ) tv episodes of alan turing ? ( )</td><td>alan turing ( ) dangerous knowledge ( )</td></tr><tr><td/><td>author mathematical logic</td><td>what is the author of the mathematical logic ? (x)</td><td>alan turing ( )</td></tr><tr><td/><td>ioc code france</td><td>what is the ioc code for france ? ( )</td><td>fr ( )</td></tr><tr><td>France</td><td>capital france location lake annecy</td><td>what is the capital of france ? ( ) what is the location of lake annecy ? ( )</td><td>paris ( ) france ( )</td></tr><tr><td/><td>albin haller country</td><td>is albin haller a country ? (x)</td><td>france (x)</td></tr><tr><td/><td>wimbledon first date occurrence</td><td>what was the wimbledon first date of occurrence ? ( )</td><td>1877-07-09 ( )</td></tr><tr><td>Wimbledon</td><td>current frequency wimbledon</td><td>what is the current frequency of wimbledon ? ( )</td><td>yearly ( )</td></tr><tr><td/><td>official website wimbledon</td><td>what is the official website for wimbledon ? ( )</td><td>http://www.wimbledon.com/ ( )</td></tr></table>", |
|
"text": "Example questions from Ground truth, K2Q-PBSMT and K2Q-RNN", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF7": { |
|
"num": null, |
|
"content": "<table><tr><td>K2Q-Template</td><td>K2Q-PBSMT</td><td>K2Q-RNN</td></tr></table>", |
|
"text": "Example question-answer pairs extracted for different entities by using Freebase and K2Q-RNN. Question-Answer pairs are considered correct if and only if both are marked with by human judges.", |
|
"html": null, |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |