{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:31:12.650027Z"
},
"title": "Summary-Oriented Question Generation for Informational Queries",
"authors": [
{
"first": "Xusen",
"middle": [],
"last": "Yin",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Li",
"middle": [],
"last": "Zhou",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Kevin",
"middle": [],
"last": "Small",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Users frequently ask simple factoid questions for question answering (QA) systems, attenuating the impact of myriad recent works that support more complex questions. Prompting users with automatically generated suggested questions (SQs) can improve user understanding of QA system capabilities and thus facilitate more effective use. We aim to produce self-explanatory questions that focus on main document topics and are answerable with variable length passages as appropriate. We satisfy these requirements by using a BERT-based Pointer-Generator Network trained on the Natural Questions (NQ) dataset. Our model shows SOTA performance of SQ generation on the NQ dataset (20.1 BLEU-4). We further apply our model on out-of-domain news articles, evaluating with a QA system due to the lack of gold questions and demonstrate that our model produces better SQs for news articles-with further confirmation via a human evaluation.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Users frequently ask simple factoid questions for question answering (QA) systems, attenuating the impact of myriad recent works that support more complex questions. Prompting users with automatically generated suggested questions (SQs) can improve user understanding of QA system capabilities and thus facilitate more effective use. We aim to produce self-explanatory questions that focus on main document topics and are answerable with variable length passages as appropriate. We satisfy these requirements by using a BERT-based Pointer-Generator Network trained on the Natural Questions (NQ) dataset. Our model shows SOTA performance of SQ generation on the NQ dataset (20.1 BLEU-4). We further apply our model on out-of-domain news articles, evaluating with a QA system due to the lack of gold questions and demonstrate that our model produces better SQs for news articles-with further confirmation via a human evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Question answering (QA) systems have experienced dramatic recent empirical improvements due to several factors including novel neural architectures (Chen and Yih, 2020), access to pre-trained contextualized embeddings (Devlin et al., 2019) , and the development of large QA training corpora (Rajpurkar et al., 2016; Trischler et al., 2017; Yu et al., 2020) . However, despite technological advancements that support more sophisticated questions (Yang et al., 2018; Joshi et al., 2017; Choi et al., 2018; Reddy et al., 2019) , many consumers of QA technology in practice tend to ask simple factoid questions when engaging with these systems. Potential explanations for this phenomenon include low expectations set by previous QA systems, limited coverage for more complex questions * Work was done as an intern at Amazon. not changing these expectations, and users simply not possessing sufficient knowledge of the subject of interest to ask more challenging questions. Irrespective of the reason, one potential solution to this dilemma is to provide users with automatically generated suggested questions (SQs) to help users better understand QA system capabilities.",
"cite_spans": [
{
"start": 218,
"end": 239,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 291,
"end": 315,
"text": "(Rajpurkar et al., 2016;",
"ref_id": "BIBREF30"
},
{
"start": 316,
"end": 339,
"text": "Trischler et al., 2017;",
"ref_id": "BIBREF37"
},
{
"start": 340,
"end": 356,
"text": "Yu et al., 2020)",
"ref_id": null
},
{
"start": 445,
"end": 464,
"text": "(Yang et al., 2018;",
"ref_id": "BIBREF41"
},
{
"start": 465,
"end": 484,
"text": "Joshi et al., 2017;",
"ref_id": "BIBREF8"
},
{
"start": 485,
"end": 503,
"text": "Choi et al., 2018;",
"ref_id": null
},
{
"start": 504,
"end": 523,
"text": "Reddy et al., 2019)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Generating SQs is a specific form of question generation (QG), a long-studied task with many applied use cases -the most frequent purpose being data augmentation for mitigating the high sample complexity of neural QA models (Alberti et al., 2019a) . However, the objective of such existing QG systems is to produce large quantities of question/answer pairs for training, which is contrary to that of SQs. The latter seeks to guide users in their research of a particular subject by producing engaging and understandable questions. To this end, we aim to generate questions that are self-explanatory and introductory.",
"cite_spans": [
{
"start": 224,
"end": 247,
"text": "(Alberti et al., 2019a)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Self-explanatory questions require neither significant background knowledge nor access to documents used for QG to understand the SQ context. For example, existing QG systems may use the text \"On December 13, 2013, Beyonc\u00e9 unexpectedly released her eponymous fifth studio album on the iTunes store without any prior announcement or promotion.\" to produce the question \"Where was the album released?\" This kind of question is not uncommon in crowd-sourced datasets (e.g., SQuAD (Rajpurkar et al., 2016) ) but do not satisfy the self-explanatory requirement. Clark and Gardner (2018) estimate that 33 % of SQuAD questions are context-dependent. This context-dependency is not surprising, given that annotators observe the underlying documents when generating questions.",
"cite_spans": [
{
"start": 477,
"end": 501,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Introductory questions are best answered by a larger passage than short spans such that users can learn more about the subject, possibly inspiring follow-up questions (e.g., \"Can convalescent plasma help COVID patients?\"). However, existing QG methods mostly generate questions while reading the text corpus and tend to produce narrowly focused questions with close syntactic relations to associated answer spans. TriviaQA (Joshi et al., 2017) and HotpotQA (Yang et al., 2018) also provide fine-grained questions, even though reasoning from a larger document context via multi-hop inference. This narrower focus often produces factoid questions peripheral to the main topic of the underlying document and is less useful to a human user seeking information about a target concept.",
"cite_spans": [
{
"start": 423,
"end": 443,
"text": "(Joshi et al., 2017)",
"ref_id": "BIBREF8"
},
{
"start": 457,
"end": 476,
"text": "(Yang et al., 2018)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Conversely, the Natural Question (NQ) dataset (Kwiatkowski et al., 2019) (and similar ones such as MS Marco (Bajaj et al., 2016) , GooAQ (Khashabi et al., 2021) ) is significantly closer to simulating the desired informationseeking behavior. Questions are generated independently of the corpus by processing search query logs, and the resulting answers can be entities, spans in texts (aka short answers), or entire paragraphs (aka long answers). Thus, the NQ dataset is more suitable as QG training data for generating SQs as long-answer questions that tend to satisfy our self-explanatory and introductory requirements.",
"cite_spans": [
{
"start": 46,
"end": 72,
"text": "(Kwiatkowski et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 108,
"end": 128,
"text": "(Bajaj et al., 2016)",
"ref_id": "BIBREF2"
},
{
"start": 137,
"end": 160,
"text": "(Khashabi et al., 2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To this end, we propose a novel BERT-based Pointer-Generator Network (BERTPGN) trained with the NQ dataset to generate introductory and self-explanatory questions as SQs. Using NQ, we start by creating a QG dataset that contains questions with both short and long answers. We train our BERTPGN model with these two types of context-question pairs together. During inference, the model can generate either short-or long-answer questions as determined by the context. With automatic evaluation metrics such as BLEU (Papineni et al., 2002) , we show that for long-answer question generation, our model can produce state-of-the-art performance with 20.1 BLEU-4, 6.2 higher than (Mishra et al., 2020) , the current state-of-the-art on this dataset. The short answer question generation performance can reach 28.1 BLEU-4.",
"cite_spans": [
{
"start": 513,
"end": 536,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF25"
},
{
"start": 674,
"end": 695,
"text": "(Mishra et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We further validate the generalization ability of our BERTPGN model by creating an out-of-domain test set with the CNN/Daily Mail (Hermann et al., 2015) . Without human-generated reference questions, automatic evaluation metrics such as BLEU are not usable. We propose to evaluate these questions with a pretrained QA system that produces two novel metrics. The first is answerability, mea-suring the possibility to find answers from given contexts. The second is granularity, indicating whether the answer would be passages or short spans. Finally, we conduct a human evaluation with generated questions of the test set and demonstrate that our BERTPGN model can produce introductory and self-explanatory questions for informationseeking scenarios, even for a new domain that differs from the training data.",
"cite_spans": [
{
"start": 130,
"end": 152,
"text": "(Hermann et al., 2015)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The novel contributions of our paper include:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We generate questions, aiming to be both introductory and self-explanatory, to support human information seeking QA sessions. \u2022 We propose to use the BERT-based Pointer-Generator Network to generate questions by encoding larger contexts capable of resulting in answer forms including entities, short text spans, and even whole paragraphs. \u2022 We evaluate our method, both automatically and with human evaluation, on in-domain Natural Questions and out-of-domain news datasets, providing insights into question generation for information seeking. \u2022 We propose a novel evaluation metric with a pretrained QA system for generated SQs when there is no reference question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "QG has been studied in multiple application contexts (e.g., generating questions for reading comprehension tests (Heilman and Smith, 2010) , generating questions about an image (Mostafazadeh et al., 2016) , recommending questions with respect to a news article (Laban et al., 2020) ), evaluating summaries (Deutsch et al., 2020; , and using multiple methods (see (Pan et al., 2019) for a recent survey). Early neural models focused on sequence-to-sequence generation based solutions (Serban et al., 2016; Du et al., 2017) . The primary directions for improving these early works generally fall into the categories of providing mechanisms to inject answer-aware information into the neural encoder-decoder architectures (Du and Cardie, 2018; Liu et al., 2019; Sun et al., 2018) , encoding larger portions of the answer document as context (Zhao et al., 2018; Tuan et al., 2020) , and incorporating richer knowledge sources (Elsahar et al., 2018) . These QG methods and the work described in this paper focus on using single-hop QA datasets such as SQuAD (Rajpurkar et al., 2016 (Rajpurkar et al., , 2018 , NewsQA (Trischler et al., 2017; Hermann et al., 2015) , and MS Marco (Bajaj et al., 2016) . However, there has also been recent interest in multi-hop QG settings (Yu et al., 2020; Gupta et al., 2020; Malon and Bai, 2020) by using multi-hop QA datasets including HotPotQA (Yang et al., 2018) , Trivi-aQA (Joshi et al., 2017) , and FreebaseQA (Jiang et al., 2019) . Finally, there has been some recent interesting work regarding unsupervised QG, where the goal is to generate QA training data without an existing QG corpus to train better QA models (Lewis et al., 2019; Li et al., 2020) .",
"cite_spans": [
{
"start": 113,
"end": 138,
"text": "(Heilman and Smith, 2010)",
"ref_id": "BIBREF3"
},
{
"start": 177,
"end": 204,
"text": "(Mostafazadeh et al., 2016)",
"ref_id": "BIBREF22"
},
{
"start": 261,
"end": 281,
"text": "(Laban et al., 2020)",
"ref_id": "BIBREF12"
},
{
"start": 306,
"end": 328,
"text": "(Deutsch et al., 2020;",
"ref_id": null
},
{
"start": 363,
"end": 381,
"text": "(Pan et al., 2019)",
"ref_id": null
},
{
"start": 483,
"end": 504,
"text": "(Serban et al., 2016;",
"ref_id": "BIBREF34"
},
{
"start": 505,
"end": 521,
"text": "Du et al., 2017)",
"ref_id": null
},
{
"start": 719,
"end": 740,
"text": "(Du and Cardie, 2018;",
"ref_id": null
},
{
"start": 741,
"end": 758,
"text": "Liu et al., 2019;",
"ref_id": "BIBREF19"
},
{
"start": 759,
"end": 776,
"text": "Sun et al., 2018)",
"ref_id": "BIBREF36"
},
{
"start": 838,
"end": 857,
"text": "(Zhao et al., 2018;",
"ref_id": null
},
{
"start": 858,
"end": 876,
"text": "Tuan et al., 2020)",
"ref_id": "BIBREF38"
},
{
"start": 922,
"end": 944,
"text": "(Elsahar et al., 2018)",
"ref_id": null
},
{
"start": 1053,
"end": 1076,
"text": "(Rajpurkar et al., 2016",
"ref_id": "BIBREF30"
},
{
"start": 1077,
"end": 1102,
"text": "(Rajpurkar et al., , 2018",
"ref_id": "BIBREF29"
},
{
"start": 1112,
"end": 1136,
"text": "(Trischler et al., 2017;",
"ref_id": "BIBREF37"
},
{
"start": 1137,
"end": 1158,
"text": "Hermann et al., 2015)",
"ref_id": "BIBREF4"
},
{
"start": 1174,
"end": 1194,
"text": "(Bajaj et al., 2016)",
"ref_id": "BIBREF2"
},
{
"start": 1267,
"end": 1284,
"text": "(Yu et al., 2020;",
"ref_id": null
},
{
"start": 1285,
"end": 1304,
"text": "Gupta et al., 2020;",
"ref_id": null
},
{
"start": 1305,
"end": 1325,
"text": "Malon and Bai, 2020)",
"ref_id": "BIBREF20"
},
{
"start": 1376,
"end": 1395,
"text": "(Yang et al., 2018)",
"ref_id": "BIBREF41"
},
{
"start": 1408,
"end": 1428,
"text": "(Joshi et al., 2017)",
"ref_id": "BIBREF8"
},
{
"start": 1446,
"end": 1466,
"text": "(Jiang et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 1652,
"end": 1672,
"text": "(Lewis et al., 2019;",
"ref_id": "BIBREF14"
},
{
"start": 1673,
"end": 1689,
"text": "Li et al., 2020)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Most directly related to our work from a motivation perspective is recent research regarding providing SQs in the context of supporting a news chatbot (Laban et al., 2020) . However, the focus of this work is not QG, where they essentially use a GPT-2 language model (Radford et al., 2019) trained on SQuAD data for QG and do not evaluate this component independently. Qi et al. (2020) generates questions for information-seeking but not focuses on introductory questions. Most directly related to our work from a conceptual perspective is regarding producing questions for long answer targets (Mishra et al., 2020), which we contrast directly in Section 3. As QG is a generation task, automated evaluation frequently uses metrics such as BLEU (Papineni et al., 2002) , METEOR (Lavie and Agarwal, 2007) , and ROUGE (Lin, 2004) . As these do not explicitly evaluate the requirements of our information-seeking use case, we also evaluate using the output of a trained QA system and conduct human annotator evaluations.",
"cite_spans": [
{
"start": 151,
"end": 171,
"text": "(Laban et al., 2020)",
"ref_id": "BIBREF12"
},
{
"start": 267,
"end": 289,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF28"
},
{
"start": 369,
"end": 385,
"text": "Qi et al. (2020)",
"ref_id": "BIBREF27"
},
{
"start": 744,
"end": 767,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF25"
},
{
"start": 777,
"end": 802,
"text": "(Lavie and Agarwal, 2007)",
"ref_id": "BIBREF13"
},
{
"start": 815,
"end": 826,
"text": "(Lin, 2004)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Given a context X and an answer A, we want to generate a questionQ that satisfies",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3"
},
{
"text": "Q = arg max Q P (Q|X, A),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3"
},
{
"text": "where the context X could be a paragraph or a document that contains answers, rather than sentences as used in (Du and Cardie, 2018; Tuan et al., 2020) , while A could be either short spans in X such as entities or noun phrases (referred to as a short answer), or the entire context X (referred to as a long answer).",
"cite_spans": [
{
"start": 111,
"end": 132,
"text": "(Du and Cardie, 2018;",
"ref_id": null
},
{
"start": 133,
"end": 151,
"text": "Tuan et al., 2020)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3"
},
{
"text": "The long answer QG task targets generating questions that are best answered by the entire context (i.e., paragraph or document) or a summary of the context, which is notably different from Figure 1 : The BERTPGN architecture. The input for the BERT encoder is the context (w/p: word and position embeddinngs) with answer spans (or the whole context in the long answer setting) marked with the answer tagging (t: answer tagging embeddings). The decoder is a combination of BERT as a language model (i.e. has only self-attentions) and a Transformer-based pointergenerator network.",
"cite_spans": [],
"ref_spans": [
{
"start": 189,
"end": 197,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3"
},
{
"text": "most QG settings where the answer is a short text span and the context is frequently a single sentence. Mishra et al. (2020) also work on the long answer QG setting using the NQ dataset, but their task definition is arg max Q P (Q|X) where they refer to the context X as the long answer. We use their models as baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3"
},
{
"text": "We use the BERT-based Pointer-Generator Network (BERTPGN) to generate questions. Tuan et al. (2020) use two-layer cross attentions between contexts and answers to encode contexts such as paragraphs when generating questions and show improved results. However, they show that threelayer cross attentions produce worse results. We will show later in the experiment that this is due to a lack of better initialization and that a higher layer is better for long answer question generation. Zhao et al. (2018) use answer tagging from the context instead of combining context and answer. Our model is motivated by these two works (Figure 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 624,
"end": 633,
"text": "(Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Methods",
"sec_num": "4"
},
{
"text": "Given context X = {x i } L i=1 , we add positional embeddings P = {p i } L i=1 and type embeddings T = {t i } L i=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context and Answer Encoding",
"sec_num": "4.1"
},
{
"text": "as the input for BERT. We use type embeddings to discriminate between a context and an answer, following Zhao et al. (2018); Tuan et al. (2020). We use t i = 0 to represent 'context-only' and t i = 1 to represent 'both context and answer' for token x i . We do not apply the [CLS] in the beginning since we do not need the pooled output from BERT. We do not use the [SEP] to combine contexts and answers as inputs for BERT since we mark answers in the context with type embeddings.",
"cite_spans": [
{
"start": 275,
"end": 280,
"text": "[CLS]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Context and Answer Encoding",
"sec_num": "4.1"
},
{
"text": "The sequence output from BERT which forms our context-answer encoding is given by H = f BERT (X + P + T ) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context and Answer Encoding",
"sec_num": "4.1"
},
{
"text": "The transformer-based Pointer-Generator Network is derived from (See et al., 2017) with adaptations to support transformers (Vaswani et al., 2017) . Denoting LN(\u2022) as layer normalization, MHA(Q, K, V ) as the multi-head attention with three parameters-query, key, and value, FFN(\u2022) as a linear function, and the decoder input at time t: Y (t) = {y j } t j=1 , the decoder self-attention at time t is given by (illustrated with a single-layer transformer simplification)",
"cite_spans": [
{
"start": 64,
"end": 82,
"text": "(See et al., 2017)",
"ref_id": "BIBREF32"
},
{
"start": 124,
"end": 146,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Question Decoding",
"sec_num": "4.2"
},
{
"text": "A (t) S = LN MHA Y (t) , Y (t) , Y (t) + Y (t) ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Decoding",
"sec_num": "4.2"
},
{
"text": "the cross-attention between encoder and decoder is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Decoding",
"sec_num": "4.2"
},
{
"text": "A (t) C = LN MHA A (t) S , H, H + A (t) S",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Decoding",
"sec_num": "4.2"
},
{
"text": ", and the final decoder output is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Decoding",
"sec_num": "4.2"
},
{
"text": "O (t) = LN FFN A (t) C + A (t) C .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Decoding",
"sec_num": "4.2"
},
{
"text": "Using the LSTM (Hochreiter and Schmidhuber, 1997 ) encoder-decoder model, See et al. (2017) compute a generation probability using the encoder context, decoder state, and decoder input. While the transformer decoder cross-attention A (t) C already contains a linear combination between selfattention of decoder input and encoder-decoder cross attention. Thus, we use the combination of the decoder input and cross-attention to compute the generation probability",
"cite_spans": [
{
"start": 15,
"end": 48,
"text": "(Hochreiter and Schmidhuber, 1997",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Question Decoding",
"sec_num": "4.2"
},
{
"text": "P (t) G = FFN Y (t) , A (t) C .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Decoding",
"sec_num": "4.2"
},
{
"text": "To improve generalization, we also use a separate BERT model as a language model (LM) for the decoder. Even though BERT is not trained to predict the next token (Devlin et al., 2019) as with typical language models (e.g., GPT-2), we still choose BERT as our LM to ensure the COPY mechanism shares the same vocabulary between the encoder and the decoder. 1 We also do not need to process out-of-vocabulary words because we use the BPE (Sennrich et al., 2016; Devlin et al., 2019) tokenization in both the encoder and decoder.",
"cite_spans": [
{
"start": 354,
"end": 355,
"text": "1",
"ref_id": null
},
{
"start": 434,
"end": 457,
"text": "(Sennrich et al., 2016;",
"ref_id": "BIBREF33"
},
{
"start": 458,
"end": 478,
"text": "Devlin et al., 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Question Decoding",
"sec_num": "4.2"
},
{
"text": "We use Natural Questions dataset (Kwiatkowski et al., 2019) for training as NQ questions are independent of their supporting documents. NQ has 307,000 training examples, answered and annotated from Wikipedia pages, in a format of a question, a Wikipedia link, long answer candidates, and short answer annotations. 51 % of these questions have no answer for either being invalid or nonevidence in their supporting documents. Another 36 % have long answers that are paragraphs and have corresponding short answers that either spans long answers or being masked as yes-or-no. The remaining 13 % questions only have long answers. We are most interested in the last portion of questions as they are best answered by summaries of their long answers, reflecting the coarse-grained information-seeking behavior. 2 We use paragraphs that contain long answers or short answers in NQ as the context. We do not consider using the whole Wikipedia page, i.e., the document, as the context as most Wikipedia pages are too long to encode: In the NQ training set, there are 8407 tokens at document level on average, while for news articles in the CNN/Daily Mail that we will discuss in Section 5.2, the average document size is 583 (Tuan et al., 2020), which is not much larger than the average size of long answers in NQ (384 tokens).",
"cite_spans": [
{
"start": 33,
"end": 59,
"text": "(Kwiatkowski et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 804,
"end": 805,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Natural Questions dataset",
"sec_num": "5.1"
},
{
"text": "We also consider the ratio between questions and the context-answer pairs to avoid generating multiple questions based on the same context-answer. After removing questions that have no answers, there are 152,148 questions and 136,450 unique long answers. The average ratio between questions and long answers is around 1.1 questions per paragraph (ratios are in a range of 1 to 47). The average ratio is more reasonable for question generation, comparing to the SQuAD where there are 1.4 questions per sentence on average (Du et al., 2017).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Natural Questions dataset",
"sec_num": "5.1"
},
{
"text": "We extract questions, long answers, and short answer spans from the NQ dataset. We also extract the Wikipedia titles since long answers alone do not 2 Data annotation is a subjective task where different annotators could have different opinions for whether there is a short answer or not. NQ uses multi-fold annotations (e.g., a 5-fold annotation for the dev set). However, the training data only has the 1-fold annotation, so whether there is a short answer is not 100 % accurate. always contain the words from their corresponding titles. We add brackets ('[' and ']') for all possible short answer spans such that we can later extract these spans accordingly to avoid potential position changes due to context preprocessing (e.g., different tokenization). 3 When there is no short answer, we add brackets to the whole long answer. We then concatenate the titles with long answers as contexts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NQ Preprocessing",
"sec_num": "5.1.1"
},
{
"text": "For details, see examples from Figure 5 and Figure 6 in Appendix A.",
"cite_spans": [],
"ref_spans": [
{
"start": 31,
"end": 39,
"text": "Figure 5",
"ref_id": null
},
{
"start": 44,
"end": 53,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "NQ Preprocessing",
"sec_num": "5.1.1"
},
{
"text": "As in (Mishra et al., 2020), we only keep questions with long answers starting from the HTML paragraph tag. After preprocessing (Table 1) , we get 110,865 question-context pairs, while Mishra et al. (2020) gets 77,501 pairs since they only keep long answer questions. We split the dataset with a 90/10 ratio for training/validation. We use the original NQ dev set, which contains 7830 questions, as our test set. We follow the same extraction procedure as with the training and validation data modulo two new steps. First, noting that 79 % of Wikipedia pages appearing in the NQ dev set are also present in the NQ training set, we filter all overlapped contexts from the NQ dev set when creating our test set. Second, the original NQ dev set is 5-way annotated; thus, each question may have up to five different long/short answers. We treat each annotation as an independent context, even though they are associated with the same target question. To separately evaluate the QG performance for long answers and short answers, we split test data into long-answer questions (NQ-LA) and short-answer questions (NQ-SA). Finally, we get 4859 test data in total, with 1495 of them only have long answers while the remaining 3364 have both long and short answers while Mishra et al. (2020) gets 2136 test data from the original dev set.",
"cite_spans": [],
"ref_spans": [
{
"start": 128,
"end": 137,
"text": "(Table 1)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "NQ Preprocessing",
"sec_num": "5.1.1"
},
{
"text": "3 Using brackets here is an arbitrary but functional choice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NQ Preprocessing",
"sec_num": "5.1.1"
},
{
"text": "We use the 12,744 CNN news articles from the CNN/Daily Mail dataset (Hermann et al., 2015)) for the out-of-domain evaluation. We apply the same preprocessing method as in the NQ dataset to create a long-answer test set -News-LA. We use whole news articles, instead of paragraphs, as contexts, considering to generate questions that lead to entire news articles as answers. For each news article, we first remove highlights, which is a human-generated summary, and datelines (e.g., NEW DELHI, India (CNN)). We filter out those news articles that are longer than 490 tokens with the BEP tokenization and those overlapped contextquestion pairs. Finally, we get 3048 data in the News-LA test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "News dataset",
"sec_num": "5.2"
},
{
"text": "6 In-Domain Evaluation with Generation Metrics",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "News dataset",
"sec_num": "5.2"
},
{
"text": "We use a BERT-base uncased model (Devlin et al., 2019) that contains 12 hidden layers. The vocabulary contains 30,522 tokens. We create the PGN decoder with another BERT model from the same setting, followed by a 2-layer transformer with 12 heads and 3072 intermediate sizes. The maximum allowed context length is 500, while the maximum question length is 50. We train our model on an Amazon EC2 P3 machine with one Tesla V100 GPU, with the batch size 10, and the learning rate 5 \u00d7 10 \u22125 with the Adam optimizer (Kingma and Ba, 2015) on all parameters of the BERTPGN model (both BERT models are trainable). We train 20 epochs of our model and evaluate with the dev set to select the model according to perplexity. Each epoch takes around 20 minutes to finish. Throughout the paper, we use the implementation of BLEU, METEOR, and ROUGE L by Sharma et al. (2017) .",
"cite_spans": [
{
"start": 840,
"end": 860,
"text": "Sharma et al. (2017)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Setup and Training",
"sec_num": "6.1"
},
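The pointer mechanism at the heart of the BERTPGN decoder can be illustrated with a minimal sketch. This is the standard See et al.-style pointer-generator mixture, not the exact BERTPGN implementation, and all numbers are hypothetical: the final word distribution mixes the decoder's vocabulary distribution with the attention (copy) distribution over source tokens.

```python
def pointer_generator_dist(p_gen, p_vocab, attention, src_tokens):
    """Mix generation and copy distributions (pointer-generator sketch).

    p_vocab:   {token: prob} from the decoder softmax.
    attention: per-source-position weights summing to 1.
    """
    final = {w: p_gen * p for w, p in p_vocab.items()}
    for pos, tok in enumerate(src_tokens):
        # Copy mass accumulates on every occurrence of a source token.
        final[tok] = final.get(tok, 0.0) + (1 - p_gen) * attention[pos]
    return final

dist = pointer_generator_dist(
    p_gen=0.7,
    p_vocab={"what": 0.5, "is": 0.3, "discovery": 0.2},
    attention=[0.6, 0.4],
    src_tokens=["discovery", "launch"],
)
# "discovery" gets both generation and copy mass: 0.7*0.2 + 0.3*0.6 = 0.32;
# "launch", absent from the decoder vocabulary, still gets 0.3*0.4 = 0.12.
```

This mixing is what lets the model emit rare source words (names, titles) that a pure generation model would miss.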
{
"text": "We first evaluate our model using BLEU, ME-TEOR, and ROUGE L to compare with Mishra et al. (2020) on long answers (first two rows in Table 2). The transformer-based iwslt de en is a German to English translation model with 6 encoder and decoder layers, 16 encoder and decoder attention heads, 1024 embedding dimension, and 4096 embedding dimension of feed forward network. The other transformer-based multi-source method, which is based on (Libovick\u00fd et al., 2018) , combines each context with a retrieval-based summary as input. We decode questions from our model using beam search (beam=3). 4 Evaluating on NQ-LA, our BERTPGN model outperforms both existing models substantially with near seven points for all metrics. The performance for short answer questions NQ-SA is even better, with near eight more BLEU-4 points than NQ-LA.",
"cite_spans": [
{
"start": 440,
"end": 464,
"text": "(Libovick\u00fd et al., 2018)",
"ref_id": "BIBREF17"
},
{
"start": 593,
"end": 594,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "In-Domain Evaluation",
"sec_num": "6.2"
},
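Beam-search decoding (beam=3) as used here can be sketched over a toy next-token distribution; `step_probs` is a hypothetical stand-in for the decoder softmax.

```python
import math

def beam_search(step_probs, beam=3, steps=2):
    """Keep the `beam` highest log-probability prefixes at each step.

    step_probs(prefix) -> {token: probability} is a toy stand-in for
    the decoder's next-token distribution.
    """
    beams = [((), 0.0)]  # (token sequence, cumulative log-probability)
    for _ in range(steps):
        candidates = []
        for seq, score in beams:
            for tok, p in step_probs(seq).items():
                candidates.append((seq + (tok,), score + math.log(p)))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam]
    return beams

def dist(seq):
    # Toy distribution that ignores the prefix and always prefers "what".
    return {"what": 0.6, "who": 0.3, "when": 0.1}

best_seq, best_score = beam_search(dist, beam=3, steps=2)[0]
```

With a real decoder, `step_probs` would condition on the encoded context and the prefix, and decoding would stop at an end-of-sequence token.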
{
"text": "We first examine the effect of the pointer network from the BERTPGN. We then run ablation study by first removing BERT-LM in the decoder, and independently \u2022 removing type IDs from BERT encoder \u2022 removing BERT initialization for BERT encoder \u2022 substituting BERT encoder with a 2-layer transformer We train our BERTPGN models from scratch for each setting and conduct these ablation studies for NQ-LA and NQ-SA separately (Table 3) .",
"cite_spans": [],
"ref_spans": [
{
"start": 421,
"end": 430,
"text": "(Table 3)",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "6.3"
},
{
"text": "Removing the pointer from the BERTPGN makes the BLEU-4 scores drop for both NQ-LA and NQ-SA more than removing the BERT as the LM in Figure 2 : The BERT-joint architecture (Alberti et al., 2019b) . Input is the combined question and context, and the outputs are an answer-type classification from the [CLS] token and start/end of answer spans for each token from the context. the decoder. Type IDs are more helpful for NQ-SA (approximately a 5-point drop in BLEU-4) than NQ-LA since NQ-SA needs to use type IDs to mark answers. Removing BERT initialization causes notable drops for both NQ-LA (3.6 drops in BLEU-4) and NQ-SA (7.2 in BLEU-4), which implies that BERT achieves better generalization when encoding these considerably long contexts. Another interesting finding is that the NQ-LA is more sensitive to the number of layers of the encoder than NQ-SA. When decreasing the layers to two from twelve, NQ-LA drops by 0.4 in BLEU-4 while NQ-SA drops by 0.2.",
"cite_spans": [
{
"start": 172,
"end": 195,
"text": "(Alberti et al., 2019b)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 133,
"end": 141,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "6.3"
},
{
"text": "7 Out-of-Domain Evaluation with QA Systems",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "6.3"
},
{
"text": "We use a well-trained question answering system as the evaluation method, given that the automated scoring metrics have two notable drawbacks when evaluating long-answer questions: (1) There are usually multiple valid questions for long-answer question generation as contexts are much longer than previous work. However, most datasets only have one gold question for each context; (2) They cannot measure generated questions when there is no gold question, which is the right problem that we encountered for our News-LA dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "6.3"
},
{
"text": "We use the BERT-joint model (Alberti et al., 2019b) ( Figure 2 ) for NQ question answering to evaluate our long answer question generation. The BERTjoint model takes the combination a question and the corresponding context as an input, outputs the probability of answer spans and the probability of answer types. For a context of size n, it produces p start and p end for each token, indicating whether this token is a start or end token of an answer span. It then chooses the answer span (i, j) where i < j on NQ long answer test set, which is 10 % better compared to models used in (Kwiatkowski et al., 2019; Parikh et al., 2016) . We define the answerability score (s ans ) as log (p ans /p no ans ), and the granularity score (s gra ) as log (p la /p sa ) when evaluating our long answer question generation with the BERTjoint model.",
"cite_spans": [
{
"start": 584,
"end": 610,
"text": "(Kwiatkowski et al., 2019;",
"ref_id": "BIBREF11"
},
{
"start": 611,
"end": 631,
"text": "Parikh et al., 2016)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 54,
"end": 62,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "The QA Metrics",
"sec_num": "7.1"
},
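The two scores defined above can be computed directly from the QA model's output probabilities; the probability values below are hypothetical.

```python
import math

def answerability(p_ans: float, p_no_ans: float) -> float:
    """s_ans = log(p_ans / p_no_ans): positive means likely answerable."""
    return math.log(p_ans / p_no_ans)

def granularity(p_la: float, p_sa: float) -> float:
    """s_gra = log(p_la / p_sa): positive favors a long (passage) answer."""
    return math.log(p_la / p_sa)

# Hypothetical BERT-joint outputs for one generated question.
s_ans = answerability(p_ans=0.8, p_no_ans=0.2)  # log 4, positive
s_gra = granularity(p_la=0.3, p_sa=0.6)         # log 0.5, negative
```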
{
"text": "We construct a baseline model to compare as follows. Using the same BERTPGN architecture, we train a model on the SQuAD sentence-question pairs prepared by Du et al. (2017). When generating questions for news articles, we use the first line of each news article as the context, with the assumption that the first line is a genuine summary produced by humans. Notice that the resulting baseline is the state-of-the-art for answer-free (the model does not know the whereabouts of answer spans) question generation with SQuAD (Table 4) .",
"cite_spans": [],
"ref_spans": [
{
"start": 523,
"end": 532,
"text": "(Table 4)",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "QG Models to Compare",
"sec_num": "7.2"
},
{
"text": "We refer to the model as M SD hereafter. Similarly, we call our BERTPGN model trained on the NQ dataset as M N Q . We use beam search (beam=3) for both models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "QG Models to Compare",
"sec_num": "7.2"
},
{
"text": "We show the QA evaluation results in Figure 3 . In the context column, M N Q shows a lower answerability score than the baseline model M SD . While granularity scores show a reverse trend, i.e., higher scores for M N Q than those of M SD . This result implies that M N Q generates more coarse-style questions that have long answers, but these questions are considerably more difficult to answer by the QA model, comparing to short-answer questions. It is also reasonable to assume that news articles' summaries are proper answer-candidates for Figure 3: Answerability and granularity scores of generated questions for News-LA with the BERT-joint model (Alberti et al., 2019b) as the evaluation QA model by answering generated questions from either news article context or news article highlights. We compare two models: (1) NQ: BERTPGN trained with NQ dataset and generate on whole news articles;",
"cite_spans": [
{
"start": 652,
"end": 675,
"text": "(Alberti et al., 2019b)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 37,
"end": 45,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "7.3"
},
{
"text": "(2) SD: BERTPGN trained with SQuAD dataset and generate on the first line of each news article. long-answer questions. Highlights in news articles are human-generated summaries, so we also combine the same set of questions with their corresponding highlights as input for the BERT-joint QA system with results shown as the highlights column in Figure 3 . The answerability scores drop for both models comparing the column highlights to the column of context, which is reasonable as the models never see highlights when generating questions. However, the baseline method M SD drops more significantly than M N Q , suggesting that the baseline model is more context-dependent while our model M N Q generates more self-explanatory questions. From the granularity scores of highlights, we find that confidence to determine answer types is lower for both models than that of the context column. However, the M N Q still shows higher granularity scores than the M SD .",
"cite_spans": [],
"ref_spans": [
{
"start": 344,
"end": 352,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "7.3"
},
{
"text": "We map generated questions for the News-LA on a 2D plot with x-axis the answerability score and y-axis the granularity score for both models in Figure 4 . They also confirm the negative correlation between answerability and granularity of generated questions. However, the M N Q generates more questions with both positive s ans and s gra than those from M SD , indicating the effectiveness of our model to generate introductory and self-explanatory questions. We further conduct a human evaluation using MTurk for the News-LA test set to verify that we can generate self-explanatory and introductory questions and that the automatic evaluation in Section 7 agrees with human evaluation. We ask annotators to read news articles and mark true or false for seven statements regarding generated questions. For each context-question pair, these statements include (see examples in Appendix B)",
"cite_spans": [],
"ref_spans": [
{
"start": 144,
"end": 152,
"text": "Figure 4",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "7.3"
},
{
"text": "\u2022 Question is context dependent \u2022 Question is irrelevant to the article \u2022 Question implies a contradiction to facts present in the article \u2022 Question focuses on a peripheral topic \u2022 There is a short span to answer the question \u2022 The entire article can be an answer \u2022 None answer in the article We randomly select 1000 news articles in News-LA to perform our human evaluation with three different annotators per news article. We received three valid annotations for 943 news articles from a set of 224 annotators. We first consider true/false results regarding three metrics -Context, Span, and Entire -considering only when unanimity is reached among annotators ( of Entire vs. 40 %) while less likely to be answered by spans from news articles (77 % true of Span vs. 89 %) comparing with M SD questions. These human evaluation results confirm that M N Q questions are more self-explanatory and introductory than M SD . We compute the s ans and s gra for the 943 generated questions (Section 7). We then normalize these two scores and conduct a Pearson correlation analysis (Benesty et al., 2009) with human evaluation results. We use all human evaluation results, regardless of agreements among annotators. From Table 6 , we find that Span has the strongest positive correlation with the s ans , while None shows the strongest negative correlation -aligning with the findings for answerability. Span also shows the strongest negative correlation with the s gra for both M N Q and M SD , but the highest positive correlation with granularity varies, with Irrelevant for M N Q questions and None for M SD questions.",
"cite_spans": [
{
"start": 1074,
"end": 1096,
"text": "(Benesty et al., 2009)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1213,
"end": 1220,
"text": "Table 6",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "7.3"
},
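The Pearson correlation analysis above can be sketched as follows; the score values and binary human labels are hypothetical, and r is computed from its textbook definition.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient r between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical: normalized answerability scores vs. binary "Span" labels.
s_ans = [1.2, 0.8, -0.5, -1.1, 0.3]
span = [1, 1, 0, 0, 1]
r = pearson(s_ans, span)  # positive: Span aligns with answerability
```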
{
"text": "We tackle the problem of question generation targeted for human information seeking using automatic question answering technology. We focus on generating questions for news articles that can be answered by longer passages rather than short text spans as suggested questions. We build a BERT-based Pointer-Generator Network as the QG model, trained with the Natural Questions dataset. Our method shows state-of-the-art performance in terms of BLEU, METEOR, and ROUGE L scores on our NQ question generation dataset. We then apply our model to the out-of-domain news articles without further training. We use a QA system to evaluate our QG models as there are no gold questions for comparison. We also conduct a human evaluation to confirm the QA evaluation results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "We describe a method for an autonomous agent to suggest questions based on machine-reading and question generation technology. Operationally, this work focuses on newswire-sourced data where the generated questions are answered by the text -and is applicable to multi-turn search settings. Thus, there are several potentially positive social impacts. By presenting questions with known answers in the text, users can more efficiently learn about topics in the source documents. Our focus on selfexplanatory and introductory questions increases the utility of questions for this purpose.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Broader Impact",
"sec_num": null
},
{
"text": "Conversely, there is potential to bias people toward a subset of the news chosen by a purported fair search engine, which may be more difficult to detect as the provided questions remove some of the article contexts. In principle, this is mitigated by selecting content that maintains high journalistic standards -but such a risk remains if the technology is deployed by bad-faith actors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Broader Impact",
"sec_num": null
},
{
"text": "The data for our experiments was derived from the widely used Natural Questions (Kwiatkowski et al., 2019) and CNN/Daily Mail (Hermann et al., 2015) datasets, which in turn were derived from public news sourced data. Our evaluation annotations were performed on Amazon Mechanical Turk, where three authors completed a sample task and set a wage corresponding to an expected rate of 15 $/h. ",
"cite_spans": [
{
"start": 80,
"end": 106,
"text": "(Kwiatkowski et al., 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Broader Impact",
"sec_num": null
},
{
"text": "We show several generated questions here. Each frame box contains a news article, with two questions generated by M N Q (showing in bold texts) and M SD respectively. News articles are selected from the CNN/Daily Mail dataset with preprocessing described in Section 5.2. We also compare these generated questions in Table 7 . \u2022 who are the new astronauts on the moon \u2022 how many italians walk into a space station in 2013",
"cite_spans": [],
"ref_spans": [
{
"start": 316,
"end": 323,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},
{
"text": "After several delays, NASA said Friday that space shuttle Discovery is scheduled for launch in five days. The space shuttle Discovery, seen here in January, is now scheduled to launch Wednesday. Commander Lee Archambault and his six crewmates are now scheduled to lift off to the International Space Station at 9:20 p.m. ET Wednesday. NASA said its managers had completed a readiness review for Discovery, which will be making the 28th shuttle mission to the ISS. The launch date had been delayed to allow \"additional analysis and particle impact testing associated with a flow-control valve in the shuttle's main engines,\" the agency said. According to NASA, the readiness review was initiated after damage was found in a valve on the shuttle Endeavour during its November 2008 flight. Three valves have been cleared and installed on Discovery, it said. Discovery is to deliver the fourth and final set of \"solar array wings\" to the ISS. With the completed array the station will be able to provide enough electricity when the crew size is doubled to six in May, NASA said. The Discovery also will carry a replacement for a failed unit in a system that converts urine to drinkable water, it said. Discovery's 14-day mission will include four spacewalks, NASA said.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},
{
"text": "\u2022 when is the space shuttle discovery coming out \u2022 how many days is the space shuttle discovery scheduled to launch Unemployment in Spain has reached 20 percent, meaning 4.6 million people are out of work, the Spanish government announced Friday. The figure, from the first quarter, is up from 19 percent and 4.3 million people in the previous quarter. It represents the secondhighest unemployment rate in the European Union, after Latvia, according to figures Friday from Eurostat, the EU's statistics service. Spanish Prime Minister Jose Luis Rodriguez Zapatero told Parliament on Wednesday he believes the jobless rate has peaked and will now start to decline. The first quarter of the year is traditionally poor for Spain because of a drop in labor-intensive activity like construction, agriculture and tourism. This week, Standard & Poor's downgraded Spain's long-term credit rating and said the outlook is negative. \"We now believe that the Spanish economy's shift away from credit-fuelled economic growth is likely to result in a more protracted period of sluggish activity than we previously assumed,\" Standard & Poor's credit analyst Marko Mrsnik said. Gross domestic product growth in Spain is expected to average 0.7 percent annually through 2016, compared with previous expectations of 1 percent annually, he said. Spain's economic problems are closely tied to the housing bust there, according to The Economist magazine. Many of the newly unemployed worked in construction, it said. The recession revealed how dependent public finances were on housing-related tax revenues, it said. Another problem in Spain is that wages are set centrally and most jobs are protected, making it hard to shift skilled workers from one industry to another, the magazine said. Average unemployment for the 27-member European Union stayed stable in March at 9.6 percent, Eurostat said Friday. That percentage represents 23 million people, it said. 
The lowest national unemployment rates were in the Netherlands and Austria, which had 4.1 and 4.9 percent respectively, Eurostat said.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},
{
"text": "\u2022 what is the use of jdk in java Figure 5 : Examples of the NQ data preprocessing from the training set. Orange texts are Wikipedia titles that added in the front the each long answers. In first two examples, annotators mark there are short answers represented in cyan; while for the last example, there is no short answer marked by annotators so we mark the whole paragraph as the answer. Cyan texts are tagged with type ID '1' during preprocessing. [a logical consequence , such as the conclusion of a syllogism] Predicted when is the therefore sign used in a syllogism Figure 6 : Example of the question generation from Natural Questions dataset with BERTPGN. We use '[ (i)' and '(/i) ]' to represent the start and end position of the i-th answer span. The context is the long answer for the question what do the 3 dots mean in math. Five short answers (SA) marked by five different annotators. Our BERTPGN model with nucleus sampling (Holtzman et al., 2019) with temperature of 0.1 produces different but related questions for each short answers as well as the whole context with brackets over each of them. ran away, including Way, Bolton said. The ones who remained told officers they were at the home to film a video. Way was arrested when he returned to the house to get his car, Bolton said. He said the house was dark inside and looked abandoned. \"He just ran from the police, and then he decided to come back,\" according to Bolton. The second man who returned for his vehicle was arrested after police found eight $100 counterfeit bills inside, according to the officer. Way broke into the music scene two years ago with his hit \"Crank That (Soulja Boy).\" The rapper also describes himself as a producer and entrepreneur.",
"cite_spans": [
{
"start": 938,
"end": 961,
"text": "(Holtzman et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 33,
"end": 41,
"text": "Figure 5",
"ref_id": null
},
{
"start": 572,
"end": 580,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},
{
"text": "\u2022 what is the meaning of soulja boy tell em \u2022 what was deandre cortez way known as The U.S. military is gearing up for a possible influx of Haitians fleeing their earthquake-stricken country at an Army facility not widely known for its humanitarian missions: Guantanamo Bay. Soldiers at the base have set up tents, beds and toilets, awaiting possible orders from the secretary of defense to proceed, according to Maj. Diana Haynie, a spokeswoman for Joint Task Force Guantanamo Bay. \"There's no indication of any mass migration from Haiti,\" Haynie stressed. \"We have not been told to conduct migrant operations.\" But the base is getting ready \"as a prudent measure,\" Haynie said, since \"it takes some time to set things up.\" Guantanamo Bay is about 200 miles from Haiti. Currently, military personnel at the base are helping the earthquake relief effort by shipping bottled water and food from its warehouse. In addition, Gen. Douglas Fraser, commander of U.S. Southern Command, said the Navy has set up a \"logistics field,\" an area to support bigger ships in the region. The military can now use that as a \"lily pad\" to fly supplies from ships docked at Guantanamo over to Haiti, he said. \"Guantanamo Bay proves its value as a strategic hub for the movement of supplies and personnel to the affected areas in Haiti,\" Haynie said. 
Table 7 : Comparing generated questions side-by-side (left: BERTPGN-NQ-whole-article; right: BERTPGN-SQuAD-first-line). \u2022 who are the new astronauts on the moon | how many italians walk into a space station in 2013 \u2022 when is the space shuttle discovery coming out | how many days is the space shuttle discovery scheduled to launch \u2022 what is the average unemployment rate in spain | what percentage of spain's population is out of work \u2022 what is the meaning of soulja boy tell em | what was deandre cortez way known as \u2022 where does the us refugees at guantanamo bay come from | what is the name of the us military facility in the us \u2022 what happened to the girl in the texas polygamist ranch | what was the name of the texas polygamist ranch \u2022 who scored the first goal in the premier league | which team did everton fc beat to win the premier league's home draw with tottenham on sunday. Our model uses uncased vocabulary and omits the final question mark. As part of the precautionary measures to prepare for possible refugees, the Army has",
"cite_spans": [],
"ref_spans": [
{
"start": 2204,
"end": 2211,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},
{
"text": "erected 100 tents, each holding 10 beds, according to Haynie. Toilet facilities are nearby. If needed, hundreds more tents are stored in Guantanamo Bay and can be erected, she said. The refugees would be put on the leeward side of the island, more than 2 miles from some 200 detainees being held on the other side, Haynie said. The refugees would not mix with the detainees. Joint Task Force Guantanamo Bay is responsible for planning for any kind of Caribbean mass immigration, according to Haynie. In the early 1990s, thousands of Haitian refugees took shelter on the island, she said.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},
{
"text": "\u2022 where does the us refugees at guantanamo bay come from \u2022 what is the name of the us military facility in the us A Colorado woman is being pursued as a \"person of interest\" in connection with phone calls that triggered the raid of a Texas polygamist ranch, authorities said Friday. Rozita Swinton, 33, has been arrested in a case that is not directly related to the Texas raid. Texas Rangers are seeking Rozita Swinton of Colorado Springs, Colorado, \"regarding telephone calls placed to a crisis center hot line in San Angelo, Texas, in late March 2008,\" the Rangers said in a written statement. The raid of the YFZ (Yearning for Zion) Ranch in Eldorado, Texas, came after a caller -who identified herself as a 16-year-old girl -said she had been physically and sexually abused by an adult man with whom she was forced into a \"spiritual marriage.\" The release said a search of Swinton's home in Colorado uncovered evidence that possibly links her to phone calls made about the ranch, run by the Fundamentalist Church of Jesus Christ of Latter-day Saints. \"The possibility exists that Rozita Swinton, who has nothing to do with the FLDS church, may have been a woman who made calls and pretended she was the 16-year-old girl named Sarah,\" CNN's Gary Tuchman reported. Swinton, 33, has been charged in Colorado with false reporting to authorities and is in police custody. Police said that arrest was not directly related to the Texas case. Authorities raided the Texas ranch April 4 and removed 416 children. Officials have been trying to identify the 16-year-old girl, referred to as Sarah, who claimed she had been abused in the phone calls. FLDS members have denied the girl, supposedly named Sarah Jessop Barlow, exists. Some of the FLDS women who spoke with CNN on Monday said they believed the calls were a hoax. 
While the phone calls initially prompted the raid, officers received a second search warrant based on what they said was evidence of sexual abuse found at the compound. In court documents, investigators described seeing teen girls who appeared pregnant, records that showed men marrying multiple women and accounts of girls being married to adult men when they were as young as 13. A court hearing began Thursday to determine custody of children who were removed from the ranch.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},
{
"text": "\u2022 what happened to the girl in the texas polygamist ranch \u2022 what was the name of the texas polygamist ranch",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},
{
"text": "Everton scored twice late on and goalkeeper Tim Howard saved an injury-time penalty as they fought back to secure a 2-2 Premier League home draw with Tottenham on Sunday. Jermain Defoe gave the visitors the lead soon after the interval when nipping in front of Tony Hibbert to convert Aaron Lennon's cross at the near post for his 13th goal of the season. And they doubled their advantage soon after when defender Michael Dawson headed home a Niko Kranjcar corner. But Everton got a foothold back in the game when Seamus Coleman's run and cross was converted by fellow-substitute Louis Saha in the 78th minute. And Tim Cahill rescued a point for the home side with four minutes remaining when he stooped low to head home Leighton Baines' bouncing cross. However, there was still further drama to come when Hibbert was penalized for crashing into Wilson Palacios in the area. However, England striker Defoe smashed his penalty too close to Howard and the keeper pulled off a fine save to give out-of-form Everton a morale-boosting point. The result means Tottenham remain in fourth place, behind north London rivals Arsenal, while Everton have now won just one of their last nine league games. In the day's other match, Bobby Zamora scored the only goal of the game as Fulham beat Sunderland 1-0 to move up to eighth place in the table.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},
{
"text": "\u2022 who scored the first goal in the premier league \u2022 which team did everton fc beat to win the premier league's home draw with tottenham on sunday",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},
{
"text": "Question is context dependent Some questions are context-dependent, e.g.,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Human Evaluation Criteria",
"sec_num": null
},
{
"text": "\u2022 \"who intends to boycott the election\" -which election?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Human Evaluation Criteria",
"sec_num": null
},
{
"text": "\u2022 \"where did the hijackers go to\" -what hijackers? \u2022 \"what type of hats did they use\" -who are they? \u2022 \"how many people were killed in the quake\"which quake? Compared to these context-independent, selfcontained questions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Human Evaluation Criteria",
"sec_num": null
},
{
"text": "\u2022 \"what was toyota's first-ever net loss\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Human Evaluation Criteria",
"sec_num": null
},
{
"text": "\u2022 \"who is hillary's secretary of state\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Human Evaluation Criteria",
"sec_num": null
},
{
"text": "\u2022 \"what is the name of the motto of the new york times \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Human Evaluation Criteria",
"sec_num": null
},
{
"text": "Question is irrelevant to the article Given a news article:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Human Evaluation Criteria",
"sec_num": null
},
{
"text": "\"Usually when I mention suspended animation people will flash me the Vulcan sign and laugh,\" says scientist Mark Roth. But he's not referring to the plot of a \"Star Trek\" episode. Roth is completely serious about using lessons he's learned from putting some organisms into suspended animation to help people survive medical trauma. He spoke at the TED2010 conference in Long Beach, California, in February. The winner of a MacArthur genius fellowship in 2007, Roth described the thought process that led him and fellow researchers to explore ways to lower animals' metabolism to the point where they showed no signs of life -and yet were not dead. More remarkably, they were able to restore the animals to normal life, with no apparent damage. Read more about Roth on TED.com The Web site of Roth's laboratory at the Fred Hutchinson Cancer Research Center in Seattle, Washington, describes the research this way: \"We use the term suspended animation to refer to a state where all observable life processes (using high resolution light microscopy) are stopped: The animals do not move nor breathe and the heart does not beat.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Human Evaluation Criteria",
"sec_num": null
},
{
"text": "We have found that we are able to put a number of animals (yeast, nematodes, drosophila, frogs and zebrafish) into a state of suspended animation for up to 24 hours through one basic technique: reducing the concentration of oxygen.\" Visit Mark Roth's laboratory Roth is investigating the use of small amounts of hydrogen sulfide, a gas that is toxic in larger quantities, to lower metabolism. In his talk, he imagined that \"in the not too distant future, an EMT might give an injection of hydrogen sulfide, or some related compound, to a person suffering severe injuries, and that person might de-animate a bit ... their metabolism will fall as though you were dimming a switch on a lamp at home. \"That will buy them the time to be transported to the hospital to get the care they need. And then, after they get that care ... they'll wake up. A miracle? We hope not, or maybe we just hope to make miracles a little more common.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Human Evaluation Criteria",
"sec_num": null
},
{
"text": "The question: \"what is the meaning of suspended animation in star trek\" is irrelevant to the news since the news is not talking about Star Trek.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Human Evaluation Criteria",
"sec_num": null
},
{
"text": "However, the question \"what is the meaning of suspended animation\" is related.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Human Evaluation Criteria",
"sec_num": null
},
{
"text": "Given a news article:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question implies a contradiction to facts present in the article",
"sec_num": null
},
{
"text": "At least 6,000 Christians have fled the northern Iraqi city of Mosul in the past week because of killings and death threats, Iraq's Ministry of Immigration and Displaced Persons said Thursday. A Christian family that fled Mosul found refuge in the Al-Sayida monastery about 30 miles north of the city. The number represents 1,424 families, at least 70 more families than were reported to be displaced on Wednesday. The ministry said it had set up an operation room to follow up sending urgent aid to the displaced Christian families as a result of attacks by what it called \"terrorist groups.\" Iraqi officials have said the families were frightened by a series of killings and threats by Muslim extremists ordering them to convert to Islam or face death. Fourteen Christians have been slain in the past two weeks in the city, which is about 260 miles (420 kilometers) north of Baghdad. Mosul is one of the last Iraqi cities where al Qaeda in Iraq has a significant presence and routinely carries out attacks. The U.S. military said it killed the Sunni militant group's No. 2 leader, Abu Qaswarah, in a raid in the northern city earlier this month. In response to the recent attacks on Christians, authorities have ordered more checkpoints in several of the city's Christian neighborhoods. The attacks may have been prompted by Christian demonstrations ahead of provincial elections, which are to be held by January 31, authorities said. Hundreds of Christians took to the streets in Mosul and surrounding villages and towns, demanding adequate representation on provincial councils, whose members will be chosen in the local elections. Thursday, Iraq's minister of immigration and displaced persons discussed building housing complexes for Christian families in northern Iraq and allocating land to build the complexes. 
Abdel Samad Rahman Sultan brought up the issue when he met with a representative of Iraq's Hammurabi Organization for Human Rights and with the head of the Kojina Organization for helping displaced persons. A curfew was declared Wednesday in several neighborhoods of eastern Mosul as authorities searched for militants behind the attacks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question implies a contradiction to facts present in the article",
"sec_num": null
},
{
"text": "The question \"how many christians fled to mosul in the past\" is contradicted to the fact -6000 christians fled from Mosul -in the news.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question implies a contradiction to facts present in the article",
"sec_num": null
},
{
"text": "Given a news article:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question focuses on a peripheral topic",
"sec_num": null
},
{
"text": "One of the Marines shown in a famous World War II photograph raising the U.S. flag on Iwo Jima was posthumously awarded a certificate of U.S. citizenship on Tuesday. The Marine Corps War Memorial in Virginia depicts Strank and five others raising a flag on Iwo Jima. Sgt. Michael Strank, who was born in Czechoslovakia and came to the United States when he was 3, derived U.S. citizenship when his father was naturalized in 1935. However, U.S. Citizenship and Immigration Services recently discovered that Strank never was given citizenship papers. At a ceremony Tuesday at the Marine Corps Memorial -which depicts the flag-raising -in Arlington, Virginia, a certificate of citizenship was presented to Strank's younger sister, Mary Pero. Strank and five other men became national icons when an Associated Press photographer captured the image of them planting an American flag on top of Mount Suribachi on February 23, 1945. Strank was killed in action on the island on March 1, 1945, less than a month before the battle between Japanese and U.S. forces there ended.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question focuses on a peripheral topic",
"sec_num": null
},
{
"text": "Note that we change the masking for the original BERT when using BERT as a LM, since the decoder at step t should not read inputs at steps t + i where i \u2265 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Mishra et al. (2020) have not described the decoding method and possible beam size, but they use models from(Ott et al., 2018) that uses beam=4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Jonathan Scharfen, the acting director of CIS, presented the citizenship certificate Tuesday. He hailed Strank as \"a true American hero and a wonderful example of the remarkable contribution and sacrifices that immigrants have made to our great republic throughout its history.\"The question \"who presented the american flag raising on iwo jima\" focuses on a peripheral topic -the name of the one raising the flag.While the question \"who was awarded a certificate of citizenship raising the u.s. flag\" focuses on the main topic -getting a citizenship.There is a short span to answer the question Given a news:Los Angeles police have launched an internal investigation to determine who leaked a picture that appears to show a bruised and battered Rihanna. Rihanna was allegedly attacked by her boyfriend, singer Chris Brown, before the Grammys on February 8. The close-up photo -showing a woman with contusions on her forehead and below her eyes, and cuts on her lip -was published on the entertainment Web site TMZ Thursday. TMZ said it was a photo of Rihanna. Twenty-one-year-old Rihanna was allegedly attacked by her boyfriend, singer Chris Brown, on a Los Angeles street before the two were to perform at the Grammys on February 8. \"The unauthorized release of a domestic violence photograph immediately generated an internal investigation,\" an L.A. police spokesman said in a statement. \"The Los Angeles Police Department takes seriously its duty to maintain the confidentiality of victims of domestic violence. A violation of this type is considered serious misconduct, with penalties up to and including termination.\" A spokeswoman for Rihanna declined to comment. The chief investigator in the case had told CNN earlier that authorities had tried to guard against leaks. Detective Deshon Andrews said he had kept the case file closely guarded and that no copies had been made of the original photos and documents. 
Brown was arrested on February 8 in connection with the case and and booked on suspicion of making criminal threats. Authorities are trying to determine whether Brown should face domestic violence-related charges. Brown apologized for the incident this week. \"Words cannot begin to express how sorry and saddened I am over what transpired,\" the 19year-old said in a statement released by his spokesman. \"I am seeking the counseling of my pastor, my mother and other loved ones and I am committed, with God's help, to emerging a better person.\"The question \"who have launched an internal investigation of the leaked rihanna's picture\" can be answered by \"Los Angeles police\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
},
{
"text": "Given a news:A high court in northern India on Friday acquitted a wealthy businessman facing the death sentence for the killing of a teen in a case dubbed \"the house of horrors.\" Moninder Singh Pandher was sentenced to death by a lower court in February. The teen was one of 19 victims -children and young women -in one of the most gruesome serial killings in India in recent years. The Alla-habad high court has acquitted Moninder Singh Pandher, his lawyer Sikandar B. Kochar told CNN. Pandher and his domestic employee Surinder Koli were sentenced to death in February by a lower court for the rape and murder of the 14-year-old. The high court upheld Koli's death sentence, Kochar said. The two were arrested two years ago after body parts packed in plastic bags were found near their home in Noida, a New Delhi suburb. Their home was later dubbed a \"house of horrors\" by the Indian media. Pandher was not named a main suspect by investigators initially, but was summoned as co-accused during the trial, Kochar said. Kochar said his client was in Australia when the teen was raped and killed. Pandher faces trial in the remaining 18 killings and could remain in custody, the attorney said.The question \"what was the case of the house of horrors in northern india\" can be answered by the whole news article. There is no short span can be extracted as an answer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The entire article can be an answer",
"sec_num": null
},
{
"text": "Given a news:Buy a $175,000 package to attend the Oscars and you might buy yourself trouble, lawyers for the Academy Awards warn. The 81st annual Academy Awards will be held on February 22 from Hollywood's Kodak Theatre.The advertising of such packages -including four tickets to the upcoming 81st annual Academy Awards and a hotel stay in Los Angeles, California -has prompted the Academy of Motion Picture Arts and Sciences to sue an Arizona-based company. The Academy accused the company Experience 6 of selling \"black-market\" tickets, because tickets to the lavish movie awards show cannot be transferred or sold. Selling tickets could become a security issue that could bring celebrity stalkers or terrorists to the star-studded event, says the lawsuit, which was filed Monday in federal court in the Central District of California. \"Security experts have advised the Academy that it must not offer tickets to members of the public and must know identities of the event attendees,\" the lawsuit says. \"In offering such black-market tickets, defendants are misleading the public and the ticket buyers into thinking that purchasers will be welcomed guests, rather than as trespassers, when they arrive for the ceremony.\" Experience 6 did not return calls from CNN for comment. On Tuesday morning, tickets to the event were still being advertised on the company's Web site. The Oscars will be presented February 22 from Hollywood's Kodak Theatre. The Academy Awards broadcast will air on ABC. Hugh Jackman is scheduled to host.The questions \"where does the 81st annual academy awards come from\" and \"how much did the academy pay to attend the oscars\" cannot be answered from the news.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "None answer in the article",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Synthetic QA corpora generation with roundtrip consistency",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Andor",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Pitler",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6168--6173",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1620"
]
},
"num": null,
"urls": [],
"raw_text": "Chris Alberti, Daniel Andor, Emily Pitler, Jacob De- vlin, and Michael Collins. 2019a. Synthetic QA corpora generation with roundtrip consistency. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6168-6173, Florence, Italy. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A bert baseline for the natural questions",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Alberti, Kenton Lee, and Michael Collins. 2019b. A bert baseline for the natural questions.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Ms marco: A human generated machine reading comprehension dataset",
"authors": [
{
"first": "Payal",
"middle": [],
"last": "Bajaj",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Campos",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Craswell",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Rangan",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mcnamara",
"suffix": ""
},
{
"first": "Bhaskar",
"middle": [],
"last": "Mitra",
"suffix": ""
},
{
"first": "Tri",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Mir",
"middle": [],
"last": "Rosenberg",
"suffix": ""
}
],
"year": 2016,
"venue": "Xia Song, Alina Stoica, Saurabh Tiwary, and Tong Wang",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Ti- wary, and Tong Wang. 2016. Ms marco: A human generated machine reading comprehension dataset.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Good question! statistical ranking for question generation",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Noah",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "609--617",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Heilman and Noah A Smith. 2010. Good ques- tion! statistical ranking for question generation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the As- sociation for Computational Linguistics, pages 609- 617.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Teaching machines to read and comprehend",
"authors": [
{
"first": "Karl",
"middle": [],
"last": "Moritz Hermann",
"suffix": ""
},
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Ko\u010disk\u00fd",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Lasse",
"middle": [],
"last": "Espeholt",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Kay",
"suffix": ""
},
{
"first": "Mustafa",
"middle": [],
"last": "Suleyman",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 28th International Conference on Neural Information Processing Systems",
"volume": "1",
"issue": "",
"pages": "1693--1701",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Moritz Hermann, Tom\u00e1\u0161 Ko\u010disk\u00fd, Edward Grefen- stette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Proceedings of the 28th Inter- national Conference on Neural Information Process- ing Systems -Volume 1, NIPS'15, page 1693-1701. MIT Press.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "",
"pages": "1735--80",
"other_ids": {
"DOI": [
"10.1162/neco.1997.9.8.1735"
]
},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9:1735- 80.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The curious case of neural text degeneration",
"authors": [
{
"first": "Ari",
"middle": [],
"last": "Holtzman",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Buys",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Maxwell",
"middle": [],
"last": "Forbes",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Free-baseQA: A new factoid QA data set matching triviastyle question-answer pairs with Freebase",
"authors": [
{
"first": "Kelvin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Dekun",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "318--323",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1028"
]
},
"num": null,
"urls": [],
"raw_text": "Kelvin Jiang, Dekun Wu, and Hui Jiang. 2019. Free- baseQA: A new factoid QA data set matching trivia- style question-answer pairs with Freebase. In Pro- ceedings of the 2019 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol- ume 1 (Long and Short Papers), pages 318-323, Minneapolis, Minnesota. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension",
"authors": [
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Eunsol",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Weld",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1601--1611",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1147"
]
},
"num": null,
"urls": [],
"raw_text": "Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale dis- tantly supervised challenge dataset for reading com- prehension. In Proceedings of the 55th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601-1611, Van- couver, Canada. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Ashish Sabharwal, Hannaneh Hajishirzi, and Chris Callison-Burch. 2021. Gooaq: Open question answering with diverse answer types",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Khashabi",
"suffix": ""
},
{
"first": "Amos",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Tushar",
"middle": [],
"last": "Khot",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Khashabi, Amos Ng, Tushar Khot, Ashish Sab- harwal, Hannaneh Hajishirzi, and Chris Callison- Burch. 2021. Gooaq: Open question answering with diverse answer types.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Natural questions: A benchmark for question answering research",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Jennimaria",
"middle": [],
"last": "Palomaki",
"suffix": ""
},
{
"first": "Olivia",
"middle": [],
"last": "Redfield",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "Danielle",
"middle": [],
"last": "Epstein",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Kelcey",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"M"
],
"last": "Dai",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "453--466",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Red- field, Michael Collins, Ankur Parikh, Chris Al- berti, Danielle Epstein, Illia Polosukhin, Jacob De- vlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question an- swering research. Transactions of the Association for Computational Linguistics, 7:453-466.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "What's the latest? a question-driven news chatbot",
"authors": [
{
"first": "Philippe",
"middle": [],
"last": "Laban",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Canny",
"suffix": ""
},
{
"first": "Marti",
"middle": [
"A"
],
"last": "Hearst",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "380--387",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-demos.43"
]
},
"num": null,
"urls": [],
"raw_text": "Philippe Laban, John Canny, and Marti A. Hearst. 2020. What's the latest? a question-driven news chatbot. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 380-387, Online. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Meteor: An automatic metric for mt evaluation with high levels of correlation with human judgments",
"authors": [
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
},
{
"first": "Abhaya",
"middle": [],
"last": "Agarwal",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Second Workshop on Statistical Machine Translation, StatMT '07",
"volume": "",
"issue": "",
"pages": "228--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alon Lavie and Abhaya Agarwal. 2007. Meteor: An automatic metric for mt evaluation with high levels of correlation with human judgments. In Proceed- ings of the Second Workshop on Statistical Machine Translation, StatMT '07, page 228-231, USA. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Unsupervised question answering by cloze translation",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4896--4910",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1484"
]
},
"num": null,
"urls": [],
"raw_text": "Patrick Lewis, Ludovic Denoyer, and Sebastian Riedel. 2019. Unsupervised question answering by cloze translation. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 4896-4910, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Improving question generation with to the point context",
"authors": [
{
"first": "Jingjing",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yifan",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Lidong",
"middle": [],
"last": "Bing",
"suffix": ""
},
{
"first": "Irwin",
"middle": [],
"last": "King",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"R"
],
"last": "Lyu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3216--3226",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1317"
]
},
"num": null,
"urls": [],
"raw_text": "Jingjing Li, Yifan Gao, Lidong Bing, Irwin King, and Michael R. Lyu. 2019. Improving question gener- ation with to the point context. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3216-3226, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Harvesting and refining questionanswer pairs for unsupervised qa",
"authors": [
{
"first": "Zhongli",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Wenhui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Ke",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhongli Li, Wenhui Wang, Li Dong, Furu Wei, and Ke Xu. 2020. Harvesting and refining question- answer pairs for unsupervised qa. Proceedings of the 58th Annual Meeting of the Association for Com- putational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Input combination strategies for multi-source transformer decoder",
"authors": [
{
"first": "Jind\u0159ich",
"middle": [],
"last": "Libovick\u00fd",
"suffix": ""
},
{
"first": "Jind\u0159ich",
"middle": [],
"last": "Helcl",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mare\u010dek",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
"volume": "",
"issue": "",
"pages": "253--260",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6326"
]
},
"num": null,
"urls": [],
"raw_text": "Jind\u0159ich Libovick\u00fd, Jind\u0159ich Helcl, and David Mare\u010dek. 2018. Input combination strategies for multi-source transformer decoder. In Proceedings of the Third Conference on Machine Translation: Research Pa- pers, pages 253-260, Brussels, Belgium. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "ROUGE: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text Summarization Branches Out",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Learning to generate questions by learningwhat not to generate",
"authors": [
{
"first": "Bang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Mingjun",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Kunfeng",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Yancheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Haojie",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2019,
"venue": "The World Wide Web Conference, WWW '19",
"volume": "",
"issue": "",
"pages": "1106--1118",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bang Liu, Mingjun Zhao, Di Niu, Kunfeng Lai, Yancheng He, Haojie Wei, and Yu Xu. 2019. Learn- ing to generate questions by learningwhat not to gen- erate. In The World Wide Web Conference, WWW '19, page 1106-1118, New York, NY, USA. Associ- ation for Computing Machinery.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Generating followup questions for interpretable multi-hop question answering",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Malon",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Bai",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Malon and Bing Bai. 2020. Generating fol- lowup questions for interpretable multi-hop question answering.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Generating natural questions about an image",
"authors": [
{
"first": "Nasrin",
"middle": [],
"last": "Mostafazadeh",
"suffix": ""
},
{
"first": "Ishan",
"middle": [],
"last": "Misra",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vanderwende",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1802--1813",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1170"
]
},
"num": null,
"urls": [],
"raw_text": "Nasrin Mostafazadeh, Ishan Misra, Jacob Devlin, Mar- garet Mitchell, Xiaodong He, and Lucy Vander- wende. 2016. Generating natural questions about an image. In Proceedings of the 54th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1802-1813, Berlin, Germany. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Scaling neural machine translation",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6301"
]
},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 1-9, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Bleu: A method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02, pages 311-318, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A decomposable attention model for natural language inference",
"authors": [
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Oscar",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2249--2255",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1244"
]
},
"num": null,
"urls": [],
"raw_text": "Ankur Parikh, Oscar T\u00e4ckstr\u00f6m, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2249-2255, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Stay hungry, stay focused: Generating informative and specific questions in information-seeking conversations",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Yuhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Qi, Yuhao Zhang, and Christopher D. Manning. 2020. Stay hungry, stay focused: Generating informative and specific questions in information-seeking conversations.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Know what you don't know: Unanswerable questions for SQuAD",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "784--789",
"other_ids": {
"DOI": [
"10.18653/v1/P18-2124"
]
},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784-789, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "SQuAD: 100,000+ questions for machine comprehension of text",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Lopyrev",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2383--2392",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1264"
]
},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "CoQA: A conversational question answering challenge",
"authors": [
{
"first": "Siva",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "249--266",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00266"
]
},
"num": null,
"urls": [],
"raw_text": "Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A conversational question answering challenge. Transactions of the Association for Computational Linguistics, 7:249-266.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Get to the point: Summarization with pointergenerator networks",
"authors": [
{
"first": "Abigail",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1073--1083",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1099"
]
},
"num": null,
"urls": [],
"raw_text": "Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073-1083, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Generating factoid questions with recurrent neural networks: The 30M factoid question-answer corpus",
"authors": [
{
"first": "Iulian",
"middle": [],
"last": "Vlad Serban",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Garc\u00eda-Dur\u00e1n",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Sungjin",
"middle": [],
"last": "Ahn",
"suffix": ""
},
{
"first": "Sarath",
"middle": [],
"last": "Chandar",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "588--598",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1056"
]
},
"num": null,
"urls": [],
"raw_text": "Iulian Vlad Serban, Alberto Garc\u00eda-Dur\u00e1n, Caglar Gulcehre, Sungjin Ahn, Sarath Chandar, Aaron Courville, and Yoshua Bengio. 2016. Generating factoid questions with recurrent neural networks: The 30M factoid question-answer corpus. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 588-598, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Relevance of unsupervised metrics in task-oriented dialogue for evaluating natural language generation",
"authors": [
{
"first": "Shikhar",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Layla",
"middle": [
"El"
],
"last": "Asri",
"suffix": ""
},
{
"first": "Hannes",
"middle": [],
"last": "Schulz",
"suffix": ""
},
{
"first": "Jeremie",
"middle": [],
"last": "Zumer",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shikhar Sharma, Layla El Asri, Hannes Schulz, and Jeremie Zumer. 2017. Relevance of unsupervised metrics in task-oriented dialogue for evaluating natural language generation. CoRR, abs/1706.09799.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Answer-focused and position-aware neural question generation",
"authors": [
{
"first": "Xingwu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yajuan",
"middle": [],
"last": "Lyu",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Yanjun",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Shi",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3930--3939",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1427"
]
},
"num": null,
"urls": [],
"raw_text": "Xingwu Sun, Jing Liu, Yajuan Lyu, Wei He, Yanjun Ma, and Shi Wang. 2018. Answer-focused and position-aware neural question generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3930-3939, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "NewsQA: A machine comprehension dataset",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Trischler",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xingdi",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Harris",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Bachman",
"suffix": ""
},
{
"first": "Kaheer",
"middle": [],
"last": "Suleman",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2nd Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "191--200",
"other_ids": {
"DOI": [
"10.18653/v1/W17-2623"
]
},
"num": null,
"urls": [],
"raw_text": "Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. NewsQA: A machine comprehension dataset. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 191-200, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Capturing greater context for question generation",
"authors": [
{
"first": "Luu Anh",
"middle": [],
"last": "Tuan",
"suffix": ""
},
{
"first": "Darsh",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "34",
"issue": "",
"pages": "9065--9072",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luu Anh Tuan, Darsh Shah, and Regina Barzilay. 2020. Capturing greater context for question generation. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):9065-9072.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Asking and answering questions to evaluate the factual consistency of summaries",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5008--5020",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.450"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Kyunghyun Cho, and Mike Lewis. 2020. Asking and answering questions to evaluate the factual consistency of summaries. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5008-5020, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "HotpotQA: A dataset for diverse, explainable multi-hop question answering",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Saizheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2369--2380",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1259"
]
},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369-2380, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"text": "Scatter plots of generated questions of the News-LA from M_NQ (left) and M_SD (right). s_ans and s_gra are negatively correlated, but the M_NQ model tends to generate more questions with positive answerability and granularity. Straight lines show fitted linear regressions.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF2": {
"text": "Therefore sign [ (1) In logical argument and mathematical proof, [ (2) [ (3) the [ (4) therefore sign (/4) ] (/3) ] ( \u2234 ) is generally used before [ (5) a logical consequence, such as the conclusion of a syllogism. (/5) ] (/2) ] The symbol consists of three dots placed in an upright triangle and is read therefore.",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF1": {
"text": "QG Data summary. *-LA contains questions that only have long answers, while NQ-SA contains questions having both long and short answers.",
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF3": {
"text": "Comparing our model (BERTPGN) on NQ-LA and NQ-SA with two models in (Mishra et al., 2020)-",
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF4": {
"text": "Ablation study of the BERTPGN. Removing the pointer network drops BLEU-4 by around 3 points for both test sets. Removing BERT initialization affects both NQ-LA and NQ-SA substantially, though more mildly than removing the pointer. Removing type IDs drops NQ-SA by 5.7 BLEU-4 points.",
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF7": {
"text": "The performance of our answer-free baseline, compared with the best model from (Du et al., 2017). that maximizes p_start(i) \u2022 p_end(j) as the probability of the answer. It also defines the probability of no answer to be p_start([CLS]) \u2022 p_end([CLS]), i.e., an answer span that starts then stops at the [CLS] token. Furthermore, the BERT-joint model computes the probability of the question type: undetermined, long answer, short answer, and YES-or-NO. This model achieves 66.2 % F1",
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF9": {
"text": "",
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF10": {
"text": "). M_NQ questions are more context-free than M_SD ones, with 38 % true and 62 % false towards the Context statement. Second, the M_NQ questions are more likely to be answered by entire news articles (49 % true",
"type_str": "table",
"num": null,
"content": "<table><tr><td/><td>s_ans</td><td/><td>s_gra</td><td/></tr><tr><td/><td colspan=\"4\">M_NQ M_SD M_NQ M_SD</td></tr><tr><td>Context</td><td>0.1</td><td>\u22120.1</td><td>0.1</td><td>0.5</td></tr><tr><td>Irrelevant</td><td>\u22121.0</td><td>\u22120.6</td><td>0.7</td><td>0.4</td></tr><tr><td>Contradiction</td><td>\u22120.5</td><td>\u22120.3</td><td>0.4</td><td>0.2</td></tr><tr><td>Peripheral</td><td>\u22120.3</td><td>\u22120.3</td><td>0.2</td><td>0.2</td></tr><tr><td>Span</td><td>1.5</td><td>1.1</td><td>\u22120.8</td><td>\u22120.6</td></tr><tr><td>Entire</td><td>0.4</td><td>0.3</td><td>0.4</td><td>0.3</td></tr><tr><td>None</td><td>\u22121.5</td><td>\u22121.2</td><td>0.6</td><td>0.6</td></tr></table>",
"html": null
},
"TABREF11": {
"text": "",
"type_str": "table",
"num": null,
"content": "<table><tr><td>: Pearson correlation (1 \u00d7 10 \u22121 ) between hu-</td></tr><tr><td>man (Section 8) and automatic (Section 7) evaluation.</td></tr><tr><td>For each column, we mark the most positive and nega-</td></tr><tr><td>tive correlated scores in bold text.</td></tr></table>",
"html": null
},
"TABREF12": {
"text": "Danqi Chen and Wen-tau Yih. 2020. Open-domain question answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts, pages 34-37.",
"type_str": "table",
"num": null,
"content": "<table><tr><td>for diverse, explainable multi-hop question answer-ing. In Proceedings of the 2018 Conference on Em-pirical Methods in Natural Language Processing, pages 2369-2380, Brussels, Belgium. Association for Computational Linguistics.</td><td colspan=\"2\">Danqi Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-</td></tr><tr><td>Jianxing Yu, Xiaojun Quan, Qinliang Su, and Jian Yin.</td><td colspan=\"2\">tau Yih, Yejin Choi, Percy Liang, and Luke Zettle-</td></tr><tr><td>2020. Generating multi-hop reasoning questions to</td><td colspan=\"2\">moyer. 2018. QuAC: Question answering in con-</td></tr><tr><td>improve machine reading comprehension. In Pro-</td><td colspan=\"2\">text. In Proceedings of the 2018 Conference on</td></tr><tr><td>ceedings of The Web Conference 2020, WWW '20,</td><td colspan=\"2\">Empirical Methods in Natural Language Processing,</td></tr><tr><td>page 281-291, New York, NY, USA. Association for</td><td colspan=\"2\">pages 2174-2184, Brussels, Belgium. Association</td></tr><tr><td>Computing Machinery.</td><td>for Computational Linguistics.</td></tr><tr><td>Yao Zhao, Xiaochuan Ni, Yuanyuan Ding, and Qifa</td><td colspan=\"2\">Christopher Clark and Matt Gardner. 2018. Simple</td></tr><tr><td>Ke. 2018. Paragraph-level neural question gener-</td><td colspan=\"2\">and effective multi-paragraph reading comprehen-</td></tr><tr><td>ation with maxout pointer and gated self-attention</td><td colspan=\"2\">sion. In Proceedings of the 56th Annual Meeting of</td></tr><tr><td>networks. In Proceedings of the 2018 Conference</td><td colspan=\"2\">the Association for Computational Linguistics (Vol-</td></tr><tr><td>on Empirical Methods in Natural Language Process-</td><td colspan=\"2\">ume 1: Long Papers), pages 845-855, Melbourne,</td></tr><tr><td>ing, pages 3901-3910, Brussels, Belgium. Associa-</td><td colspan=\"2\">Australia. 
Association for Computational Linguis-</td></tr><tr><td>tion for Computational Linguistics.</td><td>tics.</td></tr><tr><td/><td colspan=\"2\">Daniel Deutsch, Tania Bedrax-Weiss, and Dan Roth.</td></tr><tr><td/><td colspan=\"2\">2020. Towards question-answering as an automatic</td></tr><tr><td/><td colspan=\"2\">metric for evaluating the content quality of a sum-</td></tr><tr><td/><td>mary.</td></tr><tr><td/><td colspan=\"2\">Jacob Devlin, Ming-Wei Chang, Kenton Lee, and</td></tr><tr><td/><td colspan=\"2\">Kristina Toutanova. 2019. BERT: Pre-training of</td></tr><tr><td/><td colspan=\"2\">deep bidirectional transformers for language under-</td></tr><tr><td/><td colspan=\"2\">standing. In Proceedings of the 2019 Conference</td></tr><tr><td/><td colspan=\"2\">of the North American Chapter of the Association</td></tr><tr><td/><td colspan=\"2\">for Computational Linguistics: Human Language</td></tr><tr><td/><td colspan=\"2\">Technologies, Volume 1 (Long and Short Papers),</td></tr><tr><td/><td colspan=\"2\">pages 4171-4186, Minneapolis, Minnesota. Associ-</td></tr><tr><td/><td>ation for Computational Linguistics.</td></tr><tr><td/><td>Xinya Du and Claire Cardie. 2018.</td><td>Harvest-</td></tr><tr><td/><td colspan=\"2\">ing paragraph-level question-answer pairs from</td></tr><tr><td/><td colspan=\"2\">Wikipedia. In Proceedings of the 56th Annual Meet-</td></tr><tr><td/><td colspan=\"2\">ing of the Association for Computational Linguistics</td></tr><tr><td/><td colspan=\"2\">(Volume 1: Long Papers), pages 1907-1917, Mel-</td></tr><tr><td/><td colspan=\"2\">bourne, Australia. Association for Computational</td></tr><tr><td/><td>Linguistics.</td></tr><tr><td/><td colspan=\"2\">Xinya Du, Junru Shao, and Claire Cardie. 2017. Learn-</td></tr><tr><td/><td colspan=\"2\">ing to ask: Neural question generation for reading</td></tr><tr><td/><td colspan=\"2\">comprehension. 
In Proceedings of the 55th Annual</td></tr><tr><td/><td colspan=\"2\">Meeting of the Association for Computational Lin-</td></tr><tr><td/><td colspan=\"2\">guistics (Volume 1: Long Papers), pages 1342-1352,</td></tr><tr><td/><td colspan=\"2\">Vancouver, Canada. Association for Computational</td></tr><tr><td/><td>Linguistics.</td></tr><tr><td/><td colspan=\"2\">Hady Elsahar, Christophe Gravier, and Frederique</td></tr><tr><td/><td colspan=\"2\">Laforest. 2018. Zero-shot question generation from</td></tr><tr><td/><td colspan=\"2\">knowledge graphs for unseen predicates and entity</td></tr><tr><td/><td colspan=\"2\">types. In Proceedings of the 2018 Conference of the</td></tr><tr><td/><td colspan=\"2\">North American Chapter of the Association for Com-</td></tr><tr><td/><td colspan=\"2\">putational Linguistics: Human Language Technolo-</td></tr><tr><td/><td colspan=\"2\">gies, Volume 1 (Long Papers), pages 218-228, New</td></tr><tr><td/><td colspan=\"2\">Orleans, Louisiana. Association for Computational</td></tr><tr><td/><td>Linguistics.</td></tr><tr><td/><td colspan=\"2\">Deepak Gupta, Hardik Chauhan, Asif Ekbal, and Push-</td></tr><tr><td/><td colspan=\"2\">pak Bhattacharyya. 2020. Reinforced multi-task ap-</td></tr><tr><td/><td>proach for multi-hop question generation.</td></tr></table>",
"html": null
},
"TABREF14": {
"text": "President of the United Nations General Assembly [ Miroslav Laj\u010d\u00e1k of Slovakia ] has been elected as the United Nations General Assembly President of its 72nd session beginning in September 2017. who is the current president of un general assembly Learner 's permit Typically , a driver operating with a learner 's permit must be accompanied by [ an adult licensed driver who is at least 21 years of age or older and in the passenger seat of the vehicle at all times ] . who needs to be in the car with a permit driver Java development Kit [ The Java Development Kit ( JDK ) is an implementation of either one of the Java Platform , Standard Edition , Java Platform , Enterprise Edition , or Java Platform , Micro Edition platforms released by Oracle Corporation in the form of a binary product aimed at Java developers on Solaris , Linux , macOS or Windows . The JDK includes a private JVM and a few other resources to finish the development of a Java Application . Since the introduction of the Java platform , it has been by far the most widely used Software Development Kit ( SDK ) . On 17 November 2006 , Sun announced that they would release it under the GNU General Public License ( GPL ) , thus making it free software . This happened in large part on 8 May 2007 , when Sun contributed the source code to the OpenJDK . ]",
"type_str": "table",
"num": null,
"content": "<table><tr><td>what is the average unemployment rate in</td></tr><tr><td>spain</td></tr><tr><td>\u2022 what percentage of spain's population is out</td></tr><tr><td>of work</td></tr><tr><td>Atlanta rapper DeAndre Cortez Way, better known by</td></tr><tr><td>his stage name Soulja Boy Tell 'Em or just Soulja Boy,</td></tr><tr><td>was charged with obstruction after running from po-</td></tr><tr><td>lice despite an order to stop, a police spokesman said</td></tr><tr><td>Friday. Rapper Soulja Boy was arrested in Georgia</td></tr><tr><td>after allegedly running from police. The 19-year-old</td></tr><tr><td>singer was among a large group that had gathered at a</td></tr><tr><td>home in Stockbridge, 20 miles south of Atlanta, said</td></tr><tr><td>Henry County, Georgia, police Capt. Jason Bolton. Way</td></tr><tr><td>was arrested Wednesday night along with another man,</td></tr><tr><td>Bolton said. Police said Way left jail Thursday after</td></tr><tr><td>posting a $550 bond. Bolton said officers responded</td></tr><tr><td>to a complaint about a group of youths milling around</td></tr><tr><td>the house, which appeared to be abandoned. When po-</td></tr><tr><td>lice arrived, they saw about 40 people. Half of them</td></tr></table>",
"html": null
}
}
}
}