{
"paper_id": "Q19-1026",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:09:06.689641Z"
},
"title": "Natural Questions: A Benchmark for Question Answering Research",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jennimaria",
"middle": [],
"last": "Palomaki",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Olivia",
"middle": [],
"last": "Redfield",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Danielle",
"middle": [],
"last": "Epstein",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Kelcey",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Andrew",
"middle": [
"M"
],
"last": "Dai",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le \u2663\u2666",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present the Natural Questions corpus, a question answering data set. Questions consist of real anonymized, aggregated queries issued to the Google search engine. An annotator is presented with a question along with a Wikipedia page from the top 5 search results, and annotates a long answer (typically a paragraph) and a short answer (one or more entities) if present on the page, or marks null if no long/short answer is present. The public release consists of 307,373 training examples with single annotations; 7,830 examples with 5-way annotations for development data; and a further 7,842 examples with 5-way annotated sequestered as test data. We present experiments validating quality of the data. We also describe analysis of 25-way annotations on 302 examples, giving insights into human variability on the annotation task. We introduce robust metrics for the purposes of evaluating question answering systems; demonstrate high human upper bounds on these metrics; and establish baseline results using competitive methods drawn from related literature.",
"pdf_parse": {
"paper_id": "Q19-1026",
"_pdf_hash": "",
"abstract": [
{
"text": "We present the Natural Questions corpus, a question answering data set. Questions consist of real anonymized, aggregated queries issued to the Google search engine. An annotator is presented with a question along with a Wikipedia page from the top 5 search results, and annotates a long answer (typically a paragraph) and a short answer (one or more entities) if present on the page, or marks null if no long/short answer is present. The public release consists of 307,373 training examples with single annotations; 7,830 examples with 5-way annotations for development data; and a further 7,842 examples with 5-way annotated sequestered as test data. We present experiments validating quality of the data. We also describe analysis of 25-way annotations on 302 examples, giving insights into human variability on the annotation task. We introduce robust metrics for the purposes of evaluating question answering systems; demonstrate high human upper bounds on these metrics; and establish baseline results using competitive methods drawn from related literature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In recent years there has been dramatic progress in machine learning approaches to problems such as machine translation, speech recognition, and image recognition. One major factor in these successes has been the development of neural methods that far exceed the performance of previous approaches. A second major factor has been the existence of large quantities of training data for these systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Open-domain question answering (QA) is a benchmark task in natural language understanding (NLU), which has significant utility to users, and in addition is potentially a challenge task that can drive the development of methods for NLU. Several pieces of recent work have introduced QA data sets (e.g., Rajpurkar et al., 2016; Reddy et al., 2018) . However, in contrast to tasks where it is relatively easy to gather naturally occurring examples, 1 the definition of a suitable QA task, and the development of a methodology for annotation and evaluation, is challenging. Key issues include the methods and sources used to obtain questions; the methods used to annotate and collect answers; the methods used to measure and ensure annotation quality; and the metrics used for evaluation. For more discussion of the limitations of previous work with respect to these issues, see Section 2 of this paper. This paper introduces Natural Questions 2 (NQ), a new data set for QA research, along with methods for QA system evaluation. Our goals are three-fold: 1) To provide large-scale end-to-end training data for the QA problem. 2) To provide a data set that drives research in natural language understanding. 3) To study human performance in providing QA annotations for naturally occurring questions.",
"cite_spans": [
{
"start": 302,
"end": 325,
"text": "Rajpurkar et al., 2016;",
"ref_id": "BIBREF21"
},
{
"start": 326,
"end": 345,
"text": "Reddy et al., 2018)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In brief, our annotation process is as follows. An annotator is presented with a (question, Wikipedia page) pair. The annotator returns a (long answer, short answer) pair. The long answer (l) can be an HTML bounding box on the Wikipedia page-typically a paragraph or table-that contains the information required to answer the question. Alternatively, the annotator can return l = NULL if there is no answer on the page, or if the information required to answer the question is spread across many paragraphs. The short answer (s) can be a span or set of spans (typically entities) within l that answer the question, a boolean yes or no answer, or NULL. If l = NULL then s = NULL, necessarily. Figure 1 shows examples.",
"cite_spans": [],
"ref_spans": [
{
"start": 692,
"end": 700,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Natural Questions has the following properties:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Source of questions The questions consist of real anonymized, aggregated queries issued to the Google search engine. Simple heuristics are used to filter questions from the query stream. Thus the questions are ''natural'' in that they represent real queries from people seeking information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The public release contains 307,373 training examples with single annotations, 7,830 examples with 5-way annotations for development data, and 7,842 5-way annotated items sequestered as test data. We justify the use of 5-way annotation for evaluation in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number of items",
"sec_num": null
},
{
"text": "The input to a model is a question together with an entire Wikipedia page. The target output from the model is: 1) a long-answer (e.g., a paragraph) from the page that answers the question, or alternatively an indication that there is no answer on the page; 2) a short answer where applicable. The task was designed to be close to an end-to-end question answering application.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task definition",
"sec_num": null
},
{
"text": "Ensuring high-quality annotations at scale Comprehensive guidelines were developed for the task. These are summarized in Section 3. Annotation quality was constantly monitored.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task definition",
"sec_num": null
},
{
"text": "Evaluation of quality Section 4 describes posthoc evaluation of annotation quality. Long/short answers have 90%/84% precision, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task definition",
"sec_num": null
},
{
"text": "Study of variability One clear finding in NQ is that for naturally occurring questions there is often genuine ambiguity in whether or not an answer is acceptable. There are also often a number of acceptable answers. Section 4 examines this variability using 25-way annotations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task definition",
"sec_num": null
},
{
"text": "Robust evaluation metrics Section 5 introduces methods of measuring answer quality that account for variability in acceptable answers. We demonstrate a high human upper bound on these measures for both long answers (90% precision, 85% recall), and short answers (79% precision, 72% recall). We propose NQ as a new benchmark for research in QA. In Section 6.4 we present baseline results from recent models developed on comparable data sets (Clark and Gardner, 2018) , as well as a simple pipelined model designed for the NQ task. We demonstrate a large gap between the performance of these baselines and a human upper bound. We argue that closing this gap will require significant advances in NLU.",
"cite_spans": [
{
"start": 440,
"end": 465,
"text": "(Clark and Gardner, 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task definition",
"sec_num": null
},
{
"text": "The SQuAD (Rajpurkar et al., 2016) , SQuAD 2.0 (Rajpurkar et al., 2018) , NarrativeQA (Kocisky et al., 2018) , and HotpotQA (Yang et al., 2018) data sets contain questions and answers written by annotators who have first read a short text containing the answer. The SQuAD data sets contain questions/paragraph/answer triples from Wikipedia. In the original SQuAD data set, annotators often borrow part of the evidence paragraph to create a question. Jia and Liang (2017) showed that systems trained on SQuAD could be easily fooled by the insertion of distractor sentences that should not change the answer, and SQuAD 2.0 introduces questions that are designed to be unanswerable. However, we argue that questions written to be unanswerable can be identified as such with little reasoning, in contrast to NQ's task of deciding whether a paragraph contains all of the evidence required to answer a real question. Both SQuAD tasks have driven significant advances in reading comprehension, but systems now outperform humans and harder challenges are needed. NarrativeQA aims to elicit questions that are not close paraphrases of the evidence by separate summary texts. No human performance upper bound is provided for the full task and, although an extractive system could theoretically perfectly recover all answers, current approaches only just outperform a random baseline. NarrativeQA may just be too hard for the current state of NLU. HotpotQA is designed to contain questions that require reasoning over text from separate Wikipedia pages. As well as answering questions, systems must also identify passages that contain supporting facts. This is similar in motivation to NQ's long answer task, where the selected passage must contain all of the information required to infer the answer. Mirroring our identification of acceptable variability in the NQ task definition, HotpotQA's authors observe that the choice of supporting facts is somewhat subjective. They set high human upper bounds by selecting, for each example, the score maximizing partition of four annotations into one prediction and three references. The reference labels chosen by this maximization are not representative of the reference labels in HotpotQA's evaluation set, and it is not clear that the upper bounds are achievable. A more robust approach is to keep the evaluation distribution fixed, and calculate an acheivable upper bound by approximating the expectation over annotations-as we have done for NQ in Section 5.",
"cite_spans": [
{
"start": 10,
"end": 34,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF21"
},
{
"start": 47,
"end": 71,
"text": "(Rajpurkar et al., 2018)",
"ref_id": "BIBREF20"
},
{
"start": 86,
"end": 108,
"text": "(Kocisky et al., 2018)",
"ref_id": "BIBREF12"
},
{
"start": 124,
"end": 143,
"text": "(Yang et al., 2018)",
"ref_id": "BIBREF26"
},
{
"start": 450,
"end": 470,
"text": "Jia and Liang (2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The QuAC (Choi et al., 2018) and CoQA (Reddy et al., 2018) data sets contain dialogues between a questioner, who is trying to learn about a text, and an answerer. QuAC also prevents the questioner from seeing the evidence text. Conversational QA is an exciting new area, but it is significantly different from the single turn QA task in NQ. In both QuAC and CoQA, conversations tend to explore evidence texts incrementally, progressing from the start to the end of the text. This contrasts with NQ, where individual questions often require reasoning over large bodies of text.",
"cite_spans": [
{
"start": 9,
"end": 28,
"text": "(Choi et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 38,
"end": 58,
"text": "(Reddy et al., 2018)",
"ref_id": "BIBREF22"
},
{
"start": 337,
"end": 340,
"text": "NQ.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The WikiQA (Yang et al., 2015) and MS Marco (Nguyen et al., 2016) data sets contain queries sampled from the Bing search engine. WikiQA contains only 3,047 questions. MS Marco contains 100,000 questions with freeform answers. For each question, the annotator is presented with 10 passages returned by the search engine, and is asked to generate an answer to the query, or to say that the answer is not contained within the passages. Free-form text answers allow more flexibility in providing abstractive answers, but lead to difficulties in evaluation (BLEU score [Papineni et al., 2002] is used). MS Marco's authors do not discuss issues of variability or report quality metrics for their annotations. From our experience, these issues are critical. DuReader is a Chinese language data set containing queries from Baidu search logs. Like NQ, DuReader contains real user queries; it requires systems to read entire documents to find answers; and it identifies acceptable variability in answers. However, as with MS Marco, DuReader is reliant on BLEU for answer scoring, and systems already outperform a humans according to this metric.",
"cite_spans": [
{
"start": 11,
"end": 30,
"text": "(Yang et al., 2015)",
"ref_id": "BIBREF25"
},
{
"start": 44,
"end": 65,
"text": "(Nguyen et al., 2016)",
"ref_id": "BIBREF15"
},
{
"start": 564,
"end": 587,
"text": "[Papineni et al., 2002]",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "There are a number of reading comprehension benchmarks based on multiple choice tests (Mihaylov et al., 2018; Richardson et al., 2013; Lai et al., 2017) . The TriviaQA data set (Joshi et al., 2017) contains questions and answers taken from trivia quizzes found online. A number of Clozestyle tasks have also been proposed (Hermann et al., 2015; Hill et al., 2015; Paperno et al., 2016; Onishi et al., 2016) . We believe that all of these tasks are related to, but distinct from, answering information-seeking questions. We also believe that, because a solution to NQ will have genuine utility, it is better equipped as a benchmark for NLU.",
"cite_spans": [
{
"start": 86,
"end": 109,
"text": "(Mihaylov et al., 2018;",
"ref_id": "BIBREF14"
},
{
"start": 110,
"end": 134,
"text": "Richardson et al., 2013;",
"ref_id": "BIBREF23"
},
{
"start": 135,
"end": 152,
"text": "Lai et al., 2017)",
"ref_id": "BIBREF13"
},
{
"start": 177,
"end": 197,
"text": "(Joshi et al., 2017)",
"ref_id": "BIBREF11"
},
{
"start": 322,
"end": 344,
"text": "(Hermann et al., 2015;",
"ref_id": "BIBREF8"
},
{
"start": 345,
"end": 363,
"text": "Hill et al., 2015;",
"ref_id": "BIBREF9"
},
{
"start": 364,
"end": 385,
"text": "Paperno et al., 2016;",
"ref_id": "BIBREF17"
},
{
"start": 386,
"end": 406,
"text": "Onishi et al., 2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Natural Questions contains (question, wikipedia page, long answer, short answer) quadruples where: the question seeks factual information; the Wikipedia page may or may not contain the information required to answer the question; the long answer is a bounding box on this page containing all information required to infer the answer; and the short answer is one or more entities that give a short answer to the question, or a boolean yes or 1.a where does the nature conservancy get its funding 1.b who is the song killing me softly written about 2 who owned most of the railroads in the 1800s 4 how far is chardon ohio from cleveland ohio 5 american comedian on have i got news for you ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition and Data Collection",
"sec_num": "3"
},
{
"text": "All the questions in NQ are queries of 8 words or more that have been issued to the Google search engine by multiple users in a short period of time. From these queries, we sample a subset that either:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Questions and Evidence Documents",
"sec_num": "3.1"
},
{
"text": "1. start with ''who'', ''when'', or ''where'' directly followed by: a) a finite form of ''do'' or a modal verb; or b) a finite form of ''be'' or ''have'' with a verb in some later position;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Questions and Evidence Documents",
"sec_num": "3.1"
},
{
"text": "2. start with ''who'' directly followed by a verb that is not a finite form of ''be'';",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Questions and Evidence Documents",
"sec_num": "3.1"
},
{
"text": "3. contain multiple entities as well as an adjective, adverb, verb, or determiner;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Questions and Evidence Documents",
"sec_num": "3.1"
},
{
"text": "4. contain a categorical noun phrase immediately preceded by a preposition or relative clause;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Questions and Evidence Documents",
"sec_num": "3.1"
},
{
"text": "5. end with a categorical noun phrase, and do not contain a preposition or relative clause. 3 Table 1 gives examples. We run questions through the Google search engine and keep those where there is a Wikipedia page in the top 5 search results. The (question, Wikipedia page) pairs are the input to the human annotation task described next.",
"cite_spans": [],
"ref_spans": [
{
"start": 94,
"end": 101,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Questions and Evidence Documents",
"sec_num": "3.1"
},
{
"text": "The goal of these heuristics is to discard a large proportion of queries that are non-questions, while retaining the majority of queries of 8 words or more in length that are questions. A manual inspection showed that the majority of questions in the data, with the exclusion of question beginning with ''how to'', are accepted by the filters. We focus on longer queries as they are more complex, and are thus a more challenging test for deep NLU. We focus on Wikipedia as it is a very important source of factual information, and we believe that stylistically it is similar to other sources of factual information on the Web; however, like any data set there may be biases in this choice. Future datacollection efforts may introduce shorter queries, ''how to'' questions, or domains other than Wikipedia.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Questions and Evidence Documents",
"sec_num": "3.1"
},
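The question filters above are described only at a high level. As an illustration, the sketch below (not the authors' implementation) shows how the purely lexical parts of rule 1 could be approximated with regular expressions over the raw query; the 8-word threshold and the auxiliary/modal word lists are taken from the description above, while the helper name is a hypothetical choice. Rules that need part-of-speech tags or entity annotations (rules 2-5) are only noted in comments.

```python
import re

# Hypothetical sketch of the lexical query filters (rule 1 above).
# Rules 2-5 require POS tagging and entity recognition and are not shown.
DO_OR_MODAL = r"(?:do|does|did|can|could|will|would|shall|should|may|might|must)"
BE_OR_HAVE = r"(?:is|are|was|were|am|be|been|has|have|had)"

RULE_1A = re.compile(rf"^(?:who|when|where)\s+{DO_OR_MODAL}\b", re.IGNORECASE)
RULE_1B = re.compile(rf"^(?:who|when|where)\s+{BE_OR_HAVE}\b", re.IGNORECASE)

def passes_lexical_filters(query: str) -> bool:
    """Keep queries of 8+ words whose prefix matches rule 1a or 1b."""
    if len(query.split()) < 8:      # the corpus keeps queries of 8 words or more
        return False
    if RULE_1A.match(query):
        return True
    if RULE_1B.match(query):
        # Rule 1b additionally requires a verb later in the query; a real
        # implementation would verify that with a POS tagger.
        return True
    return False

print(passes_lexical_filters("who does the voice of darth vader in rogue one"))
```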
{
"text": "Annotation is performed using a custom annotation interface, by a pool of around 50 annotators, with an average annotation time of 80 seconds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Identification of Answers",
"sec_num": "3.2"
},
{
"text": "The guidelines and tooling divide the annotation task into three conceptual stages, where all three stages are completed by a single annotator in succession. The decision flow through these is illustrated in Figure 2 and the instructions given to annotators are summarized below.",
"cite_spans": [],
"ref_spans": [
{
"start": 208,
"end": 216,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Human Identification of Answers",
"sec_num": "3.2"
},
{
"text": "Question Identification: Contributors determine whether the given question is good or bad. A good question is a fact-seeking question that can be answered with an entity or explanation. A bad question is ambigous, incomprehensible, dependent on clear false presuppositions, opinionseeking, or not clearly a request for factual information. Annotators must make this judgment solely by the content of the question; they are not yet shown the Wikipedia page.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Identification of Answers",
"sec_num": "3.2"
},
{
"text": "Long Answer Identification: For good questions only, annotators select the earliest HTML bounding box containing enough information for a reader to completely infer the answer to the question. Bounding boxes can be paragraphs, tables, list items, or whole lists. Alternatively, annotators mark ''no answer'' if the page does not answer the question, or if the information is present but not contained in a single one of the allowed elements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Identification of Answers",
"sec_num": "3.2"
},
{
"text": "Short Answer Identification: For examples with long answers, annotators select the entity or set of entities within the long answer that answer the question. Alternatively, annotators can flag that the short answer is yes, no, or they can flag that no short answer is possible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Identification of Answers",
"sec_num": "3.2"
},
{
"text": "In total, annotators identify a long answer for 49% of the examples, and short answer spans or a yes/no answer for 36% of the examples. We consider the choice of whether or not to answer a question a core part of the question answering task, and do not discard the remaining 51% that have no answer labeled.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Statistics",
"sec_num": "3.3"
},
{
"text": "Annotators identify long answers by selecting the smallest HTML bounding box that contains all of the information required to answer the question. These are mostly paragraphs (73%). The remainder are made up of tables (19%), table rows (1%), lists (3%), or list items (3%). 4 We leave further subcategorization of long answers to future work, and provide a breakdown of baseline performance on each of these three types of answers in Section 6.4.",
"cite_spans": [
{
"start": 274,
"end": 275,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Statistics",
"sec_num": "3.3"
},
{
"text": "This section describes evaluation of the quality of the human annotations in our data. We use a combination of two methods: 1) post hoc evaluation of correctness of non-null answers, under consensus judgments from four ''experts'';",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of Annotation Quality",
"sec_num": "4"
},
{
"text": "2) k-way annotations (with k = 25) on a subset of the data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of Annotation Quality",
"sec_num": "4"
},
{
"text": "Post hoc evaluation of non-null answers leads directly to a measure of annotation precision. As is common in information-retrieval style problems such as long-answer identification, measuring recall is more challenging. However, we describe how 25-way annotated data provide useful insights into recall, particularly when combined with expert judgments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of Annotation Quality",
"sec_num": "4"
},
{
"text": "Each item in our data consists of a four-tuple (q, d, l, s) where q is a question, d is a document, l is a long answer, and s is a short answer. Thus we introduce random variables Q, D, L, and S corresponding to these items. Note that L, can be a span within the document, or NULL. Similarly, S can be one or more spans within L, a boolean, or NULL. For now we consider the three-tuple (q, d, l) . The treatment for short answers is the same throughout, with (q, d, s) replacing (q, d, l) .",
"cite_spans": [
{
"start": 51,
"end": 59,
"text": "d, l, s)",
"ref_id": null
},
{
"start": 390,
"end": 395,
"text": "d, l)",
"ref_id": null
},
{
"start": 463,
"end": 468,
"text": "d, s)",
"ref_id": null
},
{
"start": 479,
"end": 488,
"text": "(q, d, l)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries: The Sampling Distribution",
"sec_num": "4.1"
},
{
"text": "Each data item (q, d, l) is independent and identically distrbuted (IID) sampled from",
"cite_spans": [
{
"start": 19,
"end": 24,
"text": "d, l)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries: The Sampling Distribution",
"sec_num": "4.1"
},
{
"text": "p(l, q, d) = p(q, d) \u00d7 p(l|q, d)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries: The Sampling Distribution",
"sec_num": "4.1"
},
{
"text": "Here, p(q, d) is the sampling distribution (probability mass function [PMF]) over question/ document pairs. It is defined as the PMF corresponding to the following sampling process: 5 First, sample a question at random from some distribution; second, perform a search on a major search engine using the question as the underlying query; finally, either: 1) return (q, d) where d is the top Wikipedia result for q, if d is in the top 5 search results for q; 2) if there is no Wikipedia page in the top 5 results, discard q and repeat the sampling process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries: The Sampling Distribution",
"sec_num": "4.1"
},
{
"text": "Here p(l|q, d) is the conditional distribution (PMF) over long answer l conditioned on the pair (q, d). The value for l is obtained by: 1) sampling an annotator uniformly at random from the pool of annotators; 2) presenting the pair (q, d) to the annotator, who then provides a value for l.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries: The Sampling Distribution",
"sec_num": "4.1"
},
{
"text": "Note that l is non-deterministic due to two sources of randomness: 1) the random choice of annotator; 2) the potentially random behavior of a particular annotator (the annotator may give a different answer depending on the time of day, etc.).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries: The Sampling Distribution",
"sec_num": "4.1"
},
{
"text": "We will also consider the distribution",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries: The Sampling Distribution",
"sec_num": "4.1"
},
{
"text": "p(l, q, d|L = NULL) = p(l, q, d) P (L = NULL) if l = NULL = 0 otherwise where P (L = NULL) = l,q,d:l =NULL p(l, q, d).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries: The Sampling Distribution",
"sec_num": "4.1"
},
{
"text": "Thus p(l, q, d|L = NULL) is the probability of seeing the triple (l, q, d), conditioned on L not being NULL.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries: The Sampling Distribution",
"sec_num": "4.1"
},
{
"text": "We now define precision of annotations. Consider a function \u03c0 (l, q, d) that is equal to 1 if l is a ''correct'' answer for the pair (q, d) , 0 if the answer is incorrect. The next section gives a concrete definition of \u03c0. The annotation precision is defined as",
"cite_spans": [
{
"start": 62,
"end": 71,
"text": "(l, q, d)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 133,
"end": 139,
"text": "(q, d)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Preliminaries: The Sampling Distribution",
"sec_num": "4.1"
},
{
"text": "\u03a8 = l,q,d p(l, q, d|L = NULL) \u00d7 \u03c0(l, q, d) Given a set of annotations S = {(l (i) , q (i) , d (i) )} |S| i=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries: The Sampling Distribution",
"sec_num": "4.1"
},
{
"text": "drawn IID from p(l, q, d|L = NULL), we can derive an estimate of \u03a8 as\u03a8 = 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries: The Sampling Distribution",
"sec_num": "4.1"
},
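As a concrete illustration of the estimate defined above, the short sketch below computes the fraction of a sample of non-null annotations whose judgment is correct, where the correctness function is approximated by the expert consensus judgments described in the following subsections (C and C_d counted as correct, W as incorrect). The label strings and the toy counts are hypothetical, chosen only to roughly match the reported long-answer precision.

```python
from collections import Counter

def precision_estimate(consensus_judgments):
    """Empirical estimate of annotation precision over a sample S of non-null
    annotations: the fraction judged correct. Both the C (correct) and
    C_d (correct but debatable) categories count as correct."""
    counts = Counter(consensus_judgments)
    total = sum(counts.values())
    return (counts["C"] + counts["C_d"]) / total if total else 0.0

# Hypothetical toy sample; the paper's sample S had |S| = 139 items.
judgments = ["C"] * 82 + ["C_d"] * 43 + ["W"] * 14
print(f"Psi_hat = {precision_estimate(judgments):.2f}")   # roughly 0.90
```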
{
"text": "We now describe the process for deriving ''expert'' judgments of answer correctness. We used four experts for these judgments. These experts had prepared the guidelines for the annotation process. 6 In a first phase each of the four experts independently annotated examples for correctness. In a second phase the four experts met to discuss disagreements in judgments, and to reach a single consensus judgment for each example. A key step is to define the criteria used to determine correctness of an example. Given a triple (l, q, d), we extracted the passage l corresponding to l on the page d. The pair (q, l ) was then presented to the expert. Experts categorized (q, l ) pairs into the following three categories:",
"cite_spans": [
{
"start": 197,
"end": 198,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Expert Evaluations of Correctness",
"sec_num": "4.2"
},
{
"text": "Correct (C): It is clear beyond a reasonable doubt that the answer is correct. 6 The first four authors of this paper. Wrong (W): There is not convincing evidence that the answer is correct. Figure 3 shows some example judgments. We introduced the intermediate C d category after observing that many (q, l ) pairs are high quality answers, but raise some small doubt or quibble about whether they fully answer the question. The use of the word ''debatable'' is intended to be literal: (q, l ) pairs falling into the C d category could literally lead to some debate between reasonable people as to whether they fully answer the question or not.",
"cite_spans": [
{
"start": 79,
"end": 80,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 191,
"end": 199,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Expert Evaluations of Correctness",
"sec_num": "4.2"
},
{
"text": "Given this background, we will make the following assumption:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expert Evaluations of Correctness",
"sec_num": "4.2"
},
{
"text": "Answers in the C d category should be very useful to a user interacting with a QA system, and should be considered to be high-quality answers; however, an annotator would be justified in either annotating or not annotating the example.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expert Evaluations of Correctness",
"sec_num": "4.2"
},
{
"text": "For these cases there is often disagreement between annotators as to whether the page contains",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expert Evaluations of Correctness",
"sec_num": "4.2"
},
{
"text": "\u03a8 90% 84% E(C) 59% 51% E(C d ) 31% 33% E(W) 10% 16%",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantity",
"sec_num": null
},
{
"text": "We used the following procedure to derive measurements of precision: 1) We sampled examples IID from the distribution p(l, q, d|L = NULL). We call this set S. We had |S| = 139. 2) Four experts independently classified each of the items in S into the categories C, C d , W.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results for Precision Measurements",
"sec_num": "4.3"
},
{
"text": "3) The four experts met to come up with a consensus judgment for each item. For each example (l (i) , q (i) , d (i) ) \u2208 S, we define c (i) to be the consensus judgment. This process was repeated to derive judgments for short answers. We can then calculate the percentage of examples falling into the three expert categories; we denote these values as\u00ca(C),\u00ca(C d ), and\u00ca(W ). 7 We define\u03a8 =\u00ca(C)+\u00ca(C d ). We have explicitly included samples C and C d in the overall precision as we believe that C d answers are essentially correct. Table 2 shows the values for these quantities.",
"cite_spans": [
{
"start": 112,
"end": 115,
"text": "(i)",
"ref_id": null
},
{
"start": 374,
"end": 375,
"text": "7",
"ref_id": null
}
],
"ref_spans": [
{
"start": 529,
"end": 536,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results for Precision Measurements",
"sec_num": "4.3"
},
{
"text": "We have shown that an annotation drawn from p(l, q, d|L = NULL) has high expected precision. Now we address the distribution over annotations for a given (q, d) pair. Annotators can disagree about whether or not d contains an answer to q-that is, whether or not L = NULL. In the case that annotators agree that L = NULL, they can also disagree about the correct assignment to L. In order to study variability, we collected 24 additional annotations from separate annotators for each of the (q, d, l) triples in S. For each (q, d, l) triple, we now have a 5-tuple (q (i) ",
"cite_spans": [
{
"start": 376,
"end": 378,
"text": "L.",
"ref_id": null
},
{
"start": 494,
"end": 499,
"text": "d, l)",
"ref_id": null
},
{
"start": 566,
"end": 569,
"text": "(i)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 523,
"end": 532,
"text": "(q, d, l)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Variability of Annotations",
"sec_num": "4.4"
},
{
"text": "the consensus judgment for l (i) . For each i also define",
"cite_spans": [
{
"start": 29,
"end": 32,
"text": "(i)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Variability of Annotations",
"sec_num": "4.4"
},
{
"text": "\u03bc (i) = 1 25 25 j=1 [[a (i) j = NULL]]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variability of Annotations",
"sec_num": "4.4"
},
{
"text": "to be the proportion of the 25-way annotations that are non-null. We now show that \u03bc (i) is highly correlated with annotation precision. We defin\u00ea",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variability of Annotations",
"sec_num": "4.4"
},
{
"text": "E[(0.8, 1.0]] = 1 |S| |S| i=1 [[0.8 < \u03bc (i) \u2264 1]]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variability of Annotations",
"sec_num": "4.4"
},
{
"text": "to be the proportion of examples with greater than 80% of the 25 annotators marking a non-null long answer, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variability of Annotations",
"sec_num": "4.4"
},
{
"text": "E[(0.8, 1.0], C] = 1 |S| |S| i=1 [[0.8 < \u03bc (i) \u2264 1 and c (i) = C]]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variability of Annotations",
"sec_num": "4.4"
},
{
"text": "to be the proportion of examples with greater than 80% of the 25 annotators marking a non-null long answer and with c (i) = C. Similar definitions apply for the intervals (0,0.2], (0.2, 0.4], (0.4, 0.6], and (0.6, 0.8], and for judgments C d and W. Figure 4 illustrates the proportion of annotations falling into the C/C d /W categories in different regions of \u03bc (i) . For those (q, d) pairs where more than 80% of annotators gave some non-null answer, our expert judgements agree that these annotations are overwhelmingly correct. Similarly, when fewer than 20% of annotators gave a non-null answer, these answers tend to be incorrect. In between these two extremes, the disagreement between annotators is largely accounted for by the C d category-where a reasonable person could either be satisfied with the answer, or want more information. Later, in Section 5, we make use of the correlation between \u03bc (i) and accuracy to define a metric for the evaluation of answer quality. In that section, we also show that a model trained on (l, q, d) triples can outperform a single annotator on this metric by accounting for the uncertainty of whether or not an answer is present.",
"cite_spans": [
{
"start": 363,
"end": 366,
"text": "(i)",
"ref_id": null
},
{
"start": 1038,
"end": 1043,
"text": "q, d)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 249,
"end": 257,
"text": "Figure 4",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Variability of Annotations",
"sec_num": "4.4"
},
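A minimal sketch of the quantities used in this subsection, under assumed data structures (a list of 25 long-answer annotations per example, with None standing for NULL, plus a consensus judgment string): mu computes the non-null fraction mu^(i), and bucket_proportions computes the bucketed estimate over an interval together with its per-judgment breakdown.

```python
def mu(annotations):
    """Fraction of the 25-way annotations that are non-null (mu^(i))."""
    return sum(a is not None for a in annotations) / len(annotations)

def bucket_proportions(examples, lo, hi):
    """E_hat[(lo, hi]] and E_hat[(lo, hi], judgment] over a sample of examples.
    Each example is an (annotations, consensus_judgment) pair, where
    annotations is the list of 25 long answers (None for NULL) and the
    consensus judgment is one of "C", "C_d", "W"."""
    n = len(examples)
    in_bucket = [(anns, c) for anns, c in examples if lo < mu(anns) <= hi]
    overall = len(in_bucket) / n
    by_judgment = {c: sum(1 for _, cj in in_bucket if cj == c) / n
                   for c in ("C", "C_d", "W")}
    return overall, by_judgment

# Hypothetical example: 25 annotations, 22 of which are non-null.
example = ([f"para_{k}" for k in range(22)] + [None] * 3, "C")
print(mu(example[0]))                        # 0.88
print(bucket_proportions([example], 0.8, 1.0))
```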
{
"text": "As well as disagreeing about whether (q, d) contains a valid answer, annotators can disagree about the location of the best answer. In many cases there are multiple valid long answers in multiple distinct locations on the page. 8 The most extreme example of this that we see in our 25-way annotated data is for the question ''name the substance used to make the filament of bulb'' paired with the Wikipedia page about incandescent light bulbs. Annotators identify 7 passages that discuss tungsten wire filaments.",
"cite_spans": [
{
"start": 228,
"end": 229,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Variability of Annotations",
"sec_num": "4.4"
},
{
"text": "Short answers can be arbitrarily delimited and this can lead to extreme variation. The most extreme example of this that we see in the 25-way annotated data is the 11 distinct, but correct, answers for the question ''where is blood pumped after it leaves the right ventricle''. Here, 14 annotators identify a substring of ''to the lungs'' as the best possible short answer. Of these, 6 label the entire string, 4 reduce it to ''the lungs'', and 4 reduce it to ''lungs''. A further 6 annotators do not consider this short answer to be sufficient and choose more precise phrases such as ''through the semilunar pulmonary valve into the left and right main pulmonary arteries (one for each lung)''. The remaining 5 annotators decide that there is no adequate short answer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variability of Annotations",
"sec_num": "4.4"
},
{
"text": "For each question, we ranked each of the unique answers given by our 25 annotators according to the number of annotators that chose it. We found that by just taking the most popular long answer, we could account for 83% of the long answer annotations. The two most popular long answers account for 96% of the long answer annotations. It is extremely uncommon for a question to have more than three distinct long answers annotated. Short answers have greater variability, but the most popular short answer still accounts for 64% of all short answer annotations. The three most popular short answers account for 90% of all short answer annotations. 8 As stated earlier in this paper, we did instruct annotators to select the earliest instance of an answer when there are multiple answer instances on the page. However, there are still cases where different annotators disagree on whether an answer earlier in the page is sufficient in comparison to a later answer, leading to differences between annotators.",
"cite_spans": [
{
"start": 647,
"end": 648,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Variability of Annotations",
"sec_num": "4.4"
},
{
"text": "NQ includes 5-way annotations on 7,830 items for development data, and we will sequester a further 7,842 items, 5-way annotated, for test data. This section describes evaluation metrics using this data, and gives justification for these metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "5"
},
{
"text": "We choose 5-way annotations for the following reasons: First, we have evidence that aggregating annotations from 5 annotators is likely to be much more robust than relying on a single annotator (see Section 4). Second, 5 annotators is a small enough number that the cost of annotating thousands of development and test items is not prohibitive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "5"
},
{
"text": "Assume that we have a model f \u03b8 with parameters \u03b8 that maps an input (q, d) to a long answer l = f \u03b8 (q, d). We would like to evaluate the accuracy of this model. Assume we have evaluation examples",
"cite_spans": [
{
"start": 69,
"end": 75,
"text": "(q, d)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of an Evaluation Measure Based on 5-Way Annotations",
"sec_num": "5.1"
},
{
"text": "{q (i) , d (i) , a (i) } for i = 1 . . . n, where q (i) is a question, d (i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of an Evaluation Measure Based on 5-Way Annotations",
"sec_num": "5.1"
},
{
"text": "We define an evaluation measure based on the five way annotations as follows. If at least two out of five annotators have given a non-null long answer on the example, then the system is required to output a non-null answer that is seen at least once in the five annotations; conversely, if fewer than two annotators give a non-null long answer, the system is required to return NULL as its output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of an Evaluation Measure Based on 5-Way Annotations",
"sec_num": "5.1"
},
{
"text": "To make this more formal, define the function g(a (i) ) to be the number of annotations in a (i) that are non-null. Define a function h \u03b2 (a, l) that judges the correctness of label l given annotations a = a 1 . . . a 5 . This function is parameterized by an integer \u03b2. The function returns 1 if the label l is judged to be correct, and 0 otherwise: (Definition of h \u03b2 (a, l) ) If g(a) \u2265 \u03b2 and l = NULL and l = a j for some j \u2208 {1 . . . 5} Then h \u03b2 (a, l) = 1; Else If g(a) < \u03b2 and l = NULL Then h \u03b2 (a, l) = 1; Else h \u03b2 (a, l) = 0.",
"cite_spans": [
{
"start": 93,
"end": 96,
"text": "(i)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 350,
"end": 375,
"text": "(Definition of h \u03b2 (a, l)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Definition of an Evaluation Measure Based on 5-Way Annotations",
"sec_num": "5.1"
},
{
"text": "Definition 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of an Evaluation Measure Based on 5-Way Annotations",
"sec_num": "5.1"
},
{
"text": "We used \u03b2 = 2 in our experiments. 9",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of an Evaluation Measure Based on 5-Way Annotations",
"sec_num": "5.1"
},
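The sketch below restates Definition 1 and the resulting accuracy A_beta in code, with beta = 2 as used in the paper. The representation of annotations (None standing for NULL) and the predict callable are assumptions made for illustration, not part of the released evaluation script.

```python
def h_beta(annotations, prediction, beta=2):
    """Definition 1: a prediction is correct if (a) at least beta of the five
    annotations are non-null and the prediction matches one of them, or
    (b) fewer than beta are non-null and the prediction is NULL (None)."""
    non_null = [a for a in annotations if a is not None]
    if len(non_null) >= beta:
        return int(prediction is not None and prediction in non_null)
    return int(prediction is None)

def accuracy(examples, predict, beta=2):
    """A_beta: mean of h_beta over evaluation examples, where each example is
    a (question, document, five_annotations) triple."""
    return sum(h_beta(a, predict(q, d), beta) for q, d, a in examples) / len(examples)

# Hypothetical evaluation example with 3/5 non-null annotations.
annots = ["para_4", "para_4", "para_7", None, None]
print(h_beta(annots, "para_4"))   # 1
print(h_beta(annots, None))       # 0
```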
{
"text": "The accuracy of a model is then",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of an Evaluation Measure Based on 5-Way Annotations",
"sec_num": "5.1"
},
{
"text": "A \u03b2 (f \u03b8 ) = 1 n n i=1 h \u03b2 (a (i) , f \u03b8 (q (i) , d (i) ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of an Evaluation Measure Based on 5-Way Annotations",
"sec_num": "5.1"
},
{
"text": "The value for A \u03b2 is an estimate of accuracy with respect to the underlying distribution, which we define as\u0100",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of an Evaluation Measure Based on 5-Way Annotations",
"sec_num": "5.1"
},
{
"text": "\u03b2 (f \u03b8 ) = E[h \u03b2 (a, f \u03b8 (q, d))].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of an Evaluation Measure Based on 5-Way Annotations",
"sec_num": "5.1"
},
{
"text": "Here the expectation is taken with respect to p (a, q, d) d) ; hence the annotations a 1 . . . a 5 are assumed to be drawn IID from p (l|q, d). 10 We discuss this measure at length in this section. First, however, we make the following critical point:",
"cite_spans": [
{
"start": 48,
"end": 57,
"text": "(a, q, d)",
"ref_id": null
},
{
"start": 134,
"end": 146,
"text": "(l|q, d). 10",
"ref_id": null
}
],
"ref_spans": [
{
"start": 58,
"end": 60,
"text": "d)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Definition of an Evaluation Measure Based on 5-Way Annotations",
"sec_num": "5.1"
},
{
"text": "It is possible for a model trained on (l (i) , q IID from p(l, q, d) to exceed the performance of a single annotator on this measure.",
"cite_spans": [
{
"start": 49,
"end": 68,
"text": "IID from p(l, q, d)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of an Evaluation Measure Based on 5-Way Annotations",
"sec_num": "5.1"
},
{
"text": "In particular, if we have a model p (l|q, d; \u03b8) , trained on (l, q, d) triples, which is a good approximation to p (l|q, d) , it is then possible to use p (l|q, d; \u03b8) to make predictions that outperform a single random draw from p (l|q, d) . The Bayes optimal hypothesis (see Devroye et al., 1997 ",
"cite_spans": [
{
"start": 36,
"end": 47,
"text": "(l|q, d; \u03b8)",
"ref_id": null
},
{
"start": 65,
"end": 70,
"text": "q, d)",
"ref_id": null
},
{
"start": 115,
"end": 123,
"text": "(l|q, d)",
"ref_id": null
},
{
"start": 155,
"end": 166,
"text": "(l|q, d; \u03b8)",
"ref_id": null
},
{
"start": 231,
"end": 239,
"text": "(l|q, d)",
"ref_id": null
},
{
"start": 276,
"end": 296,
"text": "Devroye et al., 1997",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of an Evaluation Measure Based on 5-Way Annotations",
"sec_num": "5.1"
},
{
"text": ") for h \u03b2 , defined as arg max f E q,d,a [[h \u03b2 (a, f (q, d))]]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of an Evaluation Measure Based on 5-Way Annotations",
"sec_num": "5.1"
},
{
"text": ", is a function of the posterior distribution p (\u2022|q, d) , 11 and will generally exceed the performance of a single random annotation,",
"cite_spans": [],
"ref_spans": [
{
"start": 48,
"end": 56,
"text": "(\u2022|q, d)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Definition of an Evaluation Measure Based on 5-Way Annotations",
"sec_num": "5.1"
},
{
"text": "E q,d,a [[ l p(l|q, d) \u00d7 h \u03b2 (a, l)]].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of an Evaluation Measure Based on 5-Way Annotations",
"sec_num": "5.1"
},
{
"text": "We also show this empirically, by constructing an approximation to p (l|q, d) from 20-way annotations, then using this approximation to make predictions that significantly outperform a single annotator.",
"cite_spans": [
{
"start": 69,
"end": 77,
"text": "(l|q, d)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of an Evaluation Measure Based on 5-Way Annotations",
"sec_num": "5.1"
},
{
"text": "for \u03bc (i) < 0.4 over 35% (11/17 annotations) are in the W category.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of an Evaluation Measure Based on 5-Way Annotations",
"sec_num": "5.1"
},
{
"text": "10 This isn't quite accurate as the annotators are sampled without replacement; however, it simplifies the analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of an Evaluation Measure Based on 5-Way Annotations",
"sec_num": "5.1"
},
{
"text": "11 Specifically, for an input (q, d), if we define , d) , then the Bayes optimal hypothesis is to output l * if P (h \u03b2 (a, l * ) = 1|\u03b3,\u03b3) \u2265 P (h \u03b2 (a, NULL) = 1|\u03b3,\u03b3), and to output NULL otherwise. Implementation of this strategy is straightforward if \u03b3 and\u03b3 are known; this strategy will in general give a higher accuracy value than taking a single sample l from p (l|q, d) and using this sample as the prediction. In principle a model p (l|q, d; \u03b8) trained on (l, q, d) triples can converge to a good estimate of \u03b3 and\u03b3. Note that for the special case \u03b3 +\u03b3 = 1 we have P (h \u03b2 (a, NULL) = 1|\u03b3,\u03b3) =\u03b3 5 + 5\u03b3 4 (1 \u2212\u03b3) and P (h \u03b2 (a, l * ) = 1|\u03b3,\u03b3) = 1 \u2212 P (h \u03b2 (a, NULL) = 1|\u03b3,\u03b3). It follows that the Bayes optimal hypothesis is to predict l * if \u03b3 \u2265 \u03b1 where \u03b1 \u2248 0.31381, and to predict NULL otherwise. \u03b1 is 1 \u2212\u1fb1 where\u1fb1 is the solution to\u1fb1 5 + 5\u1fb1 4 (1 \u2212\u1fb1) = 0.5.",
"cite_spans": [
{
"start": 365,
"end": 373,
"text": "(l|q, d)",
"ref_id": null
},
{
"start": 438,
"end": 449,
"text": "(l|q, d; \u03b8)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 51,
"end": 55,
"text": ", d)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Definition of an Evaluation Measure Based on 5-Way Annotations",
"sec_num": "5.1"
},
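Footnote 11's constant can be checked numerically: under gamma + gamma-bar = 1, NULL is the correct output (per Definition 1 with beta = 2) exactly when at most one of the five annotations is non-null, which happens with probability gamma-bar^5 + 5 gamma-bar^4 (1 - gamma-bar). The bisection sketch below solves that expression equal to 0.5 and recovers alpha = 1 - alpha-bar, approximately 0.31381.

```python
def p_null_correct(a_bar):
    """Probability that NULL is judged correct under Definition 1 (beta = 2)
    when each of the 5 annotations is NULL with probability a_bar."""
    return a_bar ** 5 + 5 * a_bar ** 4 * (1 - a_bar)

# p_null_correct is increasing on [0, 1], so bisection finds the 0.5 crossing.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if p_null_correct(mid) < 0.5:
        lo = mid
    else:
        hi = mid
a_bar = (lo + hi) / 2
alpha = 1 - a_bar
print(round(alpha, 5))   # approximately 0.31381, matching footnote 11
```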
{
"text": "Precision and Recall During evaluation, it is often beneficial to separately measure false positives (incorrectly predicting an answer), and false negatives (failing to predict an answer). We define the precision (P ) and recall (R) of f \u03b8 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of an Evaluation Measure Based on 5-Way Annotations",
"sec_num": "5.1"
},
{
"text": "t(q, d, a, f \u03b8 ) = h \u03b2 (a, f \u03b8 (q, d))[[f \u03b8 (q, d) = NULL]] R(f \u03b8 ) = n i=1 t(q (i) , d (i) , a (i) , f \u03b8 ) n i=1 [[g(a (i) \u2265 \u03b2]] P (f \u03b8 ) = n i=1 t(q (i) , d (i) , a (i) , f \u03b8 ) n i=1 [[f \u03b8 (q (i) , d (i) ) = NULL]]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of an Evaluation Measure Based on 5-Way Annotations",
"sec_num": "5.1"
},
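A sketch of the precision and recall defined above, reusing the h_beta rule from Definition 1; the example structure (None for NULL) and the predict callable are assumed for illustration.

```python
def h_beta(annotations, prediction, beta=2):
    non_null = [a for a in annotations if a is not None]
    if len(non_null) >= beta:
        return int(prediction is not None and prediction in non_null)
    return int(prediction is None)

def precision_recall(examples, predict, beta=2):
    """Precision: correct non-null predictions over all non-null predictions.
    Recall: correct non-null predictions over examples with at least beta
    non-null annotations. Each example is (question, document, annotations)."""
    tp = predicted = answerable = 0
    for q, d, a in examples:
        l = predict(q, d)
        if l is not None:
            predicted += 1
            tp += h_beta(a, l, beta)
        if sum(x is not None for x in a) >= beta:
            answerable += 1
    p = tp / predicted if predicted else 0.0
    r = tp / answerable if answerable else 0.0
    return p, r

examples = [("q1", "d1", ["p1", "p1", None, None, None])]
print(precision_recall(examples, lambda q, d: "p1"))   # (1.0, 1.0)
```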
{
"text": "To place an upper bound on the metrics introduced above we create a ''super-annotator'' from the 25way annotated data introduced in Section 4. From this data, we create four tuples (q",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Super-Annotator Upper Bound",
"sec_num": "5.2"
},
{
"text": "The first three terms in this tuple are the question, document, and vector of five reference annotations. b (i) is a vector of annotations b (i) j for j = 1 . . . 20 drawn from the same distribution as a (i) . The super-annotator predicts NULL if g(b (i) ) < \u03b1, and",
"cite_spans": [
{
"start": 204,
"end": 207,
"text": "(i)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Super-Annotator Upper Bound",
"sec_num": "5.2"
},
{
"text": "l * = arg max l\u2208d 20 j=1 [[l = b j ]]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Super-Annotator Upper Bound",
"sec_num": "5.2"
},
{
"text": "otherwise. Table 3 shows super-annotator performance for \u03b1 = 8, with 90.0% precision, 84.6% recall, and 87.2% F-measure. This significantly exceeds the performance (80.4% precision/67.6% recall/ 73.4% F-measure) for a single annotator. We subsequently view the super-annotator numbers as an effective upper bound on performance of a learned model.",
"cite_spans": [],
"ref_spans": [
{
"start": 11,
"end": 18,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Super-Annotator Upper Bound",
"sec_num": "5.2"
},
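A minimal sketch of the super-annotator rule from Section 5.2: given the 20 held-out annotations b^(i), output NULL when fewer than alpha of them are non-null, and otherwise output the most frequently annotated long answer; alpha = 8 is the setting reported in Table 3. The list representation (None for NULL) is an assumption.

```python
from collections import Counter

def super_annotator(b, alpha=8):
    """Predict from 20 reference annotations b (None for NULL): return None if
    fewer than alpha are non-null, otherwise the most common non-null answer."""
    non_null = [x for x in b if x is not None]
    if len(non_null) < alpha:
        return None
    return Counter(non_null).most_common(1)[0][0]

# Hypothetical 20-way annotation vector with 11 non-null annotations.
b = ["para_3"] * 9 + ["para_5"] * 2 + [None] * 9
print(super_annotator(b))   # "para_3"
```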
{
"text": "The NQ corpus is designed to provide a benchmark with which we can evaluate the performance of QA systems. Every question in NQ is unique under exact string match, and we split questions randomly in NQ into separate train/development/test sets. To facilitate comparison, we introduce baselines that either make use of high-level data set regularities, or are trained on the 307k examples in the training set. Here, we present well-established baselines that were state of the art at the time of submission. We also refer readers to Alberti et al. (2019) for more recent advances in modeling. All of our baselines focus on the long and short answer extraction tasks. We leave boolean answers to future work. Table 3 : Precision (P), recall (R), and the harmonic mean of these (F1) of all baselines, a single annotator, and the super-annotator upper bound. The human performances marked with \u2020 are evaluated on a sample of five annotations from the 25-way annotated data introduced in Section 5.",
"cite_spans": [
{
"start": 532,
"end": 553,
"text": "Alberti et al. (2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 707,
"end": 714,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Baseline Performance",
"sec_num": "6"
},
{
"text": "NQ's long answer selection task admits several untrained baselines. The first paragraph of a Wikipedia page commonly acts as a summary of the most important information regarding the page's subject. We therefore implement a long answer baseline that simply selects the first paragraph for all pages. Furthermore, because 79% of the Wikipedia pages in the development set also appear in the training set, we implement two ''copying'' baselines. The first of these simply selects the most frequent annotation applied to a given page in the training set. The second selects the annotation given to the training set question closest to the eval set question according to TFIDF weighted word overlap. These three baselines are reported as First paragraph, Most frequent, and Closest question in Table 3 , respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 790,
"end": 797,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Untrained Baselines",
"sec_num": "6.1"
},
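{
"text": "As an illustration of the ''Closest question'' baseline (a sketch under our own assumptions, not the code behind Table 3), the following approximates TFIDF-weighted word overlap with TFIDF cosine similarity; the function and variable names are ours, and details such as tokenization may differ from the original implementation.\n\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.metrics.pairwise import linear_kernel\n\ndef closest_question_baseline(train_questions, train_long_answers, eval_questions):\n    # Represent every question as a TFIDF-weighted bag of words.\n    vectorizer = TfidfVectorizer()\n    train_matrix = vectorizer.fit_transform(train_questions)\n    eval_matrix = vectorizer.transform(eval_questions)\n    # Copy, for each eval question, the long answer annotation of the most\n    # similar training question.\n    similarities = linear_kernel(eval_matrix, train_matrix)\n    return [train_long_answers[row.argmax()] for row in similarities]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Untrained Baselines",
"sec_num": "6.1"
},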
{
"text": "We adapt the reference implementation 12 of Document-QA (Clark and Gardner, 2018) for the NQ task. This system performs well on the SQuAD and TriviaQA short answer extraction tasks, but it is not designed to represent: (i) the long answers that do not contain short answers, and (ii) the NULL answers that occur in NQ. To address (i) we choose the shortest available answer span at training, differentiating long and short answers only through the inclusion of special start and end of passage tokens that identify long answer candidates. At prediction time, the model can either predict a long answer (and no short answer), or a short answer (which implies a long answer).",
"cite_spans": [
{
"start": 56,
"end": 81,
"text": "(Clark and Gardner, 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Document-QA",
"sec_num": "6.2"
},
{
"text": "12 https://github.com/allenai/document-qa.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-QA",
"sec_num": "6.2"
},
{
"text": "To address (ii), we tried adding special NULL passages to represent the lack of answer. However, we achieved better performance by training on the subset of questions with answers and then only predicting those answers whose scores exceed a threshold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-QA",
"sec_num": "6.2"
},
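{
"text": "A minimal sketch of this post hoc thresholding (ours, not part of the reference implementation): the model is trained only on answered examples, and at prediction time an answer is returned only if its score clears a development-tuned threshold. The data structure and names below are illustrative assumptions.\n\ndef predict_with_threshold(scored_spans, threshold=3.0):\n    # scored_spans: list of (span, model_score) pairs for one example.\n    if not scored_spans:\n        return None\n    best_span, best_score = max(scored_spans, key=lambda pair: pair[1])\n    # Predict NULL unless the best answer's score exceeds the threshold.\n    return best_span if best_score > threshold else None",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-QA",
"sec_num": "6.2"
},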
{
"text": "With these two modifications, we are able to apply Document-QA to NQ. We follow Clark and Gardner (2018) in pruning documents down to the set of passages that have highest TFIDF similarity with the question. Under this approach, we consider the top 16 passages as long answers. We consider short answers containing up to 17 words. We train Document-QA for 30 epochs with batches containing 15 examples. The post hoc score threshold is set to 3.0. All of these values were chosen on the basis of development set performance.",
"cite_spans": [
{
"start": 80,
"end": 104,
"text": "Clark and Gardner (2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Document-QA",
"sec_num": "6.2"
},
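{
"text": "For concreteness, the passage-pruning step can be sketched as follows; this is our illustration, reusing TFIDF cosine similarity as a stand-in for the similarity computation, and the reference implementation may differ in tokenization and scoring details.\n\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.metrics.pairwise import linear_kernel\n\ndef prune_passages(question, passages, k=16):\n    # Keep the k passages with the highest TFIDF similarity to the question.\n    vectorizer = TfidfVectorizer()\n    passage_matrix = vectorizer.fit_transform(passages)\n    question_vector = vectorizer.transform([question])\n    scores = linear_kernel(question_vector, passage_matrix)[0]\n    ranked = sorted(range(len(passages)), key=lambda i: scores[i], reverse=True)\n    return [passages[i] for i in ranked[:k]]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-QA",
"sec_num": "6.2"
},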
{
"text": "One view of the long answer selection task is that it is more closely related to natural language inference (Bowman et al., 2015; Williams et al., 2018 ) than short answer extraction. A valid long answer must contain all of the information required to infer the answer. Short answers do not need to contain this information-they need to be surrounded by it.",
"cite_spans": [
{
"start": 108,
"end": 129,
"text": "(Bowman et al., 2015;",
"ref_id": "BIBREF1"
},
{
"start": 130,
"end": 151,
"text": "Williams et al., 2018",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Custom Pipeline (DecAtt + DocReader)",
"sec_num": "6.3"
},
{
"text": "Motivated by this intuition, we implement a pipelined approach that uses a model drawn from the natural language interference literature to select long answers. Then short answers are selected from these using a model drawn from the short answer extraction literature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Custom Pipeline (DecAtt + DocReader)",
"sec_num": "6.3"
},
{
"text": "Long answer selection Let t (d, l) denote the sequence of tokens in d for the long answer candidate l. We then use the Decomposable Attention model (Parikh et al., 2016) to produce a score for each question, candidate pair t(d, l) ). To this we add a 10dimensional trainable embedding r l of the long answer candidate's position in the sequence of candidates; 13 an integer u l containing the number of the words shared by q and t (d, l) ; and a scalar v l containing the number of words shared by q and t (d, l) weighted by inverse document frequency. The long answer score z l is then given as a linear function of the above features",
"cite_spans": [
{
"start": 28,
"end": 34,
"text": "(d, l)",
"ref_id": null
},
{
"start": 148,
"end": 169,
"text": "(Parikh et al., 2016)",
"ref_id": "BIBREF19"
},
{
"start": 431,
"end": 437,
"text": "(d, l)",
"ref_id": null
},
{
"start": 506,
"end": 512,
"text": "(d, l)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 223,
"end": 230,
"text": "t(d, l)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Custom Pipeline (DecAtt + DocReader)",
"sec_num": "6.3"
},
{
"text": "z l = w [x l , r l , u l , v l ] + b",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Custom Pipeline (DecAtt + DocReader)",
"sec_num": "6.3"
},
{
"text": "where w and b are the trainable weight vector and bias, respectively, Short answer selection Given a long answer, the Document Reader model (Chen et al., 2017; abbreviated DocReader) is used to extract short answers.",
"cite_spans": [
{
"start": 140,
"end": 159,
"text": "(Chen et al., 2017;",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Custom Pipeline (DecAtt + DocReader)",
"sec_num": "6.3"
},
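{
"text": "The long answer scoring head defined above is a single linear layer over the concatenated features. The sketch below is ours: DecAtt, the position-embedding table, and the overlap features are assumed to be computed elsewhere, and only the combination z_l = w \u00b7 [x_l, r_l, u_l, v_l] + b is reproduced.\n\nimport numpy as np\n\ndef long_answer_score(x_l, r_l, u_l, v_l, w, b):\n    # x_l: DecAtt output for (q, t(d, l)); r_l: 10-dimensional position embedding;\n    # u_l: word-overlap count; v_l: IDF-weighted overlap; w, b: trained weights.\n    features = np.concatenate([np.atleast_1d(x_l), r_l, [u_l], [v_l]])\n    return float(np.dot(w, features) + b)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Custom Pipeline (DecAtt + DocReader)",
"sec_num": "6.3"
},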
{
"text": "Training The long answer selection model is trained by minimizing the negative log-likelihood of the correct answer l (i) with a hyperparameter \u03b7 that down-weights examples with the NULL label:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Custom Pipeline (DecAtt + DocReader)",
"sec_num": "6.3"
},
{
"text": "\u2212 n i=1 log exp(z l (i) ) l exp(z l ) \u00d7(1\u2212\u03b7[[l (i) = NULL]])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Custom Pipeline (DecAtt + DocReader)",
"sec_num": "6.3"
},
{
"text": "We found that the inclusion of \u03b7 is useful in accounting for the asymmetry in labels-because a NULL label is less informative than an answer location. Varying \u03b7 also seems to provide a more stable method of setting a model's precision point than post hoc thresholding of prediction scores. An analogous strategy is used for the short answer model where examples with no entity answers are given a different weight.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Custom Pipeline (DecAtt + DocReader)",
"sec_num": "6.3"
},
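{
"text": "A sketch of this down-weighted objective for a single example (our illustration; a real implementation would operate on batched logits, and the value of \u03b7 below is illustrative rather than the value used in the paper). It assumes the NULL label is represented as one of the scored candidates.\n\nimport numpy as np\n\ndef long_answer_loss(z, gold_index, is_null, eta=0.5):\n    # z: scores z_l for all long answer candidates of one example;\n    # gold_index: index of the correct label l(i); is_null: True if l(i) = NULL.\n    z = np.asarray(z, dtype=float)\n    m = np.max(z)\n    log_softmax = z - (m + np.log(np.sum(np.exp(z - m))))\n    nll = -log_softmax[gold_index]\n    # Down-weight examples whose gold label is NULL, as in the objective above.\n    return (1.0 - eta) * nll if is_null else nll",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Custom Pipeline (DecAtt + DocReader)",
"sec_num": "6.3"
},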
{
"text": "13 Specifically, we have a unique learned 10-dimensional embedding for each position 1 . . . 19 in the sequence, and a 20th embedding used for all positions \u2265 20. Table 3 shows results for all baselines as well as a single annotator, and the super-annotator introduced in Section 5. It is clear that there is a great deal of headroom in both tasks. We find that Document-QA performs significantly worse than DecAtt+DocReader in long answer identification. This is likely because Document-QA was designed for the short answer task only.",
"cite_spans": [],
"ref_spans": [
{
"start": 163,
"end": 170,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Custom Pipeline (DecAtt + DocReader)",
"sec_num": "6.3"
},
{
"text": "To ground these results in the context of comparable tasks, we measure performance on the subset of NQ that has non-NULL labels for both long and short answers. Freed from the decision of whether or not to answer, DecAtt+DocReader obtains 68.0% F1 on the long answer task, and 40.4% F1 on the short answer task. We also examine performance of the short answer extraction systems in the setting where the long answer is given, and a short answer is known to exist. With this simplification, short answer F1 increases 57.7% for DocReader. Under this restriction NQ roughly approximates the SQuAD 1.1 task. From the gap to the super-annotator upper bound we know that this task is far from being solved in NQ. Finally, we break the long answer identification results down according to long answer type. From Table 3 we know that DecAtt+DocReader predicts long answers with 54.8% F1. If we only measure performance on examples that should have a paragraph long answer, this increases to 65.1%. For tables and table rows it is 66.4%. And for lists and list items it is 32.0%. All other examples have a NULL label. Clearly, the model is struggling to learn some aspect of list-formatted data from the 6% of the non NULL examples that have this type. Figure 5 that have long answers that are paragraphs (i.e., not tables or lists). We show the expert judgment (C/C d /W) for each non-null answer. ''Long answer stats'' a/25, b/25 have a = number of non-null long answers for this question, b = number of long answers the same as that shown in the figure. For example, for question A1, 13 out of 25 annotators give some non-null answer, and 4 out of 25 annotators give the same long answer After mashing . . .. ''Short answer stats'' has similar statistics for short answers.",
"cite_spans": [
{
"start": 703,
"end": 706,
"text": "NQ.",
"ref_id": null
}
],
"ref_spans": [
{
"start": 805,
"end": 812,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6.4"
},
{
"text": "We argue that progress on QA has been hindered by a lack of appropriate training and test data. To address this, we present the Natural Questions corpus. This is the first large publicly available data set to pair real user queries with high-quality annotations of answers in documents. We also present metrics to be used with NQ, for the purposes of evaluating the performance of question answering systems. We demonstrate a high upper bound on these metrics and show that existing methods do not approach this upper bound. We argue that for them to do so will require significant advances in NLU. Figure 5 shows example questions from the data set. Figure 6 shows example question/answer pairs from the data set, together with expert judgments and statistics from the 25-way annotations.",
"cite_spans": [],
"ref_spans": [
{
"start": 599,
"end": 607,
"text": "Figure 5",
"ref_id": "FIGREF5"
},
{
"start": 651,
"end": 659,
"text": "Figure 6",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "For example, for machine translation/speech recognition humans provide translations/transcriptions relatively easily.2 Available at: https://ai.google.com/research/ NaturalQuestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We pre-define the set of categorical noun phrases used in 4 and 5 by running Hearst patterns(Hearst, 1992) to find a broad set of hypernyms. Part of speech tags and entities are identified using Google's Cloud NLP API: https://cloud. google.com/natural-language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We note that both tables and lists may be used purely for the purposes of formatting text, or they may have their own complex semantics-as in the case of Wikipedia infoboxes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "More formally, there is some base distribution p b (q) from which queries q are drawn, and a deterministic function s(q) which returns the top-ranked Wikipedia page in the top 5 search results, or NULL if there is no Wikipedia page in the top 5 results. Define Q to be the set of queries such that s(q) = NULL, and b = q\u2208Q p b (q). Then p(q, d) = p b (q)/b if q \u2208 Q and d = NULL and d = s(q), otherwise p(q, d) = 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This is partly motivated through the results on 25-way annotations (see Section 4.4), where for \u03bc (i) \u2265 0.4 over 93% (114/122 annotations) are in the C or C d categories, whereas",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A BERT Baseline for the Natural Questions",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Alberti, Kenton Lee, and Michael Collins. 2019. A BERT Baseline for the Natural Ques- tions. arXiv preprint:1901.08634.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A large annotated corpus for learning natural language inference",
"authors": [
{
"first": "R",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Potts",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "632--642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632-642.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Reading Wikipedia to answer open-domain questions",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Fisch",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1870--1879",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870-1879.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Quac: Question answering in context",
"authors": [
{
"first": "Eunsol",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "He",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2174--2184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. Quac: Question answer- ing in context. In Proceedings of the 2018 Conference on Empirical Methods in Natu- ral Language Processing, pages 2174-2184, Brussels.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Simple and effective multi-paragraph reading comprehension",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "845--855",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Clark and Matt Gardner. 2018. Simple and effective multi-paragraph reading comprehension. In Proceedings of the 56th An- nual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 845-855, Melbourne.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A Probabilistic Theory of Pattern Recognition",
"authors": [
{
"first": "Luc",
"middle": [],
"last": "Devroye",
"suffix": ""
},
{
"first": "L\u00e1szl\u00f3",
"middle": [],
"last": "Gy\u00f6rfi",
"suffix": ""
},
{
"first": "G\u00e1bor",
"middle": [],
"last": "Lugosi",
"suffix": ""
}
],
"year": 1997,
"venue": "Applications of Mathematics",
"volume": "31",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luc Devroye, L\u00e1szl\u00f3 Gy\u00f6rfi, and G\u00e1bor Lugosi. 1997. A Probabilistic Theory of Pattern Rec- ognition, corrected 2nd edition, volume 31 of Applications of Mathematics. Springer.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Dureader: A Chinese machine reading comprehension dataset from real-world applications",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yajuan",
"middle": [],
"last": "Lyu",
"suffix": ""
},
{
"first": "Shiqi",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Xinyan",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yizhong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Qiaoqiao",
"middle": [],
"last": "She",
"suffix": ""
},
{
"first": "Xuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Tian",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Workshop on Machine Reading for Question Answering",
"volume": "",
"issue": "",
"pages": "37--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei He, Kai Liu, Jing Liu, Yajuan Lyu, Shiqi Zhao, Xinyan Xiao, Yuan Liu, Yizhong Wang, Hua Wu, Qiaoqiao She, Xuan Liu, Tian Wu, and Haifeng Wang. 2018. Dureader: A Chinese machine reading comprehension dataset from real-world applications. In Proceedings of the Workshop on Machine Reading for Question Answering, pages 37-46, Melbourne.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Automatic acquisition of hyponyms from large text corpora",
"authors": [
{
"first": "Marti",
"middle": [
"A"
],
"last": "Hearst",
"suffix": ""
}
],
"year": 1992,
"venue": "The 15th International Conference on Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marti A. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In COLING 1992 Volume 2: The 15th International Confer- ence on Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Teaching machines to read and comprehend",
"authors": [
{
"first": "Karl",
"middle": [],
"last": "Moritz Hermann",
"suffix": ""
},
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Ko\u010disk\u00fd",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Lasse",
"middle": [],
"last": "Espeholt",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Kay",
"suffix": ""
},
{
"first": "Mustafa",
"middle": [],
"last": "Suleyman",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 28th International Conference on Neural Information Processing Systems, NIPS'15",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Moritz Hermann, Tom\u00e1\u0161 Ko\u010disk\u00fd, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Proceed- ings of the 28th International Conference on Neural Information Processing Systems, NIPS'15. Cambridge, MA.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The goldilocks principle: Reading children's books with explicit memory representations",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The goldilocks principle: Read- ing children's books with explicit memory rep- resentations. In Proceedings of the International Conference on Learning Representations.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Adversarial examples for evaluating reading comprehension systems",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2021--2031",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2021-2031, Copenhagen.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension",
"authors": [
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Eunsol",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Weld",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1601--1611",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for read- ing comprehension. In Proceedings of the 55th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 1601-1611.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The narrative qa reading comprehension challenge. Transactions of the Association for Computational Linguistics",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Kocisky",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Schwarz",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Karl",
"middle": [
"Moritz"
],
"last": "Hermann",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Melis",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "6317--328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Kocisky, Jonathan Schwarz, Phil Blun- som, Chris Dyer, Karl Moritz Hermann, Gabor Melis, and Edward Grefenstette. 2018. The nar- rative qa reading comprehension challenge. Transactions of the Association for Compu- tational Linguistics, 6317-328.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Race: Largescale reading comprehension dataset from examinations",
"authors": [
{
"first": "Guokun",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Qizhe",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Hanxiao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "785--794",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. Race: Large- scale reading comprehension dataset from examinations. In Proceedings of the 2017 Con- ference on Empirical Methods in Natu- ral Language Processing, pages 785-794. Copenhagen.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Can A suit of armor conduct electricity? A new dataset for open book question answering",
"authors": [
{
"first": "Todor",
"middle": [],
"last": "Mihaylov",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Tushar",
"middle": [],
"last": "Khot",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Sabharwal",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2381--2391",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can A suit of armor conduct electricity? A new dataset for open book question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2381-2391, Brussels.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "MS MARCO: A human generated machine reading comprehension dataset",
"authors": [
{
"first": "Tri",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Mir",
"middle": [],
"last": "Rosenberg",
"suffix": ""
},
{
"first": "Xia",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Saurabh",
"middle": [],
"last": "Tiwary",
"suffix": ""
},
{
"first": "Rangan",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Workshop on Cognitive Computation: Integrating Neural and Symbolic Approaches",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human gen- erated machine reading comprehension dataset. In Proceedings of the Workshop on Cognitive Computation: Integrating Neural and Symbolic Approaches.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Who did what: A large-scale person-centered cloze dataset",
"authors": [
{
"first": "Takeshi",
"middle": [],
"last": "Onishi",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mcallester",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2230--2235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. 2016. Who did what: A large-scale person-centered cloze dataset. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2230-2235. Austin, TX.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The LAMBADA dataset: Word prediction requiring a broad discourse context",
"authors": [
{
"first": "Denis",
"middle": [],
"last": "Paperno",
"suffix": ""
},
{
"first": "Germ\u00e1n",
"middle": [],
"last": "Kruszewski",
"suffix": ""
},
{
"first": "Angeliki",
"middle": [],
"last": "Lazaridou",
"suffix": ""
},
{
"first": "Ngoc",
"middle": [
"Quan"
],
"last": "Pham",
"suffix": ""
},
{
"first": "Raffaella",
"middle": [],
"last": "Bernardi",
"suffix": ""
},
{
"first": "Sandro",
"middle": [],
"last": "Pezzelle",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Gemma",
"middle": [],
"last": "Boleda",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Fernandez",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1525--1534",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denis Paperno, Germ\u00e1n Kruszewski, Angeliki Lazaridou, Ngoc Quan Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernandez. 2016. The LAMBADA dataset: Word prediction requiring a broad discourse context. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1525-1534, Berlin.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "BLUE: A method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLUE: A method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A decomposable attention model for natural language inference",
"authors": [
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Oscar",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2249--2255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ankur Parikh, Oscar T\u00e4ckstr\u00f6m, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceedings of the 2016 Conference on Em- pirical Methods in Natural Language Pro- cessing, pages 2249-2255, Austin, TX.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Know what you don't know: Unanswerable questions for squad",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "784--789",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Un- answerable questions for squad. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784-789.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "SQuAD: 100,000+ Questions for Machine Comprehension of Text",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Lopyrev",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2383--2392",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ Questions for Machine Comprehension of Text. In Proceedings of the 2016 Conference on Em- pirical Methods in Natural Language Pro- cessing, pages 2383-2392, Austin, TX.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Coqa: A conversational question answering challenge",
"authors": [
{
"first": "Siva",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1808.07042"
]
},
"num": null,
"urls": [],
"raw_text": "Siva Reddy, Danqi Chen, and Christopher D. Manning. 2018. Coqa: A conversational question answering challenge. arXiv preprint arXiv:1808.07042.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "MCTest: A challenge dataset for the open-domain machine comprehension of text",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Richardson",
"suffix": ""
},
{
"first": "J",
"middle": [
"C"
],
"last": "Christopher",
"suffix": ""
},
{
"first": "Erin",
"middle": [],
"last": "Burges",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Renshaw",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "193--203",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Richardson, Christopher J. C. Burges, and Erin Renshaw. 2013. MCTest: A chal- lenge dataset for the open-domain machine comprehension of text. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 193-203, Seattle, WA.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A broad-coverage challenge corpus for sentence understanding through inference",
"authors": [
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1112--1122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, LA.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Wikiqa: A challenge dataset for opendomain question answering",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yih",
"middle": [],
"last": "Wen-Tau",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Meek",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2013--2018",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. Wikiqa: A challenge dataset for open- domain question answering. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2013-2018, Lisbon.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Hotpotqa: A dataset for diverse, explainable multi-hop question answering",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Saizheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2369--2380",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Lan- guage Processing, pages 2369-2380, Brussels.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Example annotations from the corpus.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Annotation decision process with path proportions from NQ training data. Percentages are proportions of entire data set. A total of 49% of all examples have a long answer.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Examples with consensus expert judgments, and justification for these judgments. See Figure for more examples. Correct (but debatable) (C d ): A reasonable person could be satisfied by the answer; however, a reasonable person could raise a reasonable doubt about the answer.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "(i) is7 More formally, let[[e]] for any statement e be 1 if e is true, 0 if e is false. We define\u00ca(C) = 1|S| |S| i=1 [[c (i) = C]].The values for\u00ca(C d ) and\u00ca(W) are calculated in a similar manner.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF4": {
"text": "Values of\u00ca[(\u03b8 1 , \u03b8 2 ]] and\u00ca[(\u03b8 1 , \u03b8 2 ], C/C d / W] for different intervals (\u03b8 1 , \u03b8 2 ].The height of each bar is equal to\u00ca[(\u03b8 1 , \u03b8 2 ]], the divisions within each bar show\u00ca[",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF5": {
"text": "Examples from the questions with 25-way annotations.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF6": {
"text": "Answer annotations for four examples from",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"content": "<table/>",
"type_str": "table",
"text": "Matches for heuristics in Section 3.1.no. Both the long and short answer can be NULL if no viable candidates exist on the Wikipedia page.",
"html": null,
"num": null
},
"TABREF1": {
"content": "<table/>",
"type_str": "table",
"text": "Precision results (\u03a8) and empirical estimates of the proportions of C, C d , and W items.",
"html": null,
"num": null
}
}
}
}