{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:13:34.356014Z"
},
"title": "Simple and Efficient ways to Improve REALM",
"authors": [
{
"first": "Vidhisha",
"middle": [],
"last": "Balachandran",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Dense retrieval has been shown to be effective for Open Domain Question Answering, surpassing sparse retrieval methods like BM25. One such model, REALM, (Guu et al., 2020) is an end-to-end dense retrieval system that uses MLM based pretraining for improved downstream QA performance. However, the current REALM setup uses limited resources and is not comparable in scale to more recent systems, contributing to its lower performance. Additionally, it relies on noisy supervision for retrieval during fine-tuning. We propose REALM++, where we improve upon the training and inference setups and introduce better supervision signal for improving performance, without any architectural changes. REALM++ achieves \u223c5.5% absolute accuracy gains over the baseline while being faster to train. It also matches the performance of large models which have 3x more parameters demonstrating the efficiency of our setup.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Dense retrieval has been shown to be effective for Open Domain Question Answering, surpassing sparse retrieval methods like BM25. One such model, REALM, (Guu et al., 2020) is an end-to-end dense retrieval system that uses MLM based pretraining for improved downstream QA performance. However, the current REALM setup uses limited resources and is not comparable in scale to more recent systems, contributing to its lower performance. Additionally, it relies on noisy supervision for retrieval during fine-tuning. We propose REALM++, where we improve upon the training and inference setups and introduce better supervision signal for improving performance, without any architectural changes. REALM++ achieves \u223c5.5% absolute accuracy gains over the baseline while being faster to train. It also matches the performance of large models which have 3x more parameters demonstrating the efficiency of our setup.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Open-domain question answering (ODQA) (Voorhees et al., 1999 ) is a task that aims to answer questions directly using a large set of documents without being given a specific document. These systems generally employ a \"retriever-reader\" based approach where a document retriever first retrieves a subset of evidence documents and a document reader processes the documents to identify the correct answer (Chen et al., 2017) . Recently, dense retrieval methods (Seo et al., 2018 (Seo et al., , 2019 Das et al., 2019; Karpukhin et al., 2020) have improved over sparse retrievers like BM25 (Robertson and Zaragoza, 2009) and made training these systems end-to-end by leveraging approximate MIPS search (Shrivastava and Li, 2014) . REALM is an end-to-end model, pre-trained on masked language modeling, that can be finetuned for QA tasks without relying on external sources like BM25 for supervision like DPR (Karpukhin et al., 2020) . Hence, it is simple and easier to train but is not competitive to pipeline alternatives like DPR. When finetuning, it uses a single GPU making it not directly comparable in scale to DPR which uses more resources for better optimization. Due to limited resources it is inefficient, taking more than a day to train. Additionally, it uses distant supervision for the retriever in the form of passages containing the target answer leading to ambiguous supervision for training.",
"cite_spans": [
{
"start": 38,
"end": 60,
"text": "(Voorhees et al., 1999",
"ref_id": "BIBREF19"
},
{
"start": 402,
"end": 421,
"text": "(Chen et al., 2017)",
"ref_id": "BIBREF3"
},
{
"start": 458,
"end": 475,
"text": "(Seo et al., 2018",
"ref_id": "BIBREF15"
},
{
"start": 476,
"end": 495,
"text": "(Seo et al., , 2019",
"ref_id": "BIBREF16"
},
{
"start": 496,
"end": 513,
"text": "Das et al., 2019;",
"ref_id": "BIBREF4"
},
{
"start": 514,
"end": 537,
"text": "Karpukhin et al., 2020)",
"ref_id": "BIBREF7"
},
{
"start": 585,
"end": 615,
"text": "(Robertson and Zaragoza, 2009)",
"ref_id": "BIBREF14"
},
{
"start": 697,
"end": 723,
"text": "(Shrivastava and Li, 2014)",
"ref_id": "BIBREF17"
},
{
"start": 903,
"end": 927,
"text": "(Karpukhin et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present a study of REALM aimed at understanding and improving its limitations. We find that REALM is significantly underoptimized and improve the training by scaling the system through (i) using exact MIPS search, (ii) introducing larger batch training, and (iii) scaling the reader to process more documents. We further address the noisy distant retrieval supervision by augmenting the training sets with human-annotated evidence passages. Since such human annotations are not available for every dataset and is expensive to obtain, we show that models trained with strong supervision transfer well to other datasets where such annotations are not available, indicating the benefits beyond a single annotated dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Incorporating our best findings, we show that an improved version of REALM, which we call REALM++ achieves \u223c5.5% absolute accuracy improvements over the baseline on multiple ODQA benchmarks while processing 4x more examples/sec and outperforms all prior methods of similar parameter regime. Further, it shows comparable performance to models with 3x more parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our results demonstrate that scale and supervision play an important role in ODQA systems highlighting the need for careful comparisons across systems in ODQA and for taking scale and efficiency into account in addition to performance when reporting results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Dev EM Dev R@10 REALM (Guu et al., 2020) Open Domain QA is typically modeled as a Machine Reading Comprehension (MRC) model which answers a question using a large corpus of text documents/passages by employing a \"retrieverreader\" approach. REALM specifically uses dense retrieval to identify c (c = 5000) relevant passages and a BERT based reader to process a smaller set of top-k (k = 5) passages and find answer spans. When finetuning 1 , the retriever is trained using distant supervision with passages containing the target answer as positive and the reader is supervised using human annotated short answer spans . We follow the same design and optimization setup of finetuning REALM on QA. We explore the limits of various experiment choices by introducing simple changes to the training and inference setup. Table 1 compares results from our replicated experiments of REALM to prior published results and shows that our experiments produce similar results on the Natural Questions (NQ) dataset. Detailed analysis across other metrics is in in A.2.",
"cite_spans": [
{
"start": 22,
"end": 40,
"text": "(Guu et al., 2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 814,
"end": 821,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": null
},
{
"text": "REALM performs an approximate MIPS for retrieving the top c relevant documents based on a retrieval score, S retr (p i , q) = h q h p i where h q and h p i are question and passage representations respectively. The system is finetuned in practice on a single machine with a 12GB GPU with batch size 1. While this is modest use of resources, we show that this results in suboptimal training. We begin by scaling the REALM system during training. We perform exact MIPS search by leveraging the efficiency of large matrix multiplications of TPUs (Wang et al., 2019) passages having the highest scores. We further increase the training batch size to 16 by leveraging 8 TPUv3 cores on Google Cloud for distributed training. Finally, we increase the number of documents passed to the reader to k = 10 during training. Scaling training setup improves QA results: From Table 1 we observe that simple experiment choices like larger batch training and exact MIPS search significantly improve the Exact-Match Accuracy by 3.4% without introducing any model design changes. This shows that the original REALM setup was under-optimized and has much better performance than previously reported.",
"cite_spans": [
{
"start": 543,
"end": 562,
"text": "(Wang et al., 2019)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 861,
"end": 868,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Scaling the Training Setup",
"sec_num": "2.1"
},
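{
"text": "To make the exact MIPS retrieval step above concrete, the following is a minimal NumPy sketch: every passage embedding is scored against a batch of question embeddings with a full matrix multiplication and the top-c candidates are kept. The array sizes and names are illustrative toys, not taken from the REALM codebase, where the real index holds \u223c13M passage vectors scored on TPUs.\n\nimport numpy as np\n\n# Toy stand-ins: the real index holds ~13M passage embeddings; sizes here are illustrative.\nrng = np.random.default_rng(0)\npassage_emb = rng.standard_normal((100_000, 128)).astype(np.float32)  # h_p for every passage\nquery_emb = rng.standard_normal((16, 128)).astype(np.float32)         # h_q for a batch of questions\nc = 5000\n\ndef exact_mips_topc(queries, passages, c, shard=25_000):\n    # Exact inner-product search: score every passage, then keep the top-c per query.\n    scores = np.concatenate(\n        [queries @ passages[i:i + shard].T for i in range(0, len(passages), shard)],\n        axis=1)                                           # (batch, num_passages)\n    top = np.argpartition(-scores, c - 1, axis=1)[:, :c]  # unordered top-c passage ids\n    order = np.argsort(-np.take_along_axis(scores, top, axis=1), axis=1)\n    return np.take_along_axis(top, order, axis=1)         # ids sorted by retrieval score\n\ntop_c_ids = exact_mips_topc(query_emb, passage_emb, c)    # shape (16, 5000)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scaling the Training Setup",
"sec_num": "2.1"
},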
{
"text": "To finetune the retriever, REALM relies on distant supervision in the form of passages containing the target answer. However, such a signal can lead to noisy and unrelated documents to be given a positive signal (Lin et al., 2018) as examples in Table 5 of Appendix A show. We address this by introducing supervision from human annotations similar to Yang et al. (2015) ; Nguyen et al. (2016) , to train the retriever by updating the retrieval scores by optimizing their marginal log-likelihood.",
"cite_spans": [
{
"start": 212,
"end": 230,
"text": "(Lin et al., 2018)",
"ref_id": "BIBREF12"
},
{
"start": 351,
"end": 369,
"text": "Yang et al. (2015)",
"ref_id": "BIBREF21"
},
{
"start": 372,
"end": 392,
"text": "Nguyen et al. (2016)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 246,
"end": 253,
"text": "Table 5",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Introducing Strong Passage Supervision",
"sec_num": "2.2"
},
{
"text": "P (p i |Q) = exp(S retr (p i , Q)) p j \u2208{p i } 1:c exp(S retr (p j , Q)) L(Q, LA) = \u2212 log p j \u2208{p i } 1:c ,p i \u2208LA P (p i |Q)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introducing Strong Passage Supervision",
"sec_num": "2.2"
},
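{
"text": "A minimal PyTorch sketch of the marginal log-likelihood objective defined above, assuming the retrieval scores over the c candidates and a 0/1 mask marking which candidates matched an annotated evidence passage are already computed; the function and variable names are illustrative and not from the REALM implementation.\n\nimport torch\nimport torch.nn.functional as F\n\ndef passage_supervision_loss(retr_scores, positive_mask):\n    # retr_scores: (batch, c) scores S_retr(p_i, Q) for the retrieved candidates\n    # positive_mask: (batch, c) 1.0 where a candidate matches an annotated evidence passage (LA)\n    log_p = F.log_softmax(retr_scores, dim=-1)                     # log P(p_i | Q)\n    masked = log_p.masked_fill(positive_mask == 0, float('-inf'))  # keep only annotated passages\n    return -torch.logsumexp(masked, dim=-1).mean()                 # -log of their total probability mass\n\nscores = torch.randn(2, 5000)                  # toy scores for 2 questions over c = 5000 candidates\nmask = torch.zeros(2, 5000)\nmask[:, :3] = 1.0                              # pretend the first 3 candidates overlap the long answer\nloss = passage_supervision_loss(scores, mask)  # added to the distant-supervision and span losses",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introducing Strong Passage Supervision",
"sec_num": "2.2"
},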
{
"text": "where LA is a list of human annotated evidence passages (e.g. Long Answers in Natural Questions), L(Q, LA) denotes the passage supervision loss that is augmented to the existing retriever distant supervision and span prediction loss, and p i \u2208 LA indicates whether the passage was in the annotated passages. Here, we assume that the passages in corpus and the annotated evidence passages in the dataset are from the same source (e.g. Wikipedia). Since corpus passages and annotated passages in the dataset can differ (e.g. due to different Wikipedia versions), we consider any passage in the retrieved set that has 50% 2 word overlap 3 with the target passages as a positive match. Supervision through evidence passage annotations improves performance: From Table 1 we see an improvement of 0.5% over the scaled REALM model leading to 3.8% improvement over the prior baseline REALM model, showing the benefit of providing better supervision. In \u00a7A.3 we present qualitative examples where the improved passage supervision leads to answer spans being extracted from the most relevant document to the question. While noisy distant supervision has been shown as effective for dense retrieval, our work experimentally shows that it can be limiting and simply introducing better supervision through gold evidence passages is beneficial. Table 4 in A.2 shows that though the retrieved 5000 documents has high answer recall (\u223c 95%), the recall significantly drops (\u223c 77%) in the top 10 documents processed by the reader. Readers are computationally intensive and memory limits makes scaling them to process more documents difficult. We explore an approach to rerank the retrieved documents to improve recall@10 and end accuracy.",
"cite_spans": [],
"ref_spans": [
{
"start": 758,
"end": 765,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1331,
"end": 1338,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Introducing Strong Passage Supervision",
"sec_num": "2.2"
},
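{
"text": "The 50% word-overlap matching described above can be sketched as follows; the 0.5 threshold follows the paper, while whitespace tokenization and measuring overlap over the retrieved passage's word types are simplifying assumptions.\n\ndef is_positive_match(retrieved_passage, annotated_passages, threshold=0.5):\n    # Treat a retrieved passage as a positive if enough of its words also appear in\n    # any human-annotated evidence passage (e.g. an NQ long answer).\n    retrieved_words = set(retrieved_passage.lower().split())\n    if not retrieved_words:\n        return False\n    for gold in annotated_passages:\n        gold_words = set(gold.lower().split())\n        overlap = len(retrieved_words & gold_words) / len(retrieved_words)\n        if overlap >= threshold:\n            return True\n    return False\n\n# The retrieved passage shares most of its words with the annotated long answer -> positive.\nprint(is_positive_match('the battle of issus occurred in southern anatolia',\n                        ['the battle of issus occurred in 333 bc in southern anatolia']))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introducing Strong Passage Supervision",
"sec_num": "2.2"
},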
{
"text": "Our Document Reranker has L layers of crossdocument and query-document interactions to learn rich document representations. For each layer, the output passage representations from the previous layer, {u l\u22121 } 1:c are first passed through a Transformer block (T) with multi-headed self-attention (Vaswani et al., 2017) which allows for interaction between the passage representations and produces cross-document aware representations",
"cite_spans": [
{
"start": 295,
"end": 317,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reranking",
"sec_num": "2.3"
},
{
"text": "u l i . u l i = T(Q=u l\u22121 i , K={u l\u22121 } 1:c , V ={u l\u22121 } 1:c )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reranking",
"sec_num": "2.3"
},
{
"text": "where Q, K, V represent the query, key and value respectively in the transformer attention module.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reranking",
"sec_num": "2.3"
},
{
"text": "To model interaction between passages and query, the attended passage representation u l and query representations from the previous layer v l\u22121 are passed through a multi-head cross-attention Transformer to produce query aware representations",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reranking",
"sec_num": "2.3"
},
{
"text": "v l i . v l q =T(Q=v l\u22121 q , K={u l } 1:c , V ={u l } 1:c )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reranking",
"sec_num": "2.3"
},
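{
"text": "A minimal PyTorch sketch of one reranker layer as described above: the c passage vectors first attend to each other (cross-document self-attention), then the query attends over the updated passage vectors (query-document cross-attention). Using nn.MultiheadAttention as the Transformer block T, along with the dimensions and the scoring at the end, is an illustrative assumption rather than the paper's exact configuration.\n\nimport torch\nimport torch.nn as nn\n\nclass RerankerLayer(nn.Module):\n    # One layer of the Document Reranker: cross-document self-attention over passages,\n    # followed by query-to-passage cross-attention.\n    def __init__(self, dim=128, heads=4):\n        super().__init__()\n        self.doc_attn = nn.MultiheadAttention(dim, heads, batch_first=True)\n        self.query_attn = nn.MultiheadAttention(dim, heads, batch_first=True)\n\n    def forward(self, u, v):\n        # u: (batch, c, dim) passage reps u^{l-1};  v: (batch, 1, dim) query rep v^{l-1}\n        u_new, _ = self.doc_attn(u, u, u)            # u^l = T(Q=u^{l-1}, K=V={u^{l-1}}_{1:c})\n        v_new, _ = self.query_attn(v, u_new, u_new)  # v^l = T(Q=v^{l-1}, K=V={u^l}_{1:c})\n        return u_new, v_new\n\nlayer = RerankerLayer()\nu0 = torch.randn(2, 100, 128)      # retriever passage embeddings for 100 candidates\nv0 = torch.randn(2, 1, 128)        # retriever query embedding\nu1, v1 = layer(u0, v0)\nrerank_scores = (u1 * v1).sum(-1)  # one score per passage, used to pick the top-k for the reader",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reranking",
"sec_num": "2.3"
},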
{
"text": "For the first layer we consider the dense retriever's query and document representations as the input u 0 and v 0 to the reranker. The rich document and query representations from the final layer ({u L } 1:c ,v L q ) are used to compute the retriever score, S retr (p i , q) to find the top-k documents for the reader. Document Reranking does not significant gains when retriever is jointly trained but is highly effective when retriever is fixed: In Table 1 We observe that the accuracy of the system drops by 0.5% and the recall@10 drops by 0.2% when augmenting the reranker. We further study the role of the reranker in a fixed retriever setting where the top 5000 documents are retrieved once and kept constant during training. While, such a setting is a more efficient since documents are not retrieved at every training step, the retriever's zeroshot performance can be quite low, potentially hurting end accuracy. From Table 2 , we see that the scaled REALM model with a fixed-retriever has very low recall@Top-10 and EM. Here, augmenting the model with the Document Reranker significantly improves recall and EM performance, where recall@Top-10 improves by 8.3% and EM by 2.7%. Further introducing passage supervision during training improves performance by increasing the end accuracy by \u223c 1.3% making the fixed retriever setting very competitive in performance to a jointly trained retriever-reader setting.",
"cite_spans": [],
"ref_spans": [
{
"start": 451,
"end": 458,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 926,
"end": 933,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Reranking",
"sec_num": "2.3"
},
{
"text": "Due to memory constraints the reader cannot be scaled to process more documents during training without architectural changes. Such constraints do not apply during inference, since optimization based weights and parameters are not saved, the memory usage of the model reduces, allowing for the reader to process more documents. We experiment with scaling the reader to process more documents only during inference. Scaling the reader during inference significantly boosts performance: In Table 1 we see that the reader processing k = 100 documents significantly improves accuracy, achieving 44.8% on NQ which surpasses the baseline REALM by 4.4%. This shows that such systems can leverage a small number of documents (k = 10) for faster training and gain the benefits of scaling the reader (k = 100) at inference. Further from Figure 1 (Karpukhin et al., 2020) . \u2020 Though ReConsider large has higher accuracy, their approach of using answer span focused reranking model is orthogonal can be directly applied to our output. see that the gains increase with increasing number documents with a slight saturation beyond 120 documents. This is potentially due to increased answer recall in documents 4 .",
"cite_spans": [
{
"start": 836,
"end": 860,
"text": "(Karpukhin et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 488,
"end": 495,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 827,
"end": 835,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Scaling the Reader at Inference",
"sec_num": "2.4"
},
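{
"text": "An illustrative sketch of the train-versus-inference asymmetry discussed above: the same reader can afford to score far more passages at inference because no gradients or optimizer state are kept. The score_spans stub stands in for REALM's BERT reader; every name and size here is a hypothetical placeholder, not the actual implementation.\n\nimport torch\n\ndef score_spans(passage_batch):\n    # Placeholder for the BERT reader: returns one best-span score per passage.\n    return torch.randn(passage_batch.shape[0])\n\ndef read(encoded_passages, k, train=False):\n    # During training only the top-k (k = 10) passages are read so activations fit in memory;\n    # at inference k can be raised to 100+ since no gradients or optimizer state are stored.\n    top_k = encoded_passages[:k]\n    ctx = torch.enable_grad() if train else torch.no_grad()\n    with ctx:\n        scores = torch.cat([score_spans(chunk) for chunk in top_k.split(10)])\n    return int(scores.argmax())  # index of the passage holding the best answer span\n\ncandidates = torch.randn(5000, 288)              # toy encodings of the retrieved candidates\nbest_at_train = read(candidates, k=10, train=True)\nbest_at_infer = read(candidates, k=100)          # same model, larger k only at inference",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scaling the Reader at Inference",
"sec_num": "2.4"
},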
{
"text": "Based on the findings in \u00a72, we incorporate the best working components: (i) scaling at training \u00a72.1 (ii) better passage supervision \u00a72.2 and (iii) scaling reader during inference \u00a72.4 to establish an improved REALM model, which we call REALM++. We study the effect of REALM++ on three datasets NQ, Web Questions (WQ) (Berant et al., 2013) , and Curated Trec (CT) (Baudi\u0161 and \u0160ediv\u1ef3, 2015) . As WQ and CT do not have evidence passage annotations, we use them to study the transfer capabilities of the passage supervised NQ model. REALM++ outperforms models of similar size and is comparable to large models: Table 3 shows that REALM++ outperforms prior methods of similar size (models based on BERT base ) with no modifications to the model design. When transferred to WQ and CT which do not have human annotation for evidence passages, REALM++ shows an improvement of 3.8% on WQ and 4.3% on CT over base REALM showing benefit beyond a single dataset. REALM++ produces state-of-art results on extractive ODQA among models of similar size in all three datasets using a single endto-end model. Additionally REALM++, which uses BERT base (\u223c 110M params), performs comparable to large models based on BERT large and BART large (Lewis et al., 2020a ) (\u223c 340M params) with 3x lesser params. Discussion of speed and memory usage: By using 8 TPUv3 cores and increased batch size for training our REALM++ model, we can process 4x more examples/sec as compared to REALM and reduce training time from 2 days to 12hrs. REALM++ maintains the same number of parameters as the base REALM model and the entire model fits within 12GB memory which is the equivalent of an Nvidia Titan X. This demonstrates that our REALM++ model is efficient and can improve training time by leveraging distributed training.",
"cite_spans": [
{
"start": 319,
"end": 340,
"text": "(Berant et al., 2013)",
"ref_id": "BIBREF2"
},
{
"start": 365,
"end": 390,
"text": "(Baudi\u0161 and \u0160ediv\u1ef3, 2015)",
"ref_id": "BIBREF1"
},
{
"start": 1224,
"end": 1244,
"text": "(Lewis et al., 2020a",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 609,
"end": 616,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "REALM ++",
"sec_num": "3"
},
{
"text": "In this work, we present a study of a denseretrieval QA system, REALM, and identify key limitations in its experimental setup. We find that REALM is significantly undertrained and we improve REALM by introducing simple changes to its training, supervision, and inference setup. We propose REALM++ which incorporates our best findings and show that it can achieve significant improvement over prior methods and perform comparably with models with 3x more parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "A Example Appendix",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "For our study, we use three open-domain QA datasets following Guu et al. (2020) : Natural Questions (NQ) contains real user queries from Google Search. We consider questions with short answers (<=5 tokens) and the long answers for passage supervision.",
"cite_spans": [
{
"start": 62,
"end": 79,
"text": "Guu et al. (2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Datasets",
"sec_num": null
},
{
"text": "WebQuestions (WQ) (Berant et al., 2013 ) is a collection of questions collected from Google Suggest API, with Freebase entity answers whose string forms are the target short answers. CuratedTREC (CT) (Baudi\u0161 and \u0160ediv\u1ef3, 2015) contains curated questions from TREC QA track with real user questions and answer as regular expression matching all acceptable answers.",
"cite_spans": [
{
"start": 18,
"end": 38,
"text": "(Berant et al., 2013",
"ref_id": "BIBREF2"
},
{
"start": 200,
"end": 225,
"text": "(Baudi\u0161 and \u0160ediv\u1ef3, 2015)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Datasets",
"sec_num": null
},
{
"text": "Experiments fairly reproduce results: Table 4 reports the results from our experiments and compares them to published results from REALM (Guu et al., 2020) . We find that our experiments produce similar results on NQ and WQ with slightly lower results on CT on the test set. We believe that this could be due to varying checkpoints due to early stopping. For fair evaluation, we use results from our experiments as a comparison for the remainder of the study.",
"cite_spans": [
{
"start": 137,
"end": 155,
"text": "(Guu et al., 2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 38,
"end": 45,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "A.2 REALM baseline analysis",
"sec_num": null
},
{
"text": "Answer recall drops significantly with reducing documents: We additionally present a breakdown of REALM's retriever and reader performances on the development set across the three datasets. While REALM retrieves c = 5000 documents for distantly supervising the retriever, only the top k = 5 documents are processed by the reader for finding the right answer. Comparing the recall of answers in the retrieved documents at different subsets of documents we observe very high (> 90%) recall@5000 for all three datasets but the recall@5 effectively drops to \u223c70%, showing that the document that contains the answer is not necessarily present in the top-5 highlighting limitations in the retriever. (Guu et al., 2020) . The bottom section compares Dev EM with Upper Bound performance of the Reader.",
"cite_spans": [
{
"start": 694,
"end": 712,
"text": "(Guu et al., 2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 REALM baseline analysis",
"sec_num": null
},
{
"text": "\u223c63% of the questions from NQ have the answer in the top retrieved documents, REALM is only able to get the exact span of the answer for \u223c36% of them showing the limitations of the reader in identifying the exact answer span in the document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 REALM baseline analysis",
"sec_num": null
},
{
"text": "In \u00a72.2, we introduced strong passage supervision from annotated evidence passages to enable the model to distinguish misleading passages that might contain the target answer. Table 6 shows examples of questions where using passage supervision helps retrieve correct passages for the QA task. For Questions 1 and 3, the baseline model incorrectly retrieves a wrong passage of a similar genre or topic as the question, while for Question 2 the baseline model retrieves a completely incorrect, irrelevant passage. The model trained with passage supervision identifies the right context for answering the question, which aligns with the human annotation for each question. The New Mexico whiptail lizard is a crossbreed of a western whiptail and the little striped whiptail. The lizard is a female-only species that reproduces asexually by producing an egg through parthenogenesis. Which president supported the creation of the Environmental Protection Agency(EPA)?",
"cite_spans": [],
"ref_spans": [
{
"start": 176,
"end": 183,
"text": "Table 6",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "A.3 Qualitative Analysis",
"sec_num": null
},
{
"text": "Some historians say that President Richard Nixon's southern strategy turned the southern United States into a republican stronghold, while others deem economic factors more important in the change.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.3 Qualitative Analysis",
"sec_num": null
},
{
"text": "The Environmental Protection Agency (EPA) is an agency of the federal government of the United States created for the purpose of protecting human health and the environment. President Richard Nixon proposed the establishment of EPA and it began operation on December 2, 1970, after Nixon signed an executive order. Guu et al. (2020) with the correct human annotated relevant passages showing the necessity for human annotation based supervision.",
"cite_spans": [
{
"start": 315,
"end": 332,
"text": "Guu et al. (2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.3 Qualitative Analysis",
"sec_num": null
},
{
"text": "We experimented with thresholds=(0.3, 0.5, 0.75) and used threshold with best performance based on validation set3 We also experimented with ngram overlap which was similar in performance but computationally expensive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For the rest of the experiments inTable 3, we use 100 documents for fair comparison to other methods",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Learning to retrieve reasoning paths over Wikipedia graph for question answering",
"authors": [
{
"first": "Akari",
"middle": [],
"last": "Asai",
"suffix": ""
},
{
"first": "Kazuma",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
}
],
"year": 2020,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akari Asai, Kazuma Hashimoto, Hannaneh Hajishirzi, Richard Socher, and Caiming Xiong. 2020. Learn- ing to retrieve reasoning paths over Wikipedia graph for question answering. In ICLR.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Modeling of the question answering task in the yodaqa system",
"authors": [
{
"first": "Petr",
"middle": [],
"last": "Baudi\u0161",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference of the Cross-Language Evaluation Forum for European Languages",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Petr Baudi\u0161 and Jan \u0160ediv\u1ef3. 2015. Modeling of the question answering task in the yodaqa system. In In- ternational Conference of the Cross-Language Eval- uation Forum for European Languages.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Semantic parsing on Freebase from question-answer pairs",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Chou",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Frostig",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2013,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In EMNLP.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Reading Wikipedia to answer opendomain questions",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Fisch",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
}
],
"year": 2017,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open- domain questions. In ACL.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Multi-step retriever-reader interaction for scalable open-domain question answering",
"authors": [
{
"first": "Rajarshi",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Dhuliawala",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zaheer",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mc-Callum",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rajarshi Das, S. Dhuliawala, M. Zaheer, and A. Mc- Callum. 2019. Multi-step retriever-reader interac- tion for scalable open-domain question answering. ICLR.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "REALM: Retrieval-augmented language model pre-training",
"authors": [
{
"first": "Kelvin",
"middle": [],
"last": "Guu",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Zora",
"middle": [],
"last": "Tung",
"suffix": ""
},
{
"first": "Panupong",
"middle": [],
"last": "Pasupat",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2020,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pa- supat, and Ming-Wei Chang. 2020. REALM: Retrieval-augmented language model pre-training. In ICML.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Reconsider: Re-ranking using spanfocused cross-attention for open domain question answering",
"authors": [
{
"first": "Srinivasan",
"middle": [],
"last": "Iyer",
"suffix": ""
},
{
"first": "Sewon",
"middle": [],
"last": "Min",
"suffix": ""
},
{
"first": "Yashar",
"middle": [],
"last": "Mehdad",
"suffix": ""
},
{
"first": "Wentau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.10757"
]
},
"num": null,
"urls": [],
"raw_text": "Srinivasan Iyer, Sewon Min, Yashar Mehdad, and Wen- tau Yih. 2020. Reconsider: Re-ranking using span- focused cross-attention for open domain question an- swering. arXiv preprint arXiv:2010.10757.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Dense passage retrieval for open-domain question answering",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Karpukhin",
"suffix": ""
},
{
"first": "Barlas",
"middle": [],
"last": "Oguz",
"suffix": ""
},
{
"first": "Sewon",
"middle": [],
"last": "Min",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Ledell",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2020,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In EMNLP.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Natural questions: a benchmark for question answering research. TACL",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Jennimaria",
"middle": [],
"last": "Palomaki",
"suffix": ""
},
{
"first": "Olivia",
"middle": [],
"last": "Redfield",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "Danielle",
"middle": [],
"last": "Epstein",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Kelcey",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Red- field, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natu- ral questions: a benchmark for question answering research. TACL.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Latent retrieval for weakly supervised open domain question answering",
"authors": [
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In ACL.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal ; Abdelrahman Mohamed",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7871--7880",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.703"
]
},
"num": null,
"urls": [],
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020a. BART: Denoising sequence-to-sequence pre- training for natural language generation, translation, and comprehension. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Sebastian Riedel, and Douwe Kiela. 2020b. Retrieval-augmented generation for knowledge-intensive nlp tasks",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Ethan",
"middle": [],
"last": "Perez",
"suffix": ""
},
{
"first": "Aleksandara",
"middle": [],
"last": "Piktus",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Petroni",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Karpukhin",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Heinrich",
"middle": [],
"last": "K\u00fcttler",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Wen",
"middle": [],
"last": "Tau Yih",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich K\u00fcttler, Mike Lewis, Wen tau Yih, Tim Rockt\u00e4schel, Sebastian Riedel, and Douwe Kiela. 2020b. Retrieval-augmented generation for knowledge-intensive nlp tasks. In neurips.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Denoising distantly supervised open-domain question answering",
"authors": [
{
"first": "Yankai",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Haozhe",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1736--1745",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yankai Lin, Haozhe Ji, Zhiyuan Liu, and Maosong Sun. 2018. Denoising distantly supervised open-domain question answering. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1736- 1745.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Ms marco: A human-generated machine reading comprehension dataset",
"authors": [
{
"first": "Tri",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Mir",
"middle": [],
"last": "Rosenberg",
"suffix": ""
},
{
"first": "Xia",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Saurabh",
"middle": [],
"last": "Tiwary",
"suffix": ""
},
{
"first": "Rangan",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. Ms marco: A human-generated machine read- ing comprehension dataset.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The probabilistic relevance framework: Bm25 and beyond",
"authors": [
{
"first": "S",
"middle": [],
"last": "Robertson",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Zaragoza",
"suffix": ""
}
],
"year": 2009,
"venue": "Found. Trends Inf. Retr",
"volume": "3",
"issue": "",
"pages": "333--389",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Robertson and H. Zaragoza. 2009. The probabilis- tic relevance framework: Bm25 and beyond. Found. Trends Inf. Retr., 3:333-389.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Phraseindexed question answering: A new challenge for scalable document comprehension",
"authors": [
{
"first": "Minjoon",
"middle": [],
"last": "Seo",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Ankur",
"middle": [
"P"
],
"last": "Parikh",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2018,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minjoon Seo, T. Kwiatkowski, Ankur P. Parikh, Ali Farhadi, and Hannaneh Hajishirzi. 2018. Phrase- indexed question answering: A new challenge for scalable document comprehension. In EMNLP.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Real-time open-domain question answering with dense-sparse phrase index",
"authors": [
{
"first": "Minjoon",
"middle": [],
"last": "Seo",
"suffix": ""
},
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minjoon Seo, Jinhyuk Lee, Tom Kwiatkowski, Ankur Parikh, Ali Farhadi, and Hannaneh Hajishirzi. 2019. Real-time open-domain question answering with dense-sparse phrase index. In ACL.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Asymmetric lsh (alsh) for sublinear time maximum inner product search (mips)",
"authors": [
{
"first": "Anshumali",
"middle": [],
"last": "Shrivastava",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2014,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anshumali Shrivastava and P. Li. 2014. Asymmetric lsh (alsh) for sublinear time maximum inner product search (mips). In NIPS.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The TREC-8 question answering track report",
"authors": [
{
"first": "M",
"middle": [],
"last": "Ellen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Voorhees",
"suffix": ""
}
],
"year": 1999,
"venue": "TREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellen M Voorhees et al. 1999. The TREC-8 question answering track report. In TREC.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Benchmarking tpu, gpu, and cpu platforms for deep learning",
"authors": [
{
"first": "Yu",
"middle": [
"Emma"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Gu-Yeon",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Brooks",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.10701"
]
},
"num": null,
"urls": [],
"raw_text": "Yu Emma Wang, Gu-Yeon Wei, and David Brooks. 2019. Benchmarking tpu, gpu, and cpu platforms for deep learning. arXiv preprint arXiv:1907.10701.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "WikiQA: A challenge dataset for open-domain question answering",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yih",
"middle": [],
"last": "Wen-Tau",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Meek",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2013--2018",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1237"
]
},
"num": null,
"urls": [],
"raw_text": "Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. WikiQA: A challenge dataset for open-domain ques- tion answering. In Proceedings of the 2015 Con- ference on Empirical Methods in Natural Language Processing, pages 2013-2018, Lisbon, Portugal. As- sociation for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "Test QA EM Acc v/s No: Reader Documents on NQ. QA Span EM increases when more documents are processed by the reader during inference.",
"uris": null
},
"TABREF1": {
"num": null,
"html": null,
"text": "",
"type_str": "table",
"content": "<table><tr><td>: Answer Span EM Accuracy and Answer Re-</td></tr><tr><td>call@10. Improving training setup improves Test</td></tr><tr><td>EM Acc. Test = test set, Dev = development set</td></tr><tr><td>2 Exploring Limits of REALM</td></tr></table>"
},
"TABREF2": {
"num": null,
"html": null,
"text": "to compute the retrieval score, S retr , for \u223c13M passages of corpus and extract c",
"type_str": "table",
"content": "<table><tr><td>Model</td><td colspan=\"2\">R@10 DevEM</td></tr><tr><td>REALM</td><td>68.8</td><td>35.6</td></tr><tr><td>ScaledR (FixedRet)</td><td>59.6</td><td>33.1</td></tr><tr><td>ScaledR+Rerank (FixedRet)</td><td>67.9</td><td>35.8</td></tr><tr><td colspan=\"2\">ScaledR+Rerank+PS (FixedRet) 67.5</td><td>37.1</td></tr><tr><td>ScaledR (TrainedRet)</td><td>69.5</td><td>37.9</td></tr><tr><td>ScaledR+Rerank (TrainedRet)</td><td>67.5</td><td>37.4</td></tr><tr><td>1 REALM is pretrained using MLM on CC-News corpus</td><td/><td/></tr></table>"
},
"TABREF3": {
"num": null,
"html": null,
"text": "",
"type_str": "table",
"content": "<table><tr><td>: Answer Recall and Span EM for fixed v/s fine-</td></tr><tr><td>tuned retriever. ScaledR = Scaled REALM, PS = Pas-</td></tr><tr><td>sage Supervision. Reranking is useful when retriever</td></tr><tr><td>is fixed but is not effective when the retriever is trained.</td></tr></table>"
},
"TABREF5": {
"num": null,
"html": null,
"text": "Test QA (Exact Match) Accuracy on Open-QA benchmarks showing REALM++ improving over prior methods of similar size. The number of train/test examples are shown in parentheses next to each benchmark. *indicates models finetuned on trained NQ model, as proposed in",
"type_str": "table",
"content": "<table/>"
},
"TABREF7": {
"num": null,
"html": null,
"text": "Experiments reproduce results of REALM on NQ dataset. First section compares Test EM from our experiments with previous published results from",
"type_str": "table",
"content": "<table/>"
},
"TABREF9": {
"num": null,
"html": null,
"text": "Examples of Questions from Natural Questions with incorrect retrieved passages from",
"type_str": "table",
"content": "<table/>"
},
"TABREF10": {
"num": null,
"html": null,
"text": "Qualitative Analysis of questions from NQ showing questions where baseline REALM retrieved incorrect passages and training with passage supervision helped retrieve the right passage.",
"type_str": "table",
"content": "<table/>"
}
}
}
}