{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:21:42.328599Z"
},
"title": "Investigating the Effect of Background Knowledge on Natural Questions",
"authors": [
{
"first": "Vidhisha",
"middle": [],
"last": "Balachandran",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": ""
},
{
"first": "Bhuwan",
"middle": [],
"last": "Dhingra",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Haitian",
"middle": [],
"last": "Sun",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Existing work shows the benefits of integrating KBs with textual evidence for QA only on questions that are answerable by KBs alone (Sun et al., 2019). In contrast, real world QA systems often have to deal with questions that might not be directly answerable by KBs. Here, we investigate the effect of integrating background knowledge from KBs for the Natural Questions (NQ) task. We create a subset of the NQ data, Factual Questions (FQ), where the questions have evidence in the KB in the form of paths that link question entities to answer entities but still must be answered using text, to facilitate further research into KB integration methods. We propose and analyze a simple, model-agnostic approach for incorporating KB paths into text-based QA systems and establish a strong upper bound on FQ for our method using an oracle retriever. We show that several variants of Personalized PageRank based fact retrievers lead to a low recall of answer entities and consequently fail to improve QA performance. Our results suggest that fact retrieval is a bottleneck for integrating KBs into real world QA datasets 1 .",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Existing work shows the benefits of integrating KBs with textual evidence for QA only on questions that are answerable by KBs alone (Sun et al., 2019). In contrast, real world QA systems often have to deal with questions that might not be directly answerable by KBs. Here, we investigate the effect of integrating background knowledge from KBs for the Natural Questions (NQ) task. We create a subset of the NQ data, Factual Questions (FQ), where the questions have evidence in the KB in the form of paths that link question entities to answer entities but still must be answered using text, to facilitate further research into KB integration methods. We propose and analyze a simple, model-agnostic approach for incorporating KB paths into text-based QA systems and establish a strong upper bound on FQ for our method using an oracle retriever. We show that several variants of Personalized PageRank based fact retrievers lead to a low recall of answer entities and consequently fail to improve QA performance. Our results suggest that fact retrieval is a bottleneck for integrating KBs into real world QA datasets 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Prior work has shown the benefit of retrieving paths of related entities (Sun et al., 2018; Wang and Jiang, 2019; Sun et al., 2019) and learning relevant knowledge graph embeddings (Sawant et al., 2018; Bordes et al., 2014; Luo et al., 2018) for answering questions on KBQA datasets such as WebQuestions (Berant et al., 2013) and MetaQA (Zhang et al., 2018) . But such datasets are often curated to questions with KB paths that contain the right path to the answer and hence are directly answerable via KB. An open question remains whether such approaches are useful for questions not specifically 1 Data and Code available at: https://github.com/ vidhishanair/fact_augmented_text * Work done at Google Research designed to be answerable by KBs. In this paper, we aim to evaluate KB integration for real-world QA settings in the context of the Natural Questions (NQ) dataset (Kwiatkowski et al., 2019) which consists of questions naturally posed by users of a search engine. NQ is one of the common benchmarks that is used to test the real-world QA applicability of models, hence motivating our choice.",
"cite_spans": [
{
"start": 73,
"end": 91,
"text": "(Sun et al., 2018;",
"ref_id": "BIBREF16"
},
{
"start": 92,
"end": 113,
"text": "Wang and Jiang, 2019;",
"ref_id": "BIBREF18"
},
{
"start": 114,
"end": 131,
"text": "Sun et al., 2019)",
"ref_id": "BIBREF15"
},
{
"start": 181,
"end": 202,
"text": "(Sawant et al., 2018;",
"ref_id": "BIBREF14"
},
{
"start": 203,
"end": 223,
"text": "Bordes et al., 2014;",
"ref_id": "BIBREF2"
},
{
"start": 224,
"end": 241,
"text": "Luo et al., 2018)",
"ref_id": "BIBREF9"
},
{
"start": 304,
"end": 325,
"text": "(Berant et al., 2013)",
"ref_id": "BIBREF1"
},
{
"start": 337,
"end": 357,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 598,
"end": 599,
"text": "1",
"ref_id": null
},
{
"start": 875,
"end": 901,
"text": "(Kwiatkowski et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To study the effect of augmenting KB knowledge, we construct a subset of NQ -Factual Questions (FQ). In FQ, answer entities are connected to question entities via short paths (up to 3 steps) in the Wikidata KB (Vrande\u010di\u0107 and Kr\u00f6tzsch, 2014) . Using FQ, we analyze a simple but effective approach to incorporating KB knowledge into a textual QA system. We convert KB paths to text (using surface forms of entities and relation) and append it to the textual passage as additional context for a BERT-based QA system.",
"cite_spans": [
{
"start": 210,
"end": 240,
"text": "(Vrande\u010di\u0107 and Kr\u00f6tzsch, 2014)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We first establish an upper bound oracle setting by building a retriever that provides the shortest path to an answer. We show that, in the presence of such knowledge, our approach leads to significant gains (up to 6 F1 for short-answers, 8-9 F1 for multi-hop questions). We experiment with several variants of KB path-retrieval methods and show that retrieving good paths is difficult: previouslyused Personalized PageRank (Haveliwala, 2003) (PPR)-based methods find answer entities less than 30% of the time, and even our weakly-supervised improvements recall answer entities no more than 40% of the time. As a consequence injecting retrieved KB paths in a realistic QA setting like NQ yields only small, inconsistent improvements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To summarize our contributions, we (1) identify a new experimental subset of NQ that supports (2) the study of effectiveness of KB path-retrieval approaches. We also (3) describe a simple, modelagnostic method to using oracle KB paths that can significantly improve QA performance and evaluate PPR based path-retrieval methods. To our knowledge this is the first study of such approaches on a QA dataset not curated for KBQA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The Natural Questions (NQ) dataset (Kwiatkowski et al., 2019 ) is a large scale QA dataset containing 307,373 training, 7,830 dev, and 7,842 test examples. Each example is a user query paired with Wikipedia documents annotated with a passage (long answer) answering the question and one or more short spans (short answer) containing the answer. The questions in NQ are not artificially constructed, making the NQ task more difficult . We use Sling (Ringgaard et al., 2017 ) (which uses an NP chunker and phrase table for linking entities to Wikidata) to entity link the questions and documents.",
"cite_spans": [
{
"start": 35,
"end": 60,
"text": "(Kwiatkowski et al., 2019",
"ref_id": "BIBREF6"
},
{
"start": 448,
"end": 471,
"text": "(Ringgaard et al., 2017",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "2"
},
{
"text": "To focus on knowledge-driven factoid question answering, we create a subset of NQ having relevant knowledge in the KB. Shortest paths between entities in KB is very often used as a proxy for gold knowledge linking questions to answer (Sun et al., 2019) and we use the same proxy in our setting. Specifically, we select questions whose short answers are entities in the KB and have a short path (up to 3 steps) from a question entity to an answer entity. These paths contain knowledge relevant to the question but are not necessarily the right path to answer the question. We call this subset Factual Questions (FQ) containing 6977 training, 775 dev and 264 (83 1-hop, 97 2-hop and 84 3-hop) test samples. FQ being an entity centric subset of NQ, provides a setting to investigate augmenting KB paths for real-world factoid question for which relevant knowledge exists in the KB. Examples of the dataset are provided in Table 4 .",
"cite_spans": [
{
"start": 234,
"end": 252,
"text": "(Sun et al., 2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 919,
"end": 926,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "2"
},
{
"text": "Given a question Q, our knowledge retriever extracts top facts from a KB. We represent them in natural language form and augment it to a standard BERT model for reading comprehension as additional context along with the passage P .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "The Knowledge Retriever (KR) uses the input question Q to retrieve relevant facts for augmentation. We use the entities in the question as the set of seed entities denoted as E and use the Personalized PageRank (PPR) algorithm to perform a random walk over the KB to assign relevance scores to other entities around the seed entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Retriever",
"sec_num": "3.1"
},
{
"text": "The Traditional PPR algorithm takes the seed entities and iteratively jumps from and expands the seed entities until convergence. At each iteration, a transition with probability \u03b3 is made to a new entity in the KB (with all outgoing edges having equal weight) and a transition with probability 1 \u2212 \u03b3 is made to the start seed entities. The stationary distribution of this walk gives the relevance scores (PPR weights) of entities (nodes) w.r.t seed entities. Sun et al. (2018) present an improved PPR version, Question Informed (QI) PPR, to weigh relations which are semantically closer to the question higher. Specifically, they average the GLOVE (Pennington et al., 2014) embeddings to compute a relation vector v(R) from the relation surface form, and a question vector v(Q) from the question text, and use cosine similarity between them as edgeweights for PPR. For every node, the \u03b3 probability is multiplied by the edge-score to weigh entities along relevant paths higher.",
"cite_spans": [
{
"start": 460,
"end": 477,
"text": "Sun et al. (2018)",
"ref_id": "BIBREF16"
},
{
"start": 649,
"end": 674,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Retriever",
"sec_num": "3.1"
},
{
"text": "We improve on this setting by introducing Weakly Supervised (WS) PPR, which uses weak supervision from the QA pairs to train a classifier to discriminate relevant relations from irrelevant ones. We create a classification dataset of questions aligned with relations along the shortest KB path connecting question entities and answer entities as positive relevant examples. Other random relations connected to the question entities form negative examples. We train a simple BERT based classifier to classify relations as relevant or irrelevant conditioned on the question. The trained classifier is used to score relations for every question and used as edge-weights for PPR similar to QI PPR. Examples of the facts retrieved from WS PPR are provided in Table 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 753,
"end": 760,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Knowledge Retriever",
"sec_num": "3.1"
},
{
"text": "After running PPR we retain the top-K entities e 1 , . . . , e K by PPR score, along with any edges between them. To further rank the facts, we compute entity scores as the sum of the PPR score and frequency of the entity in the text and aggregate the subject and object entity scores by taking the maximum score between them. Oracle Setting: In this upper bound setting for the Knowledge Retriever, the answer entities are known. The facts along the shortest path connecting the question entities and the answer entities are considered as gold or relevant facts to the question and are shuffled and augmented to the input of the QA model in place of the KB retrieved facts. As the oracle setting uses gold KB links, this setting is tested on the FQ subset where such links exist and is called the Clean Oracle. To establish a harder upper bound setting, random facts about the question are added in addition to the oracle shortest path facts using PPR, forming a Noisy Oracle setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Retriever",
"sec_num": "3.1"
},
{
"text": "Given a ranked set of triples from the retriever, a natural language statement is constructed from each fact using the surface form of the entities e s and e o and the natural language description of R (e.g. \"Washington D.C capital of United States\") similar to Lauscher et al. (2020) . These form the background knowledge to be injected F . We then tokenize them using the standard BERT tokenizers and augment them to the input of QA model as X = \"[CLS] Question tokens [SEP] Passage tokens [SEP] Fact tokens\". Following , we use a simple BERT architecture by training two linear classifiers independently on top of the output representations of X for predicting the answer span boundary (start and end). We assume that the answer, if present, is contained only in the given passage, P , and do not consider potential mentions of the answer in the background F . For instances which do not contain the answer, we simply set the answer span to be the special token [CLS] . We use a fixed Transformer input window size of 512, and use a sliding window with a stride of 128 tokens to handle longer documents. We use 256 tokens each for document passage input and KB facts.",
"cite_spans": [
{
"start": 262,
"end": 284,
"text": "Lauscher et al. (2020)",
"ref_id": "BIBREF7"
},
{
"start": 965,
"end": 970,
"text": "[CLS]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Augmented Text for QA",
"sec_num": "3.2"
},
{
"text": "Setup: As every passage that doesn't contain the answer is a potential negative, we sample a subset of negatives to balance the dataset. For the Factual NQ subset, we sample 2% of the negatives as in Alberti et al. (2019) to enable faster training. We find that increasing the negatives to 10% improves results by \u223c2 points and hence for a fair comparison, we sample 10% of the negatives for our models and the reimplemented baseline on the Full NQ dataset. We use the same preprocessing steps and all other hyperparameter settings as in .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
{
"text": "While the KB retriever's effect can be measured in the downstream QA model, it is beneficial to directly measure the quality of the top retrieved facts. As we consider the shortest path between the question and answer entities as gold facts, we evaluate our retriever using recall of answer entities and shortest path facts in a set of 200 questions from FQ. We compare our retriever with BM25 (Robertson and Zaragoza, 2009), traditional PPR and QI PPR (Sun et al., 2018) as baselines. Table 2 shows the retriever recall results. BM25, traditional PPR and the QI PPR have very poor recall of answers and facts. The low recall of QI PPR shows that questions in NQ do not have similar predicates to relations in the KB, and hence do not benefit from pretrained word vectors. In WS PPR answer entity recall improves by 15 points and Shortest Path fact recall improves by 20 points showing significant improvement. This shows that retrieval methods need question supervision to work in real-world settings and that heuristic methods do not adapt well to it. We show qualitative examples of oracle and retrieved facts in the Appendix.",
"cite_spans": [
{
"start": 453,
"end": 471,
"text": "(Sun et al., 2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 486,
"end": 493,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Retriever Results",
"sec_num": "4.1"
},
{
"text": "Additionally, Table 5 (Top) shows that the question independent knowledge (passage entities as seeds PPR(P)) version is slightly worse than question dependent knowledge (question entities as seeds PPR(Q)), showing the benefit of a question dependent factual knowledge retriever. ",
"cite_spans": [],
"ref_spans": [
{
"start": 14,
"end": 21,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Retriever Results",
"sec_num": "4.1"
},
{
"text": "Factual Questions: Table 1 shows the results of our Knowledge Augmented QA system on the FQ subset 2 . The clean oracle setting improves over the text only baseline and when segregated along the number of hops in the gold shortest path, it has significantly large gains for 2 and 3 hop questions. These questions are generally more complex involving multiple steps of reasoning and augmenting gold facts linking the question to the answer entities significantly helps in the model's performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 19,
"end": 26,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "QA Performance",
"sec_num": "4.2"
},
{
"text": "The noisy oracle setting which has additional facts with oracle facts maintains the QA performance showing that random facts with oracle are still useful to the QA model. This shows that the presence of relevant knowledge from the KB helps QA performance and establishes a strong upper bound for our KB integration. The performance drops when the QA model is given only the PPR facts, without the oracle facts. Both Short and Long answer F1 are similar to the text only setting showing that the retrieved facts are not providing any relevant knowledge to the QA model. Though the weakly supervised setting improves recall of answer entities and shortest path facts, it doesn't improve on the downstream QA task showing that this improved recall is still insufficient for the model to leverage. Comparing the oracle and no-oracle settings, we believe that better KB retrieval methods that have bery high recall of answer entities and relevant facts could lead to improved QA performance, even in real-world complex questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "QA Performance",
"sec_num": "4.2"
},
{
"text": "We also validate that our performance gains in oracle settings were not due to trivial entity overlap between the text and retrieved facts. We measure the entity overlap in the entire dev set and found that on average, correct predictions had 3.67 entities in common while incorrect predictions had 3.28, and the overall dev set had about 3.54. The small difference in overlap indicates that the oracle setting doesn't leverage any hidden bias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "QA Performance",
"sec_num": "4.2"
},
{
"text": "Natural Questions: Table 3 show the performance of incorporating KB facts in the Full NQ task. Though we see improvements to previously published results, careful ablations reveal that the baseline achieves similar results with more (10%) negative examples. This confirms that even in the full dataset PPR methods fail to retrieve relevant knowledge for the model to leverage for QA.",
"cite_spans": [],
"ref_spans": [
{
"start": 19,
"end": 26,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "QA Performance",
"sec_num": "4.2"
},
{
"text": "To understand the benefit of augmenting facts as input, we compare against a baseline where the retrieved facts are separately represented by a Transformer. We use a stacked Transformer with the same architecture as BERT as a fact encoder. We feed top retrieved facts in natural language form to it, use a multihead attention layer between the text only BERT representation and the fact representation and use the new fact-attended text representation for prediction similar to Section 3.2. Results on NQ in Table 5 shows that the separate fact representation has lower performance than our approach showing the benefit of our augmented input approach. Table 4 shows examples of facts from clean oracle and retrieved facts from WS PPR for questions of varying difficulty. The first two examples shows a question where the oracle KB path (shortest path connecting question entities to answer entities) is the correct reasoning path for answering the question. The third and fourth examples shows a case where the oracle KB path contains relevant knowledge for the question but is not the right path for answering the question. WS PPR in all cases retrieves relevant facts about the question entity, and some oracle KB facts. For the first and the third examples, WS PPR retrieves the entire KB path. In the second and last example, WS PPR retrieves part of the oracle KB path but not the entire path.",
"cite_spans": [],
"ref_spans": [
{
"start": 508,
"end": 515,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 653,
"end": 660,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Facts as Augmented Input:",
"sec_num": null
},
{
"text": "We investigate incorporating KB facts into a realworld QA -Natural Questions. We create a subset of NQ, Factual Questions, to facilitate evaluation of KB integration. We present an oracle setting, where the gold KB path is provided and establish a strong upper-bound. We experimentally show that PPR based retrievers have low recall of answer entities and do not improve downstream QA showing that path-retrieval is a bottleneck for KB integration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "As NQ and FQ rely on span based evaluation, we do not consider KB only baselines for fair comparison.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the anonymous reviewers for their feedback and suggestions. We are grateful to Mandar Joshi for detailed regular discussions and feedback during the course of this work. We thank Chris Alberti, Kenton Lee and Matthew Siegler for feedback and help with implementation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A bert baseline for the natural questions",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1901.08634"
]
},
"num": null,
"urls": [],
"raw_text": "Chris Alberti, Kenton Lee, and Michael Collins. 2019. A bert baseline for the natural questions. arXiv preprint arXiv:1901.08634.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Semantic parsing on freebase from question-answer pairs",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Chou",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Frostig",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "1533--1544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of the 2013 conference on empirical methods in natural lan- guage processing, pages 1533-1544.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Question answering with subgraph embeddings",
"authors": [
{
"first": "A",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Bordes, S. Chopra, and J. Weston. 2014. Question answering with subgraph embeddings. In EMNLP.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Reading wikipedia to answer opendomain questions",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Fisch",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
}
],
"year": 2017,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer open- domain questions. In ACL.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Topic-sensitive pagerank: A context-sensitive ranking algorithm for web search",
"authors": [
{
"first": "H",
"middle": [],
"last": "Taher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Haveliwala",
"suffix": ""
}
],
"year": 2003,
"venue": "IEEE transactions on knowledge and data engineering",
"volume": "15",
"issue": "4",
"pages": "784--796",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taher H Haveliwala. 2003. Topic-sensitive pager- ank: A context-sensitive ranking algorithm for web search. IEEE transactions on knowledge and data engineering, 15(4):784-796.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Natural questions: a benchmark for question answering research",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Jennimaria",
"middle": [],
"last": "Palomaki",
"suffix": ""
},
{
"first": "Olivia",
"middle": [],
"last": "Redfield",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "Danielle",
"middle": [],
"last": "Epstein",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "453--466",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Red- field, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: a bench- mark for question answering research. Transactions of the Association for Computational Linguistics, 7:453-466.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Common sense or world knowledge? investigating adapter-based knowledge injection into pretrained transformers",
"authors": [
{
"first": "Anne",
"middle": [],
"last": "Lauscher",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Majewska",
"suffix": ""
},
{
"first": "Leonardo",
"middle": [
"F R"
],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
},
{
"first": "Nikolai",
"middle": [],
"last": "Rozanov",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of Deep Learning Inside Out (DeeLIO): The First Workshop on Knowledge Extraction and Integration for Deep Learning Architectures",
"volume": "",
"issue": "",
"pages": "43--49",
"other_ids": {
"DOI": [
"10.18653/v1/2020.deelio-1.5"
]
},
"num": null,
"urls": [],
"raw_text": "Anne Lauscher, Olga Majewska, Leonardo F. R. Ribeiro, Iryna Gurevych, Nikolai Rozanov, and Goran Glava\u0161. 2020. Common sense or world knowledge? investigating adapter-based knowledge injection into pretrained transformers. In Proceed- ings of Deep Learning Inside Out (DeeLIO): The First Workshop on Knowledge Extraction and Inte- gration for Deep Learning Architectures, pages 43- 49, Online. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Latent retrieval for weakly supervised open domain question answering",
"authors": [
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In ACL.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Knowledge base question answering via encoding of complex query graphs",
"authors": [
{
"first": "F",
"middle": [],
"last": "Kangqi Luo",
"suffix": ""
},
{
"first": "Xusheng",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "K",
"middle": [
"Q"
],
"last": "Luo",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2018,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kangqi Luo, F. Lin, Xusheng Luo, and K. Q. Zhu. 2018. Knowledge base question answering via encoding of complex query graphs. In EMNLP.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A decomposable attention model for natural language inference",
"authors": [
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Oscar",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2249--2255",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1244"
]
},
"num": null,
"urls": [],
"raw_text": "Ankur Parikh, Oscar T\u00e4ckstr\u00f6m, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2249-2255, Austin, Texas. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word rep- resentation. In Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Sling: A framework for frame semantic parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Ringgaard",
"suffix": ""
},
{
"first": "Rahul",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Fernando Cn",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1710.07032"
]
},
"num": null,
"urls": [],
"raw_text": "Michael Ringgaard, Rahul Gupta, and Fernando CN Pereira. 2017. Sling: A framework for frame seman- tic parsing. arXiv preprint arXiv:1710.07032.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The probabilistic relevance framework: Bm25 and beyond",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Robertson",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Zaragoza",
"suffix": ""
}
],
"year": 2009,
"venue": "Found. Trends Inf. Retr",
"volume": "3",
"issue": "",
"pages": "333--389",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: Bm25 and be- yond. Found. Trends Inf. Retr., 3:333-389.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Neural architecture for question answering using a knowledge graph and web corpus",
"authors": [
{
"first": "U",
"middle": [],
"last": "Sawant",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Garg",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Chakrabarti",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Ramakrishnan",
"suffix": ""
}
],
"year": 2018,
"venue": "Information Retrieval Journal",
"volume": "22",
"issue": "",
"pages": "324--349",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "U. Sawant, S. Garg, S. Chakrabarti, and G. Ramakr- ishnan. 2018. Neural architecture for question an- swering using a knowledge graph and web corpus. Information Retrieval Journal, 22:324-349.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Pullnet: Open domain question answering with iterative retrieval on knowledge bases and text",
"authors": [
{
"first": "Haitian",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Tania",
"middle": [],
"last": "Bedrax-Weiss",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2019,
"venue": "EMNLP/IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haitian Sun, Tania Bedrax-Weiss, and William W. Co- hen. 2019. Pullnet: Open domain question answer- ing with iterative retrieval on knowledge bases and text. In EMNLP/IJCNLP.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Open domain question answering using early fusion of knowledge bases and text",
"authors": [
{
"first": "Haitian",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Bhuwan",
"middle": [],
"last": "Dhingra",
"suffix": ""
},
{
"first": "Manzil",
"middle": [],
"last": "Zaheer",
"suffix": ""
},
{
"first": "Kathryn",
"middle": [],
"last": "Mazaitis",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2018,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Kathryn Mazaitis, Ruslan Salakhutdinov, and William W. Co- hen. 2018. Open domain question answering us- ing early fusion of knowledge bases and text. In EMNLP.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Wikidata: a free collaborative knowledgebase",
"authors": [
{
"first": "Denny",
"middle": [],
"last": "Vrande\u010di\u0107",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Kr\u00f6tzsch",
"suffix": ""
}
],
"year": 2014,
"venue": "Communications of the ACM",
"volume": "57",
"issue": "10",
"pages": "78--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denny Vrande\u010di\u0107 and Markus Kr\u00f6tzsch. 2014. Wiki- data: a free collaborative knowledgebase. Commu- nications of the ACM, 57(10):78-85.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Explicit utilization of general knowledge in machine reading comprehension",
"authors": [
{
"first": "Chao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chao Wang and Hui Jiang. 2019. Explicit utilization of general knowledge in machine reading compre- hension. In ACL.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Variational reasoning for question answering with knowledge graph",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hanjun",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Zornitsa",
"middle": [],
"last": "Kozareva",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Smola",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Song",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Zhang, Hanjun Dai, Zornitsa Kozareva, A. Smola, and L. Song. 2018. Variational reasoning for ques- tion answering with knowledge graph. In AAAI.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"type_str": "table",
"num": null,
"content": "<table><tr><td/><td colspan=\"2\">Factual NQ</td><td>Hop 1</td><td/><td>Hop 2</td><td/><td>Hop 3</td><td/></tr><tr><td/><td colspan=\"8\">Short F1 Long F1 Short F1 Long F1 Short F1 Long F1 Short F1 Long F1</td></tr><tr><td>Text Only</td><td>68.2</td><td>77.3</td><td>77.3</td><td>82.2</td><td>60.0</td><td>74.3</td><td>60.2</td><td>73.4</td></tr><tr><td>Text + PPR(Q) facts</td><td>68.1</td><td>77.8</td><td>78.3</td><td>83.7</td><td>57.9</td><td>72.8</td><td>61.9</td><td>75.7</td></tr><tr><td>Text + QI PPR(Q) facts</td><td>68.2</td><td>77.5</td><td>79.2</td><td>83.9</td><td>55.2</td><td>72.0</td><td>58.9</td><td>74.4</td></tr><tr><td>Text + WS PPR(Q) facts</td><td>67.8</td><td>76.3</td><td>76.9</td><td>81.7</td><td>58.1</td><td>72.5</td><td>60.2</td><td>72.1</td></tr><tr><td>Text + Clean Oracle</td><td>74.9</td><td>80.8</td><td>79.5</td><td>83.0</td><td>69.1</td><td>80.2</td><td>72.4</td><td>77.2</td></tr><tr><td>Text + Noisy Oracle</td><td>75.3</td><td>81.3</td><td>80.7</td><td>84.4</td><td>69.7</td><td>80.2</td><td>71.9</td><td>77.2</td></tr><tr><td colspan=\"3\">Table 1: Shortest Path Fact R Ans R</td><td/><td/><td/><td/><td/><td/></tr><tr><td>BM25</td><td>19.1</td><td>29.8</td><td/><td/><td/><td/><td/><td/></tr><tr><td>PPR(Q)</td><td>33.0</td><td>28.8</td><td/><td/><td/><td/><td/><td/></tr><tr><td>QI PPR(Q)</td><td>31.2</td><td>25.2</td><td/><td/><td/><td/><td/><td/></tr><tr><td>WS PPR(Q)</td><td>51.0</td><td>40.0</td><td/><td/><td/><td/><td/><td/></tr></table>",
"text": "Results on FQ data compared to. Both Clean and Noisy Oracle setting improve over only text baseline setting. Variants of PPR do not improve over the text only baseline.",
"html": null
},
"TABREF1": {
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "",
"html": null
},
"TABREF3": {
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "",
"html": null
},
"TABREF4": {
"type_str": "table",
"num": null,
"content": "<table><tr><td>Question</td><td colspan=\"2\">Hops Clean Oracle Facts</td><td>WS PPR Facts</td></tr><tr><td>Who is the existing prime minister of pakistan ?</td><td>1</td><td>Prime Minister of Pakistan officeholder Imran Khan .</td><td/></tr><tr><td/><td/><td>Reign of Terror part of French Revolution .</td><td>Napoleon participant of French Revolution .</td></tr><tr><td>What emperor took over france after the reign of terror</td><td>3</td><td>French Revolution significant event 18 Brumaire .</td><td>Absolute Monarchy subclass of Monarchy . First French Empire head of state Napoleon .</td></tr><tr><td/><td/><td>18 Brumaire participant Napoleon .</td><td>Seven years ' war instance of war .</td></tr><tr><td/><td/><td/><td>Heather Locklear instance of human .</td></tr><tr><td/><td/><td/><td>Heather Locklear occupation actor .</td></tr><tr><td/><td/><td/><td>Looney Tunes: Back in Action cast member Heather Locklear .</td></tr><tr><td>Who plays the bad guy in looney tunes back in action ?</td><td>1</td><td>Looney Tunes: Back in Action cast member Steve Martin .</td><td>Stan Freberg occupation actor . Looney Tunes: Back in Action cast member Stan Freberg .</td></tr><tr><td/><td/><td/><td>Looney Tunes: Back in Action cast member Steve Martin .</td></tr><tr><td/><td/><td/><td>Steve Martin instance of human .</td></tr><tr><td/><td/><td/><td>Steve Martin sex or gender male .</td></tr><tr><td/><td/><td>The Burning Fiery Furnace</td><td/></tr><tr><td>Where does the book of daniel take place</td><td>2</td><td>narrative location Babylon. Book of Daniel derivative work</td><td/></tr><tr><td/><td/><td>The Burning Fiery Furnace .</td><td/></tr></table>",
"text": "Pakistan office held by head of government Prime Minister of Pakistan . Imran Khan position held Prime Minister of Pakistan . Pakistan head of government Imran Khan . Prime Minister of Pakistan officeholder Imran Khan . Imran Khan instance of human . Pakistan instance of country .",
"html": null
},
"TABREF5": {
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "Examples of Clean Oracle facts and WS PPR retrieved facts. Relations are highlighted in Italics.",
"html": null
},
"TABREF7": {
"type_str": "table",
"num": null,
"content": "<table><tr><td>: Top: Comparing different seeds for PPR on</td></tr><tr><td>FQ. Using question entities as starting seeds is bet-</td></tr><tr><td>ter than passage specific entities. Bottom: Comparing</td></tr><tr><td>Facts as Augmented Input (Aug Facts) v/s as Separate</td></tr><tr><td>Input (Sep Facts) on NQ. Augmenting Facts as addi-</td></tr><tr><td>tional context is significantly better than embedding</td></tr><tr><td>them via an independent module.</td></tr></table>",
"text": "",
"html": null
}
}
}
}