|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T11:25:38.075623Z" |
|
}, |
|
"title": "NeuralQA: A Usable Library for Question Answering (Contextual Query Expansion + BERT) on Large Datasets", |
|
"authors": [ |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Dibia", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Cloudera Fast Forward Labs", |
|
"institution": "", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Existing tools for Question Answering (QA) have challenges that limit their use in practice. They can be complex to set up or integrate with existing infrastructure, do not offer configurable interactive interfaces, and do not cover the full set of subtasks that frequently comprise the QA pipeline (query expansion, retrieval, reading, and explanation/sensemaking). To help address these issues, we introduce NeuralQA-a usable library for QA on large datasets. NeuralQA integrates well with existing infrastructure (e.g., ElasticSearch instances and reader models trained with the HuggingFace Transformers API) and offers helpful defaults for QA subtasks. It introduces and implements contextual query expansion (CQE) using a masked language model (MLM) as well as relevant snippets (RelSnip)-a method for condensing large documents into smaller passages that can be speedily processed by a document reader model. Finally, it offers a flexible user interface to support workflows for research explorations (e.g., visualization of gradient-based explanations to support qualitative inspection of model behaviour) and large scale search deployment. Code and documentation for Neu-ralQA is available as open source on Github.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Existing tools for Question Answering (QA) have challenges that limit their use in practice. They can be complex to set up or integrate with existing infrastructure, do not offer configurable interactive interfaces, and do not cover the full set of subtasks that frequently comprise the QA pipeline (query expansion, retrieval, reading, and explanation/sensemaking). To help address these issues, we introduce NeuralQA-a usable library for QA on large datasets. NeuralQA integrates well with existing infrastructure (e.g., ElasticSearch instances and reader models trained with the HuggingFace Transformers API) and offers helpful defaults for QA subtasks. It introduces and implements contextual query expansion (CQE) using a masked language model (MLM) as well as relevant snippets (RelSnip)-a method for condensing large documents into smaller passages that can be speedily processed by a document reader model. Finally, it offers a flexible user interface to support workflows for research explorations (e.g., visualization of gradient-based explanations to support qualitative inspection of model behaviour) and large scale search deployment. Code and documentation for Neu-ralQA is available as open source on Github.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The capability of providing exact answers to queries framed as natural language questions can significantly improve the user experience in many real world applications. Rather than sifting through lists of retrieved documents, automatic QA (also known as reading comprehension) systems can surface an exact answer to a query, thus reducing the cognitive burden associated with the standard search task. This capability is applicable in extending conventional information retrieval systems (search engines) and also for emergent use cases, How many rose species are found in the Montreal Botanical Garden? Figure 1 : NeuralQA implements Contextual Query Expansion (CQE 3.2.1) using Masked Language Models (MLM) and offers a visualization to explain behaviour. A rule set is used to determine which tokens are candidates for expansion (solid blue box); each candidate is iteratively masked, and an MLM is used to identify expansion terms (blue outline box). such as open domain conversational AI systems (Gao et al., 2018; Qu et al., 2019) . For enterprises, QA systems that are both fast and precise can help unlock knowledge value in large unstructured document collections. Conventional methods for open domain QA (Yang et al., 2015 ) follow a twostage implementation -(i) a retriever that returns a subset of relevant documents. Retrieval is typically based on sparse vector space models such as BM25 (Robertson and Zaragoza, 2009) and TF-IDF (Chen et al., 2017) ; (ii) a machine reading comprehension model (reader) that identifies spans from each document which contain the answer. While sparse representations are fast to compute, they rely on exact keyword match, and suffer from the vocabulary mismatch problem -scenarios where the vocabulary used to express a query is different from the vocabulary used to express the same concepts within the documents. To address these issues, recent studies have proposed neural ranking Kratzwald et al., 2019) and retrieval methods (Karpukhin et al., 2020; Lee et al., 2019; Guu et al., 2020) , which rely on dense representations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1002, |
|
"end": 1020, |
|
"text": "(Gao et al., 2018;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 1021, |
|
"end": 1037, |
|
"text": "Qu et al., 2019)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 1215, |
|
"end": 1233, |
|
"text": "(Yang et al., 2015", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 1403, |
|
"end": 1433, |
|
"text": "(Robertson and Zaragoza, 2009)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 1445, |
|
"end": 1464, |
|
"text": "(Chen et al., 2017)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1932, |
|
"end": 1955, |
|
"text": "Kratzwald et al., 2019)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 1978, |
|
"end": 2002, |
|
"text": "(Karpukhin et al., 2020;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 2003, |
|
"end": 2020, |
|
"text": "Lee et al., 2019;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 2021, |
|
"end": 2038, |
|
"text": "Guu et al., 2020)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 605, |
|
"end": 613, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "However, while dense representations show significantly improved results, they introduce additional complexity and latency, which limits their practical application. For example, Guu et al. (2020) require a specialized MLM pretraining regime, as well as a supervised fine-tuning step, to obtain representations used in a retriever. Similarly Karpukhin et al. (2020) use dual encoders in learning a dense representation for queries and all documents in the target corpus. Each of these methods require additional infrastructure to compute dense representation vectors for all documents in the target corpus as well as implement efficient similarity search at run time. In addition, transformerbased architectures (Vaswani et al., 2017) used for dense representations are unable to process long sequences due to their self-attention operations which scale quadratically with sequence length. As a result, these models require that documents are indexed/stored in small paragraphs. For many use cases, meeting these requirements (rebuilding retriever indexes, training models to learn corpus specific representations, precomputing representations for all indexed documents) can be cost-intensive. These costs are hard to justify, given that simpler methods can yield comparable results (Lin, 2019; Weissenborn et al., 2017) . Furthermore, as reader models are applied to domain-specific documents, they fail in counter-intuitive ways. It is thus valuable to offer visual interfaces that support debugging or sensemaking of results (e.g., explanations for why a set of documents were retrieved or why an answer span was selected from a document). While several libraries exist to explain NLP models, they do not integrate interfaces that help users make sense of both the query expansion, retriever and the reader tasks. Collectively, these challenges can hamper experimentation with QA systems and the integration of QA models into practitioner workflows.", |
|
"cite_spans": [ |
|
{ |
|
"start": 179, |
|
"end": 196, |
|
"text": "Guu et al. (2020)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 342, |
|
"end": 365, |
|
"text": "Karpukhin et al. (2020)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 712, |
|
"end": 734, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 1283, |
|
"end": 1294, |
|
"text": "(Lin, 2019;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 1295, |
|
"end": 1320, |
|
"text": "Weissenborn et al., 2017)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work, we introduce NeuralQA to help address these limitations. Our contributions are summarized as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 An easy to use, end-to-end library for implementing QA systems. It integrates methods for query expansion, document retrieval (Elas-ticSearch 1 ), and document reading (QA models trained using the HuggingFace Transformers API (Wolf et al., 2019) ). It also offers an interactive user interface for sensemaking of results (retriever + reader) . NeuralQA is open source and released under the MIT License. \u2022 To address the vocabulary mismatch problem, NeuralQA introduces and implements a method for contextual query expansion (CQE), using a masked language model (MLM) fine-tuned on the target document corpus (see Fig 1) . Early qualitative results show CQE can surface relevant additional query terms that help improve recall and require minimal changes for integration with existing retrieval infrastructure. \u2022 In addition, we implement RelSnip, a simple method for extracting relevant snippets from retrieved passages before feeding it into a document reader. This, in turn, reduces the latency required to chunk and read lengthy documents. Importantly, these options offer the opportunity to improve latency and recall, with no changes to existing retriever infrastructure.", |
|
"cite_spans": [ |
|
{ |
|
"start": 228, |
|
"end": 247, |
|
"text": "(Wolf et al., 2019)", |
|
"ref_id": "BIBREF34" |
|
}
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 616, |
|
"end": 622, |
|
"text": "Fig 1)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Overall, NeuralQA complements a line of endto-end applications that improve QA system deployment (Akkalyoncu Yilmaz et al., 2019; and provide visual interfaces for understanding machine learning models Strobelt et al., 2018; Madsen, 2019; Dibia, 2020a,b) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 86, |
|
"end": 129, |
|
"text": "deployment (Akkalyoncu Yilmaz et al., 2019;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 202, |
|
"end": 224, |
|
"text": "Strobelt et al., 2018;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 225, |
|
"end": 238, |
|
"text": "Madsen, 2019;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 239, |
|
"end": 254, |
|
"text": "Dibia, 2020a,b)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "There are several subtasks that frequently comprise the QA pipeline and are implemented in NeuralQA.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Question Answering Pipeline", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The first stage in the QA process focuses on retrieving a list of candidate passages, which are subsequently processed by a reader. Conventional approaches to QA apply representations from sparse vector space models (e.g., BM25, TF-IDF) in identifying the most relevant document candidates. For example, Chen et al. (2017) introduce an end-toend system combining TF-IDF retrieval with a multi-layer RNN for document reading. This is further improved upon by , who utilize BM25 for retrieval with a modern BERT transformer reader. However, sparse representations are keyword dependent, and suffer from the vocabulary mismatch problem in information retrieval (IR); given a query Q and a relevant document D, a sparse retrieval method may fail to retrieve D if D uses a different vocabulary to refer to the same content in Q. Furthermore, given that QA queries are under-specified by definition (users are searching for unknown information), sparse representations may lack the contextual information needed to retrieve the most relevant documents. To address these issues, a set of related work has focused on methods for re-ranking retrieved documents to improve recall (Wang et al., 2018; Kratzwald et al., 2019) . More recently, there have been efforts to learn representations of queries and documents useful for retrieval. Lee et al. (2019) introduce an inverse cloze task for pretraining encoders used to create static embeddings that are indexed and used for similarity retrieval during inference. Their work is further expanded by Guu et al. (2020) who introduce non-static representations that are learned simultaneous to reader fine-tuning. Finally, Karpukhin et al. (2020) use dual encoders for retrieval: one encoder that learns to map queries to a fixed dimension vector, and another that learns to map documents to a similar fixed-dimension vector (such that representations for similar query and documents are close).", |
|
"cite_spans": [ |
|
{ |
|
"start": 304, |
|
"end": 322, |
|
"text": "Chen et al. (2017)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1170, |
|
"end": 1189, |
|
"text": "(Wang et al., 2018;", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 1190, |
|
"end": 1213, |
|
"text": "Kratzwald et al., 2019)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 1327, |
|
"end": 1344, |
|
"text": "Lee et al. (2019)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 1538, |
|
"end": 1555, |
|
"text": "Guu et al. (2020)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 1659, |
|
"end": 1682, |
|
"text": "Karpukhin et al. (2020)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Document Retrieval", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "In addition to re-ranking and dense representation retrieval, query expansion methods have also been proposed to help address the vocabulary mismatch problem. They serve to identify additional relevant query terms, using a variety of sources -such as the target corpus, external dictionaries (e.g., Word-Net), or historical queries. Existing research has explored how implicit information contained in queries can be leveraged in query expansion. For example, Lavrenko and Croft (2017); Lv and Zhai (2010) show how a relevance model (RM3) can be applied for query expansion and improve retrieval performance. More recently, (Lin, 2019) also show that the use of a well-tuned relevance model such as RM3 (Lavrenko and Croft, 2017; Abdul-Jaleel et al., 2004) results in performance at par with complex neural retrieval methods. Word embeddings have been explored as a potential method for query expansion, as well. In their work, Kuzi et al. (2016) train a word2vec (Mikolov et al., 2013 ) CBOW model on their search corpora and use embeddings to identify expansion terms that are either semantically related to the query as a whole or to its terms. Their results suggest that a combination of word2vec embeddings and a relevance model (RM3) provide good results. However, while word embeddings trained on a target corpus are useful, they are static and do not take into consideration the context of the words in a specific query. In this work, we propose an extension to this direction of thought and explore how contextual embeddings produced by an MLM, such as BERT (Devlin et al., 2018) , can be applied in generating query expansion terms.", |
|
"cite_spans": [ |
|
{ |
|
"start": 487, |
|
"end": 505, |
|
"text": "Lv and Zhai (2010)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 624, |
|
"end": 635, |
|
"text": "(Lin, 2019)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 703, |
|
"end": 729, |
|
"text": "(Lavrenko and Croft, 2017;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 730, |
|
"end": 756, |
|
"text": "Abdul-Jaleel et al., 2004)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 928, |
|
"end": 946, |
|
"text": "Kuzi et al. (2016)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 964, |
|
"end": 985, |
|
"text": "(Mikolov et al., 2013", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 1567, |
|
"end": 1588, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Query Expansion", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Recent advances in pretrained neural language models, like BERT (Vaswani et al., 2017) and GPT (Radford et al., 2019) , have enabled robust contextualized representation of natural language, which, in turn, have enabled significant performance increases on the QA task. Each QA model (reader) consists of a base representation and an output feedforward layer which produces two sets of scores: (i) scores for each input token that indicate the likelihood of an answer span starting at the token offset, and (ii) scores for each input token that indicate the likelihood of an answer span ending at the token offset.", |
|
"cite_spans": [ |
|
{ |
|
"start": 64, |
|
"end": 86, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 95, |
|
"end": 117, |
|
"text": "(Radford et al., 2019)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Document Reading", |
|
"sec_num": "2.2.1" |
|
}, |
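
{

"text": "To make this concrete, the following minimal sketch shows how a reader's two sets of scores translate into an answer span; the checkpoint name is illustrative, and any QA model trained with the HuggingFace Transformers API could be substituted.\n\nimport torch\nfrom transformers import AutoTokenizer, AutoModelForQuestionAnswering\n\nname = \"distilbert-base-cased-distilled-squad\"  # illustrative checkpoint\ntok = AutoTokenizer.from_pretrained(name)\nmodel = AutoModelForQuestionAnswering.from_pretrained(name)\n\nquestion = \"Who wrote Hamlet?\"\npassage = \"Hamlet is a tragedy written by William Shakespeare.\"\ninputs = tok(question, passage, return_tensors=\"pt\")\nwith torch.no_grad():\n    out = model(**inputs)\n# out.start_logits / out.end_logits hold one score per input token\nstart = int(out.start_logits.argmax())\nend = int(out.end_logits.argmax())\nanswer = tok.decode(inputs[\"input_ids\"][0][start : end + 1])",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Document Reading",

"sec_num": "2.2.1"

},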
|
{ |
|
"text": "In this section, we review the architecture for Neu-ralQA, as well as design decisions and supported workflows. The core modules for NeuralQA (Fig. 2) include a user interface, retriever, expander, and reader. Each of these modules are implemented as extensible python classes (to facilitate code reuse and incremental development), and are exposed as REST API endpoints that can be either consumed by 3rd party applications or interacted with via the NeuralQA user interface.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 142, |
|
"end": 151, |
|
"text": "(Fig. 2)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "NeuralQA System Architecture", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The retriever supports the execution of queries on an existing ElasticSearch instance, using the industry standard BM25 scoring algorithm.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Retriever", |
|
"sec_num": "3.1" |
|
}, |
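
{

"text": "As an illustration, a BM25-backed query against an existing instance might look as follows; the host, index, and field names here are assumptions rather than NeuralQA defaults.\n\nfrom elasticsearch import Elasticsearch\n\nes = Elasticsearch(\"http://localhost:9200\")  # assumed host\nquery = \"How many rose species are found in the Montreal Botanical Garden?\"\n# BM25 is ElasticSearch's default similarity for match queries\nbody = {\"query\": {\"match\": {\"passage_text\": query}}, \"size\": 5}\nresults = es.search(index=\"documents\", body=body)\nfor hit in results[\"hits\"][\"hits\"]:\n    print(hit[\"_score\"], hit[\"_source\"][\"passage_text\"][:80])",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Retriever",

"sec_num": "3.1"

},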
|
{ |
|
"text": "In practice, open corpus documents can be of arbitrary length (sometimes including thousands of tokens) and are frequently indexed for retrieval as is. On the other hand, document reader models have limits on the maximum number of tokens they can process in a single pass (e.g., BERT-based models can process a maximum of 512 tokens). Thus, retrieving large documents can incur latency costs, as a reader will have to first split the document into Figure 2 : The NeuralQA Architecture is comprised of four primary modules. (a) User interface: enables user queries and visualizes results from the retriever and reader (b) Contextual Query Expander: offers options for generating query expansion terms using an MLM (c) Retriever: leverages the BM25 scoring algorithm in retrieving a list of candidate passages; it also optionally condenses lengthy documents to smaller passages via 3.1 RelSnip. (d) Document Reader: identifies answer spans within documents (where available) and provides explanations for each prediction. manageable chunks, and then process each chunk individually. To address this issue, NeuralQA introduces RelSnip, a method for constructing smaller documents from lengthy documents. RelSnip is implemented as follows: For each retrieved document, we apply a highlighter (Lucene Unified Highlighter), which breaks the document into fragments of size k f rag and uses the BM25 algorithm to score each fragment as if they were individual documents in the corpus. Next, we concatenate the top n fragments as a new document, which is then processed by the reader. RelSnip relies on the simplifying assumption that fragments with higher match scores contain more relevant information. As an illustrative example, RelSnip can yield a document of 400 tokens (depending on k f rag and n ) from a document containing 10,000 tokens. In practice, this can translate to 25x increase in speed.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 448, |
|
"end": 456, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Condensing Passages with RelSnip", |
|
"sec_num": "3.1.1" |
|
}, |
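
{

"text": "A minimal sketch of RelSnip under these definitions, using ElasticSearch's unified highlighter; the host, index, and field names are assumptions, and fragment_size is measured in characters rather than tokens.\n\nfrom elasticsearch import Elasticsearch\n\nes = Elasticsearch(\"http://localhost:9200\")  # assumed host\n\ndef relsnip(query, index=\"documents\", field=\"passage_text\", k_frag=150, n=5):\n    body = {\n        \"query\": {\"match\": {field: query}},\n        \"highlight\": {\"fields\": {field: {\n            \"type\": \"unified\",  # Lucene Unified Highlighter\n            \"fragment_size\": k_frag,  # break each document into fragments\n            \"number_of_fragments\": n,  # keep the top n fragments\n            \"order\": \"score\",  # rank fragments by match score\n        }}},\n    }\n    hits = es.search(index=index, body=body)[\"hits\"][\"hits\"]\n    # Concatenate the top fragments of each hit into a new, smaller document\n    return [\" \".join(h.get(\"highlight\", {}).get(field, [])) for h in hits]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Condensing Passages with RelSnip",

"sec_num": "3.1.1"

},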
|
{ |
|
"text": "CQE relies on the assumption that an MLM which has been fine-tuned on the target document corpus contains implicit information (Petroni et al., 2019) on the target corpus. The goal is to exploit this information in identifying relevant query expansion terms. Ideally, we want to expand a query, such that expansion tokens serve to increase recall, while adding minimal noise and without significantly altering the semantics of the original query. We implement CQE as follows: First, we identify a set of expansion candidate tokens. For each token t i in the query t query , we use the SpaCy (Honni-bal and Montani, 2017) library to infer its part of speech tag t ipos and apply a filter f rule to determine if it is added to a list of candidate tokens for expansion t candidates . Next, we construct intermediate versions of the original query, in which each token in t candidates is masked, and an MLM (BERT) predicts the top n tokens that are contextually most likely to complete the query. These predicted tokens t expansion can then be added to the original query as expansion terms.", |
|
"cite_spans": [ |
|
{ |
|
"start": 127, |
|
"end": 149, |
|
"text": "(Petroni et al., 2019)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contextual Query Expansion (CQE)", |
|
"sec_num": "3.2.1" |
|
}, |
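
{

"text": "A minimal sketch of the candidate-selection and masking steps, assuming spaCy's small English model and the HuggingFace fill-mask pipeline (the exact rule set and models used in NeuralQA may differ):\n\nimport spacy\nfrom transformers import pipeline\n\nnlp = spacy.load(\"en_core_web_sm\")\nfill_mask = pipeline(\"fill-mask\", model=\"bert-base-uncased\")\n\ndef cqe_candidates(query, top_n=3):\n    expansions = {}\n    for tok in nlp(query):\n        # f_rule: only nouns/adjectives that are not named entities qualify\n        if tok.pos_ in (\"NOUN\", \"ADJ\") and tok.ent_type_ == \"\":\n            # Mask the candidate and let the MLM suggest contextual completions\n            masked = query.replace(tok.text, fill_mask.tokenizer.mask_token, 1)\n            # Each prediction carries a token_str and a confidence score\n            expansions[tok.text] = fill_mask(masked, top_k=top_n)\n    return expansions",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Contextual Query Expansion (CQE)",

"sec_num": "3.2.1"

},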
|
{ |
|
"text": "To minimize the chance of introducing spurious terms that are unrelated to the original query, we find that two quality control measures are useful. First, we leverage confidence scores returned by the MLM and only accept expansion tokens above a certain threshold (e.g., k thresh = 0.5) where k thresh is a hyperparameter. Secondly, we find that a conservative filter in selecting token expansion candidates can mitigate the introduction of spurious terms. Our filter rule f rule currently only expands tokens that are either nouns or adjectives t ipos \u2208 (noun, adj) and are not named entities; tokens for other parts of speech are not expanded. Finally, the list of expansion terms are further cleaned by the removal of duplicate terms, punctuation, and stop words. Fig. 3 shows a qualitative comparison of query expansion terms suggested by a static word embedding and an MLM for a given query. The NeuralQA interface offers a user-in-the-loop visualization of CQE which highlights POS tags for each token to help the user make sense of expansion values. The user can then select expansion candidates for inclusion in retrieval. Figure 3 : Examples of qualitative results from applying query expansion: (a) Query expansion using SpaCy word embeddings to identify the most similar words for each expansion candidate token. This approach yields terms with low relevance (e.g., terms related to work (jobs, hiring) and fruits (apple, blackberry, pears) are not relevant to the current query context) (b) Query expansion using an MLM (BERT). This approach yields terms that are absent in the original query (e.g., mac, macintosh, personal) but are, in general, relevant to the current query.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 768, |
|
"end": 774, |
|
"text": "Fig. 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1132, |
|
"end": 1140, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Contextual Query Expansion (CQE)", |
|
"sec_num": "3.2.1" |
|
}, |
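
{

"text": "Continuing the sketch above, the quality-control step can be expressed as a filter over the MLM's predictions; the threshold value and stop-word list here are illustrative.\n\ndef filter_expansions(preds, query_tokens, k_thresh=0.5, stopwords=frozenset({\"the\", \"a\", \"of\"})):\n    terms = []\n    for p in preds:  # predictions from the fill-mask pipeline\n        term = p[\"token_str\"].strip().lower()\n        if p[\"score\"] < k_thresh:  # drop low-confidence suggestions\n            continue\n        if term in stopwords or not term.isalpha():  # drop stop words and punctuation\n            continue\n        if term in query_tokens or term in terms:  # drop duplicates\n            continue\n        terms.append(term)\n    return terms",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Contextual Query Expansion (CQE)",

"sec_num": "3.2.1"

},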
|
{ |
|
"text": "The reader module implements an interface for predicting answer spans, given a query and context documents. Underneath, it loads any QA model trained using the HuggingFace Transformers API (Wolf et al., 2019) . Documents that exceed the maximum token size for the reader are automatically split into chunks with a configurable stride and answer spans provided for each chunk. All answers are then sorted, based on an associated score (start and end token softmax probabilities).", |
|
"cite_spans": [ |
|
{ |
|
"start": 189, |
|
"end": 208, |
|
"text": "(Wolf et al., 2019)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reader", |
|
"sec_num": "3.3" |
|
}, |
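
{

"text": "The chunking behaviour can be sketched with the tokenizer's built-in overflow support; the checkpoint name and stride value are illustrative.\n\nfrom transformers import AutoTokenizer\n\ntok = AutoTokenizer.from_pretrained(\"distilbert-base-cased-distilled-squad\")\n\ndef chunk(question, document, stride=128):\n    # Split an over-long document into overlapping, reader-sized chunks;\n    # the question is repeated in every chunk.\n    enc = tok(\n        question,\n        document,\n        truncation=\"only_second\",  # truncate the document, never the question\n        max_length=tok.model_max_length,\n        stride=stride,  # overlap between consecutive chunks\n        return_overflowing_tokens=True,\n    )\n    return enc[\"input_ids\"]  # one list of token ids per chunk",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Reader",

"sec_num": "3.3"

},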
|
{ |
|
"text": "Finally, each reader model provides a method that generates gradient-based explanations (Vanilla Gradients (Simonyan et al., 2013; Erhan et al., 2009; Baehrens et al., 2010) ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 130, |
|
"text": "(Simonyan et al., 2013;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 131, |
|
"end": 150, |
|
"text": "Erhan et al., 2009;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 151, |
|
"end": 173, |
|
"text": "Baehrens et al., 2010)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reader", |
|
"sec_num": "3.3" |
|
}, |
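
{

"text": "A minimal sketch of a vanilla gradient explanation for a reader, computing the gradient of the top start-token score with respect to the input embeddings (the checkpoint name is illustrative):\n\nimport torch\nfrom transformers import AutoTokenizer, AutoModelForQuestionAnswering\n\nname = \"distilbert-base-cased-distilled-squad\"\ntok = AutoTokenizer.from_pretrained(name)\nmodel = AutoModelForQuestionAnswering.from_pretrained(name)\n\ndef explain(question, passage):\n    inputs = tok(question, passage, return_tensors=\"pt\")\n    embeds = model.get_input_embeddings()(inputs[\"input_ids\"])\n    embeds.retain_grad()\n    out = model(inputs_embeds=embeds, attention_mask=inputs[\"attention_mask\"])\n    out.start_logits.max().backward()  # score of the most likely start token\n    # Token-level saliency: L2 norm of the gradient at each input embedding\n    saliency = embeds.grad.norm(dim=-1).squeeze(0)\n    tokens = tok.convert_ids_to_tokens(inputs[\"input_ids\"][0])\n    return list(zip(tokens, saliency.tolist()))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Reader",

"sec_num": "3.3"

},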
|
{ |
|
"text": "The NeuralQA user interface (Fig. 4) seeks to aid the user in performing queries and in sensemaking of underlying model behaviour. As a first step, we provide a visualization of retrieved document highlights that indicate what portions of the retrieved document contributed to their relevance ranking. Next, following work done in Al-lenNLP Interpret , we implement gradient-based explanations that help the user understand what sections of the input (question and passage) were most relevant to the choice of answer span. We do not use attention weights, as they have have been shown to be unfaithful explanations of model behaviour (Jain and Wallace, 2019; Ser-rano and Smith, 2019) and not intuitive for end user sensemaking. We also implement a document and answer tagging scheme that indicates the source document from which an answer span is derived.", |
|
"cite_spans": [ |
|
{ |
|
"start": 634, |
|
"end": 658, |
|
"text": "(Jain and Wallace, 2019;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 659, |
|
"end": 684, |
|
"text": "Ser-rano and Smith, 2019)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 28, |
|
"end": 36, |
|
"text": "(Fig. 4)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "User Interface", |
|
"sec_num": "3.3.1" |
|
}, |
|
{ |
|
"text": "NeuralQA is scalable, as it is built on industry standard OSS tools that are designed for scale (ElasticSearch, HuggingFace Transformers API, FastAPI, Uvicorn asgi web server). We have tested deployments of NeuralQA on docker containers running on CPU machine clusters which rely on ElasticSearch clusters. The UI is responsive and optimized to work on desktop, as well as on mobile devices.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "User Interface", |
|
"sec_num": "3.3.1" |
|
}, |
|
{ |
|
"text": "NeuralQA implements a command line interface for instantiating the library, and a declarative approach for specifying the parameters for each module. At run time, users can provide a command line argument specifying the location of a configuration YAML file 2 . If no configuration file is found in the provided location and in the current folder, Neu-ralQA will create a default configuration file that can be subsequently modified. As an illustrative example, users can configure properties of the user interface (views to show or hide, title and description of the page, etc.), retriever properties (a list of supported retriever indices), and reader properties (a list of supported models that are loaded into memory on application startup).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Configuration and Workflow", |
|
"sec_num": "3.4" |
|
}, |
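
{

"text": "For illustration, a default configuration could be materialized as below; the schema shown is a hypothetical mirror of the options described above, not NeuralQA's exact field names.\n\nimport yaml\n\n# Hypothetical schema mirroring the UI/retriever/reader options described above\ndefault_config = {\n    \"ui\": {\"title\": \"NeuralQA\", \"description\": \"QA over my corpus\", \"views\": {\"advanced\": True}},\n    \"retriever\": {\"indices\": [{\"name\": \"documents\", \"host\": \"http://localhost:9200\"}]},\n    \"reader\": {\"models\": [{\"name\": \"distilbert-base-cased-distilled-squad\"}]},\n}\n\nwith open(\"config.yaml\", \"w\") as f:\n    yaml.safe_dump(default_config, f)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Configuration and Workflow",

"sec_num": "3.4"

},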
|
{ |
|
"text": "NeuralQA is designed to support use cases and personas at various levels of complexities. We discuss two specific personas briefly below. Data Scientists: Janice, a data scientist, has extensive experience applying a collection of machine learning models to financial data. Recently, she has started a new project, in which the deliverable includes a QA model that is skillful at answering factoid questions on financial data. As part of this work, Janice has successfully fine-tuned a set of transformer models on the QA task, but would like to better understand how the model behaves. More importantly, she would like to enable visual interaction with the model for her broader team. To achieve this, Janice hosts NeuralQA on an internal server accessible to her team. Via a configuration file, she can specify a set of trained models, as well as enable user selection of reader/retriever parameters. This workflow also extends to other user types Figure 4 : The NeuralQA UI. a.) Basic view (mobile) for closed domain QA, i.e., the user provides a question and passage. b.) Advanced options view (desktop mode) for open domain QA. The user can select the retriever (e.g., # of returned documents, toggle RelSnip, fragment size k f rag ), set expander and reader parameters (BERT reader model, token stride size)). View also shows a list of returned documents (D0-D4) with highlights that match query terms; a list of answers (A0) with gradient-based explanation of which tokens impact the selected answer span.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 950, |
|
"end": 958, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "User Personas", |
|
"sec_num": "3.4.1" |
|
}, |
|
{ |
|
"text": "(such as hobbyists, entry level data scientists, or researchers) who want an interface for qualitative inspection of custom reader models on custom document indices. Engineering Teams: Candice manages the internal knowledge base service for her startup. They have an internal ElasticSearch instance for search, but would like to provide additional value via QA capabilities. To achieve this, Candice provisions a set of docker containers running instances for NeuralQA and then modifies the frontend of their current search application to make requests to the NeuralQA REST API and serve answer spans.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "User Personas", |
|
"sec_num": "3.4.1" |
|
}, |
|
{ |
|
"text": "QA systems that integrate deep learning models remain an active area of research and practice. For example, AllenNLP Interpret provides a demonstration interface and sample code for interpreting a set of AllenNLP models across multiple tasks. Similarly, Chakravarti et al. (2019) provide a gRPC-based orchestration flow for QA. However, while these projects provide a graphical user interface (GUI), their installation process is complex and requires specialized code to adapt them to existing infrastructure, such as retriever instances. Several open source projects also offer a programmatic interface for inference (e.g., Hug-ginFace Pipelines), as well as joint retrieval paired with reading (e.g., Deepset Haystack). NeuralQA makes progress along these lines, by providing an extensible code base, a low-code declarative configuration interface, tools for query expansion and a visual interface for sensemaking of results. It supports a local research/development workflow (via the pip) package manager and scaled deployment via containerization (we provide a Dockerfile). We believe this ease of use can serve to remove barriers to experimentation for researchers, and accelerate the deployment of QA interfaces for experienced teams.", |
|
"cite_spans": [ |
|
{ |
|
"start": 254, |
|
"end": 279, |
|
"text": "Chakravarti et al. (2019)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "In this paper, we presented NeuralQA -a usable library for question answering on large datasets. NeuralQA is useful for developers interested in qualitatively exploring QA models for their custom datasets, as well as for enterprise teams seeking a flexible QA interface/API for their customers. Neu-ralQA is under active development, and roadmap features include support for a Solr retriever, additional model explanation methods and additional query expansion methods such as RM3 (Lavrenko and Croft, 2017) . Future work will also explore empirical evaluation of our CQE and RelSnip implementation to better understand their strengths and limitations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 481, |
|
"end": 507, |
|
"text": "(Lavrenko and Croft, 2017)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "ElasticSearch https://www.elastic.co", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A sample configuration file can be found on Github.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The author thanks Melanie Beck, Andrew Reed, Chris Wallace, Grant Custer, Danielle Thorpe and other members of the Cloudera Fast Forward team for their valuable feedback.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Umass at trec 2004: Novelty and hard", |
|
"authors": [ |
|
{ |
|
"first": "Nasreen", |
|
"middle": [], |
|
"last": "Abdul-Jaleel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Allan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bruce", |
|
"middle": [], |
|
"last": "Croft", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando", |
|
"middle": [], |
|
"last": "Diaz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leah", |
|
"middle": [], |
|
"last": "Larkey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaoyan", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{

"first": "Mark",

"middle": [

"D"

],

"last": "Smucker",

"suffix": ""

},

{

"first": "Courtney",

"middle": [],

"last": "Wade",

"suffix": ""

}
|
], |
|
"year": 2004, |
|
"venue": "Computer Science Department Faculty Publication Series", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nasreen Abdul-Jaleel, James Allan, W Bruce Croft, Fernando Diaz, Leah Larkey, Xiaoyan Li, Mark D Smucker, and Courtney Wade. 2004. Umass at trec 2004: Novelty and hard. Computer Science Depart- ment Faculty Publication Series, page 189.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Applying BERT to document retrieval with birch", |
|
"authors": [ |
|
{

"first": "Zeynep",

"middle": [],

"last": "Akkalyoncu Yilmaz",

"suffix": ""

},

{

"first": "Shengjin",

"middle": [],

"last": "Wang",

"suffix": ""

},

{

"first": "Wei",

"middle": [],

"last": "Yang",

"suffix": ""

},

{

"first": "Haotian",

"middle": [],

"last": "Zhang",

"suffix": ""

},

{

"first": "Jimmy",

"middle": [],

"last": "Lin",

"suffix": ""

}
|
], |
|
"year": 2019, |
|
"venue": "Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "19--24", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-3004" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zeynep Akkalyoncu Yilmaz, Shengjin Wang, Wei Yang, Haotian Zhang, and Jimmy Lin. 2019. Ap- plying BERT to document retrieval with birch. In Empirical Methods in Natural Language Processing (EMNLP-IJCNLP): System Demonstrations, pages 19-24, Hong Kong, China. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "How to explain individual classification decisions", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Baehrens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timon", |
|
"middle": [], |
|
"last": "Schroeter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Harmeling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Motoaki", |
|
"middle": [], |
|
"last": "Kawanabe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katja", |
|
"middle": [], |
|
"last": "Hansen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Klaus-Robert", |
|
"middle": [], |
|
"last": "M\u00e3\u017eller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "11", |
|
"issue": "", |
|
"pages": "1803--1831", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Baehrens, Timon Schroeter, Stefan Harmel- ing, Motoaki Kawanabe, Katja Hansen, and Klaus- Robert M\u00c3\u017eller. 2010. How to explain individual classification decisions. Journal of Machine Learn- ing Research, 11(Jun):1803-1831.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "CFO: A framework for building production NLP systems", |
|
"authors": [ |
|
{ |
|
"first": "Rishav", |
|
"middle": [], |
|
"last": "Chakravarti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cezar", |
|
"middle": [], |
|
"last": "Pendus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrzej", |
|
"middle": [], |
|
"last": "Sakrajda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Ferritto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lin", |
|
"middle": [], |
|
"last": "Pan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Glass", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vittorio", |
|
"middle": [], |
|
"last": "Castelli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Murdock", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Radu", |
|
"middle": [], |
|
"last": "Florian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Avi", |
|
"middle": [], |
|
"last": "Sil", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "EMNLP-IJCNLP 2019: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "31--36", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-3006" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rishav Chakravarti, Cezar Pendus, Andrzej Sakrajda, Anthony Ferritto, Lin Pan, Michael Glass, Vittorio Castelli, J William Murdock, Radu Florian, Salim Roukos, and Avi Sil. 2019. CFO: A framework for building production NLP systems. In EMNLP- IJCNLP 2019: System Demonstrations, pages 31- 36, Hong Kong, China. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Reading wikipedia to answer opendomain questions", |
|
"authors": [ |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Fisch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antoine", |
|
"middle": [], |
|
"last": "Bordes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer open- domain questions. ACL 2017.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1810.04805" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Anomagram: An interactive visualization for training and evaluating autoencoders on the task of anomaly detection", |
|
"authors": [ |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Dibia", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Victor Dibia. 2020a. Anomagram: An interactive visualization for training and evaluating autoen- coders on the task of anomaly detection. ArXiv. https://github.com/victordibia/anomagram.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Victor Dibia", |
|
"authors": [], |
|
"year": 2020, |
|
"venue": "Convnet playground: A learning tool for exploring representations learned by convolutional neural networks. arXiv", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Victor Dibia. 2020b. Convnet playground: A learning tool for exploring representations learned by convolutional neural networks. arXiv. https://github.com/fastforwardlabs/convnetplayground.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Visualizing higher-layer features of a deep network", |
|
"authors": [ |
|
{ |
|
"first": "Dumitru", |
|
"middle": [], |
|
"last": "Erhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aaron", |
|
"middle": [], |
|
"last": "Courville", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pascal", |
|
"middle": [], |
|
"last": "Vincent", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "1341", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dumitru Erhan, Yoshua Bengio, Aaron Courville, and Pascal Vincent. 2009. Visualizing higher-layer fea- tures of a deep network. University of Montreal, 1341(3):1.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Neural approaches to conversational ai", |
|
"authors": [ |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michel", |
|
"middle": [], |
|
"last": "Galley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lihong", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1371--1374", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jianfeng Gao, Michel Galley, and Lihong Li. 2018. Neural approaches to conversational ai. In The 41st International ACM SIGIR Conference on Re- search & Development in Information Retrieval, pages 1371-1374.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Realm: Retrievalaugmented language model pre-training", |
|
"authors": [ |
|
{ |
|
"first": "Kelvin", |
|
"middle": [], |
|
"last": "Guu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zora", |
|
"middle": [], |
|
"last": "Tung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Panupong", |
|
"middle": [], |
|
"last": "Pasupat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2002.08909" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasu- pat, and Ming-Wei Chang. 2020. Realm: Retrieval- augmented language model pre-training. arXiv preprint arXiv:2002.08909.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Honnibal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ines", |
|
"middle": [], |
|
"last": "Montani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embed- dings, convolutional neural networks and incremen- tal parsing. To appear.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Attention is not explanation", |
|
"authors": [ |
|
{ |
|
"first": "Sarthak", |
|
"middle": [], |
|
"last": "Jain", |
|
"suffix": "" |
|
}, |
|
{

"first": "Byron",

"middle": [

"C"

],

"last": "Wallace",

"suffix": ""

}
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sarthak Jain and Byron C Wallace. 2019. Attention is not explanation. NAACL 2019.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Dense passage retrieval for open-domain question answering", |
|
"authors": [ |
|
{ |
|
"first": "Vladimir", |
|
"middle": [], |
|
"last": "Karpukhin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barlas", |
|
"middle": [], |
|
"last": "Oguz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sewon", |
|
"middle": [], |
|
"last": "Min", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ledell", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Edunov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wentau", |
|
"middle": [], |
|
"last": "Yih", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2004.04906" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vladimir Karpukhin, Barlas Oguz, Sewon Min, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen- tau Yih. 2020. Dense passage retrieval for open-domain question answering. arXiv preprint arXiv:2004.04906.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "RankQA: Neural question answering with answer re-ranking", |
|
"authors": [ |
|
{ |
|
"first": "Bernhard", |
|
"middle": [], |
|
"last": "Kratzwald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Eigenmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Feuerriegel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6076--6085", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1611" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bernhard Kratzwald, Anna Eigenmann, and Stefan Feuerriegel. 2019. RankQA: Neural question an- swering with answer re-ranking. In ACL, pages 6076-6085, Florence, Italy. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Query expansion using word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Saar", |
|
"middle": [], |
|
"last": "Kuzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Shtok", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Kurland", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 25th ACM international on conference on information and knowledge management", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1929--1932", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Saar Kuzi, Anna Shtok, and Oren Kurland. 2016. Query expansion using word embeddings. In Pro- ceedings of the 25th ACM international on confer- ence on information and knowledge management, pages 1929-1932.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Relevancebased language models", |
|
"authors": [ |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Lavrenko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bruce", |
|
"middle": [], |
|
"last": "Croft", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "ACM SIGIR Forum", |
|
"volume": "51", |
|
"issue": "", |
|
"pages": "260--267", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Victor Lavrenko and W Bruce Croft. 2017. Relevance- based language models. In ACM SIGIR Forum, vol- ume 51, pages 260-267. ACM New York, NY, USA.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Ranking paragraphs for improving answer recall in open-domain question answering", |
|
"authors": [ |
|
{ |
|
"first": "Jinhyuk", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Seongjun", |
|
"middle": [], |
|
"last": "Yun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hyunjae", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miyoung", |
|
"middle": [], |
|
"last": "Ko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaewoo", |
|
"middle": [], |
|
"last": "Kang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "565--569", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1053" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jinhyuk Lee, Seongjun Yun, Hyunjae Kim, Miyoung Ko, and Jaewoo Kang. 2018. Ranking paragraphs for improving answer recall in open-domain ques- tion answering. In Proceedings of the 2018 Con- ference on Empirical Methods in Natural Language Processing, pages 565-569, Brussels, Belgium. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Latent retrieval for weakly supervised open domain question answering", |
|
"authors": [ |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1906.00300" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. arXiv preprint arXiv:1906.00300.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "The neural hype and comparisons against weak baselines", |
|
"authors": [ |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "ACM SIGIR Forum", |
|
"volume": "52", |
|
"issue": "", |
|
"pages": "40--51", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jimmy Lin. 2019. The neural hype and comparisons against weak baselines. In ACM SIGIR Forum, vol- ume 52, pages 40-51. ACM New York, NY, USA.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Positional relevance model for pseudo-relevance feedback", |
|
"authors": [ |
|
{ |
|
"first": "Yuanhua", |
|
"middle": [], |
|
"last": "Lv", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chengxiang", |
|
"middle": [], |
|
"last": "Zhai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "579--586", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuanhua Lv and ChengXiang Zhai. 2010. Positional relevance model for pseudo-relevance feedback. In Proceedings of the 33rd international ACM SIGIR conference on Research and development in infor- mation retrieval, pages 579-586.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Visualizing memorization in rnns", |
|
"authors": [ |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Madsen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.23915/distill.00016" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andreas Madsen. 2019. Visualizing memorization in rnns. Distill. https://distill.pub/2019/memorization- in-rnns.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3111--3119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Association for Computational Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "Fabio", |
|
"middle": [], |
|
"last": "Petroni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Rockt\u00e4schel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Riedel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anton", |
|
"middle": [], |
|
"last": "Bakhtin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuxiang", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Miller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2463--2473", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1250" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fabio Petroni, Tim Rockt\u00e4schel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowl- edge bases? In Proceedings of the 2019 Confer- ence on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 2463-2473, Hong Kong, China. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Bert with history answer embedding for conversational question answering", |
|
"authors": [ |
|
{ |
|
"first": "Chen", |
|
"middle": [], |
|
"last": "Qu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liu", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Minghui", |
|
"middle": [], |
|
"last": "Qiu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bruce", |
|
"middle": [], |
|
"last": "Croft", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yongfeng", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Iyyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1133--1136", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chen Qu, Liu Yang, Minghui Qiu, W Bruce Croft, Yongfeng Zhang, and Mohit Iyyer. 2019. Bert with history answer embedding for conversational ques- tion answering. In Proceedings of the 42nd Inter- national ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1133- 1136.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Language models are unsupervised multitask learners", |
|
"authors": [ |
|
{ |
|
"first": "Alec", |
|
"middle": [], |
|
"last": "Radford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rewon", |
|
"middle": [], |
|
"last": "Child", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Luan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dario", |
|
"middle": [], |
|
"last": "Amodei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "OpenAI Blog", |
|
"volume": "1", |
|
"issue": "8", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "The probabilistic relevance framework: BM25 and beyond", |
|
"authors": [ |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Robertson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hugo", |
|
"middle": [], |
|
"last": "Zaragoza", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and be- yond. Now Publishers Inc.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Is attention interpretable?", |
|
"authors": [ |
|
{ |
|
"first": "Sofia", |
|
"middle": [], |
|
"last": "Serrano", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2931--2951", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1282" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sofia Serrano and Noah A. Smith. 2019. Is attention interpretable? In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 2931-2951, Florence, Italy. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Deep inside convolutional networks: Visualising image classification models and saliency maps", |
|
"authors": [ |
|
{ |
|
"first": "Karen", |
|
"middle": [], |
|
"last": "Simonyan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrea", |
|
"middle": [], |
|
"last": "Vedaldi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Zisserman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1312.6034" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Karen Simonyan, Andrea Vedaldi, and Andrew Zisser- man. 2013. Deep inside convolutional networks: Vi- sualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Seq2Seq-Vis: A Visual Debugging Tool for Sequence-to-Sequence Models", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Strobelt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Gehrmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Behrisch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Perer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Pfister", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Rush", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "H. Strobelt, S. Gehrmann, M. Behrisch, A. Perer, H. Pfister, and A. M. Rush. 2018. Seq2Seq-Vis: A Visual Debugging Tool for Sequence-to-Sequence Models. ArXiv e-prints.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u0141ukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5998--6008", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Al-lenNLP Interpret: A framework for explaining predictions of NLP models", |
|
"authors": [ |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Wallace", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jens", |
|
"middle": [], |
|
"last": "Tuyls", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junlin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sanjay", |
|
"middle": [], |
|
"last": "Subramanian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Gardner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sameer", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eric Wallace, Jens Tuyls, Junlin Wang, Sanjay Subra- manian, Matt Gardner, and Sameer Singh. 2019. Al- lenNLP Interpret: A framework for explaining pre- dictions of NLP models. In Empirical Methods in Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "R3: Reinforced ranker-reader for open-domain question answering. Thirty-Second AAAI Conference on Artificial Intelligence", |
|
"authors": [ |
|
{ |
|
"first": "Shuohang", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mo", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaoxiao", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiguo", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Klinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shiyu", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gerry", |
|
"middle": [], |
|
"last": "Tesauro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bowen", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jing", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerry Tesauro, Bowen Zhou, and Jing Jiang. 2018. R3: Reinforced ranker-reader for open-domain question answering. Thirty-Second AAAI Conference on Arti- ficial Intelligence.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Fastqa: A simple and efficient neural architecture for question answering", |
|
"authors": [ |
|
{ |
|
"first": "Dirk", |
|
"middle": [], |
|
"last": "Weissenborn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Georg", |
|
"middle": [], |
|
"last": "Wiese", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laura", |
|
"middle": [], |
|
"last": "Seiffe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dirk Weissenborn, Georg Wiese, and Laura Seiffe. 2017. Fastqa: A simple and efficient neu- ral architecture for question answering. CoRR, abs/1703.04816.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Huggingface's transformers: State-of-the-art natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lysandre", |
|
"middle": [], |
|
"last": "Debut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Sanh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Chaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clement", |
|
"middle": [], |
|
"last": "Delangue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Moi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierric", |
|
"middle": [], |
|
"last": "Cistac", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Rault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R'emi", |
|
"middle": [], |
|
"last": "Louf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Morgan", |
|
"middle": [], |
|
"last": "Funtowicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jamie", |
|
"middle": [], |
|
"last": "Brew", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "ArXiv", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R'emi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingface's trans- formers: State-of-the-art natural language process- ing. ArXiv, abs/1910.03771.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "End-to-end open-domain question answering with bertserini", |
|
"authors": [ |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuqing", |
|
"middle": [], |
|
"last": "Xie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aileen", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xingyu", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luchen", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kun", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1902.01718" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. 2019. End-to-end open-domain question answering with bertserini. arXiv preprint arXiv:1902.01718.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Wikiqa: A challenge dataset for open-domain question answering", |
|
"authors": [ |
|
{ |
|
"first": "Yi", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yih", |
|
"middle": [], |
|
"last": "Wen-Tau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Meek", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 conference on empirical methods in natural language processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2013--2018", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. Wikiqa: A challenge dataset for open-domain ques- tion answering. In Proceedings of the 2015 confer- ence on empirical methods in natural language pro- cessing, pages 2013-2018.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"text": "Get AnswerWhat is the goal of the fourth amendm List of retriever results with matched keyword highlights QA models trained with the HuggingFace Transformers API", |
|
"content": "<table><tr><td/><td>term 1</td><td>term 2</td><td>term 3</td><td>\u2026.</td><td colspan=\"2\">term n</td><td colspan=\"2\">Reader (BERT)</td><td/><td/></tr><tr><td/><td/><td/><td/><td/><td/><td/><td>DistilBERT</td><td>BERT</td><td>ALBERT</td><td>\u2026.</td><td>X Model</td></tr><tr><td/><td colspan=\"4\">term 1 term 2 term 3 term 4 term 5</td><td>\u2026</td><td>term n</td><td/><td/><td/><td/></tr><tr><td>Retriever Results</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td/><td/><td/><td/><td colspan=\"2\">Explainer</td><td/><td/></tr><tr><td/><td/><td/><td/><td/><td/><td/><td>gradients</td><td>Gradcam</td><td colspan=\"3\">\u2026. int. gradients</td></tr><tr><td>Answers</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>passage 1</td><td>passage 2</td><td>passage 3</td><td>\u2026.</td><td colspan=\"2\">passage k</td><td/><td/><td/><td/></tr><tr><td/><td/><td/><td/><td/><td/><td/><td>span 1</td><td>span 2</td><td>span 3</td><td>\u2026.</td><td>span n</td></tr><tr><td>Visualization of explanations List of answer spans ranked by score</td><td>RelSnip</td><td>RelSnip</td><td>RelSnip</td><td>\u2026.</td><td colspan=\"2\">RelSnip</td><td>exp 1</td><td>exp 2</td><td>exp 3</td><td>\u2026.</td><td>exp n</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |