{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:24:46.862408Z"
},
"title": "Knowledge Base Index Compression via Dimensionality and Precision Reduction",
"authors": [
{
"first": "Vil\u00e9m",
"middle": [],
"last": "Zouhar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Saarland University",
"location": {
"country": "Germany"
}
},
"email": "[email protected]"
},
{
"first": "Marius",
"middle": [],
"last": "Mosbach",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Saarland University",
"location": {
"country": "Germany"
}
},
"email": "[email protected]"
},
{
"first": "Miaoran",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Saarland University",
"location": {
"country": "Germany"
}
},
"email": "[email protected]"
},
{
"first": "Dietrich",
"middle": [],
"last": "Klakow",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Saarland University",
"location": {
"country": "Germany"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Recently neural network based approaches to knowledge-intensive NLP tasks, such as question answering, started to rely heavily on the combination of neural retrievers and readers. Retrieval is typically performed over a large textual knowledge base (KB) which requires significant memory and compute resources, especially when scaled up. On HotpotQA we systematically investigate reducing the size of the KB index by means of dimensionality (sparse random projections, PCA, autoencoders) and numerical precision reduction. Our results show that PCA is an easy solution that requires very little data and is only slightly worse than autoencoders, which are less stable. All methods are sensitive to preand post-processing and data should always be centered and normalized both before and after dimension reduction. Finally, we show that it is possible to combine PCA with using 1bit per dimension. Overall we achieve (1) 100\u00d7 compression with 75%, and (2) 24\u00d7 compression with 92% original retrieval performance.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Recently neural network based approaches to knowledge-intensive NLP tasks, such as question answering, started to rely heavily on the combination of neural retrievers and readers. Retrieval is typically performed over a large textual knowledge base (KB) which requires significant memory and compute resources, especially when scaled up. On HotpotQA we systematically investigate reducing the size of the KB index by means of dimensionality (sparse random projections, PCA, autoencoders) and numerical precision reduction. Our results show that PCA is an easy solution that requires very little data and is only slightly worse than autoencoders, which are less stable. All methods are sensitive to preand post-processing and data should always be centered and normalized both before and after dimension reduction. Finally, we show that it is possible to combine PCA with using 1bit per dimension. Overall we achieve (1) 100\u00d7 compression with 75%, and (2) 24\u00d7 compression with 92% original retrieval performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recent approaches to knowledge-intensive NLP tasks combine neural network based models with a retrieval component that leverages dense vector representations (Guu et al., 2020; Lewis et al., 2020; Petroni et al., 2021) . The most straightforward example is question answering, where the retriever receives as input a question and returns relevant documents to be used by the reader (both encoder and decoder), which outputs the answer (Chen, 2020) . The same approach can also be applied in other contexts, such as fact-checking (Tchechmedjiev et al., 2019) or knowledgable dialogue (Dinan et al., 2018) . Moreover, this paradigm can also be applied to systems that utilize e.g. caching of contexts from the training corpus to provide better output, such as the k-nearest neighbours language model proposed by Khandelwal et al. (2019) or the dynamic gating language model mechanism by Yogatama et al. (2021) . All these pipelines are generalized as retrieving an artefact from a knowledge base (Zouhar et al., 2021) on which the reader is conditioned together with the query.",
"cite_spans": [
{
"start": 158,
"end": 176,
"text": "(Guu et al., 2020;",
"ref_id": "BIBREF9"
},
{
"start": 177,
"end": 196,
"text": "Lewis et al., 2020;",
"ref_id": "BIBREF15"
},
{
"start": 197,
"end": 218,
"text": "Petroni et al., 2021)",
"ref_id": "BIBREF25"
},
{
"start": 435,
"end": 447,
"text": "(Chen, 2020)",
"ref_id": "BIBREF3"
},
{
"start": 529,
"end": 557,
"text": "(Tchechmedjiev et al., 2019)",
"ref_id": "BIBREF29"
},
{
"start": 583,
"end": 603,
"text": "(Dinan et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 810,
"end": 834,
"text": "Khandelwal et al. (2019)",
"ref_id": "BIBREF17"
},
{
"start": 885,
"end": 907,
"text": "Yogatama et al. (2021)",
"ref_id": "BIBREF35"
},
{
"start": 994,
"end": 1015,
"text": "(Zouhar et al., 2021)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Crucially, all of the previous examples rely on the quality of the retrieval component and the knowledge base. The knowledge base is usually indexed by dense vector representations 1 and the retrieval component performs maximum similarity search, commonly using the inner product or the L 2 distance, to retrieve documents 2 from the knowledge base. Only the index alone takes up a large amount of size of the knowledge base, making deployment and experimentation very difficult. The retrieval speed is also dependent on the dimensionality of the index vector. An example of a large knowledge base is the work of Borgeaud et al. (2021) which performs retrieval over a database of 1.8 billion documents.",
"cite_spans": [
{
"start": 613,
"end": 635,
"text": "Borgeaud et al. (2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper focuses on the issue of compressing the index through dimensionality and precision reduction and makes the following contributions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Comparison of various unsupervised index compression methods for retrieval, including random projections, PCA, autoencoder, precision reduction and their combination. \u2022 Examination of effective pre-and postprocessing transformations, showing that centering and normalization are necessary for boosting the performance. \u2022 Analysis on the impact of adding irrelevant documents and retrieval errors. Recommendations for use by practicioners.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In Section 3, we describe the problem scenario and the experimental setup. We discuss the results of different compression methods in Section 4. We provide further analysis in Section 5 and conclude with usage recommendations in Section 6. The repository for this project is available open-source. 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Reducing index size. A thorough overview of the issue of dimensionality reduction in information retrieval in the context of dual encoders has been done by Luan et al. (2021) . Though in-depth and grounded in formal arguments, their study is focused on the limits and properties of dimension reduction in general (even with sparse representations) and the effect of document length on performance. In contrast to their work, this paper aims to compare more methods and give practical advice with experimental evidence.",
"cite_spans": [
{
"start": 156,
"end": 174,
"text": "Luan et al. (2021)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "A baseline for dimensionality reduction has been recently proposed by Izacard et al. (2020) in which they perform the reduction while training the document (and query) encoder by adding a low dimensional linear projection layer as the final output layer. Compared to our work, their approach is supervised.",
"cite_spans": [
{
"start": 70,
"end": 91,
"text": "Izacard et al. (2020)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In the concurrent work of Ma et al. (2021) , PCA is also used to reduce the size of the document index. Compared to our work, they perform PCA using the combination of all question and document vectors. We show in Figures 4 and 6 that this is not needed and the PCA transformation matrix can be estimated much more efficiently. Moreover, we use different unsupervised compression approaches for comparison and perform additional analysis of our findings.",
"cite_spans": [
{
"start": 26,
"end": 42,
"text": "Ma et al. (2021)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "An orthogonal approach to the issue of memory cost has been proposed by Yamada et al. (2021) . Instead of moving to another continuous vector representation, their proposed method maps original vectors to vectors of binary values which are trained using the signal from the downstream task. The pipeline, however, still relies on re-ranking using the uncompressed vectors. This method is different from ours and in Section 4.4 we show that they can be combined.",
"cite_spans": [
{
"start": 72,
"end": 92,
"text": "Yamada et al. (2021)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Finally, He et al. (2021) investigate filtering and k-means pruning for the task of kNN language modelling. This work also circumvents the issue of having to always perform an expensive retrieval of a large data store by determining whether the retrieval is actually needed for a given input.",
"cite_spans": [
{
"start": 9,
"end": 25,
"text": "He et al. (2021)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "3 Link will be available in the camera-ready version.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Effect of normalization. Timkey and van Schijndel (2021) examine how dominating embedding dimensions can worsen retrieval performance. They study the contribution of individual dimensions find that normalization is key for document retrieval based on dense vector representation when BERTbased embeddings are used. Compared to our work, they study pre-trained BERT directly, while we focus on DPR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Given a query q, the following set of equations summarizes the conceptual progression from retrieving top k relevant documents Z = {d 1 , d 2 , . . . , d k } from a large collection of documents D so that the relevance of d with q is maximized. For this, the query and the document embedding functions f Q : Q \u2192 R d and f D : D \u2192 R d are used to map the query and all documents to a shared embedding space and a similarity function sim :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement and Evaluation",
"sec_num": "3.1"
},
{
"text": "R d \u00d7 R d \u2192 R",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement and Evaluation",
"sec_num": "3.1"
},
{
"text": "approximates the relevance between query and documents. Here, we consider either the inner product or the L 2 distance as sim. 4 Finally, to speed up the similarity computation over a large set of documents and to decrease memory usage (f D is usually precomputed), we apply dimension reduction functions r Q :",
"cite_spans": [
{
"start": 127,
"end": 128,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement and Evaluation",
"sec_num": "3.1"
},
{
"text": "R d \u2192 R d \u2032 and r D : R d \u2192 R d \u2032",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement and Evaluation",
"sec_num": "3.1"
},
{
"text": "for the query and document embeddings respectively. Formally, we are solving the following problem:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement and Evaluation",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Z = arg top-k d\u2208D rel.(q, d) , with (1) rel.(q, d) \u2248 sim(f Q (q), f D (d)) (2) \u2248 sim(r Q (f Q (q)), r D (f D (d)))",
"eq_num": "(3)"
}
],
"section": "Problem Statement and Evaluation",
"sec_num": "3.1"
},
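{
"text": "To make the formulation above concrete, the following is a minimal sketch (ours, not part of the paper) of brute-force top-k retrieval with NumPy; the arrays queries and docs stand in for precomputed f_Q and f_D embeddings.\n\nimport numpy as np\n\ndef retrieve_top_k(queries, docs, k=10, sim='ip'):\n    # rel.(q, d) is approximated by sim(f_Q(q), f_D(d)); inputs are already embedded\n    if sim == 'ip':\n        scores = queries @ docs.T  # inner product\n    else:\n        # negative squared L_2 distance (higher means more similar)\n        scores = -((queries[:, None, :] - docs[None, :, :]) ** 2).sum(-1)\n    # equation (1): indices of the top-k documents per query\n    return np.argsort(-scores, axis=1)[:, :k]\n\n# usage with random stand-in embeddings (768-dimensional, as in the paper)\nrng = np.random.default_rng(0)\ntop_k = retrieve_top_k(rng.normal(size=(5, 768)), rng.normal(size=(1000, 768)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement and Evaluation",
"sec_num": "3.1"
},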
{
"text": "The approximation in (2) was shown to work well in practice for inner product and L 2 distance (Lin, 2021) . In this case, f Q is commonly finetuned for a specific downstream task. For this reason, it is desirable in (3) for the functions r Q and r D to be differentiable so that they can propagate the signal. These dimension-reducing functions need not be the same because even though they project to a shared vector space, the input distribution may still be different. Similarly to the query and document embedding functions, they can be fine-tuned.",
"cite_spans": [
{
"start": 95,
"end": 106,
"text": "(Lin, 2021)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement and Evaluation",
"sec_num": "3.1"
},
{
"text": "Task Agnostic Representation. When dealing with multiple downstream tasks that share a single (large) knowledge base, typically only f Q is finetuned for a specific task while f D remains fixed (Lewis et al., 2020; Petroni et al., 2021) . This assumes that the organization of the document vector space is sufficient across tasks and that only the mapping of the queries to this space needs to be trained. 5 Hence, this work is motivated primarily by finding a good r D (because of the dominant size of the document index), though we note that r Q is equally important and necessary because even without any vector semantics, the key and the document embeddings must have the same dimensionality.",
"cite_spans": [
{
"start": 194,
"end": 214,
"text": "(Lewis et al., 2020;",
"ref_id": "BIBREF15"
},
{
"start": 215,
"end": 236,
"text": "Petroni et al., 2021)",
"ref_id": "BIBREF25"
},
{
"start": 406,
"end": 407,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement and Evaluation",
"sec_num": "3.1"
},
{
"text": "R-Precision. To evaluate retrieval performance we compute R-Precision averaged over queries: (relevant documents among top k passages in Z)/r, k = number of passages in relevant documents, in the same way as Petroni et al. (2021) . Following previous work, we consider the inner product (IP) and the L 2 distance as the similarity function.",
"cite_spans": [
{
"start": 208,
"end": 229,
"text": "Petroni et al. (2021)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement and Evaluation",
"sec_num": "3.1"
},
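{
"text": "A minimal sketch (ours) of the R-Precision computation described above, assuming retrieved is the ranked list of span ids returned for one query and relevant is the set of gold provenance span ids; the helper names are ours.\n\nimport numpy as np\n\ndef r_precision(retrieved, relevant):\n    # fraction of relevant spans among the top-R retrieved spans,\n    # where R is the number of relevant spans for this query\n    r = len(relevant)\n    if r == 0:\n        return 0.0\n    return len(set(retrieved[:r]) & set(relevant)) / r\n\n# averaged over queries, e.g.:\n# np.mean([r_precision(run[q], gold[q]) for q in gold])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement and Evaluation",
"sec_num": "3.1"
},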
{
"text": "As knowledge base we use documents from English Wikipedia and follow the setup described by Petroni et al. (2021) . We mark spans (original articles split into 100 token pieces, 50 million in total) as relevant for a query if they come from the same Wikipedia article as one of the provenances. 6 In order to make our experiments computationally feasible and easy to reproduce we experiment with a modified version of this knowledge base where we keep only spans of documents that are relevant to at least one query from the training or validation set of our downstream tasks. As downstream tasks, we use HotpotQA (Yang et al., 2018) for all main experiments and Natural Questions (Kwiatkowski et al., 2019) to verify that the results transfer to other datasets as well. This leads to over 2 million encoded spans for HotpotQA (see Table 6 for dataset sizes). The 768-dimensional embeddings (32-bit floats) of this dataset (both queries and documents) add up to 7GB (146GB for the whole unpruned dataset).",
"cite_spans": [
{
"start": 92,
"end": 113,
"text": "Petroni et al. (2021)",
"ref_id": "BIBREF25"
},
{
"start": 295,
"end": 296,
"text": "6",
"ref_id": null
},
{
"start": 614,
"end": 633,
"text": "(Yang et al., 2018)",
"ref_id": "BIBREF34"
},
{
"start": 681,
"end": 707,
"text": "(Kwiatkowski et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 832,
"end": 839,
"text": "Table 6",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3.2"
},
{
"text": "To establish baselines for uncompressed performance we use models based on BERT (Devlin et al., 5 Guu et al. (2020) provide evidence that this assumption can lead to worse results in some cases.",
"cite_spans": [
{
"start": 96,
"end": 97,
"text": "5",
"ref_id": null
},
{
"start": 98,
"end": 115,
"text": "Guu et al. (2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Uncompressed Retrieval Peformance",
"sec_num": "3.3"
},
{
"text": "6 Spans of the original text which help in answering the query. Figure 1 : Comparison of different BERT-based embedding models and versions when using faster but slightly inaccurate nearest neighbour search.",
"cite_spans": [],
"ref_spans": [
{
"start": 64,
"end": 72,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Uncompressed Retrieval Peformance",
"sec_num": "3.3"
},
{
"text": "DPR (Avg) Sentence BERT (Avg) BERT (Avg) DPR [CLS] Sentence BERT [CLS] BERT [CLS] 0.0 0.1 0.2 0.3 0.4 0.5 0.6 R-Precision IP IP fast L 2 L 2 fast",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uncompressed Retrieval Peformance",
"sec_num": "3.3"
},
{
"text": "[CLS] is the specific token embedding from the last layer while (Avg) is all token average. 2019). We consider (1) vanilla BERT, (2) Sentence-BERT (Reimers and Gurevych, 2019) and 3DPR (Karpukhin et al., 2020) , which was specifically trained for document retrieval. To obtain document embeddings, we use either the last hidden state representation at [CLS] or the average across tokens of the last layer. Our first experiment compares the retrieval performance of the different models on HotpotQA. The result is shown in Figure 1 . In alignment with previous works (Reimers and Gurevych, 2019) an immediately noticeable conclusion is that vanilla BERT has a poor performance, especially when taking the hidden state representation for the [CLS] token. Next, to make computation tractable, we repeat the experiment using FAISS (Johnson et al., 2019) . 7 We find that the performance loss across models is systematic, which warrants the use of this approximate nearest neighbour search for comparisons and all our following experiments will use FAISS on the DPR-CLS model.",
"cite_spans": [
{
"start": 147,
"end": 175,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF26"
},
{
"start": 185,
"end": 209,
"text": "(Karpukhin et al., 2020)",
"ref_id": "BIBREF15"
},
{
"start": 352,
"end": 357,
"text": "[CLS]",
"ref_id": null
},
{
"start": 566,
"end": 594,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF26"
},
{
"start": 740,
"end": 745,
"text": "[CLS]",
"ref_id": null
},
{
"start": 827,
"end": 849,
"text": "(Johnson et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 852,
"end": 853,
"text": "7",
"ref_id": null
}
],
"ref_spans": [
{
"start": 522,
"end": 530,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Uncompressed Retrieval Peformance",
"sec_num": "3.3"
},
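{
"text": "A minimal sketch (ours) of the approximate nearest-neighbour setup from footnote 7 (FAISS IndexIVFFlat with nlist=200 and nprobe=100), here with inner-product similarity; docs and queries are assumed to be float32 DPR-CLS embeddings and are replaced by random stand-ins below.\n\nimport faiss\nimport numpy as np\n\nd = 768  # DPR-CLS dimensionality\ndocs = np.float32(np.random.randn(10000, d))  # stand-in document index\nqueries = np.float32(np.random.randn(16, d))\n\nquantizer = faiss.IndexFlatIP(d)  # coarse quantizer for the IVF index\nindex = faiss.IndexIVFFlat(quantizer, d, 200, faiss.METRIC_INNER_PRODUCT)\nindex.train(docs)  # learn the 200 coarse centroids\nindex.add(docs)\nindex.nprobe = 100  # number of inverted lists visited at search time\nscores, ids = index.search(queries, 10)  # top-10 spans per query",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uncompressed Retrieval Performance",
"sec_num": "3.3"
},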
{
"text": "Pre-processing Transformations. Figure 1 also shows that model performance, especially for DPR, depends heavily on what similarity metric is used for retrieval. This is because none of the models produces normalized vectors by default. Figure 2 shows that performing only normalization ( x ||x|| ) sometimes hurts the performance but when joined with centering beforehand ( x\u2212x ||x\u2212x|| ), it improves the results (compared to no pre- processing) in all cases. The normalization and centering is done for queries and documents separatedly. Moreover, if the vectors are normalized, then the retrieved documents are the same for L 2 and inner product. 8 Nevertheless, we argue it still makes sense to study the compression capabilities of L 2 and the inner product separately, since the output of the compression of normalized vectors need not be normalized.",
"cite_spans": [
{
"start": 649,
"end": 650,
"text": "8",
"ref_id": null
}
],
"ref_spans": [
{
"start": 32,
"end": 40,
"text": "Figure 1",
"ref_id": null
},
{
"start": 236,
"end": 244,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Uncompressed Retrieval Peformance",
"sec_num": "3.3"
},
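{
"text": "A minimal sketch (ours) of the pre-processing discussed above: centering and normalizing the queries and the documents separately; the same transformation is applied again after dimension reduction.\n\nimport numpy as np\n\ndef center_and_normalize(x):\n    # x: (n, d) embeddings; subtract the mean of the set, then unit-normalize the rows\n    x = x - x.mean(axis=0, keepdims=True)\n    return x / np.linalg.norm(x, axis=1, keepdims=True)\n\n# queries and documents are processed separately, both before and after dimension reduction:\n# docs = center_and_normalize(docs); queries = center_and_normalize(queries)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uncompressed Retrieval Performance",
"sec_num": "3.3"
},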
{
"text": "DPR (Avg) Sentence BERT (Avg) BERT (Avg) DPR [CLS] Sentence BERT [CLS] BERT [CLS] 0.0 0.1 0.2 0.3 0.4 0.5 0.6 R-Precision IP IP (center) IP, L 2 (norm) L 2 L 2 (center) IP, L 2 (center, norm)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uncompressed Retrieval Peformance",
"sec_num": "3.3"
},
{
"text": "Having established the retrieval performance of the uncompressed baseline, we now turn to methods for compressing the dense document index and the queries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compression Methods",
"sec_num": "4"
},
{
"text": "Note that we consider unsupervised methods on already trained index, for maximum ease of use and applicability. This is in contrast to supervised methods, which have access to the query-doc relevancy mapping, or to in-training dimension reduction (i.e. lower final layer dimension).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compression Methods",
"sec_num": "4"
},
{
"text": "The simplest way to perform dimension reduction for a given index x \u2208 R d is to randomly preserve only certain d \u2032 dimensions and drop all other dimensions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Random Projection",
"sec_num": "4.1"
},
{
"text": "f drop. (x) = (x m 1 , x m 2 , . . . , x m d \u2032 ) 8 arg max k \u2212||a\u2212b|| 2 = arg max k \u2212\u27e8a, a\u27e9 2 \u2212\u27e8b, b\u27e9 2 + 2 \u2022 \u27e8a, b\u27e9 = arg max k 2 \u2022 \u27e8a, b\u27e9 \u2212 2 = arg max k \u27e8a, b\u27e9",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Random Projection",
"sec_num": "4.1"
},
{
"text": "Another approach is to greedily search which dimensions to drop (those that, when omitted, either improve the performance or lessen it the least):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Random Projection",
"sec_num": "4.1"
},
{
"text": "p i (x) = (x 0 , x 1 , . . ., x i\u22121 , x i+1 , . . ., x 768 ) L i = R-Prec(p i (Q), p i (D)) m = sort desc. L ([1 . . . 768]) f greedy drop. (x) = (x m 1 , x m 2 , . . . , x m d \u2032 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Random Projection",
"sec_num": "4.1"
},
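{
"text": "A minimal sketch (ours) of random dimension dropping and of one reading of the greedy variant above (keeping the dimensions whose omission hurts retrieval the most); r_precision_of stands for a full retrieval evaluation and is assumed to exist.\n\nimport numpy as np\n\ndef random_drop(queries, docs, d_new, seed=0):\n    # keep a random subset of d_new dimensions (the same subset for queries and docs)\n    keep = np.random.default_rng(seed).choice(docs.shape[1], size=d_new, replace=False)\n    return queries[:, keep], docs[:, keep]\n\ndef greedy_drop(queries, docs, d_new, r_precision_of):\n    # L_i: retrieval performance with dimension i left out; dimensions whose\n    # omission hurts the most are considered the most useful and are kept\n    d = docs.shape[1]\n    losses = np.empty(d)\n    for i in range(d):\n        keep = np.r_[0:i, i + 1:d]\n        losses[i] = r_precision_of(queries[:, keep], docs[:, keep])\n    keep = np.argsort(losses)[:d_new]  # lowest L_i = most useful dimensions\n    return queries[:, keep], docs[:, keep]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Random Projection",
"sec_num": "4.1"
},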
{
"text": "The advantage of these two approaches is that they can be represented easily by a single R 768\u00d7d matrix. We consider two other standard random projection methods: Gaussian random projection and Sparse random projection (Fodor, 2002) . Such random projections are suitable mostly for inner product (Kaski, 1998) though the differences are removed by normalizing the vectors (which also improves the performance). Results. The results of all random projection methods are shown in Figure 3 . Gaussian random projection seems to perform equally to sparse random projection. The performance is not fully recovered for the two methods. Interestingly, simply dropping random dimensions led to better performance than that of sparse or Gaussian random projection. The greedy dimension dropping even improves the performance slightly over random dimension dropping in some cases before saturating and is deterministic. As shown in Table 2 , the greedy dimension dropping with post-processing achieves the best performance among all random projection methods. Without post-processing, L 2 distance works better compared to inner product.",
"cite_spans": [
{
"start": 219,
"end": 232,
"text": "(Fodor, 2002)",
"ref_id": "BIBREF6"
},
{
"start": 297,
"end": 310,
"text": "(Kaski, 1998)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 479,
"end": 487,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 923,
"end": 930,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Random Projection",
"sec_num": "4.1"
},
{
"text": "Another natural candidate for dimensionality reduction is principal component analysis (PCA) (F. R.S., 1901) . PCA considers the dimensions with the highest variance and omits the rest. This leads to a projection matrix that projects the original data onto the principal components using an orthonormal basis T . The following loss is minimized L = MSE(T \u2032 Tx, x). Note that we fit PCA on the covariance matrix of either the document index, query embeddings or both and the trained dimension-reducing projection is then applied to both the document and query embeddings.",
"cite_spans": [
{
"start": 97,
"end": 108,
"text": "R.S., 1901)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Principal Component Analysis",
"sec_num": "4.2"
},
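{
"text": "A minimal sketch (ours) of the PCA reduction with scikit-learn: the projection is fitted on a (small) sample of document or query embeddings and then applied to both; centering and normalization before and after are as in Section 3.3.\n\nfrom sklearn.decomposition import PCA\n\ndef fit_pca(train_embeddings, d_new=128):\n    # the training sample is used to estimate the covariance / principal components\n    return PCA(n_components=d_new, random_state=0).fit(train_embeddings)\n\n# a small sample (e.g. 1000 embeddings) suffices to estimate the projection:\n# pca = fit_pca(docs_sample)\n# docs_small, queries_small = pca.transform(docs), pca.transform(queries)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Principal Component Analysis",
"sec_num": "4.2"
},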
{
"text": "Results. The results of performing PCA are shown in Figure 4 . First, we find that the uncompressed performance, as well as the effect of compression, is highly dependent on the data pre-processing. This should not be surprising as the PCA algorithm assumes centered and preprocessed data. Nevertheless, we stress and demonstrate the importance of this step. This is given by the normalization of the input vectors and also that the column vectors of PCA are orthonormal.",
"cite_spans": [],
"ref_spans": [
{
"start": 52,
"end": 60,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Principal Component Analysis",
"sec_num": "4.2"
},
{
"text": "Second, when the data is not centered, the PCA is sensitive to what it is trained on. Figure 4 show systematically that training on the set of available queries provides better performance than training on the documents or a combination of both. Subsequently, after centering the data, it does not matter anymore what is used for fitting: both the queries and the documents provide good estimates of the data variance and the dependency on training data size for PCA is explored explicitly in Section 5.1. The reason why queries provide better results without centering is that they are more centered in the first place, as shown in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 86,
"end": 94,
"text": "Figure 4",
"ref_id": "FIGREF2"
},
{
"start": 633,
"end": 640,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Principal Component Analysis",
"sec_num": "4.2"
},
{
"text": "Avg. L 1 (std) Avg. L 2 (std) Documents 243.0 (20.1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Principal Component Analysis",
"sec_num": "4.2"
},
{
"text": "12.3 (0.6) Queries 137.0 (7.5) 9.3 (0.2) In all cases, the PCA performance starts to plateau around 128 dimensions and is within 95% of the uncompressed performance. Finally, we note that while PCA is concerned with minimizing re-construction loss, Figure 4 shows that even after vastly decreasing the reconstruction loss, no significant improvements in retrieval performance are achieved. We further discuss this finding in Section 5.4.",
"cite_spans": [],
"ref_spans": [
{
"start": 249,
"end": 257,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Principal Component Analysis",
"sec_num": "4.2"
},
{
"text": "Component Scaling. One potential issue of PCA is that there may be dimensions that dominate the vector space. Mu et al. (2017) suggest to simply remove the dimension corresponding to the highest eigenvalue though we find that simply scaling down the top k eigenvectors systematically outperforms standard PCA. For simplicity, we focused on the top 5 eigenvectors and performed a smallscale grid-search of the scaling factors. The best performing one was (0.5, 0.8, 0.8, 0.9, 0.8) and Table 2 shows that it provides a small additional boost in retrieval performance.",
"cite_spans": [
{
"start": 110,
"end": 126,
"text": "Mu et al. (2017)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Principal Component Analysis",
"sec_num": "4.2"
},
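{
"text": "A minimal sketch (ours) of the component scaling described above: after the PCA projection, the top five components (ordered by eigenvalue) are multiplied by the grid-searched factors (0.5, 0.8, 0.8, 0.9, 0.8).\n\nimport numpy as np\n\ndef scale_top_components(z, factors=(0.5, 0.8, 0.8, 0.9, 0.8)):\n    # z: (n, d') PCA-projected embeddings with components ordered by eigenvalue\n    z = z.copy()\n    z[:, :len(factors)] *= np.asarray(factors)\n    return z",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Principal Component Analysis",
"sec_num": "4.2"
},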
{
"text": "A straightforward extension of PCA for dimensionality reducing is to use autoencoders, which has been widely explored (Hu et al., 2014; Wang et al., 2016) . Usually, the model is described by an encoder e : R d \u2192 R b , a function from a higher dimension to the target (bottleneck) dimension and a decoder r : R b \u2192 R d , which maps back from the target dimension to the original vector space. The final (reconstruction) loss is then commonly computed as L = MSE((r \u2022 e)(x), x). To reduce the dimensionality of a dataset, only the function e is applied to both the query and the document embedding. We consider three models with the bottleneck: 1. A linear projection similar to PCA but without the restriction of orthonormal columns:",
"cite_spans": [
{
"start": 118,
"end": 135,
"text": "(Hu et al., 2014;",
"ref_id": "BIBREF12"
},
{
"start": 136,
"end": 154,
"text": "Wang et al., 2016)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Autoencoder",
"sec_num": "4.3"
},
{
"text": "e 1 (x) = L 768 128 r 1 (x) = L 128 768",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Autoencoder",
"sec_num": "4.3"
},
{
"text": "2. A multi-layer feed forward neural network with tanh activation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Autoencoder",
"sec_num": "4.3"
},
{
"text": "e 2 (x) = L 768 512 \u2022 tanh \u2022L 512 256 \u2022 tanh \u2022L 256 128 r 2 (x) = L 128 256 \u2022 tanh \u2022L 256 512 \u2022 tanh \u2022L 512 768",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Autoencoder",
"sec_num": "4.3"
},
{
"text": "3. The same encoder as in the previous model but with a shallow decoder:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Autoencoder",
"sec_num": "4.3"
},
{
"text": "e 3 (x) = L 768 512 \u2022 tanh \u2022L 512 256 \u2022 tanh \u2022L 256 0.2 0.4 0.6 R-Precision (PCA)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Autoencoder",
"sec_num": "4.3"
},
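{
"text": "A minimal PyTorch sketch (ours) of the third autoencoder variant (deep encoder, shallow decoder) trained with the MSE reconstruction loss and optional L_1 regularization of the decoder weights, roughly following the hyperparameters in Table 3; the class and function names are ours.\n\nimport torch\nimport torch.nn as nn\n\nclass ShallowDecoderAE(nn.Module):\n    def __init__(self, d=768, bottleneck=128):\n        super().__init__()\n        self.encoder = nn.Sequential(\n            nn.Linear(d, 512), nn.Tanh(),\n            nn.Linear(512, 256), nn.Tanh(),\n            nn.Linear(256, bottleneck))\n        self.decoder = nn.Linear(bottleneck, d)  # shallow decoder (model 3)\n\n    def forward(self, x):\n        return self.decoder(self.encoder(x))\n\ndef train(model, embeddings, epochs=10, l1=10 ** -5.9):\n    opt = torch.optim.Adam(model.parameters(), lr=1e-3)\n    loader = torch.utils.data.DataLoader(embeddings, batch_size=128, shuffle=True)\n    for _ in range(epochs):\n        for x in loader:\n            loss = nn.functional.mse_loss(model(x), x)\n            loss = loss + l1 * sum(p.abs().sum() for p in model.decoder.parameters())\n            opt.zero_grad(); loss.backward(); opt.step()\n    return model\n\n# after training, only model.encoder is applied to queries and documents",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Autoencoder",
"sec_num": "4.3"
},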
{
"text": "No pre-processing Normalized Centered Centered, Normalized Results. We explore the effects of training data and pre-processing with results for the first model shown in Figure 4 . Surprisingly, the Autoencoder is even more sensitive to proper pre-processing than PCA, most importantly centering which makes the results much more stable. The rationale for the third model is that we would like the hidden representation to require as little post-processing as possible to become the original vector again. The higher performance of the model with shallow decoder, shown in Table 2 supports this reasoning. An alternative way to reduce the computation (modelling dimension relationships) in the decoder is to regularize the weights in the decoder. We make use of L 1 regularization explicitly because L 2 regularization is conceptually already present in Adam's weight decay. This improves each of the three models.",
"cite_spans": [],
"ref_spans": [
{
"start": 169,
"end": 177,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Autoencoder",
"sec_num": "4.3"
},
{
"text": "Similarly to the other reconstruction loss-based method (PCA), without post-processing, inner product works yields better results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Autoencoder",
"sec_num": "4.3"
},
{
"text": "Lastly, we also experiment with reducing index size by lowering the float precision from 32 bits to 16 and 8 bits. Note that despite their quite high retrieval performance, they only reduce the size by 2 and 4 respectively (as opposed to 6 by dimension reduction via PCA to 128 dimensions). Another drawback is that retrieval time is not affected because the dimensionality remains the same.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Precision Reduction",
"sec_num": "4.4"
},
{
"text": "Using only one bit per dimension is a special case of precision reduction suggested by Yamada et al. (2021) . Because we use centered data, we can define the element-wise transformation function as:",
"cite_spans": [
{
"start": 87,
"end": 107,
"text": "Yamada et al. (2021)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Precision Reduction",
"sec_num": "4.4"
},
{
"text": "f \u03b1 (x i ) = 1 \u2212 \u03b1 x i \u2265 0 0 \u2212 \u03b1 x i < 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Precision Reduction",
"sec_num": "4.4"
},
{
"text": "Bit 1 would then correspond to 1 \u2212 \u03b1 and 0 to 0 \u2212 \u03b1. While Yamada et al. (2021) use values 1 and 0, we work with 0.5 and \u22120.5 in order to be able to distinguish between certain cases when using IP-based similarity. 9 As shown in Table 2 , this indeed yields a slight improvement. When applying post-processing, however, the two approaches are equivalent. While this method achieves extreme 32x compression on the disk and retains most of the retrieval performance, the downside is that if one wishes to use standard retrieval pipelines, these variables would have to be converted to a supported, larger, data type. 10",
"cite_spans": [],
"ref_spans": [
{
"start": 229,
"end": 236,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Precision Reduction",
"sec_num": "4.4"
},
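{
"text": "A minimal sketch (ours) of the precision-reduction variants with NumPy: 16-bit floats by casting, an 8-bit variant shown here as simple linear int8 quantization (an assumption; the exact 8-bit scheme is not spelled out above), and the 1-bit scheme with alpha = 0.5 applied to centered data.\n\nimport numpy as np\n\ndef to_float16(x):\n    return x.astype(np.float16)  # 2x smaller on disk\n\ndef to_int8(x):\n    # assumed 8-bit scheme: per-matrix linear quantization to int8 (4x smaller)\n    scale = np.abs(x).max() / 127.0\n    return np.round(x / scale).astype(np.int8), scale\n\ndef to_1bit(x, alpha=0.5):\n    # f_alpha(x_i) = 1 - alpha if x_i >= 0 else 0 - alpha, i.e. values +0.5 / -0.5\n    return (x >= 0).astype(np.float32) - alpha\n\n# np.packbits(x >= 0, axis=1) stores the sign pattern with 1 bit per dimension;\n# it has to be unpacked (cast to a larger type) before standard similarity search.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Precision Reduction",
"sec_num": "4.4"
},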
{
"text": "Original Center + Norm. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method Compression",
"sec_num": null
},
{
"text": "Finally, reducing precision can be readily combined with dimension reduction methods, such as PCA (prior to changing the data type). The results in Figure 5 show that PCA can be combined with e.g. 8-bit precision reduction with negligible loss in performance. As shown in the last row of Table 2, this can lead to the compressed size be 100x smaller while retaining 75% retrieval performance on HotpotQA and 89% for NaturalQuestions (see Table 7 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 148,
"end": 156,
"text": "Figure 5",
"ref_id": "FIGREF3"
},
{
"start": 438,
"end": 445,
"text": "Table 7",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Combination of PCA and Precision Reduction",
"sec_num": "4.5"
},
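{
"text": "A minimal sketch (ours) of the combination: the numerical precision is reduced only after the PCA projection; center_and_normalize, fit_pca and to_1bit refer to the helper sketches given earlier in this paper and are ours, not part of the original pipeline.\n\ndef compress_index(docs, queries, d_new=128):\n    # 1) center + normalize, 2) PCA to d_new dimensions, 3) center + normalize again,\n    # 4) reduce the numerical precision of the stored vectors\n    docs, queries = center_and_normalize(docs), center_and_normalize(queries)\n    pca = fit_pca(docs, d_new=d_new)\n    docs_small = center_and_normalize(pca.transform(docs))\n    queries_small = center_and_normalize(pca.transform(queries))\n    return to_1bit(docs_small), to_1bit(queries_small)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combination of PCA and Precision Reduction",
"sec_num": "4.5"
},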
{
"text": "The comparison of all discussed dimension reduction methods is shown in Table 2 . It also shows the role of centering and normalization post-encoding which systematically improves the performance. The best performing model for dimension reduction is the autoencoder with L 1 regularization and either just a single projection layer for the encoder and decoder or with the shallow decoder (6x compression with 97% retrieval performance). Additionally, Appendix B compares training and evaluation speeds of common implementations. ",
"cite_spans": [],
"ref_spans": [
{
"start": 72,
"end": 79,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Model Comparison",
"sec_num": "5.1"
},
{
"text": "A crucial aspect of the PCA and autoencoder methods is how much data they need for training. In the following, we experimented with limiting the number of training samples for PCA and the linear autoencoder. Results are shown in Figure 6 .",
"cite_spans": [],
"ref_spans": [
{
"start": 229,
"end": 237,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data size",
"sec_num": "5.2"
},
{
"text": "While Ma et al. (2021) used a much larger training set to fit PCA, we find that PCA requires very 128 10 3.0 10 4.0 10 5.0 10 6.0 10 7.0 10 7.5 Docs count (log scale) Figure 6 : Dependency of PCA and autoencoder performance (evaluated on HotpotQA dev data, trained on document encodings, pre-and post-processing) by modifying the training data (solid lines) and by adding irrelevant documents to the retrieval pool (dashed lines). Black crosses indicate the original training size. Vertical bars are 95% confidence intervals using t-distibution (across 6 runs with random model initialization and sampling). Note the log scale on the x-axis and the truncation of the y-axis. few samples (lower-bounded by 128 which is also the number of dimensions used for this experiment). This is because in the case of PCA training data is used to estimate the data covariance matrix which has been shown to work well when using a few samples (Tadjudin and Landgrebe, 1999) . Additionally, we find that overall the autoencoder needs more data to outperform PCA.",
"cite_spans": [
{
"start": 6,
"end": 22,
"text": "Ma et al. (2021)",
"ref_id": "BIBREF23"
},
{
"start": 930,
"end": 960,
"text": "(Tadjudin and Landgrebe, 1999)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 167,
"end": 175,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data size",
"sec_num": "5.2"
},
{
"text": "Next, we experimented with adding more (potentially irrelevant) documents to the knowledge base. For this, we kept the training data for the autoencoder and PCA to the original size. The results are shown as dashed lines in Figure 6 . Retrieval performance quickly deteriorates for both models (faster than for the uncompressed case), highlighting the importance of filtering irrelevant documents from the knowledge base.",
"cite_spans": [],
"ref_spans": [
{
"start": 224,
"end": 232,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data size",
"sec_num": "5.2"
},
{
"text": "So far, our evaluation focused on quantitative comparisons. In the following, we compare the distribution of documents retrieved before and after compression to investigate if there are systematic differences. We carry out this analysis using Hot-potQA which, by design, requires two documents in order to answer a given query. We compare retrieval with the original document embeddings to retrieval with PCA and 1-bit compression.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval errors",
"sec_num": "5.3"
},
{
"text": "We find that there are no systematic differences compared to the uncompressed retrieval. This is demonstrated by the small off-diagonal values in Figure 7 . This result shows that if the retriever working with uncompressed embeddings returns two relevant documents in the top-k for a given query, also the retriever working with the compressed index is very likely to include the same two documents in the top-k. This is further shown by the Pearson correlation in Table 4 . This suggests that the compressed index can be used on downstream tasks with predictable performance loss based on the slightly worsened retrieval performance. Furthermore, there do not seem to be any systematic differences even between the two vastly different compression methods used for this experiment (PCA and 1-bit precision). This indicates that, despite their methodological differences, the two compression approaches seem to remove the same redundances in the uncompressed data. We leave a more detailed exploration of these findings for future work. ",
"cite_spans": [],
"ref_spans": [
{
"start": 146,
"end": 154,
"text": "Figure 7",
"ref_id": "FIGREF5"
},
{
"start": 465,
"end": 472,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Retrieval errors",
"sec_num": "5.3"
},
{
"text": "Despite PCA and autoencoder being the most successful methods, low reconstruction loss provides no theoretical guarantee to the retrieval performance. Consider a simple linear projection that can be represented as a diagonal matrix that projects to a space of the same dimensionality. This function has a trivial inverse and therefore no information is lost when it is applied. The retrieval is however disrupted, as it will mostly depend on the first dimension and nothing else. This is a major flaw of approaches that minimize the vector reconstruction loss because the optimized quantity is different to the actual goal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pitfalls of Reconstruction Loss",
"sec_num": "5.4"
},
{
"text": "R = \uf8eb \uf8ec \uf8ec \uf8ec \uf8ed 10 99 0 \u2022 \u2022 \u2022 0 0 1 \u2022 \u2022 \u2022 0 . . . . . . . . . . . . 0 0 \u2022 \u2022 \u2022 1 \uf8f6 \uf8f7 \uf8f7 \uf8f7 \uf8f8 R \u22121 = \uf8eb \uf8ec \uf8ec \uf8ec \uf8ed 10 \u221299 0 \u2022 \u2022 \u2022 0 0 \u2022 \u2022 \u2022 0 . . . . . . . . . . . . 0 0 \u2022 \u2022 \u2022 1 \uf8f6 \uf8f7 \uf8f7 \uf8f7 \uf8f8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pitfalls of Reconstruction Loss",
"sec_num": "5.4"
},
{
"text": "Distance Learning. The task of dimensionality reduction has been explored by standard statistical methods by the name manifold learning. The most used method is t-distributed stochastic neighbor (t-SNE) embedding built on the work of Hinton and Roweis (2002) or multidimensional scaling (Kruskal, 1964; Borg and Groenen, 2005) . They organize a new vector space (of lower dimensionality) so that the L 2 distances follow those of the original space (extensions to other metrics also exist). Although the optimization goal is more in line with our task of vector space compression with the preservation of nearest neighbours, methods of manifold learning are limited by the large computation costs 11 and the fact that they do not construct a function but rather move the discrete points in the new space to lower the optimization loss. This makes it not applicable for online purposes (i.e. adding new samples that need to be compressed as well). The main disadvantage of the approaches based on reconstruction loss is that their optimization goal strays from what we are interested in, namely preserving distances between vectors. We tried to reformulate the problem in terms of deep learning and gradient-based optimization to alleviate the issue of speed and extensibility of standard manifold learning approaches. We try to learn a function that maps the original vector space to a lower-dimensional one while preserving similarities. That can be either a simple linear projection A or generally a more complex differentiable function f :",
"cite_spans": [
{
"start": 234,
"end": 258,
"text": "Hinton and Roweis (2002)",
"ref_id": "BIBREF11"
},
{
"start": 287,
"end": 302,
"text": "(Kruskal, 1964;",
"ref_id": "BIBREF18"
},
{
"start": 303,
"end": 326,
"text": "Borg and Groenen, 2005)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pitfalls of Reconstruction Loss",
"sec_num": "5.4"
},
{
"text": "L = MSE(sim(f (t i ), f (t j )), sim(t i , t j ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pitfalls of Reconstruction Loss",
"sec_num": "5.4"
},
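{
"text": "A minimal PyTorch sketch (ours) of the distance-learning objective above: a linear projection f is trained so that inner products in the reduced space match those of the original space; the hyperparameters are illustrative.\n\nimport torch\nimport torch.nn as nn\n\ndef train_distance_projection(x, d_new=128, steps=1000, batch=256):\n    # x: (n, d) tensor of original embeddings\n    f = nn.Linear(x.shape[1], d_new, bias=False)\n    opt = torch.optim.Adam(f.parameters(), lr=1e-3)\n    for _ in range(steps):\n        i = torch.randint(0, x.shape[0], (batch,))\n        j = torch.randint(0, x.shape[0], (batch,))\n        t_i, t_j = x[i], x[j]\n        # L = MSE(sim(f(t_i), f(t_j)), sim(t_i, t_j)) with sim = inner product\n        pred = (f(t_i) * f(t_j)).sum(dim=1)\n        target = (t_i * t_j).sum(dim=1)\n        loss = nn.functional.mse_loss(pred, target)\n        opt.zero_grad(); loss.backward(); opt.step()\n    return f",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pitfalls of Reconstruction Loss",
"sec_num": "5.4"
},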
{
"text": "After the function f is fitted, both the training and new data can be compressed by its application. As opposed to manifold learning which usually 11 The common fast implementation for t-SNE, Barnes-Hut (Barnes and Hut, 1986; Van Der Maaten, 2013) is based on either quadtrees or octrees and is limited to 3 dimensions. leverages specific properties of the metrics, here they can be any differentiable functions. The optimization was, however, too slow, underperforming (between sparse projection and PCA) and did not currently provide any benefits.",
"cite_spans": [
{
"start": 147,
"end": 149,
"text": "11",
"ref_id": null
},
{
"start": 192,
"end": 225,
"text": "Barnes-Hut (Barnes and Hut, 1986;",
"ref_id": "BIBREF0"
},
{
"start": 226,
"end": 247,
"text": "Van Der Maaten, 2013)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pitfalls of Reconstruction Loss",
"sec_num": "5.4"
},
{
"text": "We also tried to use unsupervised contrastive learning by considering close neighbours in the original space as positive samples and distant neighbours as negative samples but reached similar results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pitfalls of Reconstruction Loss",
"sec_num": "5.4"
},
{
"text": "In this section we briefly discuss the main conclusions from our experiments and analysis in the form of recommendations for NLP practicioners.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "Importance of Pre-/post-processing. As our results show, for all methods (and models), centering and normalization should be done before and after dimension reduction, as it boosts the performance of every model. Method recommendation. While most compression methods achieve similar retrieval performance and compression ratios (cf. Table 2 and Table 7) , PCA stands out in the following regards: (1) It requires only minimal implementation effort and no tuning of hyper-parameters beyond selecting which principal components to keep; (2) as our analysis shows, the PCA matrix can be estimated well with only 1000 document or query embeddings. It is not necessary to learn a transformation matrix on the full knowledge base; (3) PCA can easily be combined with precision reduction based approaches.",
"cite_spans": [],
"ref_spans": [
{
"start": 333,
"end": 353,
"text": "Table 2 and Table 7)",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "In this work, we examined several simple unsupervised methods for dimensionality reduction for retrieval-based NLP tasks: random projections, PCA, autoencoder and precision reduction and their combination. We also documented the data requirements of each method and their reliance on preand post-processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "7"
},
{
"text": "Future work. As shown in prior works, dimension reduction can take place also during training where the loss is more in-line with the retrieval goal. Methods for dimension reduction after training, however, rely mostly on reconstruction loss, which is suboptimal. Therefore more research for dimension reduction methods is needed, such as fast manifold or distance-based learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "7"
},
{
"text": "Adam Learning rate 10 \u22123 L 1 regularization 10 \u22125.9 Table 3 : Hyperparameters of autoencoder architectures described in Section 4.3. L 1 regularization is used only when explicitly mentioned.",
"cite_spans": [],
"ref_spans": [
{
"start": 52,
"end": 59,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Batch size 128 Optimizer",
"sec_num": null
},
{
"text": "A Pre-processing",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Batch size 128 Optimizer",
"sec_num": null
},
{
"text": "Another common approach before any feature selection is to use z-scores ( x\u2212x \u03c3 ) instead of the original values. Its boost in performance is however similar to that of centering and normalization. The effects of each pre-processing step are in Table 5 . The significant differences in performance show the importance of data pre-processing (agnostic to model selection).",
"cite_spans": [],
"ref_spans": [
{
"start": 245,
"end": 252,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Batch size 128 Optimizer",
"sec_num": null
},
{
"text": "Despite the autoencoder providing slightly better retrieval performance and PCA being generally easier to use (due to the lack of hyperparameters), there are several tradeoffs in model selection. Once the models are trained, the runtime performance (encoding) is comparable though for PCA it is a single matrix projection while for the autoencoder it may be several layers and activation functions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Speed",
"sec_num": null
},
{
"text": "Depending on the specific library used for implementation, however, the results differ. Figure 8 shows that the autoencoder (implemented in Py-Torch) is much slower than any other model when run on a CPU but the fastest when run on a GPU. Similarly, PCA works best if used from the Py-Torch library (whether on CPU or GPU) and from 12 PyTorch 1.9.1, scikit-learn 0.23.2, RTX 2080 Ti (CUDA 11.4), 64\u00d72.1GHz Intel Xeon E5-2683 v4, 1TB RAM.",
"cite_spans": [],
"ref_spans": [
{
"start": 88,
"end": 96,
"text": "Figure 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "B Speed",
"sec_num": null
},
{
"text": "Uncompressed PCA 1bit Uncompressed 1.00 PCA 0.87 1.00 1bit 0.81 0.80 1.00 the standard Scikit package. Except for Scikit, there seems to be little relation between the target dimensionality and computation time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Speed",
"sec_num": null
},
{
"text": "We also show the major experiments in Table 7 (table structure equivalent to that for the pruned dataset in Table 2 ) on Natural Question (Kwiatkowski et al., 2019) with identical dataset pre-processing. The performance is overall larger because the task is different and the set of documents is lower (1.5 million spans) but comparatively the trends are in line with the previous conclusions of the paper. Figure 8 : Speed comparison of PCA and autoencoder (model 3) implemented in PyTorch and Scikit 12 split into training and encoding parts. Models were trained on documents and queries jointly (normalized). Error bars are 95% confidence intervals using t-distribution (5 runs).",
"cite_spans": [
{
"start": 138,
"end": 164,
"text": "(Kwiatkowski et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 108,
"end": 115,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 407,
"end": 415,
"text": "Figure 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "C Comparison on Natural Questions",
"sec_num": null
},
{
"text": "Compression Original Center + Norm. : Overview of compression method performance (from 768) using either L 2 or inner product for retrieval. Inputs are based on (1) original and (2) centered and normalized output of DPR-CLS. Performance is measured by R-Precision on NaturalQuestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": null
},
{
"text": "Sparse representations via BM25 (Robertson et al., 1995) are also commonly used but not the focus of this work.2 We refer to the retrieved objects as documents though they commonly range from spans of text (e.g. 100 tokens) to the full documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Cosine similarity could also be used but for computation reasons we skip it. Results are the same as for inner product and L 2 distance when the vectors are normalized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "IndexIVFFlat, nlist=200, nprobe=100.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "r 3 (x) = L 128 768Compared to PCA, it is able to model nonpairwise interaction between dimensions (in case of models 2 and 3 also non-linear interaction).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "When using 0 and 1, the IP similarity of 0 and 1 is the same as 0 and 0 while for \u22120.5 and 0.5 they are \u22120.25 and 0.25 respectively.10 The Tevatron toolkit(Gao et al., 2022) supports mixed precision training with 16-bit floats.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -Project-ID 232722074 -SFB 1102. Thank you to the reviewers, Badr M. Abdullah and many others for their comments to our work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A hierarchical o (n log n) force-calculation algorithm",
"authors": [
{
"first": "Josh",
"middle": [],
"last": "Barnes",
"suffix": ""
},
{
"first": "Piet",
"middle": [],
"last": "Hut",
"suffix": ""
}
],
"year": 1986,
"venue": "nature",
"volume": "324",
"issue": "6096",
"pages": "446--449",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Josh Barnes and Piet Hut. 1986. A hierarchical o (n log n) force-calculation algorithm. nature, 324(6096):446-449.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Modern multidimensional scaling: Theory and applications",
"authors": [
{
"first": "Ingwer",
"middle": [],
"last": "Borg",
"suffix": ""
},
{
"first": "J",
"middle": [
"F"
],
"last": "Patrick",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Groenen",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ingwer Borg and Patrick JF Groenen. 2005. Modern multidimensional scaling: Theory and applications. Springer Science & Business Media.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Neural Network Models for Tasks in Open-Domain and Closed-Domain Question Answering",
"authors": [
{
"first": "",
"middle": [],
"last": "Charles L Chen",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles L Chen. 2020. Neural Network Models for Tasks in Open-Domain and Closed-Domain Question Answering. Ohio University.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, Volume 1 (Long and Short Papers), pages 4171- 4186.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Wizard of wikipedia: Knowledge-powered conversational agents",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "Kurt",
"middle": [],
"last": "Shuster",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1811.01241"
]
},
"num": null,
"urls": [],
"raw_text": "Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2018. Wizard of wikipedia: Knowledge-powered conversational agents. arXiv preprint arXiv:1811.01241.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A survey of dimension reduction techniques",
"authors": [
{
"first": "K",
"middle": [],
"last": "Imola",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fodor",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Imola K Fodor. 2002. A survey of dimension reduction techniques. Technical report, Citeseer.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Liii. on lines and planes of closest fit to systems of points in space. The London, Edinburgh, and Dublin Philosophical Magazine",
"authors": [
{
"first": "Karl",
"middle": [],
"last": "Pearson",
"suffix": ""
},
{
"first": "F",
"middle": [
"R S"
],
"last": "",
"suffix": ""
}
],
"year": 1901,
"venue": "Journal of Science",
"volume": "2",
"issue": "11",
"pages": "559--572",
"other_ids": {
"DOI": [
"10.1080/14786440109462720"
]
},
"num": null,
"urls": [],
"raw_text": "Karl Pearson F.R.S. 1901. Liii. on lines and planes of closest fit to systems of points in space. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 2(11):559-572.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Tevatron: An efficient and flexible toolkit for dense retrieval",
"authors": [
{
"first": "Luyu",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Xueguang",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Callan",
"suffix": ""
}
],
"year": 2022,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2203.05765"
]
},
"num": null,
"urls": [],
"raw_text": "Luyu Gao, Xueguang Ma, Jimmy Lin, and Jamie Callan. 2022. Tevatron: An efficient and flexible toolkit for dense retrieval. arXiv preprint arXiv:2203.05765.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Realm: Retrievalaugmented language model pre-training",
"authors": [
{
"first": "Kelvin",
"middle": [],
"last": "Guu",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Zora",
"middle": [],
"last": "Tung",
"suffix": ""
},
{
"first": "Panupong",
"middle": [],
"last": "Pasupat",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2002.08909"
]
},
"num": null,
"urls": [],
"raw_text": "Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasu- pat, and Ming-Wei Chang. 2020. Realm: Retrieval- augmented language model pre-training. arXiv preprint arXiv:2002.08909.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Efficient nearest neighbor language models",
"authors": [
{
"first": "Junxian",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "5703--5714",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junxian He, Graham Neubig, and Taylor Berg- Kirkpatrick. 2021. Efficient nearest neighbor lan- guage models. In Proceedings of the 2021 Confer- ence on Empirical Methods in Natural Language Processing, pages 5703-5714.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Stochastic neighbor embedding",
"authors": [
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sam T Roweis",
"suffix": ""
}
],
"year": 2002,
"venue": "NIPS",
"volume": "15",
"issue": "",
"pages": "833--840",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey Hinton and Sam T Roweis. 2002. Stochastic neighbor embedding. In NIPS, volume 15, pages 833-840. Citeseer.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Improving the architecture of an autoencoder for dimension reduction",
"authors": [
{
"first": "Changjie",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Xiaoli",
"middle": [],
"last": "Hou",
"suffix": ""
},
{
"first": "Yonggang",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2014,
"venue": "IEEE 11th Intl Conf on Ubiquitous Intelligence and Computing and 2014 IEEE 11th Intl Conf on Autonomic and Trusted Computing and 2014 IEEE 14th Intl Conf on Scalable Computing and Communications and Its Associated Workshops",
"volume": "",
"issue": "",
"pages": "855--858",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Changjie Hu, Xiaoli Hou, and Yonggang Lu. 2014. Im- proving the architecture of an autoencoder for di- mension reduction. In 2014 IEEE 11th Intl Conf on Ubiquitous Intelligence and Computing and 2014 IEEE 11th Intl Conf on Autonomic and Trusted Com- puting and 2014 IEEE 14th Intl Conf on Scalable Computing and Communications and Its Associated Workshops, pages 855-858. IEEE.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A memory efficient baseline for open domain question answering",
"authors": [
{
"first": "Gautier",
"middle": [],
"last": "Izacard",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Petroni",
"suffix": ""
},
{
"first": "Lucas",
"middle": [],
"last": "Hosseini",
"suffix": ""
},
{
"first": "Nicola",
"middle": [
"De"
],
"last": "Cao",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2012.15156"
]
},
"num": null,
"urls": [],
"raw_text": "Gautier Izacard, Fabio Petroni, Lucas Hosseini, Nicola De Cao, Sebastian Riedel, and Edouard Grave. 2020. A memory efficient baseline for open domain ques- tion answering. arXiv preprint arXiv:2012.15156.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Billion-scale similarity search with gpus",
"authors": [
{
"first": "Jeff",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Matthijs",
"middle": [],
"last": "Douze",
"suffix": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Transactions on Big Data",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeff Johnson, Matthijs Douze, and Herv\u00e9 J\u00e9gou. 2019. Billion-scale similarity search with gpus. IEEE Transactions on Big Data.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Dense passage retrieval for opendomain question answering",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Karpukhin",
"suffix": ""
},
{
"first": "Barlas",
"middle": [],
"last": "Oguz",
"suffix": ""
},
{
"first": "Sewon",
"middle": [],
"last": "Min",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Ledell",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "6769--6781",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open- domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769-6781.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Dimensionality reduction by random mapping: Fast similarity computation for clustering",
"authors": [
{
"first": "Samuel",
"middle": [],
"last": "Kaski",
"suffix": ""
}
],
"year": 1998,
"venue": "IEEE International Joint Conference on Neural Networks Proceedings. IEEE World Congress on Computational Intelligence (Cat. No. 98CH36227)",
"volume": "1",
"issue": "",
"pages": "413--418",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel Kaski. 1998. Dimensionality reduction by ran- dom mapping: Fast similarity computation for clus- tering. In 1998 IEEE International Joint Confer- ence on Neural Networks Proceedings. IEEE World Congress on Computational Intelligence (Cat. No. 98CH36227), volume 1, pages 413-418. IEEE.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Generalization through memorization: Nearest neighbor language models",
"authors": [
{
"first": "Urvashi",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.00172"
]
},
"num": null,
"urls": [],
"raw_text": "Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2019. Generalization through memorization: Nearest neighbor language models. arXiv preprint arXiv:1911.00172.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Nonmetric multidimensional scaling: a numerical method",
"authors": [
{
"first": "Joseph",
"middle": [
"B"
],
"last": "Kruskal",
"suffix": ""
}
],
"year": 1964,
"venue": "Psychometrika",
"volume": "29",
"issue": "2",
"pages": "115--129",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph B Kruskal. 1964. Nonmetric multidimen- sional scaling: a numerical method. Psychometrika, 29(2):115-129.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Natural questions: a benchmark for question answering research",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Jennimaria",
"middle": [],
"last": "Palomaki",
"suffix": ""
},
{
"first": "Olivia",
"middle": [],
"last": "Redfield",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "Danielle",
"middle": [],
"last": "Epstein",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "453--466",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Red- field, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Ken- ton Lee, et al. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453- 466.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Tim Rockt\u00e4schel, et al. 2020. Retrieval-augmented generation for knowledge-intensive nlp tasks",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Ethan",
"middle": [],
"last": "Perez",
"suffix": ""
},
{
"first": "Aleksandra",
"middle": [],
"last": "Piktus",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Petroni",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Karpukhin",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Heinrich",
"middle": [],
"last": "K\u00fcttler",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.11401"
]
},
"num": null,
"urls": [],
"raw_text": "Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Hein- rich K\u00fcttler, Mike Lewis, Wen-tau Yih, Tim Rock- t\u00e4schel, et al. 2020. Retrieval-augmented generation for knowledge-intensive nlp tasks. arXiv preprint arXiv:2005.11401.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A proposed conceptual framework for a representational approach to information retrieval",
"authors": [
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2110.01529"
]
},
"num": null,
"urls": [],
"raw_text": "Jimmy Lin. 2021. A proposed conceptual framework for a representational approach to information re- trieval. arXiv preprint arXiv:2110.01529.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Sparse, dense, and attentional representations for text retrieval",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2021,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "9",
"issue": "",
"pages": "329--345",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi Luan, Jacob Eisenstein, Kristina Toutanova, and Michael Collins. 2021. Sparse, dense, and attentional representations for text retrieval. Transactions of the Association for Computational Linguistics, 9:329- 345.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Simple and effective unsupervised redundancy elimination to compress dense vectors for passage retrieval",
"authors": [
{
"first": "Xueguang",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Minghan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Ji",
"middle": [],
"last": "Xin",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2854--2859",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xueguang Ma, Minghan Li, Kai Sun, Ji Xin, and Jimmy Lin. 2021. Simple and effective unsupervised re- dundancy elimination to compress dense vectors for passage retrieval. In Proceedings of the 2021 Con- ference on Empirical Methods in Natural Language Processing, pages 2854-2859.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Simple and effective postprocessing for word representations",
"authors": [
{
"first": "Jiaqi",
"middle": [],
"last": "Mu",
"suffix": ""
},
{
"first": "Suma",
"middle": [],
"last": "Bhat",
"suffix": ""
},
{
"first": "Pramod",
"middle": [],
"last": "Viswanath",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1702.01417"
]
},
"num": null,
"urls": [],
"raw_text": "Jiaqi Mu, Suma Bhat, and Pramod Viswanath. 2017. All-but-the-top: Simple and effective postprocess- ing for word representations. arXiv preprint arXiv:1702.01417.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Kilt: a benchmark for knowledge intensive language tasks",
"authors": [
{
"first": "Fabio",
"middle": [],
"last": "Petroni",
"suffix": ""
},
{
"first": "Aleksandra",
"middle": [],
"last": "Piktus",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Majid",
"middle": [],
"last": "Yazdani",
"suffix": ""
},
{
"first": "Nicola",
"middle": [
"De"
],
"last": "Cao",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Thorne",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Karpukhin",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Maillard",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "2523--2544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabio Petroni, Aleksandra Piktus, Angela Fan, Patrick Lewis, Majid Yazdani, Nicola De Cao, James Thorne, Yacine Jernite, Vladimir Karpukhin, Jean Maillard, et al. 2021. Kilt: a benchmark for knowledge in- tensive language tasks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2523-2544.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Sentence-bert: Sentence embeddings using siamese bert-networks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3973--3983",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3973-3983.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Okapi at trec-3",
"authors": [
{
"first": "Stephen",
"middle": [
"E"
],
"last": "Robertson",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "Susan",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Micheline",
"middle": [
"M"
],
"last": "Hancock-Beaulieu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Gatford",
"suffix": ""
}
],
"year": 1995,
"venue": "Nist Special Publication Sp",
"volume": "109",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen E Robertson, Steve Walker, Susan Jones, Micheline M Hancock-Beaulieu, Mike Gatford, et al. 1995. Okapi at trec-3. Nist Special Publication Sp, 109:109.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Covariance estimation with limited training samples",
"authors": [
{
"first": "Saldju",
"middle": [],
"last": "Tadjudin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "David A Landgrebe",
"suffix": ""
}
],
"year": 1999,
"venue": "IEEE Transactions on Geoscience and Remote Sensing",
"volume": "37",
"issue": "4",
"pages": "2113--2118",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saldju Tadjudin and David A Landgrebe. 1999. Co- variance estimation with limited training samples. IEEE Transactions on Geoscience and Remote Sens- ing, 37(4):2113-2118.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Claimskg: A knowledge graph of fact-checked claims",
"authors": [
{
"first": "Andon",
"middle": [],
"last": "Tchechmedjiev",
"suffix": ""
},
{
"first": "Pavlos",
"middle": [],
"last": "Fafalios",
"suffix": ""
},
{
"first": "Katarina",
"middle": [],
"last": "Boland",
"suffix": ""
},
{
"first": "Malo",
"middle": [],
"last": "Gasquet",
"suffix": ""
},
{
"first": "Matth\u00e4us",
"middle": [],
"last": "Zloch",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Zapilko",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Dietze",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Todorov",
"suffix": ""
}
],
"year": 2019,
"venue": "International Semantic Web Conference",
"volume": "",
"issue": "",
"pages": "309--324",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andon Tchechmedjiev, Pavlos Fafalios, Katarina Boland, Malo Gasquet, Matth\u00e4us Zloch, Benjamin Zapilko, Stefan Dietze, and Konstantin Todorov. 2019. Claimskg: A knowledge graph of fact-checked claims. In International Semantic Web Conference, pages 309-324. Springer.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "All bark and no bite: Rogue dimensions in transformer language models obscure representational quality",
"authors": [
{
"first": "William",
"middle": [],
"last": "Timkey",
"suffix": ""
},
{
"first": "Marten",
"middle": [],
"last": "Van Schijndel",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4527--4546",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Timkey and Marten van Schijndel. 2021. All bark and no bite: Rogue dimensions in transformer language models obscure representational quality. In Proceedings of the 2021 Conference on Empir- ical Methods in Natural Language Processing, pages 4527-4546.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Auto-encoder based dimensionality reduction",
"authors": [
{
"first": "Yasi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hongxun",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Sicheng",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2016,
"venue": "Neurocomputing",
"volume": "184",
"issue": "",
"pages": "232--242",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yasi Wang, Hongxun Yao, and Sicheng Zhao. 2016. Auto-encoder based dimensionality reduction. Neu- rocomputing, 184:232-242.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Efficient passage retrieval with hashing for open-domain question answering",
"authors": [
{
"first": "Ikuya",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Akari",
"middle": [],
"last": "Asai",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2106.00882"
]
},
"num": null,
"urls": [],
"raw_text": "Ikuya Yamada, Akari Asai, and Hannaneh Hajishirzi. 2021. Efficient passage retrieval with hashing for open-domain question answering. arXiv preprint arXiv:2106.00882.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Hotpotqa: A dataset for diverse, explainable multi-hop question answering",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Saizheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2369--2380",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christo- pher D Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empiri- cal Methods in Natural Language Processing, pages 2369-2380.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Adaptive semiparametric language models",
"authors": [
{
"first": "Dani",
"middle": [],
"last": "Yogatama",
"suffix": ""
},
{
"first": "Cyprien",
"middle": [],
"last": "De Masson D'autume",
"suffix": ""
},
{
"first": "Lingpeng",
"middle": [],
"last": "Kong",
"suffix": ""
}
],
"year": 2021,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "9",
"issue": "",
"pages": "362--373",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dani Yogatama, Cyprien de Masson d'Autume, and Lingpeng Kong. 2021. Adaptive semiparametric lan- guage models. Transactions of the Association for Computational Linguistics, 9:362-373.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Artefact retrieval: Overview of NLP models with knowledge base access",
"authors": [
{
"first": "Vil\u00e9m",
"middle": [],
"last": "Zouhar",
"suffix": ""
},
{
"first": "Marius",
"middle": [],
"last": "Mosbach",
"suffix": ""
},
{
"first": "Debanjali",
"middle": [],
"last": "Biswas",
"suffix": ""
},
{
"first": "Dietrich",
"middle": [],
"last": "Klakow",
"suffix": ""
}
],
"year": 2021,
"venue": "Workshop on Commonsense Reasoning and Knowledge Bases",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vil\u00e9m Zouhar, Marius Mosbach, Debanjali Biswas, and Dietrich Klakow. 2021. Artefact retrieval: Overview of NLP models with knowledge base access. In Work- shop on Commonsense Reasoning and Knowledge Bases.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Effect of data centering and normalization on performance (evaluated with FAISS).",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Dimension reduction using different random projections methods. Presented values are the max of 3 runs (except for greedy dimension dropping, which is deterministic), semi-transparent lines correspond to the minimum. Embeddings are provided by centered and normalized DPR-CLS. Final vectors are also postprocessed by centering and normalization.",
"uris": null,
"num": null,
"type_str": "figure"
},
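The caption above describes reducing embedding dimensionality with a random projection while centering and normalizing the vectors both before and after the reduction. The following is a minimal sketch of that recipe, not the paper's released code: it assumes scikit-learn's SparseRandomProjection, synthetic stand-ins for the DPR-CLS document and query embeddings, and an illustrative helper name (center_and_normalize).

```python
# Sketch only: sparse random projection with centering + L2 normalization
# applied both before and after dimension reduction, as the caption describes.
import numpy as np
from sklearn.random_projection import SparseRandomProjection

def center_and_normalize(x: np.ndarray) -> np.ndarray:
    """Subtract the column mean, then scale every row to unit L2 norm."""
    x = x - x.mean(axis=0, keepdims=True)
    return x / np.linalg.norm(x, axis=1, keepdims=True)

rng = np.random.default_rng(0)
docs = rng.normal(size=(1000, 768)).astype(np.float32)     # stand-in for DPR-CLS document vectors
queries = rng.normal(size=(100, 768)).astype(np.float32)   # stand-in for query vectors

docs_pre, queries_pre = center_and_normalize(docs), center_and_normalize(queries)

proj = SparseRandomProjection(n_components=128, random_state=0)
docs_red = proj.fit_transform(docs_pre)     # fit the projection on documents
queries_red = proj.transform(queries_pre)   # reuse the same projection for queries

docs_red, queries_red = center_and_normalize(docs_red), center_and_normalize(queries_red)
print(docs_red.shape, queries_red.shape)    # (1000, 128) (100, 128)
```

Because the projection is random, retrieval quality varies across seeds, which is why the caption reports the maximum and minimum over 3 runs.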
"FIGREF2": {
"text": "Dimension reduction using PCA (top) and Autoencoder (bottom) trained either on document index, query embeddings or both. Each figure corresponds to one of the four possible combinations of centering and normalizing the input data. The output vectors are not post-processed. Reconstruction loss (MSE, average for both documents and queries) is shown in transparent colour and computed in original data space. Horizontal lines show uncompressed performance. Embeddings are provided by DPR-CLS.",
"uris": null,
"num": null,
"type_str": "figure"
},
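As a companion to the PCA panel described in the caption above, here is a hedged sketch of fitting PCA on the document index (one of the three training-data choices the caption lists) and measuring reconstruction MSE back in the original embedding space. The synthetic data and the 128-dimension target are illustrative assumptions, not values taken from the paper's code.

```python
# Sketch only: PCA fit on the document index, applied to both documents and
# queries, with reconstruction loss (MSE) computed in the original data space.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
docs = rng.normal(size=(1000, 768)).astype(np.float32)
queries = rng.normal(size=(100, 768)).astype(np.float32)

pca = PCA(n_components=128)
pca.fit(docs)                        # alternatively fit on queries, or on both

docs_red = pca.transform(docs)
queries_red = pca.transform(queries)

# average reconstruction MSE over documents and queries, in the 768-dim space
docs_rec = pca.inverse_transform(docs_red)
queries_rec = pca.inverse_transform(queries_red)
mse = 0.5 * (np.mean((docs - docs_rec) ** 2) + np.mean((queries - queries_rec) ** 2))
print(f"avg reconstruction MSE: {mse:.4f}")
```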
"FIGREF3": {
"text": "Combination of PCA and precision reduction. Compression ratio is shown in text. 16-bit and 32-bit values overlap with 8-bit and their compression ratios are not shown. Measured on HotpotQA with DPR-CLS.",
"uris": null,
"num": null,
"type_str": "figure"
},
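The combination in this caption, PCA followed by precision reduction, can be sketched as below. This is an assumption-laden illustration rather than the paper's implementation: it keeps only the sign bit of each PCA dimension and ranks documents by Hamming distance, whereas the paper may score the 1-bit vectors differently.

```python
# Sketch only: PCA to 128 dimensions followed by 1-bit-per-dimension storage.
# np.packbits keeps 8 dimensions per byte, so each document needs 16 bytes
# instead of 768 * 4 bytes for the original float32 vector.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
docs = rng.normal(size=(1000, 768)).astype(np.float32)

pca = PCA(n_components=128)
docs_red = pca.fit_transform(docs)

bits = docs_red > 0                        # sign bit per reduced dimension
index_1bit = np.packbits(bits, axis=1)     # uint8 index of shape (1000, 16)

# query time: project with the same PCA, binarize, rank by Hamming distance
query = rng.normal(size=(1, 768)).astype(np.float32)
q_bits = np.packbits(pca.transform(query) > 0, axis=1)
hamming = np.unpackbits(index_1bit ^ q_bits, axis=1).sum(axis=1)
top_k = np.argsort(hamming)[:10]           # smallest distance = closest match
print(top_k)
```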
"FIGREF5": {
"text": "Distribution of the number of retrieved documents for HotpotQA queries before and after compression: PCA (128) and 1-bit precision with R-Precisions (centered & normalized) of 0.579 and 0.561, respectively.",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF0": {
"type_str": "table",
"text": "Average L 1 and L 2 norms of document and query embeddings from DPR-CLS without preprocessing.",
"num": null,
"html": null,
"content": "<table/>"
},
"TABREF3": {
"type_str": "table",
"text": "Overview of compression method performance (from 768) using either L 2 or inner product for retrieval. Inputs are based on centered and normalized output of DPR-CLS and the outputs optionally post-processed again.",
"num": null,
"html": null,
"content": "<table/>"
},
"TABREF5": {
"type_str": "table",
"text": "",
"num": null,
"html": null,
"content": "<table><tr><td>: Correlation of the number of retrieved docu-ments for HotpotQA queries in different retrieval modes: uncompressed, PCA (128) and 1-bit precision with R-Precisions (centered &amp; normalized) of 0.618, 0.579 and 0.561, respectively.</td></tr></table>"
},
"TABREF6": {
"type_str": "table",
"text": "Number of training and dev queries and documents for HotpotQA and Natural Questions. Train and dev columns are queries.",
"num": null,
"html": null,
"content": "<table/>"
},
"TABREF8": {
"type_str": "table",
"text": "",
"num": null,
"html": null,
"content": "<table/>"
}
}
}
}