{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:13:33.052597Z"
},
"title": "Multi-modal Retrieval of Tables and Texts Using Tri-encoder Models",
"authors": [
{
"first": "Bogdan",
"middle": [],
"last": "Kosti\u0107",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Julian",
"middle": [],
"last": "Risch",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Timo",
"middle": [],
"last": "M\u00f6ller",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Open-domain extractive question answering works well on textual data by first retrieving candidate texts and then extracting the answer from those candidates. However, some questions cannot be answered by text alone but require information stored in tables. In this paper, we present an approach for retrieving both texts and tables relevant to a question by jointly encoding texts, tables and questions into a single vector space. To this end, we create a new multi-modal dataset based on text and table datasets from related work and compare the retrieval performance of different encoding schemata. We find that dense vector embeddings of transformer models outperform sparse embeddings on four out of six evaluation datasets. Comparing different dense embedding models, tri-encoders with one encoder for each question, text and table increase retrieval performance compared to bi-encoders with one encoder for the question and one for both text and tables. We release the newly created multi-modal dataset to the community so that it can be used for training and evaluation.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Open-domain extractive question answering works well on textual data by first retrieving candidate texts and then extracting the answer from those candidates. However, some questions cannot be answered by text alone but require information stored in tables. In this paper, we present an approach for retrieving both texts and tables relevant to a question by jointly encoding texts, tables and questions into a single vector space. To this end, we create a new multi-modal dataset based on text and table datasets from related work and compare the retrieval performance of different encoding schemata. We find that dense vector embeddings of transformer models outperform sparse embeddings on four out of six evaluation datasets. Comparing different dense embedding models, tri-encoders with one encoder for each question, text and table increase retrieval performance compared to bi-encoders with one encoder for the question and one for both text and tables. We release the newly created multi-modal dataset to the community so that it can be used for training and evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Finding the answer to a factual question in a large collection of documents is a tedious task that many people from a broad range of domains have to complete on a daily basis. In order to address this task with machine learning approaches, it has been formalized as open-domain extractive questionanswering (QA). More specifically, given a natural language question and a database of text documents as a knowledge base, open-domain extractive QA aims to extract a substring that answers the given question out of one of the documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The standard approach for this task is a pipeline architecture consisting of two components: a retriever selecting a small subset of relevant documents from the database and a reader extracting granular answers out of each of these retrieved documents (Voorhees and Tice, 2000) . In this paper, we focus on the retriever and present a transformerbased tri-encoder model as an implementation of this component. Retrievers can also be implemented with bag-of-words retrieval methods, such as TF-IDF or BM25, that transform all documents and the question to sparse vector representations. However, as these methods rely on a lexical overlap of the question and the documents, they fail to capture synonymy and other semantic relationships. This limitation motivates the use of dense vector representations and we compare dense retrieval models to a BM25 baseline in our experiments. A survey on term-based, early semantic, and neural semantic models for document retrieval as a first step before document reranking and down-stream tasks, such as question answering, has been published by Cai et al. (2021) .",
"cite_spans": [
{
"start": 252,
"end": 277,
"text": "(Voorhees and Tice, 2000)",
"ref_id": "BIBREF22"
},
{
"start": 1085,
"end": 1102,
"text": "Cai et al. (2021)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
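{
"text": "As a minimal sketch of such a sparse method (our own illustration, not code from this paper, using the rank_bm25 package): a query term that does not occur in a document contributes nothing to its score, which is exactly the synonymy limitation discussed above.\n\nfrom rank_bm25 import BM25Okapi\n\n# Toy corpus, already tokenized; a real retriever would index all documents.\ndocs = [['eiffel', 'tower', 'paris'], ['berlin', 'wall', 'history']]\nbm25 = BM25Okapi(docs)\nscores = bm25.get_scores(['paris', 'tower'])  # purely lexical-overlap scores",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},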
{
"text": "To date, most of the research centered around question-answering focuses on using free-form text as the single source for answering questions. However, valuable information can obviously be found in other modalities as well. For instance, a lot of information is stored in semi-structured tables; according to Cafarella et al. (2008) , more than 14.1 billion tables can be found on the World Wide Web. Given that a user typically does not know in advance in which modality the answer to their question resides, a QA system capable of jointly handling text and tables is needed. One major challenge in building such a system is to represent texts and tables in a way that allows capturing semantic similarity and retrieving texts and tables that are semantically related to a given question.",
"cite_spans": [
{
"start": 310,
"end": 333,
"text": "Cafarella et al. (2008)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Contributions The contributions of this paper can be summarized as follows: (1) we present biencoder and tri-encoder models that are capable of joint retrieval of tables and texts; (2) we create and release a multi-modal dataset for training and evaluating models on this task; 1 (3) we compare sparse retrieval models with dense retrieval models using bi-encoders and tri-encoders on our new multi-modal dataset and on five uni-modal datasets from related work.",
"cite_spans": [
{
"start": 278,
"end": 279,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Outline The remainder of this paper is structured as follows: Section 2 summarizes existing methods for uni-modal retrieval of texts on the one hand and of tables on the other hand. Further, it discusses the only two, recently published approaches for joint retrieval of tables and texts. Section 3 briefly describes the existing uni-modal datasets and our new multi-modal dataset, which we use to train the retrieval models presented in Section 4 and to evaluate these models in Section 5. Section 6 concludes the paper and gives directions for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To the best of our knowledge the only existing work that addresses joint retrieval of tables and texts is by Chen et al. (2021) , Talmor et al. (2021) and Li et al. (2021) . To address the challenge of limited context available for tables, Chen et al. (2021) fuse a table segment and text passages into one block if they mention the same named entities. Each block is represented with a single dense embedding so that a table and relevant passages are jointly retrieved as a group if the embedding is similar to that of the question. This grouping makes sense because Chen et al. (2021) address the task of multihop QA, where information needs to be aggregated from multiple tables and texts to answer a question. In contrast to that, we address the slightly different task of single-hop QA, where either only one table or one text is needed to answer a question. Therefore, our approach represents tables and texts with separate embeddings, which are in the same embedding space. The advantage here is that the model learns to estimate relevance on the more finegrained level of individual tables or texts and can decide whether a particular table or text is more relevant to the question. Li et al. (2021) also address the task of multi-hop QA on texts and tables but retrieve them individually. They make use of the sparse retrieval method BM25 and a transformer-based reranker to generate a set of candidate texts and tables. With two separate BM25 indices for texts and tables, they retrieve a set of documents for each modality. In a second step, they apply a joint BERT-based reranker to reduce the size of candidate texts and tables. Talmor et al. (2021) create a MULTIMODALQA dataset containing questions that require joint reasoning over tables, texts, and images.",
"cite_spans": [
{
"start": 109,
"end": 127,
"text": "Chen et al. (2021)",
"ref_id": "BIBREF3"
},
{
"start": 130,
"end": 150,
"text": "Talmor et al. (2021)",
"ref_id": "BIBREF21"
},
{
"start": 155,
"end": 171,
"text": "Li et al. (2021)",
"ref_id": "BIBREF13"
},
{
"start": 240,
"end": 258,
"text": "Chen et al. (2021)",
"ref_id": "BIBREF3"
},
{
"start": 568,
"end": 586,
"text": "Chen et al. (2021)",
"ref_id": "BIBREF3"
},
{
"start": 1191,
"end": 1207,
"text": "Li et al. (2021)",
"ref_id": "BIBREF13"
},
{
"start": 1642,
"end": 1662,
"text": "Talmor et al. (2021)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Otherwise closely related to our approach are TABERT by Yin et al. (2020) and TAPAS by Herzig et al. (2020) , who use tables and texts for language model pre-training but do not consider joint retrieval of tables and texts. TURL focuses on representation learning for tables but also does not consider the retrieval task (Deng et al., 2020) . Due to this limited amount of prior research, we discuss related work on the separate tasks of uni-modal text retrieval and table retrieval in the following.",
"cite_spans": [
{
"start": 56,
"end": 73,
"text": "Yin et al. (2020)",
"ref_id": "BIBREF23"
},
{
"start": 87,
"end": 107,
"text": "Herzig et al. (2020)",
"ref_id": "BIBREF10"
},
{
"start": 321,
"end": 340,
"text": "(Deng et al., 2020)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Dense Passage Retrieval (DPR) by Karpukhin et al. (2020) relies on a bi-encoder model comprising two separate BERT models . Similar to TF-IDF, DPR is a vector space model that represents both queries and documents in the same vector space. However, while TF-IDF represents text documents as very high dimensional sparse vectors, DPR relies on relatively low dimensional dense embeddings. While one of the models, the passage encoder (BERT p ), is used to encode text passages at indexing time, the second model, the question encoder (BERT q ), is used to encode questions at query time. Since BERT's [CLS]-token is particularly designated to capture the meaning of the whole input sequence, its embedding is used as a representation vector for both the text passages and the questions.",
"cite_spans": [
{
"start": 33,
"end": 56,
"text": "Karpukhin et al. (2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text Retrieval",
"sec_num": "2.1"
},
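{
"text": "A minimal sketch of how such a [CLS]-based representation can be computed with the Hugging Face transformers library (our illustration; the checkpoint name is only a placeholder, and DPR trains two such encoders, one for passages and one for questions):\n\nfrom transformers import AutoModel, AutoTokenizer\n\ntok = AutoTokenizer.from_pretrained('bert-base-uncased')\nenc = AutoModel.from_pretrained('bert-base-uncased')\n\ndef embed(texts):\n    # Tokenize a batch and take the hidden state of the first ([CLS])\n    # position as the fixed-size representation vector.\n    batch = tok(texts, padding=True, truncation=True, return_tensors='pt')\n    return enc(**batch).last_hidden_state[:, 0]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Retrieval",
"sec_num": "2.1"
},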
{
"text": "The training aims to increase the dot product or cosine similarity of semantically similar passages and questions. In order to achieve this, Karpukhin et al. (2020) use, besides the question's positive passage, hard negative passages as well as in-batch negative passages as training signal. Hard negatives are sampled utilizing a BM25-based retriever on the whole English Wikipedia dump. For each question, they use the highest ranked passage not containing the question's answer string. DPR drastically outperforms BM25 by almost 20 percentage points with regard to recall@20 on the Natural Questions (NQ) dataset by Kwiatkowski et al. (2019) .",
"cite_spans": [
{
"start": 141,
"end": 164,
"text": "Karpukhin et al. (2020)",
"ref_id": "BIBREF11"
},
{
"start": 619,
"end": 644,
"text": "Kwiatkowski et al. (2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text Retrieval",
"sec_num": "2.1"
},
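{
"text": "The training objective can be sketched as follows (our simplified illustration, assuming question and passage embeddings are already computed): the pairwise dot products of a batch form a score matrix whose diagonal holds the positive pairs, and a cross-entropy loss treats all other passages in the batch as negatives; hard negatives would simply be appended as additional passage rows.\n\nimport torch\nimport torch.nn.functional as F\n\ndef in_batch_negative_loss(q_emb, p_emb):\n    # q_emb: (B, d) question embeddings; p_emb: (B, d) embeddings of each\n    # question's positive passage. Every other passage in the batch acts\n    # as an in-batch negative.\n    scores = q_emb @ p_emb.T               # (B, B) dot-product similarities\n    targets = torch.arange(q_emb.size(0))  # diagonal entries are positives\n    return F.cross_entropy(scores, targets)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Retrieval",
"sec_num": "2.1"
},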
{
"text": "Most existing table retrieval approaches rely on supervised learning-to-rank approaches. Zhang and Balog (2018) (Mikolov et al., 2013) and RDF2vec embeddings (Ristoski and Paulheim, 2016) . These features are then used to train a random forest regressor to get relevance scores of the tables with regard to the query. Interestingly, training word embeddings on a corpus of tables instead of texts does not improve performance (Zhang et al., 2019 Bagheri and Al-Obeidat (2020) focus on hard queries that contain terms that do not occur in the relevant tables, which means the query and the relevant tables have a low lexical overlap. To this end, they learn low dimensional latent factor matrices to represent tables as well as queries, i.e., they learn term co-occurrences to be able to get tables that address the same topic but only partially overlap on a lexical level. Based on this result, we compare results on datasets with high or low lexical overlap in our experiments.",
"cite_spans": [
{
"start": 89,
"end": 111,
"text": "Zhang and Balog (2018)",
"ref_id": "BIBREF25"
},
{
"start": 112,
"end": 134,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF15"
},
{
"start": 158,
"end": 187,
"text": "(Ristoski and Paulheim, 2016)",
"ref_id": "BIBREF18"
},
{
"start": 426,
"end": 445,
"text": "(Zhang et al., 2019",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Table Retrieval",
"sec_num": "2.2"
},
{
"text": "There are four deep learning approaches for table retrieval (Shraga et al., 2020b; Pan et al., 2021; Chen et al., 2020b; Herzig et al., 2021) . Shraga et al. (2020b) treat tables and queries as multi-modal objects that consist of query, table caption, schema, rows and columns. Each component is encoded using its own neural network encoder that accounts for its special characteristics. Subsequently, these uni-modal encodings are joined into a single representation, which is passed on to fully connected layers that predict whether the input table is relevant with regard to the input query.",
"cite_spans": [
{
"start": 60,
"end": 82,
"text": "(Shraga et al., 2020b;",
"ref_id": "BIBREF20"
},
{
"start": 83,
"end": 100,
"text": "Pan et al., 2021;",
"ref_id": "BIBREF16"
},
{
"start": 101,
"end": 120,
"text": "Chen et al., 2020b;",
"ref_id": "BIBREF5"
},
{
"start": 121,
"end": 141,
"text": "Herzig et al., 2021)",
"ref_id": "BIBREF9"
},
{
"start": 144,
"end": 165,
"text": "Shraga et al. (2020b)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Table Retrieval",
"sec_num": "2.2"
},
{
"text": "Pan et al. (2021) stack two retrieval components. First, they use BM25 to produce a large subset of possibly relevant tables. These tables are then passed to a row-column intersection model that generates a probability distribution over the table cells whether they contain the answer to the user's query. The maximum cell-level score for each table represents its retrieval score. Chen et al. (2020b) apply the transformer-based language model BERT to the table retrieval task by combining BERT embeddings with other, hand-curated table and query features. Their approach consists of several components, including a step where the concatenation of a query, a table's context fields and selected relevant table rows are processed by a BERT model. A significant downside of this approach is that it becomes inefficient with increasing number of tables: for each query, all tables need to be passed through a BERT network. Herzig et al. (2021) solve this efficiency problem by adapting Karpukhin et al.'s (2020) dense passage retrieval approach to dense table retrieval (DTR). To this end, they make use of TAPAS (Herzig et al., 2020) , a transformer-based language model that has been pre-trained on millions of tables. TAPAS extends BERT by adding three different types of positional embeddings to encode the two-dimensional tabular structure: row, column and rank embeddings. This allows to flatten the table by concatenating the rows to a one-dimensional sequence of tokens. Similar to Karpukhin et al. (2020) , Herzig et al. (2021) make use of a bi-encoder approach. However, they use two TAPAS instances instead of BERT instances to encode the queries and the tables, respectively. The goal of training this bi-encoder is to build an embedding model that generates similar embeddings for questions and their relevant tables. As in Karpukhin et al.'s (2020) approach, this goal is achieved using hard-negatives retrieved from all the tables from the English Wikipedia dump as well as in-batch negatives. DTR outperforms BM25 by more than 40 percentage points on the NQ-TABLES dataset (Herzig et al., 2021) . However, the experiments also show that TAPAS requires additional pre-training on the task of table retrieval on millions of tables scraped from Wikipedia. As a further research direction for future work, Herzig et al. (2021) propose to combine tables and texts for multi-modal open-domain QA. We contribute towards this goal in our paper by providing a multimodal retriever as one component of a multi-modal open-domain QA pipeline on tables and texts.",
"cite_spans": [
{
"start": 382,
"end": 401,
"text": "Chen et al. (2020b)",
"ref_id": "BIBREF5"
},
{
"start": 921,
"end": 941,
"text": "Herzig et al. (2021)",
"ref_id": "BIBREF9"
},
{
"start": 1111,
"end": 1132,
"text": "(Herzig et al., 2020)",
"ref_id": "BIBREF10"
},
{
"start": 1488,
"end": 1511,
"text": "Karpukhin et al. (2020)",
"ref_id": "BIBREF11"
},
{
"start": 1835,
"end": 1860,
"text": "Karpukhin et al.'s (2020)",
"ref_id": null
},
{
"start": 2087,
"end": 2108,
"text": "(Herzig et al., 2021)",
"ref_id": "BIBREF9"
},
{
"start": 2316,
"end": 2336,
"text": "Herzig et al. (2021)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Table Retrieval",
"sec_num": "2.2"
},
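{
"text": "The flattening step can be sketched as follows (a simplified illustration; TAPAS additionally injects the row, column and rank positional embeddings mentioned above, which plain string concatenation cannot express):\n\ndef flatten_table(header, rows):\n    # Concatenate header and rows into a one-dimensional token sequence\n    # that a transformer can consume like ordinary text.\n    cells = [' | '.join(header)]\n    for row in rows:\n        cells.append(' | '.join(str(cell) for cell in row))\n    return ' '.join(cells)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table Retrieval",
"sec_num": "2.2"
},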
{
"text": "The training and evaluation of models examined in this paper on the task of multi-modal retrieval makes use of five datasets from related work: NQ (Kwiatkowski et al., 2019) , NQ-TABLES (Herzig et al., 2021) , WIKISQL (Zhong et al., 2017) , a subset of WIKISQL, which we call WIKISQL ctx-independent , and OTT-QA (Chen et al., 2021) . This section briefly explains the characteristics of these datasets and of our newly created multi-modal retrieval dataset comprising tables and texts, which we call MULTIMODALRETRIEVAL. Table 1 gives an overview of the modality and the number of samples in each dataset.",
"cite_spans": [
{
"start": 147,
"end": 173,
"text": "(Kwiatkowski et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 186,
"end": 207,
"text": "(Herzig et al., 2021)",
"ref_id": "BIBREF9"
},
{
"start": 218,
"end": 238,
"text": "(Zhong et al., 2017)",
"ref_id": "BIBREF26"
},
{
"start": 313,
"end": 332,
"text": "(Chen et al., 2021)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 522,
"end": 529,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "NQ Natural Questions (NQ) (Kwiatkowski et al., 2019) is an open-domain QA dataset on Wikipedia articles. It consists of questions, their answers and the text passages the answers reside in. The questions are natural because they consist of real user queries issued to the Google search engine instead of questions posed by annotators after reading a text passage, which was done to create other popular QA datasets, such as the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2018) . Having natural questions does not only ensure that the questions correspond to information needs by real users but also makes the dataset open-domain, i.e., the questions are context-independent and can be understood without their accompanying text passage that contains the answer. For the purpose of retrieval, we utilize Karpukhin et al.'s (2020) preprocessed variant of NQ.",
"cite_spans": [
{
"start": 26,
"end": 52,
"text": "(Kwiatkowski et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 472,
"end": 496,
"text": "(Rajpurkar et al., 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "The answers to most of the questions inside the NQ dataset can be found in plain unstructured text. However, a subset of the questions are answered by tables. As a consequence, Herzig et al. (2021) construct NQ-TABLES, a tablespecific QA dataset based on NQ. To achieve this objective, they extract all the tables and all the questions whose answers reside inside a table. They come up with a dataset consisting of 9,594 questions in the training set, 1,068 questions in the development set, 966 questions in the test set and 169,898 tables in total. Given that this dataset is a subset of NQ, the questions share the characteristic of being context-independent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NQ-TABLES",
"sec_num": null
},
{
"text": "WIKISQL WIKISQL (Zhong et al., 2017 ) is a closed-domain QA dataset consisting of 24,241 ta-bles and 80,654 natural language questions together with their corresponding SQL query. To build this dataset, Zhong et al. (2017) generate a number of random SQL queries for each table. These SQL queries are then transformed into crude questions using templates. Finally, Amazon Mechanical Turk crowd workers paraphrase these crude questions into natural language questions, which are checked by two additional crowd workers.",
"cite_spans": [
{
"start": 16,
"end": 35,
"text": "(Zhong et al., 2017",
"ref_id": "BIBREF26"
},
{
"start": 203,
"end": 222,
"text": "Zhong et al. (2017)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NQ-TABLES",
"sec_num": null
},
{
"text": "WIKISQL ctx-independent Since WIKISQL is a closed-domain QA dataset, the majority of its questions is context-dependent, i.e., they do not provide enough context to be answered without their accompanying table and are therefore not suitable for training or evaluation on the retrieval task. One example for such insufficient questions inside the dataset is: \"Who is the player that wears number 42?\", which cannot be answered without additional context given in the table, such as the name of a sports team and a year. As a consequence, to make use of WIKISQL, all questions that do not provide enough context for retrieval need to be filtered out. For automating this filtering, we labeled a subset of WIKISQL's questions with regard to whether they are either context-independent or under-specified resulting in 4,553 labels as training set and 612 labels as test set. These labels are then used to train a classifier that predicts whether a question provides enough context. We fine-tune a RoBERTa-base (Liu et al., 2019) language model achieving an accuracy of 0.8134 and a macro-averaged F1-score of 0.7748 on the test set. Next, we apply this classifier to the whole WIKISQL dataset to filter out all the questions that are predicted as under-specified.",
"cite_spans": [
{
"start": 1006,
"end": 1024,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NQ-TABLES",
"sec_num": null
},
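{
"text": "The filtering step can be sketched as follows (our illustration using the transformers pipeline API; the checkpoint name stands in for our fine-tuned RoBERTa-base classifier, and the label string is an assumption):\n\nfrom transformers import pipeline\n\n# Placeholder checkpoint; in our setup this would be the fine-tuned\n# RoBERTa-base question classifier described above.\nclf = pipeline('text-classification', model='roberta-base')\n\ndef keep_context_independent(questions):\n    # Keep only questions predicted to provide enough context on their own.\n    preds = clf(questions)\n    return [q for q, p in zip(questions, preds)\n            if p['label'] == 'context_independent']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NQ-TABLES",
"sec_num": null
},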
{
"text": "OTT-QA OTT-QA (Chen et al., 2021) is an open-domain multi-hop QA dataset of texts and tables from Wikipedia built upon HybridQA (Chen et al., 2020a) , a closed-domain multi-hop QA dataset. Multi-hop QA refers to the fact that most questions inside the dataset require a combination of different texts and/or tables instead of a single document in order to be answered. To generate an open-domain version of HybridQA, Chen et al. (2021) let crowd workers decontextualize all the questions. Furthermore, they add additional question-answer pairs on newly crawled tables. Since the published annotations contain only the gold tables but not the gold texts, we use OTT-QA only for generating table retrieval training samples and evaluating the uni-modal retrieval of tables. Train Test Ctx-ind. NQ text 58,880 3,610 NQ-TABLES table 9,594 966 WIKISQL table 56,355 15,878 WIKISQL ctx-independent table 7,336 2,101 OTT-QA table 41,469 2,158 MULTIMODALRETRIEVAL text & table 120,239 4,937 to pre-train TAPAS) with Elasticsearch to sample hard negatives using BM25. For each question, the highest ranked passages or tables that do no contain the answer string were chosen, i.e., a question originating from a tabular question-answering dataset can also have a text passage as hard negative, and vice versa.",
"cite_spans": [
{
"start": 14,
"end": 33,
"text": "(Chen et al., 2021)",
"ref_id": "BIBREF3"
},
{
"start": 128,
"end": 148,
"text": "(Chen et al., 2020a)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 771,
"end": 1001,
"text": "Train Test Ctx-ind. NQ text 58,880 3,610 NQ-TABLES table 9,594 966 WIKISQL table 56,355 15,878 WIKISQL ctx-independent table 7,336 2,101 OTT-QA table 41,469 2,158 MULTIMODALRETRIEVAL text & table 120,239",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "NQ-TABLES",
"sec_num": null
},
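{
"text": "Hard negative mining can be sketched as follows (our illustration with the official Elasticsearch Python client; the index and field names are assumptions):\n\nfrom elasticsearch import Elasticsearch\n\nes = Elasticsearch('http://localhost:9200')  # assumed local instance\n\ndef mine_hard_negative(question, answer, index='texts_and_tables', k=50):\n    # Query a single BM25 index holding both text passages and linearized\n    # tables; return the highest-ranked hit not containing the answer.\n    hits = es.search(index=index, size=k,\n                     query={'match': {'content': question}})['hits']['hits']\n    for hit in hits:\n        if answer not in hit['_source']['content']:\n            return hit['_source']\n    return None",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NQ-TABLES",
"sec_num": null
},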
{
"text": "Our two approaches for the joint retrieval of tables and texts comprise bi-encoder and tri-encoder models based on uni-modal dense retrieval methods by Karpukhin et al. (2020) and Herzig et al. (2021) . In our first approach, the bi-encoder uses one language model to encode the questions and a second model to encode both the tables and the text passages. Our second approach adds a third encoder, such that there is one separate encoder for questions, text passages and tables. In contrast to the bi-encoder where tables and texts are encoded by the same model, the tri-encoder routes tables to the table encoder model and text passages to the text encoder model. We train three different bi-encoders and three different tri-encoders, which differ in the underlying language models as specified in Table 2 . The first multi-modal bi-encoder consists of two different BERT-small instances that serve as question encoder and table and text encoder, respectively. Given that BERT models only allow onedimensional strings of text as input, the twodimensional tables are transformed into one dimension by concatenating the titles of the page and the section the table occurs in, the caption of the (Herzig et al., 2020; gives a performance boost compared to a plain BERT model, the remaining two bi-encoders make use of a TAPAS model that is pre-trained for the task of table retrieval (Herzig et al., 2021) for at least one of their encoders. Thus, the second bi-encoder uses a BERT-small instance as question encoder and a TAPAS-small instance as table and text encoder. The third bi-encoder utilizes two TAPASsmall models, one to encode the questions and the second to encode both text and tables. Using BERTsmall instead of BERT-base or BERT-large models drastically reduces the number of parameters and allows to fit more training samples into one batch. In contrast to a BERT-large model with 24 transformer layers, hidden representations of size 1024, 16 attention heads, and a total number of 335M parameters, BERT-small consists of only 4 transformer layers, hidden representations of size 512, 8 attention heads, and a total number of 29.1M parameters.",
"cite_spans": [
{
"start": 152,
"end": 175,
"text": "Karpukhin et al. (2020)",
"ref_id": "BIBREF11"
},
{
"start": 180,
"end": 200,
"text": "Herzig et al. (2021)",
"ref_id": "BIBREF9"
},
{
"start": 1195,
"end": 1216,
"text": "(Herzig et al., 2020;",
"ref_id": "BIBREF10"
},
{
"start": 1383,
"end": 1404,
"text": "(Herzig et al., 2021)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 800,
"end": 807,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
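{
"text": "The routing logic of the tri-encoder can be sketched as follows (our simplified illustration; the encoder arguments are Hugging Face-style models whose [CLS] embeddings share one vector space):\n\nimport torch\n\nclass TriEncoder(torch.nn.Module):\n    def __init__(self, question_enc, text_enc, table_enc):\n        super().__init__()\n        self.question_enc = question_enc\n        self.text_enc = text_enc\n        self.table_enc = table_enc\n\n    def embed_question(self, batch):\n        return self.question_enc(**batch).last_hidden_state[:, 0]\n\n    def embed_document(self, batch, is_table):\n        # A bi-encoder would use one shared document encoder here instead.\n        enc = self.table_enc if is_table else self.text_enc\n        return enc(**batch).last_hidden_state[:, 0]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},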
{
"text": "The first tri-encoder model uses BERT-small instances for all of its three encoders. Also for the triencoder approach, we analyze the impact of using TAPAS for at least one of the encoders. Therefore,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "Text Table Bi-encoder BERT-small --BERT-small --BERT-small --TAPAS-small --TAPAS-small --TAPAS-small --Tri-encoder BERT-small BERT-small BERT-small BERT-small BERT-small TAPAS-small TAPAS-small BERT-small TAPAS-small the second architecture makes use of a BERT-small model for both the question encoder and the text encoder and utilizes a TAPAS-small instance to encode the tables. The third tri-encoder model uses two TAPAS-small instances to encode questions and tables and a BERT-small instance to encode text passages. Herzig et al. (2021) use an additional downprojection layer to reduce the dimensionality of the question and table embeddings. We evaluated models with and without such a down-projection layer and found that their results do not differ significantly. Given that the models using an additional down-projection layer are more complex than models that directly utilize the embedding of the [CLS]token, we consider only TAPAS-models without a down-projection layer throughout the remainder of this paper, including the experiments. Table 3 specifies the hyperparameters used to train the bi-encoder and tri-encoder models on the training split of the MULTIMODALRETRIEVAL dataset described in Section 3. The learning objective is to create similar embeddings for relevant texts and/or tables with regard to a question. To train the models more efficiently, we make use of in-batch negatives besides each question's hard neg-ative text or table as suggested by Karpukhin et al. (2020) in the context of text retrieval. Given that the training samples inside a batch are randomly selected from all training examples, questions comprising a text passage as gold-label might have tables as negative labels, and vice versa.",
"cite_spans": [
{
"start": 525,
"end": 545,
"text": "Herzig et al. (2021)",
"ref_id": "BIBREF9"
},
{
"start": 1480,
"end": 1503,
"text": "Karpukhin et al. (2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 1053,
"end": 1060,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Question",
"sec_num": null
},
{
"text": "We compare the presented dense retrieval models to the sparse retrieval method BM25 and evaluate them based on recall@k with k \u2208 {10, 20, 100}. The search space to evaluate the models needs to consist of both texts and tables. For this purpose, 500,000 text passages are randomly sampled from Karpukhin et al. (2020) 's preprocessed Wikipedia passages making sure that the gold passages are among these passages. Furthermore, besides the text passages, all the tables from WIKI-SQL, OTT-QA, and NQ-TABLES are used, resulting in 656,166 tables and therefore approximately 1.2 million documents in total. The models are evaluated on a random sample of 1,000 questions of each dataset's test split as listed in Table 1 and the full test split of the MULTIMODALRETRIEVAL dataset.",
"cite_spans": [
{
"start": 293,
"end": 316,
"text": "Karpukhin et al. (2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 708,
"end": 715,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
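{
"text": "The evaluation metric can be sketched as follows (our illustration; retrieved_ids holds the ranked document ids per question and relevant_ids the ids counting as correct matches):\n\ndef recall_at_k(retrieved_ids, relevant_ids, k):\n    # Fraction of questions for which at least one relevant document\n    # appears among the top-k retrieved documents.\n    hits = [any(doc in relevant for doc in retrieved[:k])\n            for retrieved, relevant in zip(retrieved_ids, relevant_ids)]\n    return sum(hits) / len(hits)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},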
{
"text": "Following Karpukhin et al. (2020) , a document retrieved for a question originating from NQ or NQ-TABLES is considered a correct match if the document contains the answer string of the granular answer. Given that the derivation of the granular answer for questions originating from WIKISQL and OTT-QA might need further aggregation, such as summation or counting, and, therefore, the answer string does not need to be present in a relevant document, a retrieved document is only considered a correct match if it is the gold annotated table. This evaluation procedure might have the effect of incorrectly judging non-gold tables that contain the answer to a query as irrelevant. However, since we apply the same evaluation procedure for all models, the numbers should be comparable. Table 4 specifies the evaluation results for BM25, all bi-encoder and all tri-encoder models.",
"cite_spans": [
{
"start": 10,
"end": 33,
"text": "Karpukhin et al. (2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 782,
"end": 789,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
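{
"text": "The matching criterion can be sketched as follows (our illustration; the dictionary field names are assumptions):\n\ndef is_correct_match(doc, question):\n    # NQ and NQ-TABLES: any retrieved document containing the answer\n    # string counts. WIKISQL and OTT-QA: only the gold annotated table\n    # counts, since the answer may require aggregation and need not\n    # appear verbatim in the table.\n    if question['source'] in ('nq', 'nq-tables'):\n        return question['answer'] in doc['content']\n    return doc['id'] == question['gold_table_id']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},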
{
"text": "The evaluation shows that BM25 outperforms all dense methods on both the full WIKISQL dataset and WIKISQL's context-independent questions. It outperforms the best dense retrieval model on this dataset, the tri-encoder consisting of three BERT models, by 21.9 percentage points on all WIKISQL questions and 30.4 percentage points on contextindependent WIKISQL questions with regard to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Question Text Table R@10 R@20 R@100 R@10 R@20 R@100 R@10 R@20 R@100 Table 4 : Evaluation results of BM25 and bi-encoder and tri-encoder retrieval models on 1000 random samples of the test splits of NQ, WIKISQL, context-independent questions of WIKISQL, OTT-QA, and NQ-TABLES and the full test set of our new MULTIMODALRETRIEVAL dataset with regard to recall@10, recall@20, and recall@100.",
"cite_spans": [],
"ref_spans": [
{
"start": 68,
"end": 75,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "recall@10. Analysing the WIKISQL dataset in more detail shows a very high lexical overlap of questions with their accompanying table. A combination of the Jaccard coefficient and Gestalt-Pattern-Matching 2 allows to quantify the lexical overlap of the questions with their corresponding tables without incorporating neither duplicate occurrences of the same word nor the order of the words inside the questions and the tables. This word order independence is particularly important for the analysis, given that the sparse retrieval method BM25 is order-agnostic. Even after lower-casing questions and tables and removing stopwords in the WIKI-SQL dataset, 40.68% of the questions lexically overlap completely with the relevant table, according to the combination of the Jaccard coefficient and Gestalt-Pattern-Matching. This large lexical overlap explains BM25's strong performance on that dataset in Table 4 . In contrast, only 0.27% of the questions in OTT-QA overlap completely with their accompanying table. For the other datasets, 15.73% of the questions in NQ-TABLES, 14.47% of the questions in NQ, and 21.96% of the ques-tions in MULTIMODALRETRIEVAL overlap completely with their accompanying table or text.",
"cite_spans": [],
"ref_spans": [
{
"start": 901,
"end": 908,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
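{
"text": "The overlap measure can be sketched as follows (our illustration built on fuzzywuzzy's token_set_ratio, the implementation referenced in footnote 2; the stopword list is truncated for brevity):\n\nfrom fuzzywuzzy import fuzz\n\nSTOPWORDS = {'the', 'a', 'an', 'of', 'is', 'in'}  # truncated for brevity\n\ndef overlaps_completely(question, table_text):\n    # Lower-case, drop stopwords, then apply token_set_ratio, which\n    # ignores word order and duplicate tokens; a score of 100 means\n    # complete lexical overlap.\n    def clean(s):\n        return ' '.join(w for w in s.lower().split() if w not in STOPWORDS)\n    return fuzz.token_set_ratio(clean(question), clean(table_text)) == 100",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},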
{
"text": "To better understand how the lexical overlap of questions and accompanying tables influences BM25's performance, we split the test sets of WIKI-SQL and WIKISQL ctx-independent into subsets with different ranges of lexical overlap. As can be observed in Figure 1 , the recall of both BM25 and dense retrieval highly correlates with lexical overlap. Furthermore, while BM25 outperforms dense retrieval for questions with high lexical overlap, it is the other way round for questions with low lexical overlap.",
"cite_spans": [],
"ref_spans": [
{
"start": 253,
"end": 261,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "On the sample of the NQ-TABLES test set, all dense retrieval models outperform BM25, except for the tri-encoder that consists of BERT instances as question and text passage encoder and a TAPAS instance as table encoder. The best performing model on this dataset is the tri-encoder consisting of three BERT encoders. This model outperforms the sparse retrieval method BM25 by 29.8 percentage points with regard to recall@10.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "For the sampled questions of the OTT-QA development set, four out of the six dense retrieval models outperform BM25. The best performing model is again the tri-encoder model that is com- posed of three BERT-small encoders. This model outperforms BM25 by 33.6 percentage points with regard to recall@10. BM25 outperforms the biencoder consisting of a BERT model as question encoder and a TAPAS model as text and table encoder as well as the tri-encoder consisting of two BERT models serving as question encoder and text encoder, respectively, and a TAPAS model serving as table encoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "When it comes to the performance on the text modality, i.e., questions deriving from NQ whose gold-label answer resides in a text passage, four out of the six dense retrieval models outperform BM25. For this case, the best performing model is not a tri-encoder but the bi-encoder comprising two BERT models. This model outperforms BM25 by 16.8 percentage points with regard to recall@10. However, this bi-encoder exceeds the tri-encoder consisting of three BERT encoders only slightly by one percentage point. The sparse retrieval method BM25 beats the bi-encoder consisting of a BERT model as question encoder and a TAPAS model as text and table encoder as well as the tri-encoder consisting of two TAPAS models serving as question encoder and table encoder, respectively, and a BERT model serving as text encoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "In summary, the best performance on the WIKI-SQL test set is achieved by the sparse retrieval method BM25. The tri-encoder consisting of three BERT encoders shows the best performance on the remaining two tabular datasets, OTT-QA and NQ-TABLES. On the NQ dataset, i.e., questions whose answers reside in the textual modality, the bi-encoder consisting of two BERT encoders performs best but is almost on par with the tri-encoder consisting of three BERT models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We can conclude that, under the limited experimental conditions, in particular, on the datasets used in our study, models involving TAPAS as question, text, and/or table encoder perform worse than models that rely only on BERT language models. Herzig et al. (2021) show that to be able to use TAPAS for the retrieval of tables, TAPAS needs to be additionally pre-trained on the table retrieval task. These pre-trained table retrieval models, which are used in the bi-encoders and tri-encoders that involve one or more TAPAS instances as encoder, are, however, pre-trained solely on the task of table retrieval and not text retrieval. Given the fact that the plain TAPAS model cannot be adapted to retrieval from scratch but needs this special pre-training, it might be the case that, to use TAPAS efficiently for the retrieval of both texts and tables, it needs to be pre-trained in a multi-modal setting on the retrieval of both texts and tables. Furthermore, batch size is significant for training retrieval models, as higher batch sizes make the training harder by adding more in-batch negatives. While the training of a bi-encoder does not allow a batch size higher than 38 and the training of a tri-encoder does not allow a batch size higher than 28 on a Tesla V100 GPU with 16 GB of memory, Herzig et al. (2021) make use of a batch size of 256 for training their TAPAS-based table retrieval models. Accordingly, it might be the case that TAPAS is more unstable to train and requires, therefore, larger batch sizes.",
"cite_spans": [
{
"start": 244,
"end": 264,
"text": "Herzig et al. (2021)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "This paper presented a transformer-based approach using bi-encoder and tri-encoder models for multimodal retrieval of tables and texts. With experi-ments on five datasets from related work and one newly created dataset, we show that the presented dense retrieval models outperform the sparse retrieval model BM25 if there is a low lexical overlap of questions and relevant tables and texts. More specifically, the tri-encoder architecture performs better on OTT-QA and NQ-TABLES, which represent the tabular modality, while the bi-encoder architecture performs slightly better on the NQ dataset representing the textual modality. We observe that the best retrieval models are those that rely only on BERT models as encoder and do not make use of TAPAS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "From an application point of view, future work could integrate the presented retrieval models as one component in a multi-modal open-domain QA pipeline and evaluate it on a real-world use case. Such a pipeline would facilitate information access immensely by combining valuable information from both sources rather than relying only on either texts or tables. Another promising path for future work is to extend our approach to more modalities with transformer-based models for images, videos, or speech. These models could serve as encoders for documents of different modalities to jointly train an n-encoder architecture, where one encoder is tailored to the queries and the remaining n\u22121 encoders are tailored to each of the modalities that the user would like to search on. Last but not least, the research community would surely benefit from the creation of more multi-modal datasets to improve training and evaluation of multi-modal retrieval models and we are only making a first step in this direction with creating and releasing a dataset of tables and texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "https://multimodalretrieval.s3. eu-central-1.amazonaws.com/data.zip",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We use an implementation from: https://github. com/seatgeek/fuzzywuzzy#token-set-ratio",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Jonathan Herzig and Julian Eisenschlos for taking the time to discuss ideas with us and to give early feedback on experiment results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A latent model for ad hoc table retrieval",
"authors": [
{
"first": "Ebrahim",
"middle": [],
"last": "Bagheri",
"suffix": ""
},
{
"first": "Feras",
"middle": [],
"last": "Al-Obeidat",
"suffix": ""
}
],
"year": 2020,
"venue": "Advances in Information Retrieval",
"volume": "",
"issue": "",
"pages": "86--93",
"other_ids": {
"DOI": [
"10.1007/978-3-030-45442-5_11"
]
},
"num": null,
"urls": [],
"raw_text": "Ebrahim Bagheri and Feras Al-Obeidat. 2020. A latent model for ad hoc table retrieval. In Advances in In- formation Retrieval, pages 86-93. Springer.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Webtables: Exploring the power of tables on the web",
"authors": [
{
"first": "Michael",
"middle": [
"J"
],
"last": "Cafarella",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Halevy",
"suffix": ""
},
{
"first": "Daisy",
"middle": [
"Zhe"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the VLDB Endowment",
"volume": "1",
"issue": "",
"pages": "538--549",
"other_ids": {
"DOI": [
"10.14778/1453856.1453916"
]
},
"num": null,
"urls": [],
"raw_text": "Michael J. Cafarella, Alon Halevy, Daisy Zhe Wang, Eugene Wu, and Yang Zhang. 2008. Webtables: Ex- ploring the power of tables on the web. Proceedings of the VLDB Endowment, 1(1):538-549.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Semantic models for the first-stage retrieval: A comprehensive review",
"authors": [
{
"first": "Yinqiong",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Yixing",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Jiafeng",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Ruqing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xueqi",
"middle": [],
"last": "Cheng",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2103.04831"
]
},
"num": null,
"urls": [],
"raw_text": "Yinqiong Cai, Yixing Fan, Jiafeng Guo, Fei Sun, Ruqing Zhang, and Xueqi Cheng. 2021. Semantic models for the first-stage retrieval: A comprehensive review. arXiv preprint arXiv:2103.04831.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Open question answering over tables and text. Proceedings of the International Conference on Learning Representations",
"authors": [
{
"first": "Wenhu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Eva",
"middle": [],
"last": "Schlinger",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenhu Chen, Ming-Wei Chang, Eva Schlinger, William Wang, and William Cohen. 2021. Open question answering over tables and text. Proceed- ings of the International Conference on Learning Representations (ICLR).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "HybridQA: A dataset of multi-hop question answering over tabular and textual data",
"authors": [
{
"first": "Wenhu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Hanwen",
"middle": [],
"last": "Zha",
"suffix": ""
},
{
"first": "Zhiyu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Wenhan",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP",
"volume": "",
"issue": "",
"pages": "1026--1036",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.91"
]
},
"num": null,
"urls": [],
"raw_text": "Wenhu Chen, Hanwen Zha, Zhiyu Chen, Wenhan Xiong, Hong Wang, and William Yang Wang. 2020a. HybridQA: A dataset of multi-hop question answer- ing over tabular and textual data. In Findings of the Association for Computational Linguistics: EMNLP, pages 1026-1036. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Table search using a deep contextualized language model",
"authors": [
{
"first": "Zhiyu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Mohamed",
"middle": [],
"last": "Trabelsi",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Heflin",
"suffix": ""
},
{
"first": "Yinan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Brian",
"middle": [
"D"
],
"last": "Davison",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the International Conference on Research and Development in Information Retrieval (SIGIR)",
"volume": "",
"issue": "",
"pages": "589--598",
"other_ids": {
"DOI": [
"10.1145/3397271.3401044"
]
},
"num": null,
"urls": [],
"raw_text": "Zhiyu Chen, Mohamed Trabelsi, Jeff Heflin, Yinan Xu, and Brian D. Davison. 2020b. Table search using a deep contextualized language model. In Proceed- ings of the International Conference on Research and Development in Information Retrieval (SIGIR), page 589-598. Association for Computing Machin- ery.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Turl: Table understanding through representation learning",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Alyssa",
"middle": [],
"last": "Lees",
"suffix": ""
},
{
"first": "You",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Cong",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the VLDB Endowment (PVLDB)",
"volume": "14",
"issue": "",
"pages": "307--319",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiang Deng, Huan Sun, Alyssa Lees, You Wu, and Cong Yu. 2020. Turl: Table understanding through representation learning. Proceedings of the VLDB Endowment (PVLDB), 14(3):307-319.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)",
"volume": "",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies (NAACL-HLT), pages 4171-4186. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Understanding tables with intermediate pre-training",
"authors": [
{
"first": "Julian",
"middle": [],
"last": "Eisenschlos",
"suffix": ""
},
{
"first": "Syrine",
"middle": [],
"last": "Krichene",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP",
"volume": "",
"issue": "",
"pages": "281--296",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.27"
]
},
"num": null,
"urls": [],
"raw_text": "Julian Eisenschlos, Syrine Krichene, and Thomas M\u00fcller. 2020. Understanding tables with interme- diate pre-training. In Findings of the Association for Computational Linguistics: EMNLP, pages 281- 296. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Open domain question answering over tables via dense retrieval",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Herzig",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Syrine",
"middle": [],
"last": "Krichene",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Eisenschlos",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)",
"volume": "",
"issue": "",
"pages": "512--519",
"other_ids": {
"DOI": [
"10.18653/v1/2021.naacl-main.43"
]
},
"num": null,
"urls": [],
"raw_text": "Jonathan Herzig, Thomas M\u00fcller, Syrine Krichene, and Julian Eisenschlos. 2021. Open domain question an- swering over tables via dense retrieval. In Proceed- ings of the Conference of the North American Chap- ter of the Association for Computational Linguis- tics: Human Language Technologies (NAACL-HLT), pages 512-519. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "TaPas: Weakly supervised table parsing via pre-training",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Herzig",
"suffix": ""
},
{
"first": "Krzysztof",
"middle": [],
"last": "Nowak",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Francesco",
"middle": [],
"last": "Piccinno",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Eisenschlos",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "4320--4333",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.398"
]
},
"num": null,
"urls": [],
"raw_text": "Jonathan Herzig, Pawel Krzysztof Nowak, Thomas M\u00fcller, Francesco Piccinno, and Julian Eisenschlos. 2020. TaPas: Weakly supervised table parsing via pre-training. In Proceedings of the Annual Meet- ing of the Association for Computational Linguistics (ACL), pages 4320-4333. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Dense passage retrieval for open-domain question answering",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Karpukhin",
"suffix": ""
},
{
"first": "Barlas",
"middle": [],
"last": "Oguz",
"suffix": ""
},
{
"first": "Sewon",
"middle": [],
"last": "Min",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Ledell",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "6769--6781",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.550"
]
},
"num": null,
"urls": [],
"raw_text": "Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769-6781. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics (TACL)",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Jennimaria",
"middle": [],
"last": "Palomaki",
"suffix": ""
},
{
"first": "Olivia",
"middle": [],
"last": "Redfield",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "Danielle",
"middle": [],
"last": "Epstein",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Kelcey",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"M"
],
"last": "Dai",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "7",
"issue": "",
"pages": "452--466",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00276"
]
},
"num": null,
"urls": [],
"raw_text": "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Red- field, Michael Collins, Ankur Parikh, Chris Al- berti, Danielle Epstein, Illia Polosukhin, Jacob De- vlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question an- swering research. Transactions of the Association for Computational Linguistics (TACL), 7:452-466.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Dual reader-parser on hybrid textual and tabular evidence for open domain question answering",
"authors": [
{
"first": "Alexander Hanbo",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Henghui",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Zhiguo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing (ACL-IJCNLP)",
"volume": "",
"issue": "",
"pages": "4078--4088",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Hanbo Li, Patrick Ng, Peng Xu, Henghui Zhu, Zhiguo Wang, and Bing Xiang. 2021. Dual reader-parser on hybrid textual and tabular evidence for open domain question answering. In Pro- ceedings of the Annual Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing (ACL-IJCNLP), pages 4078-4088. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "RoBERTa: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems (NeurIPS)",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {
"DOI": [
"10.5555/2999792.2999959"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- rado, and Jeffrey Dean. 2013. Distributed represen- tations of words and phrases and their composition- ality. In Advances in Neural Information Processing Systems (NeurIPS), page 3111-3119. Curran Asso- ciates Inc.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "CLTR: An end-toend, transformer-based system for cell-level table retrieval and table question answering",
"authors": [
{
"first": "Feifei",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Mustafa",
"middle": [],
"last": "Canim",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Glass",
"suffix": ""
},
{
"first": "Alfio",
"middle": [],
"last": "Gliozzo",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Fox",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing (ACL-IJCNLP)",
"volume": "",
"issue": "",
"pages": "202--209",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Feifei Pan, Mustafa Canim, Michael Glass, Alfio Gliozzo, and Peter Fox. 2021. CLTR: An end-to- end, transformer-based system for cell-level table re- trieval and table question answering. In Proceedings of the Annual Meeting of the Association for Com- putational Linguistics and the International Joint Conference on Natural Language Processing (ACL- IJCNLP), pages 202-209. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Know what you don't know: Unanswerable questions for SQuAD",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "784--789",
"other_ids": {
"DOI": [
"10.18653/v1/P18-2124"
]
},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable ques- tions for SQuAD. In Proceedings of the An- nual Meeting of the Association for Computational Linguistics (ACL), pages 784-789. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "RDF2Vec: RDF graph embeddings for data mining",
"authors": [
{
"first": "Petar",
"middle": [],
"last": "Ristoski",
"suffix": ""
},
{
"first": "Heiko",
"middle": [],
"last": "Paulheim",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the International Semantic Web Conference (ISWC)",
"volume": "",
"issue": "",
"pages": "498--514",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Petar Ristoski and Heiko Paulheim. 2016. RDF2Vec: RDF graph embeddings for data mining. In Proceed- ings of the International Semantic Web Conference (ISWC), pages 498-514. Springer.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Ad hoc table retrieval using intrinsic and extrinsic similarities",
"authors": [
{
"first": "Roee",
"middle": [],
"last": "Shraga",
"suffix": ""
},
{
"first": "Haggai",
"middle": [],
"last": "Roitman",
"suffix": ""
},
{
"first": "Guy",
"middle": [],
"last": "Feigenblat",
"suffix": ""
},
{
"first": "Mustafa",
"middle": [],
"last": "Canim",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The Web Conference (WWW)",
"volume": "",
"issue": "",
"pages": "2479--2485",
"other_ids": {
"DOI": [
"10.1145/3366423.3379995"
]
},
"num": null,
"urls": [],
"raw_text": "Roee Shraga, Haggai Roitman, Guy Feigenblat, and Mustafa Canim. 2020a. Ad hoc table retrieval using intrinsic and extrinsic similarities. In Proceedings of The Web Conference (WWW), page 2479-2485. As- sociation for Computing Machinery.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Web table retrieval using multimodal deep learning",
"authors": [
{
"first": "Roee",
"middle": [],
"last": "Shraga",
"suffix": ""
},
{
"first": "Haggai",
"middle": [],
"last": "Roitman",
"suffix": ""
},
{
"first": "Guy",
"middle": [],
"last": "Feigenblat",
"suffix": ""
},
{
"first": "Mustafa",
"middle": [],
"last": "Cannim",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the International Conference on Research and Development in Information Retrieval (SIGIR)",
"volume": "",
"issue": "",
"pages": "1399--1408",
"other_ids": {
"DOI": [
"10.1145/3397271.3401120"
]
},
"num": null,
"urls": [],
"raw_text": "Roee Shraga, Haggai Roitman, Guy Feigenblat, and Mustafa Cannim. 2020b. Web table retrieval us- ing multimodal deep learning. In Proceedings of the International Conference on Research and De- velopment in Information Retrieval (SIGIR), page 1399-1408. Association for Computing Machinery.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Multimodalqa: Complex question answering over text, tables and images",
"authors": [
{
"first": "Alon",
"middle": [],
"last": "Talmor",
"suffix": ""
},
{
"first": "Ori",
"middle": [],
"last": "Yoran",
"suffix": ""
},
{
"first": "Amnon",
"middle": [],
"last": "Catav",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Lahav",
"suffix": ""
},
{
"first": "Yizhong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Akari",
"middle": [],
"last": "Asai",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Ilharco",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alon Talmor, Ori Yoran, Amnon Catav, Dan Lahav, Yizhong Wang, Akari Asai, Gabriel Ilharco, Han- naneh Hajishirzi, and Jonathan Berant. 2021. Mul- timodalqa: Complex question answering over text, tables and images. Proceedings of the International Conference on Learning Representations (ICLR).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "The trec-8 question answering track report",
"authors": [
{
"first": "Ellen",
"middle": [
"M"
],
"last": "Voorhees",
"suffix": ""
},
{
"first": "Dawn",
"middle": [
"M"
],
"last": "Tice",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the International Conference on Language Resources and Evaluation (LREC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellen M Voorhees and Dawn M Tice. 2000. The trec- 8 question answering track report. In Proceedings of the International Conference on Language Re- sources and Evaluation (LREC).",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "TaBERT: Pretraining for joint understanding of textual and tabular data",
"authors": [
{
"first": "Pengcheng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Yih",
"middle": [],
"last": "Wen-Tau",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "8413--8426",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.745"
]
},
"num": null,
"urls": [],
"raw_text": "Pengcheng Yin, Graham Neubig, Wen-tau Yih, and Se- bastian Riedel. 2020. TaBERT: Pretraining for joint understanding of textual and tabular data. In Pro- ceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), pages 8413- 8426. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Ta-ble2Vec: Neural word and entity embeddings for table population and retrieval",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shuo",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Krisztian",
"middle": [],
"last": "Balog",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the International Conference on Research and Development in Information Retrieval (SIGIR)",
"volume": "",
"issue": "",
"pages": "1029--1032",
"other_ids": {
"DOI": [
"10.1145/3331184.3331333"
]
},
"num": null,
"urls": [],
"raw_text": "Li Zhang, Shuo Zhang, and Krisztian Balog. 2019. Ta- ble2Vec: Neural word and entity embeddings for table population and retrieval. In Proceedings of the International Conference on Research and De- velopment in Information Retrieval (SIGIR), page 1029-1032. Association for Computing Machinery.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Ad hoc table retrieval using semantic similarity",
"authors": [
{
"first": "Shuo",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Krisztian",
"middle": [],
"last": "Balog",
"suffix": ""
}
],
"year": 2018,
"venue": "International World Wide Web Conferences Steering Committee",
"volume": "",
"issue": "",
"pages": "1553--1562",
"other_ids": {
"DOI": [
"10.1145/3178876.3186067"
]
},
"num": null,
"urls": [],
"raw_text": "Shuo Zhang and Krisztian Balog. 2018. Ad hoc table retrieval using semantic similarity. In Proceedings of the World Wide Web Conference (WWW), page 1553-1562. International World Wide Web Confer- ences Steering Committee.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Seq2SQL: Generating structured queries from natural language using reinforcement learning",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1709.00103"
]
},
"num": null,
"urls": [],
"raw_text": "Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2SQL: Generating structured queries from natural language using reinforcement learning. arXiv preprint arXiv:1709.00103.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "Recall@20 of the BM25 model and the BERT-BERT-BERT tri-encoder across different percentage of lexical overlap of question and table. Performance of BM25 drops drastically if the lexical overlap is low.",
"num": null
},
"TABREF0": {
"text": "combine a set of hand-crafted query features, table features and query-table features with semantic similarity of table and queryas additional feature. To get table and query representations, they use the average of pre-trained word2vec",
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF2": {
"text": "Modality and number of train and test samples for multi-modal retrieval models. Only WIKISQL is not context-independent (ctx-ind.), which is why we create the subset WIKISQL ctx-independent .",
"content": "<table><tr><td>MULTIMODALRETRIEVAL Given that no</td></tr><tr><td>multi-modal dataset of tables and texts is readily</td></tr><tr><td>available, this paper newly introduces such a</td></tr><tr><td>dataset based on datasets from related work. For</td></tr><tr><td>this purpose, we combine the question-passage</td></tr><tr><td>pairs from NQ as questions requiring a text</td></tr><tr><td>passage to be answered with the question-table</td></tr><tr><td>pairs from NQ-TABLES, WIKISQL ctx-independent ,</td></tr><tr><td>and OTT-QA as questions requiring a table to</td></tr><tr><td>be answered. Both Karpukhin et al. (2020) for</td></tr><tr><td>text retrieval and Herzig et al. (2021) for table</td></tr><tr><td>retrieval show that adding hard negatives as</td></tr><tr><td>training signal boosts the retrieval performance</td></tr><tr><td>of the model significantly. Therefore, we index</td></tr><tr><td>21 million Wikipedia passages (from Karpukhin</td></tr><tr><td>et al. (2020)) and 7 million Wikipedia tables (used</td></tr><tr><td>by</td></tr></table>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF3": {
"text": "table, and each row of the table. In order to analyze whether the table-specific language model TAPAS",
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF4": {
"text": "Examined multi-modal bi-encoder and triencoder models.",
"content": "<table><tr><td/><td colspan=\"2\">Bi-encoders Tri-encoders</td></tr><tr><td>Learning rate</td><td>1e-5</td><td>1e-5</td></tr><tr><td>LR schedule</td><td>linear</td><td>linear</td></tr><tr><td>Warm-up steps</td><td>10%</td><td>10%</td></tr><tr><td>Batch size</td><td>38</td><td>28</td></tr><tr><td>Epochs</td><td>10</td><td>10</td></tr><tr><td>Optimizer</td><td>Adam</td><td>Adam</td></tr></table>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF5": {
"text": "Hyperparameters used to train the bi-encoder and tri-encoder multi-modal retrieval models.",
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null
}
}
}
}