{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:31:13.802692Z"
},
"title": "Parameter-Efficient Abstractive Question Answering over Tables or Text",
"authors": [
{
"first": "Vaishali",
"middle": [],
"last": "Pal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Amsterdam",
"location": {}
},
"email": ""
},
{
"first": "Evangelos",
"middle": [],
"last": "Kanoulas",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Amsterdam",
"location": {}
},
"email": ""
},
{
"first": "Maarten",
"middle": [],
"last": "De Rijke",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Amsterdam",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "A long-term ambition of information seeking question answering (QA) systems is to reason over multi-modal contexts and generate natural answers to user queries. Today, memory intensive pre-trained language models are adapted to downstream tasks such as QA by fine-tuning the model on QA data in a specific modality like unstructured text or structured tables. To avoid training such memoryhungry models while utilizing a uniform architecture for each modality, parameter-efficient adapters add and train small task-specific bottleneck layers between transformer layers. In this work, we study parameter-efficient abstractive QA in encoder-decoder models over structured tabular data and unstructured textual data using only 1.5% additional parameters for each modality. We also ablate over adapter layers in both encoder and decoder modules to study the efficiency-performance trade-off and demonstrate that reducing additional trainable parameters down to 0.7%-1.0% leads to comparable results. Our models out-perform current stateof-the-art models on tabular QA datasets such as Tablesum and FeTaQA, and achieve comparable performance on a textual QA dataset such as NarrativeQA using significantly less trainable parameters than fine-tuning.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "A long-term ambition of information seeking question answering (QA) systems is to reason over multi-modal contexts and generate natural answers to user queries. Today, memory intensive pre-trained language models are adapted to downstream tasks such as QA by fine-tuning the model on QA data in a specific modality like unstructured text or structured tables. To avoid training such memoryhungry models while utilizing a uniform architecture for each modality, parameter-efficient adapters add and train small task-specific bottleneck layers between transformer layers. In this work, we study parameter-efficient abstractive QA in encoder-decoder models over structured tabular data and unstructured textual data using only 1.5% additional parameters for each modality. We also ablate over adapter layers in both encoder and decoder modules to study the efficiency-performance trade-off and demonstrate that reducing additional trainable parameters down to 0.7%-1.0% leads to comparable results. Our models out-perform current stateof-the-art models on tabular QA datasets such as Tablesum and FeTaQA, and achieve comparable performance on a textual QA dataset such as NarrativeQA using significantly less trainable parameters than fine-tuning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Information seeking systems over diverse contexts require model capabilities to reason over unstructured and structured data such as free-form text, tables, and images (Agrawal et al., 2016; Vakulenko et al., 2019; Hudson and Manning, 2019; Zhu et al., 2021; Deldjoo et al., 2021) . Such systems might have the additional requirement of generating natural language responses if deployed as task-oriented conversational agents (Wen et al., 2015; Carnegie and Oh, 2000; Rambow et al., 2001; Ratnaparkhi, 2002) . Recent work on open-domain question answering (QA) predominately addresses these challenges by fine-tuning question question Encoder LM Head Table Adapter Decoder Text Adapter Nx Table document Natural Answer Table Adapter Text Adapter Figure 1 : Parameter-efficient transfer learning using modality-specific (table/text) adapters for Abstractive Question Answering massive pre-trained language models on different modalities such as tables and text (Yin et al., 2020; Herzig et al., 2020 Herzig et al., , 2021 Katsis et al., 2021; Nan et al., 2021) . However, each model trained on a specific input type is incompatible with other modalities and requires modality-specific fine-tuning. For example, in tabular QA (Herzig et al., 2020) , the structure of the table is learnt by training additional position embeddings (row and column identifiers) to identify which row and column a table cell belongs to. This renders such modality specific models incompatible with free-form text-based models. Multi-modal models (Zhu et al., 2021) can reason over both tables and text by concatenating the textual context and the flattened table, leading to longer input sequences and limiting the length of the context that can be encoded.",
"cite_spans": [
{
"start": 168,
"end": 190,
"text": "(Agrawal et al., 2016;",
"ref_id": "BIBREF0"
},
{
"start": 191,
"end": 214,
"text": "Vakulenko et al., 2019;",
"ref_id": "BIBREF33"
},
{
"start": 215,
"end": 240,
"text": "Hudson and Manning, 2019;",
"ref_id": "BIBREF12"
},
{
"start": 241,
"end": 258,
"text": "Zhu et al., 2021;",
"ref_id": null
},
{
"start": 259,
"end": 280,
"text": "Deldjoo et al., 2021)",
"ref_id": "BIBREF5"
},
{
"start": 426,
"end": 444,
"text": "(Wen et al., 2015;",
"ref_id": "BIBREF34"
},
{
"start": 445,
"end": 467,
"text": "Carnegie and Oh, 2000;",
"ref_id": "BIBREF2"
},
{
"start": 468,
"end": 488,
"text": "Rambow et al., 2001;",
"ref_id": "BIBREF28"
},
{
"start": 489,
"end": 507,
"text": "Ratnaparkhi, 2002)",
"ref_id": "BIBREF29"
},
{
"start": 970,
"end": 988,
"text": "(Yin et al., 2020;",
"ref_id": "BIBREF36"
},
{
"start": 989,
"end": 1008,
"text": "Herzig et al., 2020",
"ref_id": "BIBREF9"
},
{
"start": 1009,
"end": 1030,
"text": "Herzig et al., , 2021",
"ref_id": "BIBREF8"
},
{
"start": 1031,
"end": 1051,
"text": "Katsis et al., 2021;",
"ref_id": null
},
{
"start": 1052,
"end": 1069,
"text": "Nan et al., 2021)",
"ref_id": null
},
{
"start": 1234,
"end": 1255,
"text": "(Herzig et al., 2020)",
"ref_id": "BIBREF9"
},
{
"start": 1534,
"end": 1552,
"text": "(Zhu et al., 2021)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 756,
"end": 764,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To address these challenges, we study parameterefficient transfer learning for abstractive QA over tables and over text. We are motivated to use adapter-layers that inject small bottle-neck layers between frozen pre-trained transformer layers as they achieve comparable performance to fine-tuning on a variety of tasks such as multi-lingual translation (Pfeiffer et al., 2020; Philip et al., 2020; Guo et al., 2020) , classification (Houlsby et al., 2019a ), text-to-text generation (Lin et al., 2020) , domain-adaptation in dialogue state tracking, and response generation (Hung et al., 2021) .",
"cite_spans": [
{
"start": 353,
"end": 376,
"text": "(Pfeiffer et al., 2020;",
"ref_id": "BIBREF24"
},
{
"start": 377,
"end": 397,
"text": "Philip et al., 2020;",
"ref_id": "BIBREF25"
},
{
"start": 398,
"end": 415,
"text": "Guo et al., 2020)",
"ref_id": "BIBREF7"
},
{
"start": 433,
"end": 455,
"text": "(Houlsby et al., 2019a",
"ref_id": "BIBREF10"
},
{
"start": 483,
"end": 501,
"text": "(Lin et al., 2020)",
"ref_id": "BIBREF19"
},
{
"start": 574,
"end": 593,
"text": "(Hung et al., 2021)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Ablation studies on adapter layers (Ruckl\u00e9 et al., 2020) on masked language models such as BERTbase and RoBERTa over the GLUE benchmark demonstrate that removing beginning adapter layers leads to a minimal drop in performance. Extending adapter layer ablation over separate encoder and decoder modules is non-trivial as the conventional approach of sequential pruning of layers does not extend to consecutive encoder and decoder modules. Our work explores the interaction of adapter layers from both modules in the context of abstractive QA. Lin et al. (2020) explore the impact of the adapter bottle-neck dimension for various language generation tasks over an auto-regressive model such as GPT-2 (Radford et al., 2019) . They do not study tabular data nor ablate adapter layers, which is crucial in understanding impact of individual adapters in sequential transformer module architectures such as encoder-decoder. Our analysis is complementary to (Lin et al., 2020) as we ablate adapter layers to study parameter-performance trade-off whereas they only focus on adapter bottleneck size. Also, we generalize beyond the text-totext setting and explore language generation from structured or unstructured input such as tables and text. This introduces domain-shift in both the task and structure of the downstream data. We propose a system, named Parameter, Efficient, Abstractive Question Answering (PeaQA), shown in Figure 1 , which learns to reason over unstructured and structured input using a shared pre-trained language model and modality-specific adapter layers. We automatically transform hierarchical tables to regular tables to have a uniform representation without breaking associations between table cells. In addition, we extend the study of ablating adapter layers over both encoder and decoder modules.",
"cite_spans": [
{
"start": 35,
"end": 56,
"text": "(Ruckl\u00e9 et al., 2020)",
"ref_id": null
},
{
"start": 542,
"end": 559,
"text": "Lin et al. (2020)",
"ref_id": "BIBREF19"
},
{
"start": 698,
"end": 720,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF26"
},
{
"start": 950,
"end": 968,
"text": "(Lin et al., 2020)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 1418,
"end": 1426,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our main contributions are summarized as: (1) We perform parameter-efficient abstractive question answering over multi-modal context using only additional 1.5% of trainable parameters for each modality. Our adaptertuned model outperforms existing work by a large margin on tabular QA datasets and achieves comparable performance on a textual QA dataset. (2) We study tabular QA as a new modality that introduces massive input domain shift to pretrained language models. We propose a 2step transformation of hierarchical tables to sequences, which produces a uniform representation to be used by a single, shared pre-trained language model and modality-specific adapter layers. To the best of our knowledge, this is the first work that explores tabular QA question answering in a parameter-efficient manner. (3) We ablate adapter layers in both encoder and decoder modules to study their impact and show that beginning layers from both encoder and decoder can be eliminated without significant drop in performance. We also demonstrate that last encoder adapter layers are indispensable and have greater contribution than decoder layers at the same level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Tabular question answering. Tabular QA systems aim to answer questions from structured tables, which can be regular or hierarchical. Hierarchical tables can have header cells and body cells spanning across multiple rows and columns (Cheng et al., 2021) . In most tabular QA systems (Herzig et al., 2020; Zhu et al., 2021; Katsis et al., 2021) , the structure of the table is encoded in the embedding layer of large language models by introducing table specific position information such as row id and column id. Concurrent to our work, abstractive QA over tables (Nan et al., 2021; Cheng et al., 2021) poses additional challenges of generating natural answers by reasoning and aggregating discontinuous facts from the table.",
"cite_spans": [
{
"start": 232,
"end": 252,
"text": "(Cheng et al., 2021)",
"ref_id": "BIBREF4"
},
{
"start": 282,
"end": 303,
"text": "(Herzig et al., 2020;",
"ref_id": "BIBREF9"
},
{
"start": 304,
"end": 321,
"text": "Zhu et al., 2021;",
"ref_id": null
},
{
"start": 322,
"end": 342,
"text": "Katsis et al., 2021)",
"ref_id": null
},
{
"start": 563,
"end": 581,
"text": "(Nan et al., 2021;",
"ref_id": null
},
{
"start": 582,
"end": 601,
"text": "Cheng et al., 2021)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Textual question answering. Question answering over text measures a system's ability to comprehend free-form text in the user question and context passage(s) and predict an answer. The answer predicted can be extractive in nature, where the system identifies short text spans in the context passage to answer the user query (Lee et al., 2016; Seo et al., 2016; Rajpurkar et al., 2016; Pearce et al., 2021) , or it can be abstractive, where it is required to generate a free-form answer (Yin et al., 2016; Mitra, 2017; Bauer et al., 2018; Reddy et al., 2019) . Transfer learning. Transfer learning techniques such as fine-tuning pre-trained models for down-stream tasks, require a new set of parameters to be learnt for each new task. To avoid such memory intensive transfer learning methods, adapters have been proposed as a parameter-efficient method of adapting to new domains (Houlsby et al., 2019b; Pfeiffer et al., 2020) . Adapters have been extended to language generation in a variety of generative tasks such as translation, summarization, multiturn dialogue, and task-oriented natural language generation (Lin et al., 2020) . Our work combines all the aforementioned aspects to generate abstractive answers from both tables and text with only 0.7%-1.0% trainable parameters without compromising performance.",
"cite_spans": [
{
"start": 324,
"end": 342,
"text": "(Lee et al., 2016;",
"ref_id": "BIBREF17"
},
{
"start": 343,
"end": 360,
"text": "Seo et al., 2016;",
"ref_id": "BIBREF32"
},
{
"start": 361,
"end": 384,
"text": "Rajpurkar et al., 2016;",
"ref_id": "BIBREF27"
},
{
"start": 385,
"end": 405,
"text": "Pearce et al., 2021)",
"ref_id": "BIBREF23"
},
{
"start": 486,
"end": 504,
"text": "(Yin et al., 2016;",
"ref_id": "BIBREF35"
},
{
"start": 505,
"end": 517,
"text": "Mitra, 2017;",
"ref_id": "BIBREF20"
},
{
"start": 518,
"end": 537,
"text": "Bauer et al., 2018;",
"ref_id": "BIBREF1"
},
{
"start": 538,
"end": 557,
"text": "Reddy et al., 2019)",
"ref_id": "BIBREF30"
},
{
"start": 879,
"end": 902,
"text": "(Houlsby et al., 2019b;",
"ref_id": "BIBREF11"
},
{
"start": 903,
"end": 925,
"text": "Pfeiffer et al., 2020)",
"ref_id": "BIBREF24"
},
{
"start": 1114,
"end": 1132,
"text": "(Lin et al., 2020)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We focus on encoder-decoder models for the task of abstractive question answering. We use a BART (Lewis et al., 2019) encoder-decoder architecture which comprises of a bidirectional encoder and an auto-regressive decoder. The input sequence consists of the question, the context title and context sequence preceded with prompts indicating the beginning of the each sub-sequence. Formally, the input sequence is represented as <question> q",
"cite_spans": [
{
"start": 97,
"end": 117,
"text": "(Lewis et al., 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "0 q 1 . . . q m <title> t 1 t 2 . . . t p <context> c 0 c 1 . . . c n ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "where q i is the i-th question token, t j is the j-th title token, and c k is the k-th context token. The context can either be a text passage or a flattened table. The parameters of the pre-trained BART model are frozen during training. Modality specific adapter layers added to the model are trained on either tabular context or textual context to generate natural answers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
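To make the input format above concrete, the following minimal sketch (our own illustration, not taken from the paper's released code; the function name and formatting choices are assumptions) shows how such a prompted sequence could be assembled for either a textual passage or an already-flattened table:

```python
def build_input(question: str, title: str, context: str) -> str:
    """Assemble the prompted input sequence <question> ... <title> ... <context> ...

    `context` is either a free-form text passage or a table that has already
    been flattened into a row-major string of (header, value) pairs.
    """
    return f"<question> {question} <title> {title} <context> {context}"


# Toy usage with a textual context.
seq = build_input(
    question="Who wrote the story?",
    title="Example Story",
    context="The story was written by an unknown author in 1901.",
)
print(seq)
```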
{
"text": "To study multi-modal abstractive QA, we first focus on free-form text as context to the system. We train adapter layers for textual context on the Narra-tiveQA dataset (Ko\u010disk\u00fd et al., 2018) . NarrativeQA is a complex abstractive question answering dataset over stories. The dataset contains 32, 747 samples in the training set, 3, 461 samples in the validation set, and 10, 557 samples in the test set. For our task, we have selected the input context passage to be the human annotated summary of each sample which is the Wikipedia page summary of the story and represented as a paragraph. The input to the model is the question, title and summary of each passage and the target is the abstractive answer.",
"cite_spans": [
{
"start": 168,
"end": 190,
"text": "(Ko\u010disk\u00fd et al., 2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Textual Question Answering",
"sec_num": "4"
},
{
"text": "We study tabular QA as a new modality which introduces massive input domain shift to pre-trained language models. Tables enforce structural constraints in their representation which is incompatible with the expected input format of pre-trained language models. To achieve our goal of parameter efficiency by utilizing a uniform pre-trained language model, we only train table specific adapter layers while keeping the pre-trained model frozen. However, this necessitates a uniform input representation for both tables and text. An additional challenge is introduced to maintain uniformity across different table types (regular, hierarchical).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Question Answering",
"sec_num": "5"
},
{
"text": "For our task, we explore 2 tabular QA datasets, namely, Tablesum and FeTaQA (Nan et al., 2021) . Tablesum consists of 200 unique Wikipedia tables over which questions and abstractive answers are manually annotated; 40% of the samples are questions over hierarchical tables but the tables in their released data are missing information in the hierarchical cells and their work do not handle hierarchies. We address this issue by extracting the wikitables from the respective Wikepedia pages and release a clean version of the dataset. 1 FeTaQA (Nan et al., 2021 ) is a larger abstractive tabular QA dataset consisting of question and free-form answers over 10, 330 regular tables. The dataset consists of 7, 326 samples in the training set, 1, 001 in the validation set, and 2, 003 in the test set. FeTaQA consists of human-annotated answers containing explanations involving entities and relations.",
"cite_spans": [
{
"start": 76,
"end": 94,
"text": "(Nan et al., 2021)",
"ref_id": null
},
{
"start": 534,
"end": 535,
"text": "1",
"ref_id": null
},
{
"start": 543,
"end": 560,
"text": "(Nan et al., 2021",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Question Answering",
"sec_num": "5"
},
{
"text": "For our work, we choose to represent all tables uniformly in a two-step process: 1 one. We depict this process in Figure 2a , which yields a linear header a(d), a(d), b, e(f ). Linearizing table body. Multi-span table body cells are parsed differently than headers. Each table body cell is replicated with one or multiple header cells depending on its span across columns. Cells that span across multiple rows are replicated with all the spanned rows. This process leads to a regular table. We flatten the regular table in row-major form, concatenating rows sequentially. Each row is a sequence of (key, value) pairs where a key is a column header and the value is the cell value of that column as depicted in Figure 2b .",
"cite_spans": [],
"ref_spans": [
{
"start": 114,
"end": 123,
"text": "Figure 2a",
"ref_id": "FIGREF1"
},
{
"start": 710,
"end": 719,
"text": "Figure 2b",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Table Representation",
"sec_num": "5.1"
},
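As an illustration of the row-major flattening described above, the sketch below (provided for clarity; it is not the released Pea-QA code, and the separator tokens are assumptions) serializes a regular table into a sequence of (header, value) pairs:

```python
from typing import List

def flatten_table(headers: List[str], rows: List[List[str]]) -> str:
    """Serialize a regular table in row-major form.

    Every cell becomes a "header : value" pair and rows are concatenated
    sequentially, mirroring the (key, value) layout depicted in Figure 2b.
    """
    flattened_rows = []
    for row in rows:
        pairs = [f"{header} : {value}" for header, value in zip(headers, row)]
        flattened_rows.append(" , ".join(pairs))
    return " ; ".join(flattened_rows)


# Toy example with a two-column regular table.
print(flatten_table(["Film", "Year"],
                    [["Padhe Padhe", "2013"],
                     ["Kathai Thiraikathai Vasanam Iyakkam", "2014"]]))
```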
{
"text": "We seek to answer the following research questions with our experiments: (RQ1) How does adaptertuning perform compared to fine-tuning in the context of multi-modal input? (RQ2) Do all adapter layers across the encoder and decoder contribute equally to performance across tasks/modalities?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "6"
},
{
"text": "We perform all our experiments on the large variant of BART model. We fine-tune the BART-large model over the 3 datasets as the state-of-the-art fine-tuned models utilize different architectures for different datasets making comparison with adaptertuning difficult. We treat our fine-tuned BART models on the 3 datasets as baselines. We sweep learning rates from {8e \u22124 , 6e \u22124 , 3e \u22124 , 1e \u22124 , 5e \u22125 , 4 e \u22125, 3e \u22125 , 2e \u22125 , 1e \u22125 } and select the best performing learning rate for each dataset. We select 4e \u22125 for fine-tuning on Tablesum, 8e \u22124 on Fe-TaQA datasets and 2e \u22125 to fine-tune NarrativeQA. We use a batch size of 4 and gradient accumulation of 8 to emulate an effective batch size of 32. The maximum target sequence length is set to 200 for tabular QA datasets and to 100 for the textual QA dataset. On the Tablesum dataset, we follow 5-fold cross validation as described in the original work to evaluate our models. On FeTaQA and Narra-tiveQA, we utilize the test split for evaluating our models. We train the model on each dataset for 15 epochs and evaluate on Rouge-2, Rouge-L and sacreBLEU metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-Tuning",
"sec_num": "6.1"
},
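The learning-rate sweep and effective batch size described in this section can be expressed compactly. The sketch below is illustrative only: `train_and_evaluate` is a hypothetical stand-in for the actual fine-tuning loop, while the candidate learning rates, batch size, gradient accumulation, epochs, and target lengths are the values reported above.

```python
import random

LEARNING_RATES = [8e-4, 6e-4, 3e-4, 1e-4, 5e-5, 4e-5, 3e-5, 2e-5, 1e-5]

CONFIG = {
    "per_device_batch_size": 4,
    "gradient_accumulation_steps": 8,   # 4 x 8 = effective batch size of 32
    "num_epochs": 15,
    "max_target_length": {"tabular": 200, "textual": 100},
}

def train_and_evaluate(dataset: str, lr: float, config: dict) -> float:
    # Placeholder for the actual fine-tuning loop; it returns a random
    # validation score here so the sweep skeleton runs end to end.
    return random.random()

best_lr = {}
for dataset in ["Tablesum", "FeTaQA", "NarrativeQA"]:
    scores = {lr: train_and_evaluate(dataset, lr, CONFIG) for lr in LEARNING_RATES}
    best_lr[dataset] = max(scores, key=scores.get)
print(best_lr)
```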
{
"text": "We perform adapter-tuning as a parameter-efficient alternative to adapt BART-large model to the abstractive question answering task across different modalities. We first freeze all layers of the pretrained BART-large model which was trained on text reconstruction as mentioned in the original BART paper (Lewis et al., 2019) . We add bottleneck adapter layers from the Houlsby adapter configuration (Houlsby et al., 2019a) which are trained to adapt to the downstream abstractive question answering task and also to modality specific input context. Each adapter layer has a bottle-neck embedding size of 64. As mentioned in Section 6.1, we sweep learning rates and select the best performing learning rate for each dataset. We select 6e \u22124 for the tabular QA datasets Tablesum and FeTaQA, and select 1e \u22121 to train the textual QA dataset NarrativeQA. We use the same batch size and maximum target sequence length as finetuning for effective comparison. A summary of hyper-parameters are mentioned in Table 1 ",
"cite_spans": [
{
"start": 304,
"end": 324,
"text": "(Lewis et al., 2019)",
"ref_id": "BIBREF18"
},
{
"start": 399,
"end": 422,
"text": "(Houlsby et al., 2019a)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 1000,
"end": 1007,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Adapter-Tuning",
"sec_num": "6.2"
},
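To give a concrete picture of what adapter-tuning involves, the sketch below (a minimal illustration of our own, independent of any specific adapter library and not the paper's implementation) defines a Houlsby-style bottleneck adapter, freezes the pre-trained BART-large weights, and estimates the share of trainable parameters; the exact wiring of the adapters into each transformer layer is omitted.

```python
import torch
import torch.nn as nn
from transformers import BartForConditionalGeneration

class BottleneckAdapter(nn.Module):
    """Houlsby-style adapter: down-project, non-linearity, up-project, residual."""

    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.ReLU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return hidden_states + self.up(self.act(self.down(hidden_states)))

model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")
for p in model.parameters():
    p.requires_grad = False  # freeze all pre-trained weights

# Two adapters per transformer layer (after self-attention and after the
# feed-forward block), roughly following the Houlsby configuration; inserting
# them into the forward pass (e.g. via hooks) is omitted for brevity.
n_layers = model.config.encoder_layers + model.config.decoder_layers
adapters = nn.ModuleList(
    BottleneckAdapter(model.config.d_model) for _ in range(2 * n_layers)
)

trainable = sum(p.numel() for p in adapters.parameters())
total = sum(p.numel() for p in model.parameters()) + trainable
print(f"additional trainable parameters: {100 * trainable / total:.2f}%")
```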
{
"text": "Adapter-layer pruning has been explored on the GLUE benchmark in (Ruckl\u00e9 et al., 2020) , which demonstrates that removing adapter layers from the beginning of BERT-base and RoBERTa models leads to minimal performance drop. We extend adapter layer ablation to encoder-decoder architectures and hypothesize that this phenomenon should be observed on both the encoder and decoder modules. However, it is non-trivial how the adapterlayers in the encoder and decoder interact with each other and contribute to performance. Previous studies (Ruckl\u00e9 et al., 2020) on adapter ablation prune consecutive adapter layers in masked language models. This approach does not extend directly to sequential modules of encoder-decoder where intra-module adapters not only contribute to their respective objective of encoding and decoding but also contributes to inter-module interaction and performance. To measure the impact of the adapter layers in different modules, we perform adapter ablation in both the encoder and decoder. First, we uniformly remove adapter layers from both encoder and decoder modules starting from the beginning layers of both modules and finally deleting all layers. This leads to 12 experiments corresponding to eliminating 12 encoder and 12 decoder adapter layers. To study interaction across inter-module adapters at different levels, we conduct 36 experiments of different configurations of adapter elimination from the last 6 levels of encoder and decoder. We analyze the performance by each configuration in Section 7.3.",
"cite_spans": [
{
"start": 65,
"end": 86,
"text": "(Ruckl\u00e9 et al., 2020)",
"ref_id": null
},
{
"start": 535,
"end": 556,
"text": "(Ruckl\u00e9 et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study: Adapter Pruning",
"sec_num": "6.3"
},
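The two ablation schedules described above can be enumerated programmatically. The sketch below is illustrative only; it uses the layer indexing introduced in Section 7.3 (encoder adapters 0-11, decoder adapters 12-23) and simply lists which adapter layers are removed in each configuration.

```python
# Uniform ablation: progressively delete encoder adapters 0..k and the
# corresponding decoder adapters 12..12+k (12 configurations in total).
uniform_configs = [
    {"encoder_removed": list(range(0, k + 1)),
     "decoder_removed": list(range(12, 12 + k + 1))}
    for k in range(12)
]

# Inter-module grid: remove encoder adapters (0..q) for q in 6..11 and
# decoder adapters (12..s) for s in 18..23, giving 6 x 6 = 36 configurations.
grid_configs = [
    {"encoder_removed": list(range(0, q + 1)),
     "decoder_removed": list(range(12, s + 1))}
    for q in range(6, 12)
    for s in range(18, 24)
]

print(len(uniform_configs), len(grid_configs))  # -> 12 36
```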
{
"text": "We compare the results of our baseline fine-tuned models with the state-of-the-art fine-tuned mod-els in Section 7.1. We address (RQ1) \"How does adapter-tuning perform compared to fine-tuning in the context of multi-modal input?\" in Section 7.2 and (RQ2) \"Do all adapter layers across the encoder and decoder contribute equally to performance across tasks/modalities?\" in 7.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "7"
},
{
"text": "We study the results of our baseline fine-tuned models with the state-of-the-art fine-tuned models for the 3 datasets. The results of the experiments are shown in Table 2 . We observe that for the Tablesum dataset, our fine-tuned model outperform the best state-of-art T5 model on Rouge-1 by 3.8% , Rouge-2 by 4.3% and Rouge-L score by 4%. This can be attributed to fine-tuning our model on the clean version of the dataset. Our fine-tuned models perform comparably to the state-of-the-art T5-large on Fe-TaQA dataset, i.e, 0.2% on Rouge-1, 0.01% higher on Rouge-2, and 0.04% higher on Rouge-L. Our fine-tuning results on NarrativeQA are lower than state-of-the-art models trained with sophisticated reasoning architecture. The focus of this work was primarily on comparing fine-tuning and adaptertuning and hence we leave explicit reasoning as part of future work.",
"cite_spans": [],
"ref_spans": [
{
"start": 163,
"end": 170,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Fine-Tuned Models",
"sec_num": "7.1"
},
{
"text": "We address (RQ1) by comparing the performance of adapter-tuned models to our baseline fine-tuned models. For Tablesum, as observed in Table 2 finetuning(baseline) marginally outperforms adaptertuning with 0.7% higher Rouge-1 and 0.4% higher Rouge-L scores while having the same Rouge-2 score. For FeTaQA, adapter-tune shows a larger Question: What and when were Akhila Kishore's first two films? Target: akhila kishore made her debut in the kannada film padhe padhe (2013), and appeared in kathai thiraikathai vasanam iyakkam (2014). 1937, uski tamanna in 1939, and, in 1949 , aiye. (French, 1999; Kirkpatrick et al., 2017; induced by differences in the distribution of downstream tabular data format from the original text data format of pretraining.",
"cite_spans": [
{
"start": 583,
"end": 597,
"text": "(French, 1999;",
"ref_id": "BIBREF6"
},
{
"start": 598,
"end": 623,
"text": "Kirkpatrick et al., 2017;",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 134,
"end": 141,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Adapter-Tuned Models",
"sec_num": "7.2"
},
{
"text": "To explore this phenomenon further, we analyse examples from FeTaQA dataset in Table 3 where adapter-tuning outperforms fine-tuning. We observe that the fine-tuned model is unable to disambiguate surface-form similarities from the column semantics in the first example. The intended semantics of the named-entity Akhila Kishore in the question is Actor. While the surface-form is similar to the column value Akhila, the intended semantics is that of the column header Role. The fine-tuned model wrongly predicts the second and third row of the tabular context as correct grounding of information while adapter-tuning is able to disambiguate and predicts information from the first 2 rows as answer. We observe that the fine-tuned model also predicts information from the wrong column Director instead of Cast in the second example. Adaptertune correctly identifies the column but partially generates the required information in the prediction. The third example depicts both non-factual and non-fluent prediction by the fine-tuned model.",
"cite_spans": [],
"ref_spans": [
{
"start": 79,
"end": 86,
"text": "Table 3",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Adapter-Tuned Models",
"sec_num": "7.2"
},
{
"text": "We demonstrate an example of a hierarchical table of Tablesum in Table 4 . The question requires aggregation on the table cells and has various summary-like targets associated with it. The hierarchical table mentions Ed Sheeran 3 times, but the actual number of occurrence is 10 times, from Question: how many times was ed sheeran listed as the performer? Targets:",
"cite_spans": [],
"ref_spans": [
{
"start": 65,
"end": 72,
"text": "Table 4",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Adapter-Tuned Models",
"sec_num": "7.2"
},
{
"text": "\u2022 Ed Sheeran was listed as a performer twice in the table documenting the top hits of 2014 in Sweden. Other English-Language top performers included Bruce Springsteen, Sam Smith, and Coldplay, implying that English-Language music has significant success in Sweden. \u2022 According to the table, in 2014, Ed Sheeran was only listed as the performer one time. It was for the song that he performed that is called \"\"I See Fire\"\", which was out in January and February of 2014 Adaper-tune: I found a table of the year 2014. It shows the week of the week, the song title, the artist, the title of the song, the album title, and the performer. There are 11 times that Ed Sheeran was listed as the performer in the year. The song title is \"Timber\" and the album is True. Fine-tune: I found a table of Ed Sheeran's year in 2014. He was listed as the performer 14 times in the year 2014. The first time he was listed was on 3 January 2014 with the song \"Timber\" and the last time was on 4 April 2014 with \"I See Fire\". Table 4 , both models generates long answers summarizing information from the context table. However, as the models do not explicitly handle cell aggregation, we observe factual mistakes in both adapter-tuned and fine-tuned models. The models find Tablesum samples challenging even though the generated language is fluent and readable. For textual QA, on the NarrativeQA dataset, adapter-tuning performs comparable to fine-tuning with the adapter-tuned model achieving 0.8% lower Rouge-1, 1.8% higher Rouge-2 and 1.5% lower Rouge-L scores than fine-tuning.",
"cite_spans": [],
"ref_spans": [
{
"start": 1006,
"end": 1013,
"text": "Table 4",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Adapter-Tuned Models",
"sec_num": "7.2"
},
{
"text": "We conclude that adapter-tuning performs better than fine-tuning for out-of-domain tabular data and comparable performance on in-domain text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adapter-Tuned Models",
"sec_num": "7.2"
},
{
"text": "We study (RQ2) by ablating adapter layers in both the encoder and decoder modules. We uniformly eliminate successive adapter layers from both encoder and decoder starting from the first layer in both modules and finally deleting all layers. This leads to 12 experiments corresponding to 12 en- coder and 12 decoder adapter layers. We number the encoder adapter layers from 0-11 and the decoder adapter layers from 12-23. We measure the performance of the models using Rouge-2, Rouge-L 2 and sacreBLEU 3 scores. The F-scores for each dataset (NarrativeQA, Tablesum, FeTaQA) are shown in Figure 4 , 5 and 6, respectively. We observe that as more adapter layers are eliminated, the performance drops across all datasets. However, the performance drop is minimal until the last adapter layers are also deleted. The inflection point varies across dataset but is limited to the last 2 layers of the encoder and decoder. For the Narra- Figure 6 : Adapter layer ablation sacreBLEU F-scores. The X-axis depicts encoder-adapter layers (0-11) and decoder adapter layers (12-23) deleted progressively. Each (x\u2212y) (r\u2212s) represents F-score with encoder layers p to q deleted and decoder layers r to s deleted.",
"cite_spans": [
{
"start": 541,
"end": 572,
"text": "(NarrativeQA, Tablesum, FeTaQA)",
"ref_id": null
},
{
"start": 1095,
"end": 1100,
"text": "(x\u2212y)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 586,
"end": 594,
"text": "Figure 4",
"ref_id": "FIGREF3"
},
{
"start": 929,
"end": 937,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ablation of adapter layers",
"sec_num": "7.3"
},
{
"text": "tiveQA dataset, this point is when all layers till the second last adapter layer from both the encoder and decoder are deleted. For the FeTaQA and Tablesum datasets, the performance drops sharply only when the last encoder and decoder layers are removed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation of adapter layers",
"sec_num": "7.3"
},
{
"text": "To analyze contribution of the i-th adapter layer of encoder and decoder to performance, we perform ablation of adapter layers (0-6), (0-7), . . . , (0-11) from encoder and adapter layers (12-18), (12-19), . . . , (12-23) from decoder (decoder layers are numbered 12-23). This leads to 36 configurations where a configuration (p-q, r-s) represents removal of all encoder adapters from p-th to q-th layer and all decoder adapters from r-th to s-th. The results are shown in Figure 3 . We observe that performance remains comparable as we progressively eliminate adapter layers from encoder and decoder until the last layers. The performance drops steeply when we remove the last encoder and decoder adapter layers depicted towards the topright corner of RougeL scores in Figures 3a, 3b , and 3c and BLEU scores in Figures 3d, 3e, and 3f . This implies that last adapter layers learns most of the domain information.",
"cite_spans": [],
"ref_spans": [
{
"start": 473,
"end": 481,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 770,
"end": 784,
"text": "Figures 3a, 3b",
"ref_id": "FIGREF2"
},
{
"start": 813,
"end": 835,
"text": "Figures 3d, 3e, and 3f",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Ablation of adapter layers",
"sec_num": "7.3"
},
{
"text": "We also observe that the last encoder and decoder layers contribute differently to performance. Removing the last encoder layer (column 0-11) leads to substantial score drop across all decoder layers. This indicates that the last encoder layer is indispensable. Keeping only the last decoder adapter (row 12-23) is comparable to keeping last two last encoder layers (column 0-10). We also observe that retaining just the last 50% of adapter layers from both encoder and decoder increases parameter efficiency by 0.7% parameters as summarized in Table 5 without significant compromise to performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 545,
"end": 552,
"text": "Table 5",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Ablation of adapter layers",
"sec_num": "7.3"
},
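To make these parameter-efficiency figures concrete, the following back-of-the-envelope sketch (a rough estimate of our own, not numbers taken from Table 5; the BART-large size and the exact adapter parameterization are assumptions) compares the share of additional trainable parameters with all 24 adapter layers versus only the last 50% retained.

```python
# Rough estimate for Houlsby-style adapters on BART-large: hidden size 1024,
# bottleneck 64, 12 encoder + 12 decoder layers, ~406M pre-trained parameters.
hidden, bottleneck = 1024, 64
per_adapter = 2 * hidden * bottleneck + hidden + bottleneck  # down + up projections with biases
per_layer = 2 * per_adapter                                  # two adapters per transformer layer
base_params = 406_000_000

for n_layers, label in [(24, "all adapter layers"), (12, "last 50% retained")]:
    extra = n_layers * per_layer
    print(f"{label}: {100 * extra / (base_params + extra):.2f}% additional parameters")
```

Under these assumptions, this comes out at roughly 1.5% for the full set of adapters and about 0.8% when only the last half is kept, which is consistent with the figures reported in the text.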
{
"text": "We are the first to study parameter-efficient transfer learning over tables and text for abstractive question answering using adapters. We demonstrate that parameter efficient adapter-tuning outperforms finetuning on out-of-domain tabular data and achieves comparable results on in-domain textual data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "We propose a transformation from hierarchical tables to regular ones and further into a sequential form compatible with pre-trained model. We extend an existing ablation study of adapter layers to encoder-decoder setting and demonstrate that adapter layers from the end of the encoder is indispensable to encoding modality specific information than decoder adapter layers at the same level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "Our results are useful for exploring scalability of QA models in memory constrained situations with comparable performance while scaling across modalities using light-weight adapters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "One of the limitations of our work is that our models do not explicitly reason and aggregate over the table cells. This might lead to fluent but factually incorrect answers on challenging Tablesum dataset. Addressing this limitation is left as future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "We would like to thank Elsevier for their support throughout this project and funding this work. This work was also supported by the NWO Innovational Research Incentives Scheme Vidi (016.Vidi.189.039), the NWO Smart Culture -Big Data / Digital Humanities (314-99-301), the H2020-EU.3.4. -SOCIETAL CHALLENGES -Smart, Green And Integrated Transport (814961). All content represents the opinion of the authors, which is not necessarily shared or endorsed by their respective employers and/or sponsors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "9"
},
{
"text": "We provide further details on statistics of the datasets used (Appendix A) and on the Rouge-2 scores for an encoder-decoder adapter layer ablation study (Appendix B).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "APPENDICES",
"sec_num": null
},
{
"text": "Statistics of the three datasets, i.e., Tablesum, Fe-TaQA and NarrativeQA are listed in Table 6 . Tablesum has the longest answer length. The answers are summary-like, often, describing aspects of the table contents. The FeTaQA dataset contains answers of mostly single sentences and targeted towards specific facts asked in the question. The Narra-tiveQA dataset focuses on questions from stories. The answer lengths vary from single words to long sentences. For the tabularQA dataset, Tablesum contains larger tables than the FeTaQA dataset even though it is limited to 200 unique tables over which questions are asked. The FeTaQA dataset's tables contain more columns on average than Tablesum.",
"cite_spans": [],
"ref_spans": [
{
"start": 88,
"end": 95,
"text": "Table 6",
"ref_id": "TABREF13"
},
{
"start": 98,
"end": 106,
"text": "Tablesum",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Dataset Statistics",
"sec_num": null
},
{
"text": "Open Modality Table Table-type Regular Training samples 798 Validation samples 200 Test samples -Max question length 114 Max target length 1, 579 Max table row 155 Max table column 8 FeTaQA Domain Open Modality Table Table- Train max table rows 34 Train max table columns 30 Val max question length 182 Val target length 325 Val max table rows 34 Val max table columns 22 Test max question length 193 Test max target length 295 Test max table lows 34 Test max table columns Ablation results (Rouge-2 F-scores) of 36 configurations of adapter layers deleted from the later half of the encoder and decoder. Deleting the last encoder adapter layers leads to massive drop in performance as observed in the last three columns of Figures 7a, 7b and 7c. However, deleting the last decoder adapter layers results in better performance in comparison to the encoder layers at the same level as observed from the top 3 rows. Figure 7 : Adapter layer Rouge-2 ablation scores. The X-axis represents range of encoder adapter layers deleted, the Y-Axis represents range of decoder adapter layers deleted.",
"cite_spans": [],
"ref_spans": [
{
"start": 14,
"end": 197,
"text": "Table Table-type Regular Training samples 798 Validation samples 200 Test samples -Max question length 114 Max target length 1, 579 Max table row 155 Max table column 8",
"ref_id": "TABREF2"
},
{
"start": 226,
"end": 239,
"text": "Table Table-",
"ref_id": null
},
{
"start": 240,
"end": 508,
"text": "Train max table rows 34 Train max table columns 30 Val max question length 182 Val target length 325 Val max table rows 34 Val max table columns 22 Test max question length 193 Test max target length 295 Test max table lows 34 Test max table columns",
"ref_id": "TABREF2"
},
{
"start": 949,
"end": 957,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Domain",
"sec_num": null
},
{
"text": "x-y implies all adapter layers from x to y inclusive. There are 36 model ablation configurations displayed. The ablation starts from 0 to 6 encoder adapter layers removal and 12 to 18 decoder adapter layer removal represented by the bottom left cell ((0-6), (12-18)) and progressively increases deletion of encoder adapter layers along the X-axis and decoder adapter layers along the Y-axis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain",
"sec_num": null
},
{
"text": "The cleaned data and code can be found at https:// github.com/kolk/Pea-QA",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://pypi.org/project/rouge-score/ 3 https://github.com/mjpost/sacreBLEU",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "VQA: Visual question answering",
"authors": [
{
"first": "Aishwarya",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Jiasen",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Stanislaw",
"middle": [],
"last": "Antol",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "C",
"middle": [
"Lawrence"
],
"last": "Zitnick",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1505.00468"
]
},
"num": null,
"urls": [],
"raw_text": "Aishwarya Agrawal, Jiasen Lu, Stanislaw Antol, Mar- garet Mitchell, C. Lawrence Zitnick, Dhruv Batra, and Devi Parikh. 2016. VQA: Visual question an- swering. arXiv preprint arXiv:1505.00468.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Commonsense for generative multi-hop question answering tasks",
"authors": [
{
"first": "Lisa",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Yicheng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2018,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lisa Bauer, Yicheng Wang, and Mohit Bansal. 2018. Commonsense for generative multi-hop question an- swering tasks. In EMNLP.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Stochastic language generation for spoken dialogue systems",
"authors": [
{
"first": "Alice",
"middle": [],
"last": "Oh Carnegie",
"suffix": ""
},
{
"first": "Alice",
"middle": [
"H"
],
"last": "Oh",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. of the ANLP/NAACL 2000 Wrkshp. on Conversational Systems",
"volume": "",
"issue": "",
"pages": "27--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alice Oh Carnegie and Alice H. Oh. 2000. Stochastic language generation for spoken dialogue systems. In In Proc. of the ANLP/NAACL 2000 Wrkshp. on Conversational Systems, pages 27-32.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Recall and learn: Fine-tuning deep pretrained language models with less forgetting",
"authors": [
{
"first": "Sanyuan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yutai",
"middle": [],
"last": "Hou",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xiangzhan",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "7870--7881",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.634"
]
},
"num": null,
"urls": [],
"raw_text": "Sanyuan Chen, Yutai Hou, Yiming Cui, Wanxiang Che, Ting Liu, and Xiangzhan Yu. 2020. Recall and learn: Fine-tuning deep pretrained language models with less forgetting. In Proceedings of the 2020 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP), pages 7870-7881, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Hitab: A hierarchical table dataset for question answering and natural language generation",
"authors": [
{
"first": "Zhoujun",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Haoyu",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Zhiruo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ran",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Jiaqi",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Shi",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Jian-Guang",
"middle": [],
"last": "Lou",
"suffix": ""
},
{
"first": "Dongmei",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2108.06712"
]
},
"num": null,
"urls": [],
"raw_text": "Zhoujun Cheng, Haoyu Dong, Zhiruo Wang, Ran Jia, Jiaqi Guo, Yan Gao, Shi Han, Jian-Guang Lou, and Dongmei Zhang. 2021. Hitab: A hierarchical table dataset for question answering and natural language generation. arXiv preprint arXiv:2108.06712.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Towards multi-modal conversational information seeking",
"authors": [
{
"first": "Yashar",
"middle": [],
"last": "Deldjoo",
"suffix": ""
},
{
"first": "Johanne",
"middle": [
"R"
],
"last": "Trippas",
"suffix": ""
},
{
"first": "Hamed",
"middle": [],
"last": "Zamani",
"suffix": ""
}
],
"year": 2021,
"venue": "SIGIR '21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event",
"volume": "",
"issue": "",
"pages": "1577--1587",
"other_ids": {
"DOI": [
"10.1145/3404835.3462806"
]
},
"num": null,
"urls": [],
"raw_text": "Yashar Deldjoo, Johanne R. Trippas, and Hamed Za- mani. 2021. Towards multi-modal conversational information seeking. In SIGIR '21: The 44th Inter- national ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, Canada, July 11-15, 2021, pages 1577-1587. ACM.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Catastrophic forgetting in connectionist networks",
"authors": [
{
"first": "Robert",
"middle": [
"M"
],
"last": "French",
"suffix": ""
}
],
"year": 1999,
"venue": "Trends in Cognitive Sciences",
"volume": "3",
"issue": "4",
"pages": "128--135",
"other_ids": {
"DOI": [
"10.1016/S1364-6613(99)01294-2"
]
},
"num": null,
"urls": [],
"raw_text": "Robert M. French. 1999. Catastrophic forgetting in con- nectionist networks. Trends in Cognitive Sciences, 3(4):128-135.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Incorporating bert into parallel sequence decoding with adapters",
"authors": [
{
"first": "Junliang",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Zhirui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Linli",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hao-Ran",
"suffix": ""
},
{
"first": "Boxing",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Enhong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2020,
"venue": "Advances in Neural Information Processing Systems",
"volume": "33",
"issue": "",
"pages": "10843--10854",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junliang Guo, Zhirui Zhang, Linli Xu, Hao-Ran Wei, Boxing Chen, and Enhong Chen. 2020. Incorpo- rating bert into parallel sequence decoding with adapters. In Advances in Neural Information Pro- cessing Systems, volume 33, pages 10843-10854. Curran Associates, Inc.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Open domain question answering over tables via dense retrieval",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Herzig",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Syrine",
"middle": [],
"last": "Krichene",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Eisenschlos",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2021.naacl-main.43"
]
},
"num": null,
"urls": [],
"raw_text": "Jonathan Herzig, Thomas M\u00fcller, Syrine Krichene, and Julian Eisenschlos. 2021. Open domain question an- swering over tables via dense retrieval. Proceedings of the 2021 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "TaPas: Weakly supervised table parsing via pre-training",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Herzig",
"suffix": ""
},
{
"first": "Krzysztof",
"middle": [],
"last": "Nowak",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Francesco",
"middle": [],
"last": "Piccinno",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Eisenschlos",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4320--4333",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.398"
]
},
"num": null,
"urls": [],
"raw_text": "Jonathan Herzig, Pawel Krzysztof Nowak, Thomas M\u00fcller, Francesco Piccinno, and Julian Eisenschlos. 2020. TaPas: Weakly supervised table parsing via pre-training. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 4320-4333, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Parameter-efficient transfer learning for NLP",
"authors": [
{
"first": "Neil",
"middle": [],
"last": "Houlsby",
"suffix": ""
},
{
"first": "Andrei",
"middle": [],
"last": "Giurgiu",
"suffix": ""
},
{
"first": "Stanislaw",
"middle": [],
"last": "Jastrzebski",
"suffix": ""
},
{
"first": "Bruna",
"middle": [],
"last": "Morrone",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "De Laroussilhe",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Gesmundo",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Attariyan",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gelly",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 36th International Conference on Machine Learning",
"volume": "97",
"issue": "",
"pages": "2790--2799",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019a. Parameter-efficient transfer learning for NLP. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 2790-2799. PMLR.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Parameter-efficient transfer learning for NLP",
"authors": [
{
"first": "Neil",
"middle": [],
"last": "Houlsby",
"suffix": ""
},
{
"first": "Andrei",
"middle": [],
"last": "Giurgiu",
"suffix": ""
},
{
"first": "Stanislaw",
"middle": [],
"last": "Jastrzebski",
"suffix": ""
},
{
"first": "Bruna",
"middle": [],
"last": "Morrone",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "De Laroussilhe",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Gesmundo",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Attariyan",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gelly",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1902.00751"
]
},
"num": null,
"urls": [],
"raw_text": "Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Ges- mundo, Mona Attariyan, and Sylvain Gelly. 2019b. Parameter-efficient transfer learning for NLP. arXiv preprint arXiv:1902.00751.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "GQA: A new dataset for real-world visual reasoning and compositional question answering",
"authors": [
{
"first": "A",
"middle": [],
"last": "Drew",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hudson",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1902.09506"
]
},
"num": null,
"urls": [],
"raw_text": "Drew A. Hudson and Christopher D. Manning. 2019. GQA: A new dataset for real-world visual reason- ing and compositional question answering. arXiv preprint arXiv:1902.09506.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "DS-TOD: Efficient domain specialization for task oriented dialog",
"authors": [
{
"first": "Chia-Chien",
"middle": [],
"last": "Hung",
"suffix": ""
},
{
"first": "Anne",
"middle": [],
"last": "Lauscher",
"suffix": ""
},
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Ponzetto",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2110.08395"
]
},
"num": null,
"urls": [],
"raw_text": "Chia-Chien Hung, Anne Lauscher, Simone Paolo Ponzetto, and Goran Glava\u0161. 2021. DS-TOD: Ef- ficient domain specialization for task oriented dialog. arXiv preprint arXiv:2110.08395.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Jaydeep Sen, Karthik Sankaranarayanan, and Soumen Chakrabarti. 2021. AIT-QA: Question answering dataset over complex tables in the airline industry",
"authors": [
{
"first": "Yannis",
"middle": [],
"last": "Katsis",
"suffix": ""
},
{
"first": "Saneem",
"middle": [],
"last": "Chemmengath",
"suffix": ""
},
{
"first": "Vishwajeet",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Samarth",
"middle": [],
"last": "Bharadwaj",
"suffix": ""
},
{
"first": "Mustafa",
"middle": [],
"last": "Canim",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Glass",
"suffix": ""
},
{
"first": "Alfio",
"middle": [],
"last": "Gliozzo",
"suffix": ""
},
{
"first": "Feifei",
"middle": [],
"last": "Pan",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2106.12944"
]
},
"num": null,
"urls": [],
"raw_text": "Yannis Katsis, Saneem Chemmengath, Vishwajeet Ku- mar, Samarth Bharadwaj, Mustafa Canim, Michael Glass, Alfio Gliozzo, Feifei Pan, Jaydeep Sen, Karthik Sankaranarayanan, and Soumen Chakrabarti. 2021. AIT-QA: Question answering dataset over complex tables in the airline industry. arXiv preprint arXiv:2106.12944.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell",
"authors": [
{
"first": "James",
"middle": [],
"last": "Kirkpatrick",
"suffix": ""
},
{
"first": "Razvan",
"middle": [],
"last": "Pascanu",
"suffix": ""
},
{
"first": "Neil",
"middle": [],
"last": "Rabinowitz",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Veness",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Desjardins",
"suffix": ""
},
{
"first": "Andrei",
"middle": [
"A"
],
"last": "Rusu",
"suffix": ""
},
{
"first": "Kieran",
"middle": [],
"last": "Milan",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Quan",
"suffix": ""
},
{
"first": "Tiago",
"middle": [],
"last": "Ramalho",
"suffix": ""
},
{
"first": "Agnieszka",
"middle": [],
"last": "Grabska-Barwinska",
"suffix": ""
},
{
"first": "Demis",
"middle": [],
"last": "Hassabis",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Clopath",
"suffix": ""
},
{
"first": "Dharshan",
"middle": [],
"last": "Kumaran",
"suffix": ""
},
{
"first": "Raia",
"middle": [],
"last": "Hadsell",
"suffix": ""
}
],
"year": 2017,
"venue": "Overcoming catastrophic forgetting in neural networks",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1612.00796"
]
},
"num": null,
"urls": [],
"raw_text": "James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Ag- nieszka Grabska-Barwinska, Demis Hassabis, Clau- dia Clopath, Dharshan Kumaran, and Raia Hadsell. 2017. Overcoming catastrophic forgetting in neural networks. arXiv preprint arXiv:1612.00796.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The NarrativeQA reading comprehension challenge",
"authors": [
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Ko\u010disk\u00fd",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Schwarz",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Karl",
"middle": [
"Moritz"
],
"last": "Hermann",
"suffix": ""
},
{
"first": "G\u00e1bor",
"middle": [],
"last": "Melis",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "317--328",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00023"
]
},
"num": null,
"urls": [],
"raw_text": "Tom\u00e1\u0161 Ko\u010disk\u00fd, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, G\u00e1bor Melis, and Ed- ward Grefenstette. 2018. The NarrativeQA reading comprehension challenge. Transactions of the Asso- ciation for Computational Linguistics, 6:317-328.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Learning recurrent span representations for extractive question answering",
"authors": [
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Shimi",
"middle": [],
"last": "Salant",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.01436"
]
},
"num": null,
"urls": [],
"raw_text": "Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, and Jonathan Berant. 2016. Learning recurrent span representations for ex- tractive question answering. arXiv preprint arXiv:1611.01436.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Abdelrahman",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Ves",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: De- noising sequence-to-sequence pre-training for natural language generation, translation, and comprehension.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Exploring versatile generative language model via parameter-efficient transfer learning",
"authors": [
{
"first": "Zhaojiang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Madotto",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "441--459",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.41"
]
},
"num": null,
"urls": [],
"raw_text": "Zhaojiang Lin, Andrea Madotto, and Pascale Fung. 2020. Exploring versatile generative language model via parameter-efficient transfer learning. In Find- ings of the Association for Computational Linguistics: EMNLP 2020, pages 441-459, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A generative approach to question answering",
"authors": [
{
"first": "Rajarshee",
"middle": [],
"last": "Mitra",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1711.06238"
]
},
"num": null,
"urls": [],
"raw_text": "Rajarshee Mitra. 2017. A generative approach to ques- tion answering. arXiv preprint arXiv:1711.06238.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Caiming Xiong, and Dragomir Radev. 2021. Fetaqa: Free-form table question answering",
"authors": [
{
"first": "Linyong",
"middle": [],
"last": "Nan",
"suffix": ""
},
{
"first": "Chiachun",
"middle": [],
"last": "Hsieh",
"suffix": ""
},
{
"first": "Ziming",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Xi",
"middle": [],
"last": "Victoria Lin",
"suffix": ""
},
{
"first": "Neha",
"middle": [],
"last": "Verma",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Kry\u015bci\u0144ski",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Schoelkopf",
"suffix": ""
},
{
"first": "Riley",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Xiangru",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Murori",
"middle": [],
"last": "Mutuma",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Rosand",
"suffix": ""
},
{
"first": "Isabel",
"middle": [],
"last": "Trindade",
"suffix": ""
},
{
"first": "Renusree",
"middle": [],
"last": "Bandaru",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Cunningham",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Radev",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2104.00369"
]
},
"num": null,
"urls": [],
"raw_text": "Linyong Nan, Chiachun Hsieh, Ziming Mao, Xi Victoria Lin, Neha Verma, Rui Zhang, Wojciech Kry\u015bci\u0144ski, Nick Schoelkopf, Riley Kong, Xiangru Tang, Murori Mutuma, Ben Rosand, Isabel Trindade, Renusree Bandaru, Jacob Cunningham, Caiming Xiong, and Dragomir Radev. 2021. Fetaqa: Free-form table ques- tion answering. arXiv preprint arXiv:2104.00369.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Multi-style generative reading comprehension",
"authors": [
{
"first": "Kyosuke",
"middle": [],
"last": "Nishida",
"suffix": ""
},
{
"first": "Itsumi",
"middle": [],
"last": "Saito",
"suffix": ""
},
{
"first": "Kosuke",
"middle": [],
"last": "Nishida",
"suffix": ""
},
{
"first": "Kazutoshi",
"middle": [],
"last": "Shinoda",
"suffix": ""
},
{
"first": "Atsushi",
"middle": [],
"last": "Otsuka",
"suffix": ""
},
{
"first": "Hisako",
"middle": [],
"last": "Asano",
"suffix": ""
},
{
"first": "Junji",
"middle": [],
"last": "Tomita",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2273--2284",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1220"
]
},
"num": null,
"urls": [],
"raw_text": "Kyosuke Nishida, Itsumi Saito, Kosuke Nishida, Kazu- toshi Shinoda, Atsushi Otsuka, Hisako Asano, and Junji Tomita. 2019. Multi-style generative reading comprehension. In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 2273-2284, Florence, Italy. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A comparative study of transformer-based language models on extractive question answering",
"authors": [
{
"first": "Kate",
"middle": [],
"last": "Pearce",
"suffix": ""
},
{
"first": "Tiffany",
"middle": [],
"last": "Zhan",
"suffix": ""
},
{
"first": "Aneesh",
"middle": [],
"last": "Komanduri",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Zhan",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2110.03142"
]
},
"num": null,
"urls": [],
"raw_text": "Kate Pearce, Tiffany Zhan, Aneesh Komanduri, and Justin Zhan. 2021. A comparative study of transformer-based language models on ex- tractive question answering. arXiv preprint arXiv:2110.03142.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Pfeiffer",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "7654--7673",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.617"
]
},
"num": null,
"urls": [],
"raw_text": "Jonas Pfeiffer, Ivan Vuli\u0107, Iryna Gurevych, and Se- bastian Ruder. 2020. MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7654-7673, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Monolingual adapters for zero-shot neural machine translation",
"authors": [
{
"first": "Jerin",
"middle": [],
"last": "Philip",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Berard",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Gall\u00e9",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Besacier",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "4465--4470",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.361"
]
},
"num": null,
"urls": [],
"raw_text": "Jerin Philip, Alexandre Berard, Matthias Gall\u00e9, and Laurent Besacier. 2020. Monolingual adapters for zero-shot neural machine translation. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4465-4470, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "OpenAI Blog",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Squad: 100,000+ questions for machine comprehension of text",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Lopyrev",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/d16-1264"
]
},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Natural language generation in dialog systems",
"authors": [
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
},
{
"first": "Srinivas",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "Marilyn",
"middle": [],
"last": "Walker",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the First International Conference on Human Language Technology Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Owen Rambow, Srinivas Bangalore, and Marilyn Walker. 2001. Natural language generation in di- alog systems. In Proceedings of the First Interna- tional Conference on Human Language Technology Research.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Trainable approaches to surface natural language generation and their application to conversational dialog systems",
"authors": [
{
"first": "Adwait",
"middle": [],
"last": "Ratnaparkhi",
"suffix": ""
}
],
"year": 2002,
"venue": "Computer Speech & Language",
"volume": "16",
"issue": "3",
"pages": "435--455",
"other_ids": {
"DOI": [
"10.1016/S0885-2308(02)00025-6"
]
},
"num": null,
"urls": [],
"raw_text": "Adwait Ratnaparkhi. 2002. Trainable approaches to surface natural language generation and their appli- cation to conversational dialog systems. Computer Speech & Language, 16(3):435-455. Spoken Lan- guage Generation.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "CoQA: A conversational question answering challenge",
"authors": [
{
"first": "Siva",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association of Computational Linguistics (TACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siva Reddy, Danqi Chen, and Christopher D Manning. 2019. CoQA: A conversational question answering challenge. Transactions of the Association of Com- putational Linguistics (TACL).",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Nils Reimers, and Iryna Gurevych. 2020. Adapterdrop: On the efficiency of adapters in transformers",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Ruckl\u00e9",
"suffix": ""
},
{
"first": "Gregor",
"middle": [],
"last": "Geigle",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Glockner",
"suffix": ""
},
{
"first": "Tilman",
"middle": [],
"last": "Beck",
"suffix": ""
},
{
"first": "Jonas",
"middle": [],
"last": "Pfeiffer",
"suffix": ""
},
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.11918"
]
},
"num": null,
"urls": [],
"raw_text": "Andreas Ruckl\u00e9, Gregor Geigle, Max Glockner, Tilman Beck, Jonas Pfeiffer, Nils Reimers, and Iryna Gurevych. 2020. Adapterdrop: On the effi- ciency of adapters in transformers. arXiv preprint arXiv:2010.11918.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Bidirectional attention flow for machine comprehension",
"authors": [
{
"first": "Minjoon",
"middle": [],
"last": "Seo",
"suffix": ""
},
{
"first": "Aniruddha",
"middle": [],
"last": "Kembhavi",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.01603"
]
},
"num": null,
"urls": [],
"raw_text": "Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "QRFA: A data-driven model of information-seeking dialogues",
"authors": [
{
"first": "Svitlana",
"middle": [],
"last": "Vakulenko",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Revoredo",
"suffix": ""
},
{
"first": "Claudio",
"middle": [
"Di"
],
"last": "Ciccio",
"suffix": ""
},
{
"first": "Maarten",
"middle": [],
"last": "De Rijke",
"suffix": ""
}
],
"year": 2019,
"venue": "ECIR 2019: 41st European Conference on Information Retrieval",
"volume": "",
"issue": "",
"pages": "541--557",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Svitlana Vakulenko, Kate Revoredo, Claudio Di Ciccio, and Maarten de Rijke. 2019. QRFA: A data-driven model of information-seeking dialogues. In ECIR 2019: 41st European Conference on Information Re- trieval, pages 541-557. Springer.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Semantically conditioned LSTM-based natural language generation for spoken dialogue systems",
"authors": [
{
"first": "Tsung-Hsien",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Ga\u0161i\u0107",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Mrk\u0161i\u0107",
"suffix": ""
},
{
"first": "Pei-Hao",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vandyke",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1711--1721",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsung-Hsien Wen, Milica Ga\u0161i\u0107, Nikola Mrk\u0161i\u0107, Pei- Hao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned LSTM-based natural lan- guage generation for spoken dialogue systems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1711-1721. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Neural generative question answering",
"authors": [
{
"first": "Jun",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Lifeng",
"middle": [],
"last": "Shang",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiaoming",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Workshop on Human-Computer Question Answering",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/w16-0106"
]
},
"num": null,
"urls": [],
"raw_text": "Jun Yin, Xin Jiang, Zhengdong Lu, Lifeng Shang, Hang Li, and Xiaoming Li. 2016. Neural generative ques- tion answering. Proceedings of the Workshop on Human-Computer Question Answering.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Tabert: Pretraining for joint understanding of textual and tabular data",
"authors": [
{
"first": "Pengcheng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Yih",
"middle": [],
"last": "Wen-Tau",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.745"
]
},
"num": null,
"urls": [],
"raw_text": "Pengcheng Yin, Graham Neubig, Wen-tau Yih, and Se- bastian Riedel. 2020. Tabert: Pretraining for joint understanding of textual and tabular data. Proceed- ings of the 58th Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Summarizing and exploring tabular data in conversational search",
"authors": [
{
"first": "Shuo",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhuyun",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Krisztian",
"middle": [],
"last": "Balog",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Callan",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shuo Zhang, Zhuyun Dai, Krisztian Balog, and Jamie Callan. 2020. Summarizing and exploring tabular data in conversational search. In Proceedings of the 43rd International ACM SIGIR Conference on Re- search and Development in Information Retrieval.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Fuli Feng, and Tat-Seng Chua. 2021. TAT-QA: A question answering benchmark on a hybrid of tabular and textual content in finance",
"authors": [
{
"first": "Fengbin",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Wenqiang",
"middle": [],
"last": "Lei",
"suffix": ""
},
{
"first": "Youcheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Chao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Shuo",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jiancheng",
"middle": [],
"last": "Lv",
"suffix": ""
},
{
"first": "Fuli",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Tat-Seng",
"middle": [],
"last": "Chua",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2105.07624"
]
},
"num": null,
"urls": [],
"raw_text": "Fengbin Zhu, Wenqiang Lei, Youcheng Huang, Chao Wang, Shuo Zhang, Jiancheng Lv, Fuli Feng, and Tat- Seng Chua. 2021. TAT-QA: A question answering benchmark on a hybrid of tabular and textual content in finance. arXiv preprint arXiv:2105.07624.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Linearize regular table to a sequence of key:value pairs."
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Table representation."
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Adapter layer ablation scores. The X-axis represents range of encoder adapter layers deleted, the Y-Axis represents range of decoder adapter layers deleted. x-y implies all adapter layers from x to y inclusive. There are 36 model ablation configurations displayed. The ablation starts from 0 to 6 encoder adapter layers removal and 12 to 18 decoder adapter layer removal represented by the bottom left cell ((0-6), (12-18)) and progressively increases deletion of encoder adapter layers along the X-axis and decoder adapter layers along the Y-axis."
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Adapter layer ablation Rouge2 F-scores. The X-axis depicts encoder-adapter layers (0-11) and decoder adapter layers (12-23) deleted progressively. Each(x\u2212y) (r\u2212s) represents F-score with encoder layers p to q deleted and decoder layers r to s deleted."
},
"FIGREF4": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Adapter layer ablation Rouge-L scores. The Xaxis depicts encoder-adapter layers (0-11) and decoder adapter layers (12-23) deleted progressively. Each(x\u2212y) (r\u2212s) represents F-score with encoder layers p to q deleted and decoder layers r to s deleted."
},
"TABREF0": {
"text": "Transformation of a hierarchical table into a regular table; and (2) Linearization of a regular table into a flattened sequence which can be encoded with a language model. Linearize hierarchical table headers. Hierarchical table headers are linearized into a single row of headers by the following process. A header cell spanning multiple columns is duplicated and split into multiple cells. Next, the cell values over which this header spans are concatenated with the entire split. Repeating this process over all header rows flattens the hierarchical header into a sequential",
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF1": {
"text": ".",
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>Dataset</td><td>Params</td><td colspan=\"2\">ATune FTune</td></tr><tr><td>All</td><td>scheduler batch size</td><td>linear 32</td><td>linear 32</td></tr><tr><td/><td>seed</td><td>6</td><td>6</td></tr><tr><td/><td>max epochs</td><td>15</td><td>15</td></tr><tr><td>Tablesum</td><td>learning rate input length</td><td>6e-4 200</td><td>4e-5 200</td></tr><tr><td>FeTaQA</td><td>learning rate input length</td><td>6e-4 100</td><td>8e-4 100</td></tr><tr><td>NarrativeQA</td><td>learning rate input length</td><td>1e-4 50</td><td>2e-5 50</td></tr></table>"
},
"TABREF2": {
"text": "Hyper-parameters for training. ATune indicates Adapter-tuning, FTune indicates Fine-tuning, All indicates all 3 datasets.",
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF4": {
"text": "",
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF5": {
"text": "",
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td>Year</td><td>Film</td><td/><td>Role</td><td>Language</td></tr><tr><td/><td>2013</td><td colspan=\"2\">Padhe Padhe</td><td>Kanchana Kannada</td></tr><tr><td/><td colspan=\"3\">2014 Kathai Thiraikathai Vasanam Iyakkam</td><td>Daksha</td><td>Tamil</td></tr><tr><td/><td>2015</td><td colspan=\"2\">Inimey Ippadithaan</td><td>Akhila</td><td>Tamil</td></tr><tr><td/><td>...</td><td>...</td><td/><td>...</td></tr><tr><td colspan=\"5\">Adaper-tune: akhila kishore made her debut in the kannada film padhe padhe (2013) and kathai thiraikathai vasanam iyakkam (2014).</td></tr><tr><td colspan=\"5\">Fine-tune: kathai thiraikathai vasanam iyakkam (2014) and inimey ippadithaan (2015) were kannada films.</td></tr><tr><td colspan=\"5\">Question: Who is the starring actor of Aastik? Target: aastik is a 1956 hindi film starring shahu modak, paro devi and meenakshi. Title Director Cast</td></tr><tr><td>Table:</td><td colspan=\"2\">... Aastik Alam Ara Nanubhai Vakil ... S. P. Kalla</td><td colspan=\"2\">... Shahu Modak, Paro Devi, Meenakshi, B. M. Vyas, Praveen Paul Daljeet, Chitra, Tiwari, Niranjan Sharma, Minu Mumtaz,...</td></tr><tr><td/><td>...</td><td>...</td><td/><td>...</td></tr></table>"
},
"TABREF6": {
"text": "",
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>Year</td><td>Film</td><td>Director</td></tr><tr><td>...</td><td>...</td><td>...</td></tr><tr><td>1937</td><td>Sagar Ka Sher (Lion of Sagar)</td><td>Yakub</td></tr><tr><td>...</td><td>...</td><td>...</td></tr><tr><td colspan=\"2\">1939 Uski Tamanna (Her Last Desire)</td><td>Yakub</td></tr><tr><td>...</td><td>...</td><td>...</td></tr><tr><td>1949</td><td>Aiye</td><td>Yakub</td></tr><tr><td>...</td><td>...</td><td>...</td></tr><tr><td colspan=\"3\">Adaper-tune: yakub directed three films: sagar ka sher (lion of sagar) in 1937, uski tamanna (her last desire) in 1939 and aiye in 1949.</td></tr><tr><td colspan=\"2\">Fine-tune: y directed by yakub.</td><td/></tr></table>"
},
"TABREF7": {
"text": "Samples where adapter-tune outperforms fine-tune",
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>performance gain with 1.9% on Rouge-1 and</td></tr><tr><td>Rouge-L and 2.1% on Rouge-2 compared to fine-</td></tr><tr><td>tuning. The insignificant gains of fine-tuning over</td></tr><tr><td>adapter-tuning in tabular QA can be attributed to</td></tr><tr><td>catastrophic forgetting</td></tr></table>"
},
"TABREF8": {
"text": "",
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>Week</td><td>Date</td><td>Song title</td><td>Performer</td><td>Album title</td><td>Performer</td></tr><tr><td>1 2</td><td>3 Jan 2014 10 Jan 2014</td><td>\"Timber\"</td><td>Pitbull feat. ...</td><td>True</td><td>Avicii</td></tr><tr><td>3 4</td><td>17 Jan 2014 24 Jan 2014</td><td/><td/><td>High Hopes</td><td>Bruce ...</td></tr><tr><td>5</td><td>31 Jan 2014</td><td/><td/><td>True</td><td>Avicii</td></tr><tr><td>6 7</td><td>7 Feb 2014 14 Feb 2014</td><td>\"I See Fire\"</td><td>Ed Sheeran</td><td>Christer Sj\u00f6gren sjunger Sinatra</td><td>Christer Sj\u00f6gren</td></tr><tr><td>8</td><td>21 Feb 2014</td><td/><td/><td/><td/></tr><tr><td>9</td><td>28 Feb 2014</td><td/><td/><td>True</td><td>Avicii</td></tr><tr><td>...</td><td>...</td><td>...</td><td>...</td><td/><td/></tr><tr><td>31</td><td>31 July 2014</td><td/><td/><td>X</td><td>Ed Sheeran</td></tr><tr><td>32</td><td>7 Aug 2014</td><td colspan=\"2\">Prayer in C... Lilly Wood &amp;...</td><td>Honky Tonk Rebels</td><td>Lasse Stefanz</td></tr><tr><td>...</td><td>...</td><td/><td/><td>...</td><td>...</td></tr><tr><td>42 43</td><td>16 Oct 2014 23 Oct 2014</td><td>\"The Days\"</td><td>Avicii</td><td>X</td><td>Ed Sheeran</td></tr><tr><td>44 ...</td><td>30 Oct 2014 ...</td><td>...</td><td>...</td><td>Songs for Daddy</td><td>Jill Johnson</td></tr></table>"
},
"TABREF9": {
"text": "Example from the Tablesum dataset.",
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">Adapter-tune Encoder adapters removed Decoder adapters removed</td><td>#Trainable parameters</td></tr><tr><td>-</td><td>-</td><td>6, 343, 680 (1.56%)</td></tr><tr><td>0-2</td><td>12-14</td><td>4, 757, 760 (1.17%)</td></tr><tr><td>0-4</td><td>12-16</td><td>3, 700, 480 (0.91%)</td></tr><tr><td>0-6</td><td>12-18</td><td>2, 643, 200 (0.65%)</td></tr><tr><td>0-8</td><td>12-20</td><td>1, 585, 920 (0.39%)</td></tr><tr><td>0-10</td><td>12-22</td><td>528, 640 (0.13%)</td></tr><tr><td>0-11</td><td>12-22</td><td>264, 320 (0.07%)</td></tr><tr><td>fine-tune</td><td/><td>406, 291, 456 (100%)</td></tr></table>"
},
"TABREF10": {
"text": "Trainable parameters in the encoder and decoder. Encoder adapter layers are numbered from 0-11 and decoder adapter layers are numbered from 12-22.x-y implies all adapter layers from x to y inclusive.",
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF13": {
"text": "Dataset Statistics",
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>B Encoder-Decoder Adapter Layer Ablation Rouge-2 Scores</td></tr></table>"
}
}
}
}