{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:57:20.652515Z"
},
"title": "AdapterHub: A Framework for Adapting Transformers",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Pfeiffer",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Technical University of Darmstadt",
"location": {}
},
"email": ""
},
{
"first": "Andreas",
"middle": [],
"last": "R\u00fcckl\u00e9",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Technical University of Darmstadt",
"location": {}
},
"email": ""
},
{
"first": "Clifton",
"middle": [],
"last": "Poth",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Technical University of Darmstadt",
"location": {}
},
"email": ""
},
{
"first": "Aishwarya",
"middle": [],
"last": "Kamath",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "New York University",
"location": {}
},
"email": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Cambridge",
"location": {}
},
"email": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "New York University",
"location": {}
},
"email": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Technical University of Darmstadt",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The current modus operandi in NLP involves downloading and fine-tuning pre-trained models consisting of hundreds of millions, or even billions of parameters. Storing and sharing such large trained models is expensive, slow, and time-consuming, which impedes progress towards more general and versatile NLP methods that learn from and for many tasks. Adapters-small learnt bottleneck layers inserted within each layer of a pretrained model-ameliorate this issue by avoiding full fine-tuning of the entire model. However, sharing and integrating adapter layers is not straightforward. We propose AdapterHub, a framework that allows dynamic \"stichingin\" of pre-trained adapters for different tasks and languages. The framework, built on top of the popular HuggingFace Transformers library, enables extremely easy and quick adaptations of state-of-the-art pre-trained models (e.g., BERT, RoBERTa, XLM-R) across tasks and languages. Downloading, sharing, and training adapters is as seamless as possible using minimal changes to the training scripts and a specialized infrastructure. Our framework enables scalable and easy access to sharing of task-specific models, particularly in lowresource scenarios. AdapterHub includes all recent adapter architectures and can be found at AdapterHub.ml.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "The current modus operandi in NLP involves downloading and fine-tuning pre-trained models consisting of hundreds of millions, or even billions of parameters. Storing and sharing such large trained models is expensive, slow, and time-consuming, which impedes progress towards more general and versatile NLP methods that learn from and for many tasks. Adapters-small learnt bottleneck layers inserted within each layer of a pretrained model-ameliorate this issue by avoiding full fine-tuning of the entire model. However, sharing and integrating adapter layers is not straightforward. We propose AdapterHub, a framework that allows dynamic \"stichingin\" of pre-trained adapters for different tasks and languages. The framework, built on top of the popular HuggingFace Transformers library, enables extremely easy and quick adaptations of state-of-the-art pre-trained models (e.g., BERT, RoBERTa, XLM-R) across tasks and languages. Downloading, sharing, and training adapters is as seamless as possible using minimal changes to the training scripts and a specialized infrastructure. Our framework enables scalable and easy access to sharing of task-specific models, particularly in lowresource scenarios. AdapterHub includes all recent adapter architectures and can be found at AdapterHub.ml.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recent advances in NLP leverage transformerbased language models (Vaswani et al., 2017) , pretrained on large amounts of text data (Devlin et al., 2019; Conneau et al., 2020) . These models are fine-tuned on a target task and achieve state-of-the-art (SotA) performance for most natural language understanding tasks. Their performance has been shown to scale with their size (Kaplan et al., 2020) and recent models have reached billions of parameters (Raffel et al., 2019; Brown et al., 2020) . While fine-tuning large pre-trained models on target task data can be done fairly efficiently (Howard and Ruder, 2018) , training them for multiple tasks and sharing trained models is often prohibitive. This precludes research on more modular architectures , task composition (Andreas et al., 2016) , and injecting biases and external information (e.g., world or linguistic knowledge) into large models (Lauscher et al., 2019; Wang et al., 2020) .",
"cite_spans": [
{
"start": 65,
"end": 87,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF42"
},
{
"start": 131,
"end": 152,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF13"
},
{
"start": 153,
"end": 174,
"text": "Conneau et al., 2020)",
"ref_id": "BIBREF11"
},
{
"start": 375,
"end": 396,
"text": "(Kaplan et al., 2020)",
"ref_id": null
},
{
"start": 451,
"end": 472,
"text": "(Raffel et al., 2019;",
"ref_id": "BIBREF30"
},
{
"start": 473,
"end": 492,
"text": "Brown et al., 2020)",
"ref_id": null
},
{
"start": 589,
"end": 613,
"text": "(Howard and Ruder, 2018)",
"ref_id": "BIBREF19"
},
{
"start": 771,
"end": 793,
"text": "(Andreas et al., 2016)",
"ref_id": "BIBREF5"
},
{
"start": 898,
"end": 921,
"text": "(Lauscher et al., 2019;",
"ref_id": "BIBREF22"
},
{
"start": 922,
"end": 940,
"text": "Wang et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Adapters (Houlsby et al., 2019) have been introduced as an alternative lightweight fine-tuning strategy that achieves on-par performance to full fine-tuning on most tasks. They consist of a small set of additional newly initialized weights at every layer of the transformer. These weights are then trained during fine-tuning, while the pre-trained parameters of the large model are kept frozen/fixed. This enables efficient parameter sharing between tasks by training many task-specific and language-specific adapters for the same model, which can be exchanged and combined post-hoc. Adapters have recently achieved strong results in multi-task and cross-lingual transfer learning (Pfeiffer et al., 2020a,b) .",
"cite_spans": [
{
"start": 9,
"end": 31,
"text": "(Houlsby et al., 2019)",
"ref_id": "BIBREF18"
},
{
"start": 681,
"end": 707,
"text": "(Pfeiffer et al., 2020a,b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, reusing and sharing adapters is not straightforward. Adapters are rarely released individually; their architectures differ in subtle yet important ways, and they are model, task, and language dependent. To mitigate these issues and facilitate transfer learning with adapters in a range of settings, we propose AdapterHub, a framework that enables seamless training and sharing of adapters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "AdapterHub is built on top of the popular transformers framework by HuggingFace 1 (Wolf et al., 2020) , which provides access to stateof-the-art pre-trained language models. We en-hance transformers with adapter modules that can be combined with existing SotA models with minimal code edits. We additionally provide a website that enables quick and seamless upload, download, and sharing of pre-trained adapters. Adapter-Hub is available online at: AdapterHub.ml.",
"cite_spans": [
{
"start": 82,
"end": 101,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF47"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "AdapterHub for the first time enables NLP researchers and practitioners to easily and efficiently share and obtain access to models that have been trained for particular tasks, domains, and languages. This opens up the possibility of building on and combining information from many more sources than was previously possible, and makes research such as intermediate task training (Pruksachatkun et al., 2020) , composing information from many tasks (Pfeiffer et al., 2020a) , and training models for very low-resource languages (Pfeiffer et al., 2020b ) much more accessible.",
"cite_spans": [
{
"start": 379,
"end": 407,
"text": "(Pruksachatkun et al., 2020)",
"ref_id": "BIBREF29"
},
{
"start": 448,
"end": 472,
"text": "(Pfeiffer et al., 2020a)",
"ref_id": "BIBREF26"
},
{
"start": 527,
"end": 550,
"text": "(Pfeiffer et al., 2020b",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose an easy-to-use and extensible adapter training and sharing framework for transformer-based models such as BERT, RoBERTa, and XLM(-R); 2) we incorporate it into the HuggingFace transformers framework, requiring as little as two additional lines of code to train adapters with existing scripts; 3) our framework automatically extracts the adapter weights, storing them separately to the pre-trained transformer model, requiring as little as 1Mb of storage; 4) we provide an open-source framework and website that allows the community to upload their adapter weights, making them easily accessible with only one additional line of code; 5) we incorporate adapter composition as well as adapter stacking out-of-the-box and pave the way for a wide range of other extensions in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contributions. 1)",
"sec_num": null
},
{
"text": "While the predominant methodology for transfer learning is to fine-tune all weights of the pre-trained model, adapters have recently been introduced as an alternative approach, with applications in computer vision (Rebuffi et al., 2017) as well as the NLP domain (Houlsby et al., 2019; Bapna and Firat, 2019; Wang et al., 2020; Pfeiffer et al., 2020a,b) .",
"cite_spans": [
{
"start": 214,
"end": 236,
"text": "(Rebuffi et al., 2017)",
"ref_id": "BIBREF32"
},
{
"start": 263,
"end": 285,
"text": "(Houlsby et al., 2019;",
"ref_id": "BIBREF18"
},
{
"start": 286,
"end": 308,
"text": "Bapna and Firat, 2019;",
"ref_id": "BIBREF7"
},
{
"start": 309,
"end": 327,
"text": "Wang et al., 2020;",
"ref_id": null
},
{
"start": 328,
"end": 353,
"text": "Pfeiffer et al., 2020a,b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adapters",
"sec_num": "2"
},
{
"text": "Adapters are neural modules with a small amount of additional newly introduced parameters \u03a6 within a large pre-trained model with parameters \u0398. The parameters \u03a6 are learnt on a target task while keeping \u0398 fixed; \u03a6 thus learn to encode task-specific representations in intermediate layers of the pretrained model. Current work predominantly focuses on training adapters for each task separately (Houlsby et al., 2019; Bapna and Firat, 2019; Pfeiffer et al., 2020a,b) , which enables parallel training and subsequent combination of the weights.",
"cite_spans": [
{
"start": 394,
"end": 416,
"text": "(Houlsby et al., 2019;",
"ref_id": "BIBREF18"
},
{
"start": 417,
"end": 439,
"text": "Bapna and Firat, 2019;",
"ref_id": "BIBREF7"
},
{
"start": 440,
"end": 465,
"text": "Pfeiffer et al., 2020a,b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adapter Architecture",
"sec_num": "2.1"
},
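A minimal PyTorch-style sketch of what one such set of adapter parameters \u03a6 can look like at a single layer, i.e., a two-layer bottleneck with a residual connection as discussed in the next paragraph. This is a conceptual illustration only, not the AdapterHub implementation; the class name BottleneckAdapter and the arguments hidden_size and reduction_factor are chosen here for illustration.

import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    # Conceptual sketch: the newly introduced parameters Phi at one transformer layer,
    # while the surrounding pre-trained weights Theta stay frozen.
    def __init__(self, hidden_size: int = 768, reduction_factor: int = 16):
        super().__init__()
        bottleneck_size = hidden_size // reduction_factor
        self.down = nn.Linear(hidden_size, bottleneck_size)  # down-projection
        self.non_linearity = nn.ReLU()
        self.up = nn.Linear(bottleneck_size, hidden_size)    # up-projection

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # The residual connection keeps the module close to the identity at initialization.
        return hidden_states + self.up(self.non_linearity(self.down(hidden_states)))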
{
"text": "In NLP, adapters have been mainly used within deep transformer-based architectures (Vaswani et al., 2017) . At each transformer layer l, a set of adapter parameters \u03a6 l is introduced. The placement and architecture of adapter parameters \u03a6 within a pre-trained model is non-trivial and may impact their efficacy: Houlsby et al. (2019) experiment with different adapter architectures, empirically validating that a two-layer feed-forward neural network with a bottleneck works well. While this down-and up-projection has largely been agreed upon, the actual placement of adapters within each transformer block, as well as the introduction of new LayerNorms 2 (Ba et al., 2016) varies in the literature (Houlsby et al., 2019; Bapna and Firat, 2019; Stickland and Murray, 2019; Pfeiffer et al., 2020a) . In order to support standard adapter architectures from the literature, as well as to enable easy extensibility, AdapterHub provides a configuration file where the architecture settings can be defined dynamically. We illustrate the different configuration possibilities in Figure 3 , and describe them in more detail in \u00a73.",
"cite_spans": [
{
"start": 83,
"end": 105,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF42"
},
{
"start": 700,
"end": 722,
"text": "(Houlsby et al., 2019;",
"ref_id": "BIBREF18"
},
{
"start": 723,
"end": 745,
"text": "Bapna and Firat, 2019;",
"ref_id": "BIBREF7"
},
{
"start": 746,
"end": 773,
"text": "Stickland and Murray, 2019;",
"ref_id": "BIBREF40"
},
{
"start": 774,
"end": 797,
"text": "Pfeiffer et al., 2020a)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 1073,
"end": 1081,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Adapter Architecture",
"sec_num": "2.1"
},
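The ready-made configurations mentioned above are selected by name (e.g., "pfeiffer", "houlsby"), and the dashed components of Figure 3 correspond to switches such as where the adapter is placed within the block, whether new LayerNorms and residual connections are added, and the bottleneck reduction factor. The dictionary below is an illustrative sketch of such a dynamic configuration; the key names are assumptions for illustration and may differ from the exact fields used by the library.

# Illustrative sketch of a dynamic adapter configuration (key names are assumptions).
custom_adapter_config = {
    "mh_adapter": False,          # adapter after the multi-head attention sub-layer?
    "output_adapter": True,       # adapter after the feed-forward sub-layer?
    "ln_before": False,           # new LayerNorm before the adapter?
    "ln_after": False,            # new LayerNorm after the adapter?
    "residual_before_ln": True,   # where the residual connection branches off
    "non_linearity": "relu",      # activation inside the bottleneck
    "reduction_factor": 16,       # hidden size divided by bottleneck size
}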
{
"text": "Adapters provide numerous benefits over fully finetuning a model such as scalability, modularity, and composition. We now provide a few use-cases for adapters to illustrate their usefulness in practice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why Adapters?",
"sec_num": "2.2"
},
{
"text": "Task-specific Layer-wise Representation Learning. Prior to the introduction of adapters, in order to achieve SotA performance on downstream tasks, the entire pre-trained transformer model needs to be fine-tuned . Adapters have been shown to work on-par with full fine-tuning, by adapting the representations at every layer. We present the results of fully fine-tuning the model compared to two different adapter architectures on the GLUE benchmark (Wang et al., 2018) in Table 1 . The adapters of Houlsby et al. (2019, Figure 3c ) and Pfeiffer et al. (2020a, Figure 3b ) comprise two and one down-and up-projection RTE (Wang et al., 2018) 66.2 70.8 69.8 MRPC (Dolan and Brockett, 2005) 90.5 89.7 91.5 STS-B (Cer et al., 2017) 88.8 89.0 89.2 CoLA (Warstadt et al., 2019) 59.5 58.9 59.1 SST-2 (Socher et al., 2013) 92.6 92.2 92.8 QNLI (Rajpurkar et al., 2016) 91.3 91.3 91.2 MNLI (Williams et al., 2018) 84.1 84.1 84.1 QQP (Iyer et al., 2017) 91.4 90.5 90.8 Table 1 : Mean development scores over 3 runs on GLUE (Wang et al., 2018) leveraging the BERT-Base pre-trained weights. We present the results with full fine-tuning (Full) and with the adapter architectures of Pfeiffer et al. (2020a, Pfeif., Figure 3b ) and Houlsby et al. (2019, Houl., Figure 3c ) both with bottleneck size 48. We show F1 for MRPC, Spearman rank correlation for STS-B, and accuracy for the rest. RTE is a combination of datasets (Dagan et al., 2005; Bar-Haim et al., 2006; Giampiccolo et al., 2007) .",
"cite_spans": [
{
"start": 448,
"end": 467,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF43"
},
{
"start": 619,
"end": 638,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF43"
},
{
"start": 707,
"end": 725,
"text": "(Cer et al., 2017)",
"ref_id": "BIBREF10"
},
{
"start": 746,
"end": 769,
"text": "(Warstadt et al., 2019)",
"ref_id": "BIBREF45"
},
{
"start": 791,
"end": 812,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF39"
},
{
"start": 833,
"end": 857,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF31"
},
{
"start": 878,
"end": 901,
"text": "(Williams et al., 2018)",
"ref_id": "BIBREF46"
},
{
"start": 921,
"end": 940,
"text": "(Iyer et al., 2017)",
"ref_id": null
},
{
"start": 1010,
"end": 1029,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF43"
},
{
"start": 1403,
"end": 1423,
"text": "(Dagan et al., 2005;",
"ref_id": "BIBREF12"
},
{
"start": 1424,
"end": 1446,
"text": "Bar-Haim et al., 2006;",
"ref_id": "BIBREF8"
},
{
"start": 1447,
"end": 1472,
"text": "Giampiccolo et al., 2007)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 471,
"end": 478,
"text": "Table 1",
"ref_id": null
},
{
"start": 519,
"end": 528,
"text": "Figure 3c",
"ref_id": "FIGREF3"
},
{
"start": 559,
"end": 568,
"text": "Figure 3b",
"ref_id": "FIGREF3"
},
{
"start": 956,
"end": 963,
"text": "Table 1",
"ref_id": null
},
{
"start": 1198,
"end": 1207,
"text": "Figure 3b",
"ref_id": "FIGREF3"
},
{
"start": 1243,
"end": 1252,
"text": "Figure 3c",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Why Adapters?",
"sec_num": "2.2"
},
{
"text": "within each transformer layer, respectively. The former adapter thus has more capacity at the cost of training and inference speed. We find that for all settings, there is no large difference in terms of performance between the model architectures, verifying that training adapters is a suitable and lightweight alternative to full fine-tuning in order to achieve SotA performance on downstream tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why Adapters?",
"sec_num": "2.2"
},
{
"text": "Small, Scalable, Shareable. Transformer-based models are very deep neural networks with millions or billions of weights and large storage requirements, e.g., around 2.2Gb of compressed storage space is needed for XLM-R Large (Conneau et al., 2020) . Fully fine-tuning these models for each task separately requires storing a copy of the fine-tuned model for each task. This impedes both iterating and parallelizing training, particularly in storage-restricted environments. Adapters mitigate this problem. Depending on the model size and the adapter bottleneck size, a single task requires as little as 0.9Mb storage space. We present the storage requirements in Table 2 . This highlights that > 99% of the parameters required for each target task are fixed during training and can be shared across all models for inference. For instance, for the popular Bert-Base model with a size of 440Mb, storing 2 fully fine-tuned models amounts to the same storage space required by 125 models with adapters, when using a bottleneck size of 48 and adapters of Pfeiffer et al. (2020a) . Moreover, when performing inference on a mobile device, adapters can be leveraged to save a significant amount of storage space, while supporting a large number of target tasks. Additionally, due to the small size of the adapter modules-which in many cases do not exceed the file size of an image-new tasks can be added on-the-fly. Overall, these factors make adapters a much more computationally-and ecologically (Strubell et al., 2019) -viable option compared to updating entire models (R\u00fcckl\u00e9 et al., 2020) . Easy access to fine-tuned models may also improve reproducibility as researchers will be able to easily rerun and evaluate trained models of previous work.",
"cite_spans": [
{
"start": 225,
"end": 247,
"text": "(Conneau et al., 2020)",
"ref_id": "BIBREF11"
},
{
"start": 1050,
"end": 1073,
"text": "Pfeiffer et al. (2020a)",
"ref_id": "BIBREF26"
},
{
"start": 1490,
"end": 1513,
"text": "(Strubell et al., 2019)",
"ref_id": "BIBREF41"
},
{
"start": 1564,
"end": 1585,
"text": "(R\u00fcckl\u00e9 et al., 2020)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 663,
"end": 670,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Why Adapters?",
"sec_num": "2.2"
},
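A rough back-of-the-envelope calculation, assuming 32-bit floats, a 12-layer BERT-Base model with hidden size 768, and the single-adapter-per-layer architecture of Pfeiffer et al. (2020a) with bottleneck size 48 (ignoring LayerNorm parameters and the prediction head), illustrates the storage comparison made above.

# Rough storage estimate for adapters on BERT-Base (12 layers, hidden size 768),
# one bottleneck adapter per layer, bottleneck size 48, float32 weights.
hidden_size, bottleneck, layers = 768, 48, 12
params_per_layer = (hidden_size * bottleneck + bottleneck) + (bottleneck * hidden_size + hidden_size)
adapter_params = params_per_layer * layers        # ~0.9M parameters
adapter_mb = adapter_params * 4 / 1e6             # ~3.6 Mb per task
full_model_mb = 440                               # BERT-Base size cited above
print(2 * full_model_mb)                          # 880 Mb: two fully fine-tuned copies
print(full_model_mb + 125 * adapter_mb)           # ~887 Mb: one shared model plus 125 adapters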
{
"text": "Modularity of Representations. Adapters learn to encode information of a task within designated parameters. Due to the encapsulated placement of adapters, wherein the surrounding parameters are fixed, at each layer an adapter is forced to learn an output representation compatible with the subsequent layer of the transformer model. This setting allows for modularity of components such that adapters can be stacked on top of each other, or replaced dynamically. In a recent example, Pfeiffer et al. (2020b) successfully combine adapters that have been independently trained for specific tasks and languages. This demonstrates that adapters are modular and that output representations of different adapters are compatible. As NLP tasks become more complex and require knowledge that is not directly accessible in a single monolithic pre-trained model , adapters will provide NLP researchers and practitioners with many more sources of relevant information that can be easily combined in an efficient and modular way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why Adapters?",
"sec_num": "2.2"
},
{
"text": "Sharing information across tasks has a longstanding history in machine learning (Ruder, 2017) . Multi-task learning (MTL), which shares a set of parameters between tasks, has arguably received the most attention. However, MTL suffers from problems such as catastrophic forgetting where information learned during earlier stages of training is \"overwritten\" (de Masson d'Autume et al., 2019), catastrophic interference where the performance of a set of tasks deteriorates when adding new tasks (Hashimoto et al., 2017) , and intricate task weighting for tasks with different distributions (Sanh et al., 2019) . The encapsulation of adapters forces them to learn output representations that are compatible across tasks. When training adapters on different downstream tasks, they store the respective information in their designated parameters. Multiple adapters can then be combined, e.g., with attention (Pfeiffer et al., 2020a) . Because the respective adapters are trained separately, the necessity of sampling heuristics due to skewed data set sizes no longer arises. By separating knowledge extraction and composition, adapters mitigate the two most common pitfalls of multi-task learning, catastrophic forgetting and catastrophic interference.",
"cite_spans": [
{
"start": 80,
"end": 93,
"text": "(Ruder, 2017)",
"ref_id": "BIBREF34"
},
{
"start": 493,
"end": 517,
"text": "(Hashimoto et al., 2017)",
"ref_id": "BIBREF17"
},
{
"start": 588,
"end": 607,
"text": "(Sanh et al., 2019)",
"ref_id": "BIBREF36"
},
{
"start": 903,
"end": 927,
"text": "(Pfeiffer et al., 2020a)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Non-Interfering Composition of Information.",
"sec_num": null
},
{
"text": "Overcoming these problems together with the availability of readily available trained task-specific adapters enables researchers and practitioners to leverage information from specific tasks, domains, or languages that is often more relevant for a specific application-rather than more general pretrained counterparts. Recent work (Howard and Ruder, 2018; Phang et al., 2018; Pruksachatkun et al., 2020; Gururangan et al., 2020) has shown the benefits of such information, which was previously only available by fully fine-tuning a model on the data of interest prior to task-specific fine-tuning.",
"cite_spans": [
{
"start": 331,
"end": 355,
"text": "(Howard and Ruder, 2018;",
"ref_id": "BIBREF19"
},
{
"start": 356,
"end": 375,
"text": "Phang et al., 2018;",
"ref_id": "BIBREF28"
},
{
"start": 376,
"end": 403,
"text": "Pruksachatkun et al., 2020;",
"ref_id": "BIBREF29"
},
{
"start": 404,
"end": 428,
"text": "Gururangan et al., 2020)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Non-Interfering Composition of Information.",
"sec_num": null
},
{
"text": "AdapterHub consists of two core components: 1) A library built on top of HuggingFace transformers, and 2) a website that dynamically provides analysis and filtering of pre-trained adapters. AdapterHub provides tools for the entire life-cycle of adapters, illustrated in Figure 1 and discussed in what follows: x introducing new adapter weights \u03a6 into pre-trained transformer weights \u0398; y training adapter weights \u03a6 on a downstream task (while keeping \u0398 frozen); z automatic extraction of the trained adapter weights \u03a6 and opensourcing the adapters; { automatic visualization of the adapters with configuration filters; | on-thefly downloading/caching the pre-trained adapter weights \u03a6 and stitching the adapter into the pre- trained transformer model \u0398; } performing inference with the trained adapter transformer model.",
"cite_spans": [],
"ref_spans": [
{
"start": 270,
"end": 278,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "AdapterHub",
"sec_num": "3"
},
{
"text": "x Adapters in Transformer Layers",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AdapterHub",
"sec_num": "3"
},
{
"text": "We minimize the required changes to existing HuggingFace training scripts, resulting in only two additional lines of code. In Figure 2 we present the required code to add adapter weights (line 3) and freeze all the transformer weights \u0398 (line 4). In this example, the model is prepared to train a task adapter on the binary version of the Stanford Sentiment Treebank (SST; Socher et al., 2013) using the adapter architecture of Pfeiffer et al. (2020a) . Similarly, language adapters can be added by setting the type parameter to AdapterType.text language, and other adapter architectures can be chosen accordingly. While we provide ready-made configuration files for well-known architectures in the current literature, adapters are dynamically configurable, which makes it possible to define a multitude of architectures. We illustrate the configurable components as dashed lines and objects in Figure 3 . The configurable components are placements of new weights, residual connections as well as placements of Lay-erNorm layers (Ba et al., 2016) .",
"cite_spans": [
{
"start": 428,
"end": 451,
"text": "Pfeiffer et al. (2020a)",
"ref_id": "BIBREF26"
},
{
"start": 1029,
"end": 1046,
"text": "(Ba et al., 2016)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 126,
"end": 134,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 895,
"end": 903,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "AdapterHub",
"sec_num": "3"
},
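For reference, the code of Figure 2, as recoverable from this parse, is reproduced below as a runnable sketch; only the add_adapter and train_adapter calls (lines 3 and 4 of the figure) are additions to a standard HuggingFace fine-tuning script, and the snippet requires the AdapterHub distribution of the transformers package.

# Figure 2: preparing a task adapter on SST-2 with the Pfeiffer architecture.
from transformers import AutoModelForSequenceClassification, AdapterType

model = AutoModelForSequenceClassification.from_pretrained("roberta-base")
model.add_adapter("sst-2", AdapterType.text_task, config="pfeiffer")  # line 3: add adapter weights Phi
model.train_adapter(["sst-2"])                                        # line 4: freeze Theta, train only Phi
# For a language adapter, AdapterType.text_language would be used instead.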
{
"text": "The code changes within the HuggingFace transformers framework are realized through MixIns, which are inherited by the respective transformer classes. This minimizes the amount of code changes of our proposed extensions and en- capsulates adapters as designated classes. It further increases readability as adapters are clearly separated from the main transformers code base, which makes it easy to keep both repositories in sync as well as to extend AdapterHub.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AdapterHub",
"sec_num": "3"
},
{
"text": "Adapters are trained in the same manner as full finetuning of the model. The information is passed through the different layers of the transformer where additionally to the pre-trained weights at every layer the representations are additionally passed through the adapter parameters. However, in contrast to full fine-tuning, the pre-trained weights \u0398 are fixed and only the adapter weights \u03a6 and the prediction head are trained. Because \u0398 is fixed, the adapter weights \u03a6 are encapsuled within the transformer weights, forcing them to learn compatible representations across tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "y Training Adapters",
"sec_num": null
},
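Because only \u03a6 and the prediction head are trained, a quick sanity check is to compare trainable against total parameters after calling train_adapter. This is a generic PyTorch sketch and assumes the model object from the training snippet above.

# Generic PyTorch check: after model.train_adapter(["sst-2"]), only the adapter
# weights Phi and the prediction head should still require gradients.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable:,} / total: {total:,} ({100 * trainable / total:.2f}%)")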
{
"text": "z Extracting and Open-Sourcing Adapters When training adapters instead of full fine-tuning, it is no longer necessary to store checkpoints of the entire model. Instead, only the adapter weights \u03a6 , as well as the prediction head need to be stored, as the base model's weights \u0398 remain the same. This is integrated automatically as soon as adapters are trained, which significantly reduces the required storage space during training and enables storing a large number of checkpoints simultaneously.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "y Training Adapters",
"sec_num": null
},
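Extracting the trained adapter for sharing then comes down to a single call; the target directory below is only an example path, and the stored folder contains the adapter weights \u03a6 and their configuration rather than the full model.

# Stores only the adapter weights Phi (and their configuration), not the frozen model Theta.
model.save_adapter("adapters/text-task/sst-2/", "sst-2")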
{
"text": "When adapter training has completed, the parameter file together with the corresponding adapter configuration file are zipped and uploaded to a public server. The user then enters the metadata (e.g., URL to weights, user info, description of training procedure, data set used, adapter architecture, GitHub handle, Twitter handle) into a designated YAML file and issues a pull request to the Adapter-Hub GitHub repository. When all automatic checks pass, the AdapterHub.ml website is automatically regenerated with the newly available adapter, which makes it possible for users to immediately find All new weights \u03a6 are illustrated within the pink boxes, everything outside belongs to the pre-trained weights \u0398. In addition, we provide pre-set configuration files for architectures in the literature. The resulting configurations for the architecture proposed by Pfeiffer et al. (2020a) and Houlsby et al. (2019) are illustrated in (b) and (c) respectively. We also provide a configuration file for the architecture proposed by Bapna and Firat (2019) , not shown here.",
"cite_spans": [
{
"start": 862,
"end": 885,
"text": "Pfeiffer et al. (2020a)",
"ref_id": "BIBREF26"
},
{
"start": 890,
"end": 911,
"text": "Houlsby et al. (2019)",
"ref_id": "BIBREF18"
},
{
"start": 1027,
"end": 1049,
"text": "Bapna and Firat (2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "y Training Adapters",
"sec_num": null
},
{
"text": "and use these new weights described by the metadata. We hope that the ease of sharing pre-trained adapters will further facilitate and speed up new developments in transfer learning in NLP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "y Training Adapters",
"sec_num": null
},
{
"text": "1 from transformers import AutoModelForSequenceClassification, AdapterType 2 model = AutoModelForSequenceClassification.from_pretrained(\"roberta-base\") 3 model.load_adapter(\"sst-2\", config=\"pfeiffer\") Figure 4 : | After the correct adapter has been identified by the user on the explore page of AdapterHub.ml, they can load and stitch the pre-trained adapter weights \u03a6 into the transformer \u0398 (line 3).",
"cite_spans": [],
"ref_spans": [
{
"start": 201,
"end": 209,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "y Training Adapters",
"sec_num": null
},
{
"text": "The website AdapterHub.ml provides a dynamic overview of the currently available pre-trained adapters. Due to the large number of tasks in many different languages as well as different transformer models, we provide an intuitively understandable hierarchical structure, as well as search options. This makes it easy for users to find adapters that are suitable for their use-case. Namely, Adapter-Hub's explore page is structured into three hierarchical levels. At the first level, adapters can be viewed by task or language. The second level allows for a more fine-grained distinction separating adapters into data sets of higher-level NLP tasks following a categorization similar to paperswithcode.com. For languages, the second level distinguishes the adapters by the language they were trained on. The third level separates adapters into individual datasets or domains such as SST for sentiment analysis or Wikipedia for Swahili.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "{ Finding Pre-Trained Adapters",
"sec_num": null
},
{
"text": "When a specific dataset has been selected, the user can see the available pre-trained adapters for this setting. Adapters depend on the transformer model they were trained on and are otherwise not compatible. 3 The user selects the model architecture and certain hyper-parameters and is shown the compatible adapters. When selecting one of the adapters, the user is provided with additional information about the adapter, which is available in the metadata (see z again for more information).",
"cite_spans": [
{
"start": 209,
"end": 210,
"text": "3",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "{ Finding Pre-Trained Adapters",
"sec_num": null
},
{
"text": "Pre-trained adapters can be stitched into the large transformer model as easily as adding randomly initialized weights; this requires a single line of code, see Figure 4 , line 3. When selecting an adapter on the website (see { again) the user is provided with sample code, which corresponds to the configuration necessary to include the specific weights. 4",
"cite_spans": [],
"ref_spans": [
{
"start": 161,
"end": 169,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "| Stitching-In Pre-Trained Adapters",
"sec_num": null
},
{
"text": "Inference with a pre-trained model that relies on adapters is in line with the standard inference practice based on full fine-tuning. Similar to training adapters, during inference the active adapter name is passed into the model together with the text tokens. At every transformer layer the information is passed through the transformer layers and the corresponding adapter parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "} Inference with Adapters",
"sec_num": null
},
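A minimal inference sketch follows, assuming the "sst-2" adapter and a matching prediction head have already been loaded as in Figure 4. The adapter_names keyword used here to select the active adapter at prediction time is an assumption about the library's forward signature and may differ across versions.

import torch
from transformers import AutoTokenizer

# Assumes `model` already holds the loaded "sst-2" adapter and prediction head (see Figure 4).
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
inputs = tokenizer("AdapterHub makes sharing fine-tuned models easy.", return_tensors="pt")
with torch.no_grad():
    # Passing the active adapter name alongside the tokens; the keyword name is an assumption.
    outputs = model(**inputs, adapter_names=["sst-2"])
logits = outputs.logits if hasattr(outputs, "logits") else outputs[0]
print(logits)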
{
"text": "The adapters can be used for inference in the designated task they were trained on. To this end, we provide an option to upload the prediction heads together with the adapter weights. In addition, they can be used for further research such as transferring the adapter to a new task, stacking multiple adapters, fusing the information from diverse adapters, or enriching AdapterHub with adapters for other modalities, among many other possible modes of usage and future directions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "} Inference with Adapters",
"sec_num": null
},
{
"text": "We have introduced AdapterHub, a novel easy-touse framework that enables simple and effective transfer learning via training and community sharing of adapters. Adapters are small neural modules that can be stitched into large pre-trained transformer models to facilitate, simplify, and speed up transfer learning across a range of languages and tasks. AdapterHub is built on top of the commonly used HuggingFace transformers, and it requires only adding as little as two lines of code to existing training scripts. Using adapters in Adapter-Hub has numerous benefits such as improved reproducibility, much better efficiency compared to full fine-tuning, easy extensibility to new models and new tasks, and easy access to trained models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "4"
},
{
"text": "With AdapterHub, we hope to provide a suitable and stable framework for the community to train, search, and use adapters. We plan to continuously improve the framework, extend the composition and modularity possibilities, and support other transformer models, even the ones yet to come. tive (Hesse, Germany) within the emergenCITY center. Andreas R\u00fcckl\u00e9 is supported by the German Federal Ministry of Education and Research and the Hessen State Ministry for Higher Education, Research and the Arts within their joint support of the National Research Center for Applied Cybersecurity ATHENE, and by the German Research Foundation under grant EC 503/1-1 and GU 798/21-1. Aishwarya Kamath is supported in part by a DeepMind PhD Fellowship. The work of Ivan Vuli\u0107 is supported by the ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no 648909). Kyunghyun Cho is supported by Samsung Advanced Institute of Technology (Next Generation Deep Learning: from pattern recognition to AI) and Samsung Research (Improving Deep Learning using Latent Structure).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "4"
},
{
"text": "We would like to thank Isabel Pfeiffer for the illustrations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "4"
},
{
"text": "https://github.com/huggingface/transformers",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Layer normalization learns to normalize the inputs across the features. This is usually done by introducing a new set of features for mean and variance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
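For completeness, the layer normalization of Ba et al. (2016) referenced in this footnote can be written as follows, where the mean and variance are computed over the d features of the input and \gamma, \beta are the newly introduced learnable parameters:

\mathrm{LN}(x) = \gamma \odot \frac{x - \mu}{\sqrt{\sigma^2 + \epsilon}} + \beta,
\qquad \mu = \frac{1}{d}\sum_{i=1}^{d} x_i,
\qquad \sigma^2 = \frac{1}{d}\sum_{i=1}^{d} (x_i - \mu)^2 .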
{
"text": "We plan to look into mapping adapters between different models as future work.4 When selecting an adapter based on a name, we allow for string matching as long as there is no ambiguity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Jonas Pfeiffer is supported by the LOEWE initia-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "AdapterType 2 model = AutoModelForSequenceClassification.from_pretrained",
"authors": [],
"year": null,
"venue": "from transformers import AutoModelForSequenceClassification",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "from transformers import AutoModelForSequenceClassification, AdapterType 2 model = AutoModelForSequenceClassification.from_pretrained(\"roberta-base\")",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "AdapterType.text_task, config=\"pfeiffer\") 4 model.train_adapter",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "model.add_adapter(\"sst-2\", AdapterType.text_task, config=\"pfeiffer\") 4 model.train_adapter([\"sst-2\"])",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "adapters/text-task/sst-2",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "model.save_adapter(\"adapters/text-task/sst-2/\", \"sst-2\")",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "# Push link to zip file to AdapterHub",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "# Push link to zip file to AdapterHub ...",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Learning to compose neural networks for question answering",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "References",
"suffix": ""
},
{
"first": "Marcus",
"middle": [],
"last": "Andreas",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Rohrbach",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Darrell",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2016,
"venue": "The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1545--1554",
"other_ids": {
"DOI": [
"10.18653/v1/n16-1181"
]
},
"num": null,
"urls": [],
"raw_text": "References Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016. Learning to compose neural net- works for question answering. In NAACL HLT 2016, The 2016 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, San Diego Califor- nia, USA, June 12-17, 2016, pages 1545-1554.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Simple, scalable adaptation for neural machine translation",
"authors": [
{
"first": "Ankur",
"middle": [],
"last": "Bapna",
"suffix": ""
},
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1538--1548",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1165"
]
},
"num": null,
"urls": [],
"raw_text": "Ankur Bapna and Orhan Firat. 2019. Simple, scal- able adaptation for neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 1538- 1548.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The second pascal recognising textual entailment challenge",
"authors": [
{
"first": "Roy",
"middle": [],
"last": "Bar-Haim",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Ferro",
"suffix": ""
},
{
"first": "Danilo",
"middle": [],
"last": "Giampiccolo",
"suffix": ""
},
{
"first": "Bernardo",
"middle": [],
"last": "Magnini",
"suffix": ""
},
{
"first": "Idan",
"middle": [],
"last": "Szpektor",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the PASCAL@ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roy Bar-Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, and Idan Szpektor. 2006. The second pascal recognising tex- tual entailment challenge. In Proceedings of the PASCAL@ACL 2006.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "I\u00f1igo",
"middle": [],
"last": "Lopez-Gazpio",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of SemEval-2017",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/S17-2001"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Mona Diab, Eneko Agirre, I\u00f1igo Lopez- Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of SemEval-2017.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Conference of the Association for Computational Linguistics, ACL 2020, Virtual Conference",
"volume": "",
"issue": "",
"pages": "8440--8451",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Conference of the Associ- ation for Computational Linguistics, ACL 2020, Vir- tual Conference, July 6-8, 2020, pages 8440-8451.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The PASCAL recognising textual entailment challenge",
"authors": [
{
"first": "Oren",
"middle": [],
"last": "Ido Dagan",
"suffix": ""
},
{
"first": "Bernardo",
"middle": [],
"last": "Glickman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Magnini",
"suffix": ""
}
],
"year": 2005,
"venue": "Machine Learning Challenges, Evaluating Predictive Uncertainty, Visual Object Classification and Recognizing Textual Entailment, First PASCAL Machine Learning Challenges Workshop, MLCW 2005",
"volume": "",
"issue": "",
"pages": "177--190",
"other_ids": {
"DOI": [
"10.1007/11736790_9"
]
},
"num": null,
"urls": [],
"raw_text": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL recognising textual entailment challenge. In Machine Learning Challenges, Eval- uating Predictive Uncertainty, Visual Object Classi- fication and Recognizing Textual Entailment, First PASCAL Machine Learning Challenges Workshop, MLCW 2005, Southampton, UK, April 11-13, 2005, Revised Selected Papers, pages 177-190.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/n19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Pa- pers), pages 4171-4186.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Automatically constructing a corpus of sentential paraphrases",
"authors": [
{
"first": "B",
"middle": [],
"last": "William",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dolan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Brockett",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Third International Workshop on Paraphrasing, IWP@IJCNLP 2005",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William B. Dolan and Chris Brockett. 2005. Automati- cally constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing, IWP@IJCNLP 2005, Jeju Island, Korea, October 2005, 2005.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The third pascal recognizing textual entailment challenge",
"authors": [
{
"first": "Danilo",
"middle": [],
"last": "Giampiccolo",
"suffix": ""
},
{
"first": "Bernardo",
"middle": [],
"last": "Magnini",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the PASCAL@ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third pascal recognizing textual entailment challenge. In Proceedings of the PASCAL@ACL 2007.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Don't stop pretraining: Adapt language models to domains and tasks",
"authors": [
{
"first": "Ana",
"middle": [],
"last": "Suchin Gururangan",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Marasovic",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Iz",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Downey",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "2020",
"issue": "",
"pages": "8342--8360",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suchin Gururangan, Ana Marasovic, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 8342-8360.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A joint many-task model: Growing a neural network for multiple NLP tasks",
"authors": [
{
"first": "Kazuma",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1923--1933",
"other_ids": {
"DOI": [
"10.18653/v1/d17-1206"
]
},
"num": null,
"urls": [],
"raw_text": "Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsu- ruoka, and Richard Socher. 2017. A joint many-task model: Growing a neural network for multiple NLP tasks. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9- 11, 2017, pages 1923-1933.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Parameter-efficient transfer learning for NLP",
"authors": [
{
"first": "Neil",
"middle": [],
"last": "Houlsby",
"suffix": ""
},
{
"first": "Andrei",
"middle": [],
"last": "Giurgiu",
"suffix": ""
},
{
"first": "Stanislaw",
"middle": [],
"last": "Jastrzkebski",
"suffix": ""
},
{
"first": "Bruna",
"middle": [],
"last": "Morrone",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "De Laroussilhe",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Gesmundo",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Attariyan",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gelly",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 36th International Conference on Machine Learning, ICML 2019",
"volume": "",
"issue": "",
"pages": "2790--2799",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzkeb- ski, Bruna Morrone, Quentin de Laroussilhe, An- drea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, pages 2790-2799.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Universal Language Model Fine-tuning for Text Classification",
"authors": [
{
"first": "Jeremy",
"middle": [],
"last": "Howard",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018",
"volume": "1",
"issue": "",
"pages": "328--339",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1031"
]
},
"num": null,
"urls": [],
"raw_text": "Jeremy Howard and Sebastian Ruder. 2018. Universal Language Model Fine-tuning for Text Classification. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 328-339.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "First quora dataset release: Question pairs",
"authors": [
{
"first": "Shankar",
"middle": [],
"last": "Iyer",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Dandekar",
"suffix": ""
},
{
"first": "Kornel",
"middle": [],
"last": "Csernai",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shankar Iyer, Nikhil Dandekar, and Kornel Csernai. First quora dataset release: Question pairs [online].",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "and Dario Amodei. 2020. Scaling Laws for Neural Language Models",
"authors": [
{
"first": "Jared",
"middle": [],
"last": "Kaplan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Mccandlish",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Henighan",
"suffix": ""
},
{
"first": "Tom",
"middle": [
"B"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Chess",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Gray",
"suffix": ""
},
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling Laws for Neural Language Models. arXiv preprint.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Specializing unsupervised pretraining models for word-level semantic similarity",
"authors": [
{
"first": "Anne",
"middle": [],
"last": "Lauscher",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Edoardo",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Ponti",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Korhonen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anne Lauscher, Ivan Vuli\u0107, Edoardo Maria Ponti, Anna Korhonen, and Goran Glava\u0161. 2019. Specializing unsupervised pretraining models for word-level se- mantic similarity. arXiv preprint.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Episodic memory in lifelong language learning",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Cyprien De Masson D'autume",
"suffix": ""
},
{
"first": "Lingpeng",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Dani",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Yogatama",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "13122--13131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cyprien de Masson d'Autume, Sebastian Ruder, Ling- peng Kong, and Dani Yogatama. 2019. Episodic memory in lifelong language learning. In Advances in Neural Information Processing Systems 32: An- nual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, pages 13122-13131.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "To tune or not to tune? adapting pretrained representations to diverse tasks",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 4th Workshop on Representation Learning for NLP, RepL4NLP@ACL 2019",
"volume": "",
"issue": "",
"pages": "7--14",
"other_ids": {
"DOI": [
"10.18653/v1/w19-4302"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew E. Peters, Sebastian Ruder, and Noah A. Smith. 2019. To tune or not to tune? adapting pre- trained representations to diverse tasks. In Proceed- ings of the 4th Workshop on Representation Learn- ing for NLP, RepL4NLP@ACL 2019, Florence, Italy, August 2, 2019, pages 7-14.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "AdapterFusion: Non-destructive task composition for transfer learning",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Pfeiffer",
"suffix": ""
},
{
"first": "Aishwarya",
"middle": [],
"last": "Kamath",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "R\u00fcckl\u00e9",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas Pfeiffer, Aishwarya Kamath, Andreas R\u00fcckl\u00e9, Kyunghyun Cho, and Iryna Gurevych. 2020a. AdapterFusion: Non-destructive task composition for transfer learning. arXiv preprint.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "MAD-X: An Adapter-based Framework for Multi-task Cross-lingual Transfer",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Pfeiffer",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
"volume": "2020",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas Pfeiffer, Ivan Vuli\u0107, Iryna Gurevych, and Se- bastian Ruder. 2020b. MAD-X: An Adapter-based Framework for Multi-task Cross-lingual Transfer. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Virtual Conference.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Sentence encoders on stilts: Supplementary training on intermediate labeled-data tasks",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Phang",
"suffix": ""
},
{
"first": "Thibault",
"middle": [],
"last": "F\u00e9vry",
"suffix": ""
},
{
"first": "Samuel R",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Phang, Thibault F\u00e9vry, and Samuel R Bowman. 2018. Sentence encoders on stilts: Supplementary training on intermediate labeled-data tasks. arXiv preprint.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Intermediate-task transfer learning with pretrained language models: When and why does it work?",
"authors": [
{
"first": "Yada",
"middle": [],
"last": "Pruksachatkun",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Phang",
"suffix": ""
},
{
"first": "Haokun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Phu",
"middle": [
"Mon"
],
"last": "Htut",
"suffix": ""
},
{
"first": "Xiaoyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"Yuanzhe"
],
"last": "Pang",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Vania",
"suffix": ""
},
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020",
"volume": "",
"issue": "",
"pages": "5231--5247",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yada Pruksachatkun, Jason Phang, Haokun Liu, Phu Mon Htut, Xiaoyi Zhang, Richard Yuanzhe Pang, Clara Vania, Katharina Kann, and Samuel R. Bowman. 2020. Intermediate-task transfer learning with pretrained language models: When and why does it work? In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, ACL 2020, Online, July 5-10, 2020, pages 5231-5247.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the Lim- its of Transfer Learning with a Unified Text-to-Text Transformer. arXiv preprint.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "SQuAD: 100,000+ Questions for Machine Comprehension of Text",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Lopyrev",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2383--2392",
"other_ids": {
"DOI": [
"10.18653/v1/d16-1264"
]
},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ Questions for Machine Comprehension of Text. In Proceed- ings of the 2016 Conference on Empirical Meth- ods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 2383-2392.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Learning multiple visual domains with residual adapters",
"authors": [
{
"first": "Sylvestre-Alvise",
"middle": [],
"last": "Rebuffi",
"suffix": ""
},
{
"first": "Hakan",
"middle": [],
"last": "Bilen",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Vedaldi",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "506--516",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sylvestre-Alvise Rebuffi, Hakan Bilen, and Andrea Vedaldi. 2017. Learning multiple visual domains with residual adapters. In Advances in Neural Infor- mation Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4- 9 December 2017, Long Beach, CA, USA, pages 506-516.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Nils Reimers, and Iryna Gurevych. 2020. AdapterDrop: On the Efficiency of Adapters in Transformers",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "R\u00fcckl\u00e9",
"suffix": ""
},
{
"first": "Gregor",
"middle": [],
"last": "Geigle",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Glockner",
"suffix": ""
},
{
"first": "Tilman",
"middle": [],
"last": "Beck",
"suffix": ""
},
{
"first": "Jonas",
"middle": [],
"last": "Pfeiffer",
"suffix": ""
},
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas R\u00fcckl\u00e9, Gregor Geigle, Max Glockner, Tilman Beck, Jonas Pfeiffer, Nils Reimers, and Iryna Gurevych. 2020. AdapterDrop: On the Efficiency of Adapters in Transformers. arXiv preprint.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "An Overview of Multi-Task Learning in Deep Neural Networks",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Ruder. 2017. An Overview of Multi-Task Learning in Deep Neural Networks. arXiv preprint.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Transfer learning in natural language processing",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Matthew",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Ruder, Matthew E Peters, Swabha Swayamdipta, and Thomas Wolf. 2019. Trans- fer learning in natural language processing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Tutorials.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "A hierarchical multi-task approach for learning embeddings from semantic tasks",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2019,
"venue": "The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence",
"volume": "2019",
"issue": "",
"pages": "6949--6956",
"other_ids": {
"DOI": [
"10.1609/aaai.v33i01.33016949"
]
},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Thomas Wolf, and Sebastian Ruder. 2019. A hierarchical multi-task approach for learning em- beddings from semantic tasks. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 -February 1, 2019, pages 6949-6956.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Outrageously large neural networks: The sparsely-gated mixture-of-experts layer",
"authors": [
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Azalia",
"middle": [],
"last": "Mirhoseini",
"suffix": ""
},
{
"first": "Krzysztof",
"middle": [],
"last": "Maziarz",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc V. Le, Geoffrey E. Hinton, and Jeff Dean. 2017. Outrageously large neural net- works: The sparsely-gated mixture-of-experts layer.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "5th International Conference on Learning Representations",
"authors": [],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "In 5th International Conference on Learning Repre- sentations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Recursive deep models for semantic compositionality over a sentiment treebank",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Perelygin",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013",
"volume": "",
"issue": "",
"pages": "1631--1642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep mod- els for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Process- ing, EMNLP 2013, 18-21 October 2013, Grand Hy- att Seattle, Seattle, Washington, USA, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1631-1642.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "BERT and pals: Projected attention layers for efficient adaptation in multi-task learning",
"authors": [
{
"first": "Asa",
"middle": [
"Cooper"
],
"last": "Stickland",
"suffix": ""
},
{
"first": "Iain",
"middle": [],
"last": "Murray",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 36th International Conference on Machine Learning, ICML 2019",
"volume": "",
"issue": "",
"pages": "5986--5995",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Asa Cooper Stickland and Iain Murray. 2019. BERT and pals: Projected attention layers for efficient adaptation in multi-task learning. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, pages 5986-5995.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Energy and policy considerations for deep learning in NLP",
"authors": [
{
"first": "Emma",
"middle": [],
"last": "Strubell",
"suffix": ""
},
{
"first": "Ananya",
"middle": [],
"last": "Ganesh",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019",
"volume": "1",
"issue": "",
"pages": "3645--3650",
"other_ids": {
"DOI": [
"10.18653/v1/p19-1355"
]
},
"num": null,
"urls": [],
"raw_text": "Emma Strubell, Ananya Ganesh, and Andrew McCal- lum. 2019. Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-Au- gust 2, 2019, Volume 1: Long Papers, pages 3645- 3650.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Attention Is All You Need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. In Advances in Neural Information Pro- cessing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 Decem- ber 2017, Long Beach, CA, USA, pages 5998-6008.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Workshop: Analyzing and Interpreting Neural Networks for NLP, Black-boxNLP@EMNLP 2018",
"volume": "",
"issue": "",
"pages": "353--355",
"other_ids": {
"DOI": [
"10.18653/v1/w18-5446"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Fe- lix Hill, Omer Levy, and Samuel R. Bowman. 2018. GLUE: A multi-task benchmark and anal- ysis platform for natural language understand- ing. In Proceedings of the Workshop: Analyzing and Interpreting Neural Networks for NLP, Black- boxNLP@EMNLP 2018, Brussels, Belgium, Novem- ber 1, 2018, pages 353-355.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Neural network acceptability judgments",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Warstadt",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "625--641",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Warstadt, Amanpreet Singh, and Samuel R. Bow- man. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625-641.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "A broad-coverage challenge corpus for sentence understanding through inference",
"authors": [
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018",
"volume": "1",
"issue": "",
"pages": "1112--1122",
"other_ids": {
"DOI": [
"10.18653/v1/n18-1101"
]
},
"num": null,
"urls": [],
"raw_text": "Adina Williams, Nikita Nangia, and Samuel R. Bow- man. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 1112-1122.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Anthony Moi an-dArt Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, and Jamie Brew",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
"volume": "2020",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi an- dArt Pierric Cistac, Tim Rault, R\u00e9mi Louf, Mor- gan Funtowicz, and Jamie Brew. 2020. Hugging- Face's Transformers: State-of-the-art Natural Lan- guage Processing. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing, EMNLP 2020, Virtual Conference, 2020 Proceedings of EMNLP: Systems Demonstrations.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "The AdapterHub Process graph. Adapters \u03a6 are introduced into a pre-trained transformer \u0398 (step x) and are trained (y). They can then be extracted and open-sourced (z) and visualized ({). Pre-trained adapters are downloaded on-the-fly (|) and stitched into a model that is used for inference (}).",
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"text": "x Adding new adapter weights \u03a6 to pre-trained RoBERTa-Base weights \u0398 (line 3), and freezing \u0398 (line 4). z Extracting and storing the trained adapter weights \u03a6 (line 7).",
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"text": "Dynamic customization possibilities where dashed lines in (a) show the current configuration options. These options include the placements of new weights \u03a6 (including down and up projections as well as new LayerNorms), residual connections, bottleneck sizes as well as activation functions.",
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"html": null,
"num": null,
"content": "<table/>",
"text": "Number of additional parameters and compressed storage space of the adapter of Pfeiffer et al. (2020a) in (Ro)BERT(a)-Base and Large transformer architectures. The adapter of Houlsby et al.(2019)requires roughly twice as much space. CRate refers to the adapter's compression rate: e.g., a. rate of 64 means that the adapter's bottleneck layer is 64 times smaller than the underlying model's hidden layer size.",
"type_str": "table"
}
}
}
}