{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:59:38.155557Z"
},
"title": "Revisiting Pretraining with Adapters",
"authors": [
{
"first": "Seungwon",
"middle": [],
"last": "Kim",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Georgia Institute of Technology",
"location": {}
},
"email": ""
},
{
"first": "Alex",
"middle": [],
"last": "Shum",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Georgia Institute of Technology",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Nathan",
"middle": [],
"last": "Susanj",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Georgia Institute of Technology",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Jonathan",
"middle": [],
"last": "Hilgart",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Georgia Institute of Technology",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Pretrained language models have served as the backbone for many state-of-the-art NLP results. These models are large and expensive to train. Recent work suggests that continued pretraining on task-specific data is worth the effort as pretraining leads to improved performance on downstream tasks. We explore alternatives to full-scale task-specific pretraining of language models through the use of adapter modules, a parameter-efficient approach to transfer learning. We find that adapter-based pretraining is able to achieve comparable results to task-specific pretraining while using a fraction of the overall trainable parameters. We further explore direct use of adapters without pretraining and find that the direct finetuning performs mostly on par with pretrained adapter models, contradicting previously proposed benefits of continual pretraining in full pretraining fine-tuning strategies. Lastly, we perform an ablation study on task-adaptive pretraining to investigate how different hyperparameter settings can change the effectiveness of the pretraining.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Pretrained language models have served as the backbone for many state-of-the-art NLP results. These models are large and expensive to train. Recent work suggests that continued pretraining on task-specific data is worth the effort as pretraining leads to improved performance on downstream tasks. We explore alternatives to full-scale task-specific pretraining of language models through the use of adapter modules, a parameter-efficient approach to transfer learning. We find that adapter-based pretraining is able to achieve comparable results to task-specific pretraining while using a fraction of the overall trainable parameters. We further explore direct use of adapters without pretraining and find that the direct finetuning performs mostly on par with pretrained adapter models, contradicting previously proposed benefits of continual pretraining in full pretraining fine-tuning strategies. Lastly, we perform an ablation study on task-adaptive pretraining to investigate how different hyperparameter settings can change the effectiveness of the pretraining.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Pretrained Language Models (PLM) are predominant in tackling current Natural Language Processing (NLP) tasks. Most PLMs based on the Transformer architecture (Vaswani et al., 2017) are first trained on massive text corpora with the selfsupervised objective to learn word representations (Devlin et al., 2019; Liu et al., 2019) , and then are fine-tuned for a specific target task. The pretraining and fine-tuning of PLMs achieves state-ofthe-art (SOTA) performance in many NLP tasks. Inspired by the benefits of pretraining, there have been studies demonstrate the effects of continued pretraining on the domain of a target task or the target task dataset (Mitra et al., 2020; Han and Eisenstein, 2019; Gururangan et al., 2020) . Gururangan et al., 2020 adapt PLMs on the target task by further pretraining RoBERTa (Liu et al., 2019) on the target text corpus before it is fine-tuned for the corresponding task and showed that this task adaptation consistently improves the performance for text classification tasks.",
"cite_spans": [
{
"start": 158,
"end": 180,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF20"
},
{
"start": 287,
"end": 308,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF1"
},
{
"start": 309,
"end": 326,
"text": "Liu et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 656,
"end": 676,
"text": "(Mitra et al., 2020;",
"ref_id": null
},
{
"start": 677,
"end": 702,
"text": "Han and Eisenstein, 2019;",
"ref_id": "BIBREF4"
},
{
"start": 703,
"end": 727,
"text": "Gururangan et al., 2020)",
"ref_id": "BIBREF3"
},
{
"start": 730,
"end": 753,
"text": "Gururangan et al., 2020",
"ref_id": "BIBREF3"
},
{
"start": 815,
"end": 833,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, this full process of pretraining and then fine-tuning can be parameter inefficient for recent PLMs that have millions or billions of parameters (Devlin et al., 2019; Radford et al., 2018) . This parameter inefficiency becomes even worse when one continues pre-training all the parameters of PLMs on the task-specific corpus. Furthermore, recent PLMs need more than 100s of MB to store all the weights (Liu et al., 2019; Radford et al., 2018) , making it difficult to download and share the pre-trained models on the fly.",
"cite_spans": [
{
"start": 153,
"end": 174,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF1"
},
{
"start": 175,
"end": 196,
"text": "Radford et al., 2018)",
"ref_id": "BIBREF17"
},
{
"start": 410,
"end": 428,
"text": "(Liu et al., 2019;",
"ref_id": "BIBREF9"
},
{
"start": 429,
"end": 450,
"text": "Radford et al., 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently, adapters have been proposed as an alternative approach to decrease the substantial number of parameters of PLMs in the fine-tuning stage (Houlsby et al., 2019) . Finetuning with adapters mostly matches the performance of those with the full fine-tuning strategy on many NLP tasks including GLUE benchmark (Wang et al., 2018) and reduces the size of the model from 100s of MB to the order of MB (Pfeiffer et al., 2020b) . As such, a natural question arises from the successes of the adapter approach: can the adapter alone adapt PLMs to the target task when it is used in the second phase of the pretraining stage and thus lead to the improvement of the performance on the corresponding task?",
"cite_spans": [
{
"start": 147,
"end": 169,
"text": "(Houlsby et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 315,
"end": 334,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 404,
"end": 428,
"text": "(Pfeiffer et al., 2020b)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we explore task-adaptive pretraining, termed TAPT (Gururangan et al., 2020) , with adapters to address this question and overcome the limitations of the conventional full pretraining and fine-tuning. We only train the adapter modules in the second phase of pretraining as well as the fine-tuning stage to achieve both parameter efficiency and the benefits of continual pretraining and compare those with the adapter-based model without pretraining. Surprisingly, we find that directly fine-tuning adapters performs mostly on par with the pre-trained adapter model and outperforms the full TAPT, contradicting the previously proposed benefits of continual pretraining in the full pretraining fine-tuning scheme. As directly fine-tuning adapters skips the second phase of pretraining and the training steps of adapters are faster than those of the full model, it substantially reduces the training time. We further investigate different hyperparameter settings that affect the effectiveness of pretraining.",
"cite_spans": [
{
"start": 60,
"end": 90,
"text": "TAPT (Gururangan et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Pre-trained language model We use RoBERTa (Liu et al., 2019) , a Transformer-based language model that is pre-trained on a massive text corpus, following Gururangan et al., 2020. RoBERTa is an extension of BERT (Devlin et al., 2019) with optimized hyperparameters and a modification of the pretraining objective, which excludes next sentence prediction and only uses the randomly masked tokens in the input sentence. To evaluate the performance of RoBERTa on a certain task, a classification layer is appended on top of the language model after the pretraining and all the parameters in RoBERTa are trained in a supervised way using the label of the dataset. In this paper, training word representations using RoBERTa on a masked language modeling task will be referred to as pretraining. Further, taking this pretrained model and adding a classification layer with additional updates to the language model parameters will be referred to as fine-tuning.",
"cite_spans": [
{
"start": 42,
"end": 60,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 211,
"end": 232,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pretraining and Adapters",
"sec_num": "2"
},
{
"text": "Task-adaptive pretraining (TAPT) Although RoBERTa achieves strong performance by simply fine-tuning the PLMs on a target task, there can be a distributional mismatch between the pretraining and target corpora. To address this issue, pretraining on the target task or the domain of the target task can be usefully employed to adapt the language models to the target task and it further improves the performance of the PLMs. Such methods can be referred to as Domain-Adaptive Pretraining (DAPT) or Task Adaptive-Pretraining (TAPT) (Gururangan et al., 2020) . In this paper, we limit the scope of our works to TAPT as domain text corpus is not always available for each task, whereas TAPT can be easily applied by directly using the dataset of the target task while its performance often matches with DAPT (Gururangan et al., 2020) . In TAPT, the second phase of pretraining is per- Figure 1 : The adapter achitecture in the Transformer layer (Pfeiffer et al., 2020a) formed with RoBERTa using the unlabeled text corpus of the target task, and then it is fine-tuned on the target task.",
"cite_spans": [
{
"start": 529,
"end": 554,
"text": "(Gururangan et al., 2020)",
"ref_id": "BIBREF3"
},
{
"start": 798,
"end": 828,
"text": "DAPT (Gururangan et al., 2020)",
"ref_id": null
},
{
"start": 940,
"end": 964,
"text": "(Pfeiffer et al., 2020a)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 880,
"end": 888,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pretraining and Adapters",
"sec_num": "2"
},
{
"text": "Adapter Adapter modules have been employed as a feature extractor in computer vision (Rebuffi et al., 2017) and have been recently adopted in the NLP literature as an alternative approach to fully fine-tuning PLMs. Adapters are sets of new weights that are typically embedded in each transformer layer of PLMs and consist of feed-forward layers with normalizations, residual connections, and projection layers. The architectures of adapters vary with respect to the different configuration settings. We use the configuration proposed by Pfeiffer et al., 2020a in Figure 1 , which turned out to be effective on diverse NLP tasks, and add the adapter layer to each transformer layer.",
"cite_spans": [
{
"start": 85,
"end": 107,
"text": "(Rebuffi et al., 2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 563,
"end": 571,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pretraining and Adapters",
"sec_num": "2"
},
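To make the adapter architecture above concrete, the following is a minimal PyTorch sketch of a bottleneck adapter block of the kind described here: a layer norm, a down-projection, a nonlinearity, an up-projection, and a residual connection. It is an illustration rather than the exact Pfeiffer et al., 2020a implementation; the module name and the default hidden size of 768 (roberta-base) are our choices, while the reduction factor of 12 and the tanh nonlinearity follow the hyperparameters listed in Appendix A.

```python
# Minimal sketch of a bottleneck adapter block (down-projection, nonlinearity,
# up-projection, residual connection). Illustrative only, not the exact
# Pfeiffer et al., 2020a implementation.
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    def __init__(self, hidden_dim: int = 768, reduction_factor: int = 12):
        super().__init__()
        bottleneck_dim = hidden_dim // reduction_factor
        self.layer_norm = nn.LayerNorm(hidden_dim)
        self.down = nn.Linear(hidden_dim, bottleneck_dim)  # down-projection
        self.non_linearity = nn.Tanh()                     # tanh, as in Appendix A
        self.up = nn.Linear(bottleneck_dim, hidden_dim)    # up-projection

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection around the small bottleneck network; only these
        # few parameters are trained while the transformer weights stay frozen.
        residual = hidden_states
        x = self.layer_norm(hidden_states)
        x = self.up(self.non_linearity(self.down(x)))
        return x + residual
```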
{
"text": "Pfeiffer et al., 2020c use two types of adapter: language-specific adapters and taskspecific adapters for cross-lingual transfer. These two types of adapter modules have similar architecture as in Figure 1 . However, the language adapters involve invertible adapters after the embedding layer to capture token-level language representation when those are trained via masked language modeling in the pretraining stage, whereas the task adapters are simply embedded in each transformer layer and trained in the fine-tuning stage to learn the task representation. Following Pfeiffer et al., 2020c, we employ language adapter modules with invertible adapter layers to perform pretraining adapters on the unlabeled target dataset. However, we perform fine-tuning pre-trained parameters of the language adapter modules for evaluation to align with (Maas et al., 2011) ) and low-resource (CHEMPROT (Kringelum et al., 2016) , ACL-ARC (Jurgens et al., 2018) , SCIERC (Luan et al., 2018) , HYPERPARTISAN (Kiesel et al., 2019) settings.",
"cite_spans": [
{
"start": 842,
"end": 861,
"text": "(Maas et al., 2011)",
"ref_id": "BIBREF11"
},
{
"start": 891,
"end": 915,
"text": "(Kringelum et al., 2016)",
"ref_id": "BIBREF8"
},
{
"start": 926,
"end": 948,
"text": "(Jurgens et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 958,
"end": 977,
"text": "(Luan et al., 2018)",
"ref_id": "BIBREF10"
},
{
"start": 994,
"end": 1015,
"text": "(Kiesel et al., 2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 197,
"end": 205,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pretraining and Adapters",
"sec_num": "2"
},
{
"text": "TAPT, whereas Pfeiffer et al., 2020c employ both the language and the task adapters by stacking task adapters on top of the language adapters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pretraining and Adapters",
"sec_num": "2"
},
{
"text": "We now propose an adapter-based approach that is a parameter efficient variant of Task-Adaptive Pretraining (TAPT) and measure the margin of the performance between the pre-trained adapter model and the adapter model without pretraining. For pretraining adapters, we added the adapter module in each transformer layer of RoBERTa using adaptertransformer (Pfeiffer et al., 2020b) 1 and continued pretraining all the weights in adapter layers on target text corpus while keeping the original parameters in RoBERTa fixed. After finishing the second phase of pretraining, we performed fine-tuning of RoBERTa by training the weights in the adapters and the final classification layers while keeping all of the parameters in RoBERTa frozen.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
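As a rough sketch of this two-stage setup, the snippet below adds an adapter to RoBERTa with the adapter-transformers library, continues pretraining only the adapter weights, and then fine-tunes the same adapter together with a classification head. The class and method names (PfeifferInvConfig, RobertaAdapterModel, add_adapter, train_adapter, add_classification_head) are assumed from a recent adapter-transformers release and may differ between versions; the adapter and head names are placeholders.

```python
# Sketch only: assumes the adapter-transformers fork of HuggingFace transformers
# (https://github.com/Adapter-Hub/adapter-transformers); API names vary by version.
from transformers import RobertaForMaskedLM
from transformers.adapters import PfeifferInvConfig, RobertaAdapterModel

# --- Second phase of pretraining: MLM on the unlabeled task corpus ---
mlm_model = RobertaForMaskedLM.from_pretrained("roberta-base")
# Language-adapter configuration with invertible adapters after the embeddings.
mlm_model.add_adapter("tapt", config=PfeifferInvConfig())
# Freezes the original RoBERTa weights; only the adapter is updated.
mlm_model.train_adapter("tapt")
# ... run MLM training here (see the Trainer sketch below) ...
mlm_model.save_adapter("adapters/tapt", "tapt")

# --- Fine-tuning: reuse the pretrained adapter plus a new classification head ---
clf_model = RobertaAdapterModel.from_pretrained("roberta-base")
clf_model.load_adapter("adapters/tapt")               # pretrained adapter weights
clf_model.add_classification_head("task", num_labels=2)
clf_model.train_adapter("tapt")                        # RoBERTa stays frozen
clf_model.set_active_adapters("tapt")
```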
{
"text": "Following Gururangan et al., 2020 2 , we consider 8 classification tasks from 4 different domains. The specification of each task is shown in Table 1 . We covered news and review texts that are similar to the pretraining corpus of RoBERTa as well as scientific domains in which text corpora can have largely different distributions from those of RoBERTa. Furthermore, the pretraining corpora of the target tasks include both large and small cases to determine whether the adapter-based approach can be applicable in both low and high-resource settings.",
"cite_spans": [],
"ref_spans": [
{
"start": 142,
"end": 149,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1"
},
{
"text": "1 https://github.com/Adapter-Hub/ adapter-transformers 2 Downloadble link for task dataset: https://github. com/allenai/dont-stop-pretraining",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1"
},
{
"text": "Our implementation is based on HuggingFace since we found AllenNLP (Gardner et al., 2018) used in Gururangan et al., 2020 is incompatible with adapter-transformer (Pfeiffer et al., 2020b) . We follow the hyperparameters setting in Gururangan et al., 2020, and each model in the pretraining and fine-tuning stage is trained on a single GPU (NVIDIA RTX 3090). Details of hyperparameters are described in Appendix A. Note that for the pretraining step, we use a batch size of 8 and accumulate the gradient for every 32 steps to be consistent with the hyperparameter setting in Gururangan et al., 2020.",
"cite_spans": [
{
"start": 67,
"end": 89,
"text": "(Gardner et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 163,
"end": 187,
"text": "(Pfeiffer et al., 2020b)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "3.2"
},
{
"text": "We perform pretraining with the self-supervised objectives, which are randomly masked tokens, with a probability of 15% for each epoch and we do not apply validation to pretraining and save the model at the end of the training from a single seed. For TAPT, we train the entire parameters of the RoBERTa via masked language modeling (MLM) on the target dataset, whereas for the adapter-based model, we embed the language adapters in each transformer layer and add invertible adapters after the embedding layers to perform MLM while freezing the original parameters of RoBERTa, following Pfeiffer et al., 2020c. Fine-tuning step is straightforward. We perform fine-tuning parameters that are pretrained via MLM for both TAPT and the adapter model. Validation is performed after each epoch and the best checkpoint is loaded at the end of the training to evaluate the performance on the test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "3.2"
},
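For reference, the MLM phase described above can be wired up roughly as follows with the standard HuggingFace Trainer: a masked language modeling collator with 15% masking probability, batch size 8, and 32 gradient-accumulation steps. This is a sketch under stated assumptions, not the authors' script: the two toy documents stand in for the unlabeled task corpus, `mlm_model` refers to the adapter-equipped model from the earlier sketch, and the pretraining learning rate shown here is an assumption.

```python
# Sketch only: standard HuggingFace Trainer wiring for the MLM phase described above.
# `mlm_model` is the adapter-equipped RoBERTa from the earlier sketch.
from transformers import (
    DataCollatorForLanguageModeling,
    RobertaTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")

# Toy stand-in for the unlabeled task corpus; in practice this is the task's text.
texts = ["an unlabeled task document ...", "another unlabeled task document ..."]
encodings = tokenizer(texts, truncation=True, max_length=512)
train_dataset = [{"input_ids": ids} for ids in encodings["input_ids"]]

# Tokens are masked at random with probability 0.15; the collator re-samples the
# masks each time a batch is built, so the masking differs across epochs.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

# Batch size 8 with 32 gradient-accumulation steps (effective batch size 256),
# matching Section 3.2; 100 epochs as listed in Appendix A for most tasks.
args = TrainingArguments(
    output_dir="tapt_adapter_pretraining",
    per_device_train_batch_size=8,
    gradient_accumulation_steps=32,
    num_train_epochs=100,
    learning_rate=1e-4,          # assumption: pretraining learning rate not stated here
    evaluation_strategy="no",    # no validation during pretraining
)

trainer = Trainer(
    model=mlm_model,             # adapter-equipped MLM model from the earlier sketch
    args=args,
    train_dataset=train_dataset,
    data_collator=collator,
)
trainer.train()
```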
{
"text": "Experiments cover four different models. First, we reproduce the performance of RoBERTa and TAPT in Gururangan et al., 2020 as presented in Appendix C. Then we proceed to the adapter-based approach. -/100% 100% /100% -/1.42% 1.32%/1.65% Relative training speed (PT/FT) -/1.0 1.0/1.0 -/1.29 1.14/1.24 Relative inference speed (PT/FT) -/1.0 1.0/1.0 -/0.98 0.88/0.98 Table 2 : Average F 1 score with standard deviation on test set. Each score is averaged over 5 random seeds. Evaluation metric is macro-F 1 scores on test set for each task except for CHMEPROT and RCT which use micro-F 1 . We report the results of baseline RoBERTa and TAPT from Gururangan et al., 2020. Following R\u00fcckl\u00e9 et al., 2020, we measure the average relative speed for the training and the inference time across all tasks except for the the inference speed in fine-tuning stage, which excludes low-resource tasks. PT and FT indicate pretraining and fine-tuning respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 364,
"end": 371,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "3.3"
},
{
"text": "To investigate the benefits of task-adaptive pretraining with adapters, we compare the performance of the pre-trained adapter model with the model without pretraining, i.e., directly fine-tuning adapters in RoBERTa on the target task. For the adapter-based approach, we compare the adapter-based model with the second phase of pretraining and the model without the pretraining. Since the weights of the adapters are randomly initialized, we empirically found that a larger learning rate worked well compared to the full fine-tuning experiments. We sweep the learning rates in {2e-5, 1e-4, 3e-4, 6e-4} and the number of epochs in {10, 20} on the validation set and report the test score that performs the best on the validation set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "3.3"
},
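The grid search over learning rates and epochs described above amounts to a small loop of independent fine-tuning runs; a sketch is given below. The `finetune_and_score` helper is hypothetical and stands in for one full fine-tuning run that returns the F1 score on the requested split for a given configuration.

```python
# Sketch of the hyperparameter sweep described above. `finetune_and_score` is a
# hypothetical helper: it fine-tunes the adapter model with the given learning rate
# and number of epochs and returns the F1 score on the requested split.
from itertools import product

learning_rates = [2e-5, 1e-4, 3e-4, 6e-4]
epoch_choices = [10, 20]

best_config, best_val_f1 = None, float("-inf")
for lr, n_epochs in product(learning_rates, epoch_choices):
    val_f1 = finetune_and_score(lr=lr, num_epochs=n_epochs, split="validation")
    if val_f1 > best_val_f1:
        best_config, best_val_f1 = (lr, n_epochs), val_f1

# Only the configuration that is best on the validation set is reported on the test set.
lr, n_epochs = best_config
test_f1 = finetune_and_score(lr=lr, num_epochs=n_epochs, split="test")
```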
{
"text": "The results are summarized in Table 2 . Surprisingly, for the average F 1 score, the adapter-based model without task-adaptive pretraining performs best, followed by the other adapter with the pretraining model, TAPT, and the baseline RoBERTa. Except for Hyperpartisan news, the adapter model without pretraining performs mostly on par with the counterpart adapter model that involves pretraining on target text corpus, suggesting that the benefits of additional task-adaptive pretraining diminish when we use the adapter-based approach. Furthermore, directly fine-tuned adapter model only trains 1.42% of the entire parameters which leads to the 30% faster-training step than the full model and skips the pretraining stage that typically expensive to train than the fine-tuning, substantially reducing Figure 2 : F 1 score as a function of learning rate on test set with log scale on x-axis. F 1 score is averaged over 5 random seeds for low-resource tasks (CHEMPROT, ACL-ARC, SCIERC, HYPER) due to the high variance. For high-resource tasks (RCT, AGNEWS, HELPFULNESS, IMDB), we report the F 1 score from a single random seed for each task. For RoBERTa and TAPT, we follow the hyper-parameter settings in Gururangan et al., 2020 except for the learning rate.",
"cite_spans": [],
"ref_spans": [
{
"start": 30,
"end": 37,
"text": "Table 2",
"ref_id": null
},
{
"start": 803,
"end": 811,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.4"
},
{
"text": "the training time while the relative speed for the inference only decreases by 2% to the full model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.4"
},
{
"text": "We analyze how the adapter alone can surpass or perform on par with both the full model and adapter model with task-adaptive pretraining. Since we sweep the learning rates and the number of epochs in the range that includes larger figures compared to those in the full model when fine-tuning adapters and kept the other hyper-parameters the same as in Gururangan et al., 2020, we hypothesize that Table 3 : Best performance of baseline RoBERTa and TAPT (Gururangan et al., 2020) on our implementation. Each score is averaged over 5 random seeds. Best configuration settings for each task is described in Appendix Table 8 .",
"cite_spans": [],
"ref_spans": [
{
"start": 397,
"end": 404,
"text": "Table 3",
"ref_id": null
},
{
"start": 613,
"end": 620,
"text": "Table 8",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "3.5"
},
{
"text": "the larger learning rate zeroes out the benefits of pretraining. Figure 2 . shows the average F 1 score across all tasks as a function of learning rate.",
"cite_spans": [],
"ref_spans": [
{
"start": 65,
"end": 73,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "3.5"
},
{
"text": "The adapter model without a second phase of pretraining consistently outperforms or performs on par with the adapter model with pretraining from 1e-4 to 6e-4, demonstrating that the additional pretraining turns out to be ineffective. In contrast, TAPT outperforms baseline RoBERTa from 2e-5, where both TAPT and baseline RoBERTa perform best. The results show that different learning rates used in the fine-tuning stage can affect the effectiveness of pretraining and demonstrate that directly fine-tuning a fraction of parameters can provide comparable performance to the full-model as well as the adapter model with pretraining while substantially reducing the training time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "3.5"
},
{
"text": "Inspired by the results of the adapter models, we perform the same experiments for the full model (baseline RoBERTa and TAPT) on our implementation by sweeping the learning rates and the number of epochs. We hypothesize that proper hyperparameter settings such as a larger learning rate or increasing the number of training steps in the fine-tuning stage can improve the performance of baseline RoBERTa, making pretraining on the unlabeled target task less effective. We sweep the learning rates in {1e-5, 2e-5, 3e-5} and the number of epochs in {10, 20} on the validation set and report the test score that performs the best on the validation set. Table 3 shows the best performance of the full models for each task among different hyper-parameter settings. The average F 1 score of baseline RoBERTa greatly increases and surprisingly, it surpasses the performance of TAPT in some tasks. The results ensure that although pretraining PLMs on the target task results in better performance, one can achieve comparable performance by simply using a larger learning rate or increasing training steps in the fine-tuning stage while skipping the pretraining step that is computationally demanding compared to the fine-tuning.",
"cite_spans": [],
"ref_spans": [
{
"start": 649,
"end": 656,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "3.5"
},
{
"text": "Our work demonstrates that adapters provide a competitive alternative to large-scale task-adaptive pretraining for NLP classification tasks. We show that it is possible to achieve similar performance to TAPT with pretraining training just 1.32% of the parameters through pretraining with adapters. However, the most computationally efficient option is to skip pretraining and only perform fine-tuning with adapters. We found that skipping pretraining altogether and just fine-tuning with adapters outperforms or performs mostly on par with TAPT and the adapter model with pretraining across our tasks while substantially reducing the training time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems, volume 28, pages 649-657. Curran Associates, Inc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "Details of hyperparameter setting including the learning rates for the best performing results are provided in Table 4 , 5, and 6.",
"cite_spans": [],
"ref_spans": [
{
"start": 111,
"end": 118,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "A Hyperparameter Details",
"sec_num": null
},
{
"text": "We present validation performance in Table 7 and Figure 3 and 8.",
"cite_spans": [],
"ref_spans": [
{
"start": 37,
"end": 44,
"text": "Table 7",
"ref_id": null
},
{
"start": 49,
"end": 57,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "B Validation Results",
"sec_num": null
},
{
"text": "We provide replication results of Gururangan et al., 2020 in Table 9 . Table 5 : Details of hyperparameters used in fine-tuning experiments. For baseline RoBERTa and TAPT, we used 10 number of epochs with patience of 3 and the learning rate of 2e-5. For adapter experiments, see Table 6 . 1e-4, 10, 3 1e-4, 10, 3 ACL-ARC 6e-4, 10, 3 6e-4, 20, 5 SCIERC 3e-4, 20, 5 6e-4, 20, 5 HYPER 3e-4, 20, 5 1e-4, 20, 5 AGNEWS 1e-4, 10, 3 1e-4, 10, 3 HELPFUL 3e-4, 20, 5 1e-4, 20, 5 IMDB 1e-4, 10, 3 1e-4, 10, 3 Table 7 : Validation performance of adapter experiments. Each score is averaged over 5 random seeds. Evaluation metric is macro-F 1 scores for each task except for CHMEPROT and RCT which use micro-F 1 . Figure 3 : F 1 score as a function of learning rate on development setwith log scale on x-axis. F 1 score is averaged over 5 random seeds for low-resource tasks (CHEMPROT, ACL-ARC, SCIERC, HYPER) due to the high variance. For high-resource tasks (RCT, AGNEWS, HELPFULNESS, IMDB), we report the F 1 score from a single random seed for each task. Here we sweep the learning rates in {1e-4, 3e-4, 6e-4}, the number of epochs in {10, 20}, and the patience factor in {3, 5}. Table 9 : Reproducing Baseline RoBERTa and TAPT Results, average F 1 Scores with standard deviation. F 1 score is averaged over 5 random seeds. We use the same hyper-parameters in Gururangan et al., 2020.",
"cite_spans": [],
"ref_spans": [
{
"start": 61,
"end": 68,
"text": "Table 9",
"ref_id": null
},
{
"start": 71,
"end": 78,
"text": "Table 5",
"ref_id": null
},
{
"start": 279,
"end": 286,
"text": "Table 6",
"ref_id": "TABREF6"
},
{
"start": 498,
"end": 505,
"text": "Table 7",
"ref_id": null
},
{
"start": 701,
"end": 709,
"text": "Figure 3",
"ref_id": null
},
{
"start": 1171,
"end": 1178,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "C Replication results",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Zsolt Kira, Mandeep Baines, Shruti Bhosale, and Siddharth Goyal for helpful feedback and suggestions. We also would like to thank anonymous reviewers for their insightful comments on the earlier version of the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "PubMed 200k RCT: a dataset for sequential sentence classification in medical abstracts",
"authors": [
{
"first": "Franck",
"middle": [],
"last": "Dernoncourt",
"suffix": ""
},
{
"first": "Ji",
"middle": [
"Young"
],
"last": "Lee",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "308--313",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franck Dernoncourt and Ji Young Lee. 2017. PubMed 200k RCT: a dataset for sequential sentence clas- sification in medical abstracts. In Proceedings of the Eighth International Joint Conference on Natu- ral Language Processing (Volume 2: Short Papers), pages 308-313, Taipei, Taiwan. Asian Federation of Natural Language Processing.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "AllenNLP: A deep semantic natural language processing platform",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Grus",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Oyvind",
"middle": [],
"last": "Tafjord",
"suffix": ""
},
{
"first": "Pradeep",
"middle": [],
"last": "Dasigi",
"suffix": ""
},
{
"first": "Nelson",
"middle": [
"F"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Schmitz",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of Workshop for NLP Open Source Software (NLP-OSS)",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {
"DOI": [
"10.18653/v1/W18-2501"
]
},
"num": null,
"urls": [],
"raw_text": "Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Pe- ters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A deep semantic natural language pro- cessing platform. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 1- 6, Melbourne, Australia. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Don't stop pretraining: Adapt language models to domains and tasks",
"authors": [
{
"first": "Ana",
"middle": [],
"last": "Suchin Gururangan",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Marasovi\u0107",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Iz",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Downey",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8342--8360",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.740"
]
},
"num": null,
"urls": [],
"raw_text": "Suchin Gururangan, Ana Marasovi\u0107, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342-8360, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Unsupervised domain adaptation of contextualized embeddings for sequence labeling",
"authors": [
{
"first": "Xiaochuang",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2019,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaochuang Han and Jacob Eisenstein. 2019. Unsuper- vised domain adaptation of contextualized embed- dings for sequence labeling. In EMNLP.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Parameter-efficient transfer learning for NLP",
"authors": [
{
"first": "Neil",
"middle": [],
"last": "Houlsby",
"suffix": ""
},
{
"first": "Andrei",
"middle": [],
"last": "Giurgiu",
"suffix": ""
},
{
"first": "Stanislaw",
"middle": [],
"last": "Jastrzebski",
"suffix": ""
},
{
"first": "Bruna",
"middle": [],
"last": "Morrone",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "De Laroussilhe",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Gesmundo",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Attariyan",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gelly",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 36th International Conference on Machine Learning",
"volume": "97",
"issue": "",
"pages": "2790--2799",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 2790-2799. PMLR.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Measuring the evolution of a scientific field through citation frames",
"authors": [
{
"first": "David",
"middle": [],
"last": "Jurgens",
"suffix": ""
},
{
"first": "Srijan",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Raine",
"middle": [],
"last": "Hoover",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Mc-Farland",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "391--406",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00028"
]
},
"num": null,
"urls": [],
"raw_text": "David Jurgens, Srijan Kumar, Raine Hoover, Dan Mc- Farland, and Dan Jurafsky. 2018. Measuring the evo- lution of a scientific field through citation frames. Transactions of the Association for Computational Linguistics, 6:391-406.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "SemEval-2019 task 4: Hyperpartisan news detection",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Kiesel",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Mestre",
"suffix": ""
},
{
"first": "Rishabh",
"middle": [],
"last": "Shukla",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Payam",
"middle": [],
"last": "Adineh",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Corney",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Potthast",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "829--839",
"other_ids": {
"DOI": [
"10.18653/v1/S19-2145"
]
},
"num": null,
"urls": [],
"raw_text": "Johannes Kiesel, Maria Mestre, Rishabh Shukla, Em- manuel Vincent, Payam Adineh, David Corney, Benno Stein, and Martin Potthast. 2019. SemEval- 2019 task 4: Hyperpartisan news detection. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 829-839, Minneapo- lis, Minnesota, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Chemprot-3.0: a global chemical biology diseases mapping",
"authors": [
{
"first": "Jens",
"middle": [],
"last": "Kringelum",
"suffix": ""
},
{
"first": "Sonny",
"middle": [
"Kim"
],
"last": "Kjaerulff",
"suffix": ""
},
{
"first": "S\u00f8ren",
"middle": [],
"last": "Brunak",
"suffix": ""
},
{
"first": "Ole",
"middle": [],
"last": "Lund",
"suffix": ""
},
{
"first": "Tudor",
"middle": [
"I"
],
"last": "Oprea",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Taboureau",
"suffix": ""
}
],
"year": 2016,
"venue": "Database",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jens Kringelum, Sonny Kim Kjaerulff, S\u00f8ren Brunak, Ole Lund, Tudor I Oprea, and Olivier Taboureau. 2016. Chemprot-3.0: a global chemical biology dis- eases mapping. Database, 2016.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Mari",
"middle": [],
"last": "Ostendorf",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3219--3232",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1360"
]
},
"num": null,
"urls": [],
"raw_text": "Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-task identification of enti- ties, relations, and coreference for scientific knowl- edge graph construction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Lan- guage Processing, pages 3219-3232, Brussels, Bel- gium. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Learning word vectors for sentiment analysis",
"authors": [
{
"first": "Andrew",
"middle": [
"L"
],
"last": "Maas",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"E"
],
"last": "Daly",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"T"
],
"last": "Pham",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "142--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analy- sis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Hu- man Language Technologies, pages 142-150, Port- land, Oregon, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Image-based recommendations on styles and substitutes",
"authors": [
{
"first": "Julian",
"middle": [],
"last": "Mcauley",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Targett",
"suffix": ""
},
{
"first": "Qinfeng",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Anton",
"middle": [],
"last": "Van Den",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hengel",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 38th international ACM SIGIR conference on research and development in information retrieval",
"volume": "",
"issue": "",
"pages": "43--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julian McAuley, Christopher Targett, Qinfeng Shi, and Anton Van Den Hengel. 2015. Image-based recom- mendations on styles and substitutes. In Proceed- ings of the 38th international ACM SIGIR confer- ence on research and development in information re- trieval, pages 43-52.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Kuntal Kumar Pal",
"authors": [
{
"first": "Arindam",
"middle": [],
"last": "Mitra",
"suffix": ""
},
{
"first": "Pratyay",
"middle": [],
"last": "Banerjee",
"suffix": ""
}
],
"year": null,
"venue": "Swaroop Ranjan Mishra, and Chitta Baral. 2020. Exploring ways to incorporate additional knowledge to improve natural language commonsense question answering",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.08855v3"
]
},
"num": null,
"urls": [],
"raw_text": "Arindam Mitra, Pratyay Banerjee, Kuntal Kumar Pal, Swaroop Ranjan Mishra, and Chitta Baral. 2020. Exploring ways to incorporate additional knowledge to improve natural language commonsense question answering. arXiv:1909.08855v3.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Adapterfusion: Non-destructive task composition for transfer learning",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Pfeiffer",
"suffix": ""
},
{
"first": "Aishwarya",
"middle": [],
"last": "Kamath",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "R\u00fcckl\u00e9",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.00247"
]
},
"num": null,
"urls": [],
"raw_text": "Jonas Pfeiffer, Aishwarya Kamath, Andreas R\u00fcckl\u00e9, Kyunghyun Cho, and Iryna Gurevych. 2020a. Adapterfusion: Non-destructive task composi- tion for transfer learning. arXiv preprint arXiv:2005.00247.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "AdapterHub: A framework for adapting transformers",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Pfeiffer",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "R\u00fcckl\u00e9",
"suffix": ""
},
{
"first": "Clifton",
"middle": [],
"last": "Poth",
"suffix": ""
},
{
"first": "Aishwarya",
"middle": [],
"last": "Kamath",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "46--54",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-demos.7"
]
},
"num": null,
"urls": [],
"raw_text": "Jonas Pfeiffer, Andreas R\u00fcckl\u00e9, Clifton Poth, Aish- warya Kamath, Ivan Vuli\u0107, Sebastian Ruder, Kyunghyun Cho, and Iryna Gurevych. 2020b. AdapterHub: A framework for adapting transform- ers. In Proceedings of the 2020 Conference on Em- pirical Methods in Natural Language Processing: System Demonstrations, pages 46-54, Online. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Pfeiffer",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "7654--7673",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.617"
]
},
"num": null,
"urls": [],
"raw_text": "Jonas Pfeiffer, Ivan Vuli\u0107, Iryna Gurevych, and Se- bastian Ruder. 2020c. MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7654-7673, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Improving language understanding by generative pre-training",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Narasimhan",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language under- standing by generative pre-training.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Learning multiple visual domains with residual adapters",
"authors": [
{
"first": "Hakan",
"middle": [],
"last": "Sylvestre-Alvise Rebuffi",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Bilen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vedaldi",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "506--516",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sylvestre-Alvise Rebuffi, Hakan Bilen, and Andrea Vedaldi. 2017. Learning multiple visual domains with residual adapters. In Advances in Neural Infor- mation Processing Systems, volume 30, pages 506- 516. Curran Associates, Inc.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Nils Reimers, and Iryna Gurevych. 2020. Adapterdrop: On the efficiency of adapters in transformers",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "R\u00fcckl\u00e9",
"suffix": ""
},
{
"first": "Gregor",
"middle": [],
"last": "Geigle",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Glockner",
"suffix": ""
},
{
"first": "Tilman",
"middle": [],
"last": "Beck",
"suffix": ""
},
{
"first": "Jonas",
"middle": [],
"last": "Pfeiffer",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.11918"
]
},
"num": null,
"urls": [],
"raw_text": "Andreas R\u00fcckl\u00e9, Gregor Geigle, Max Glockner, Tilman Beck, Jonas Pfeiffer, Nils Reimers, and Iryna Gurevych. 2020. Adapterdrop: On the effi- ciency of adapters in transformers. arXiv preprint arXiv:2010.11918.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, volume 30, pages 5998-6008. Cur- ran Associates, Inc.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop Black-boxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "353--355",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5446"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Fe- lix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis plat- form for natural language understanding. In Pro- ceedings of the 2018 EMNLP Workshop Black- boxNLP: Analyzing and Interpreting Neural Net- works for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"content": "<table/>",
"text": "Datasets used for experimentation. Datasets include both high-resource (RCT(Dernoncourt and Lee, 2017),AGNEWS (Zhang et al., 2015),HELPFULNESS (McAuley et al., 2015), IMDB",
"html": null,
"type_str": "table",
"num": null
},
"TABREF5": {
"content": "<table><tr><td>Hyper-parameter</td><td>Value</td></tr><tr><td>Optimizer</td><td>Adam</td></tr><tr><td>Adam epsilon</td><td>1e-8, 0.999</td></tr><tr><td>Batch size</td><td>16</td></tr><tr><td>Gradient accumulation step</td><td>1</td></tr><tr><td>Epochs</td><td>10 or 20</td></tr><tr><td>Patience</td><td>3 or 5</td></tr><tr><td>Adapter reduction factor</td><td>12</td></tr><tr><td>Dropout</td><td>0.1</td></tr><tr><td>Feedforward layer</td><td>1</td></tr><tr><td>Feedforward nonlinearity</td><td>tanh</td></tr><tr><td>Classification layer</td><td>1</td></tr><tr><td>Learning rate</td><td>see Table 6</td></tr><tr><td>Learning rate decay</td><td>linear</td></tr><tr><td>Warmup proportion</td><td>0.06</td></tr><tr><td>Maximum sequence length</td><td>512</td></tr></table>",
"text": "Details of hyperparameters used in pretraining experiments. We used 40 number of epochs for HELP-FULNESS and 100 for the other tasks.",
"html": null,
"type_str": "table",
"num": null
},
"TABREF6": {
"content": "<table><tr><td>Dataset</td><td colspan=\"2\">Adapter w/o pretraining Adapter w/ pretraining</td></tr><tr><td>CHEMPROT</td><td>83.77 0.5</td><td>84.02 0.7</td></tr><tr><td>RCT</td><td>88.16 0.1</td><td>88.13 0.1</td></tr><tr><td>ACL-ARC</td><td>72.41 2.2</td><td>77.31 2.9</td></tr><tr><td>SCIERC</td><td>86.86 0.5</td><td>87.87 0.3</td></tr><tr><td>HYPER</td><td>86.33 1.4</td><td>86.00 3.5</td></tr><tr><td>AGNEWS</td><td>94.28 0.1</td><td>94.57 0.1</td></tr><tr><td>HELPFUL</td><td>70.83 1.2</td><td>70.8 0.7</td></tr><tr><td>IMDB</td><td>95.52 0.1</td><td>95.6 0.1</td></tr><tr><td>Average F1</td><td>84.77</td><td>85.54</td></tr></table>",
"text": "Learning rate, the nubmer of epochs and patience for best-performing models. For adapter experiments, we sweep the learning rates in {1e-4, 3e-4, 6e-4}, the number of epochs in {10, 20}, and patience factor in {3, 5} on validation set.",
"html": null,
"type_str": "table",
"num": null
},
"TABREF8": {
"content": "<table><tr><td/><td>Original Results</td><td colspan=\"2\">Original Results Our Results</td><td>Our Results</td></tr><tr><td>Dataset</td><td colspan=\"2\">Baseline RoBERTa TAPT</td><td colspan=\"2\">Baseline RoBERTa TAPT</td></tr><tr><td>CHEMPROT</td><td>81.9 1.0</td><td>82.6 0.4</td><td>81.64 0.8</td><td>82.58 0.5</td></tr><tr><td>RCT</td><td>87.2 0.1</td><td>87.7 0.1</td><td>86.89 0.1</td><td>87.4 0.2</td></tr><tr><td>ACL-ARC</td><td>63.0 5.8</td><td>67.4 1.8</td><td>64.12 5.5</td><td>66.11 4.6</td></tr><tr><td>SCIERC</td><td>77.3 1.9</td><td>79.3 1.5</td><td>78.89 2.7</td><td>79.94 0.7</td></tr><tr><td>HYPER</td><td>86.6 0.9</td><td>90.4 5.2</td><td>85.03 6.0</td><td>91.56 2.5</td></tr><tr><td>AGNEWS</td><td>93.9 0.2</td><td>94.5 0.1</td><td>93.72 0.2</td><td>94.05 0.1</td></tr><tr><td colspan=\"2\">HELPFULNESS 65.1 3.4</td><td>68.5 1.9</td><td>69.2 1.4</td><td>71.24 0.7</td></tr><tr><td>IMDB</td><td>95.0 0.2</td><td>95.5 0.1</td><td>95.15 0.1</td><td>95.33 0.1</td></tr><tr><td>Average F1</td><td>81.3</td><td>83.24</td><td>81.83</td><td>83.53</td></tr></table>",
"text": "Validation performance of Baseline RoBERTa and TAPT experiments that corresponds toTable 3. Each score is averaged over 5 random seeds.",
"html": null,
"type_str": "table",
"num": null
}
}
}
}