{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:11:38.515832Z"
},
"title": "Diverse Lottery Tickets Boost Ensemble from a Single Pretrained Model",
"authors": [
{
"first": "Sosuke",
"middle": [],
"last": "Kobayashi",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Shun",
"middle": [],
"last": "Kiyono",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Jun",
"middle": [],
"last": "Suzuki",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Ensembling is a popular method used to improve performance as a last resort. However, ensembling multiple models finetuned from a single pretrained model has been not very effective; this could be due to the lack of diversity among ensemble members. This paper proposes Multi-Ticket Ensemble, which finetunes different subnetworks of a single pretrained model and ensembles them. We empirically demonstrated that winning-ticket subnetworks produced more diverse predictions than dense networks, and their ensemble outperformed the standard ensemble on some tasks. Repeat and use final Pruning mask pretrained model before each finetuning. We ex-088 pect that, during finetuning, each sub-network ac-089 quires different views using different sub-spaces of 090 the pretrained knowledge. This idea has two chal-091 lenges: the diversity and the accuracy of the sub-092 networks. Recent studies on the lottery ticket hy-093 pothesis (Frankle and Carbin, 2019) suggest that a 094 dense neural network at an initialization contains a 095 sub-network, called winning ticket, whose accuracy 096 becomes comparable with that of the dense network 097 after the same training steps. A pretrained BERT 098 also has sparse sub-networks (e.g., 50%), which 099 can achieve the same accuracy with the entire net-100 work when finetuning on downstream tasks (Chen 101 et al., 2020). However, it is still unclear how di-102 verse winning tickets exist and how to find them 103",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Ensembling is a popular method used to improve performance as a last resort. However, ensembling multiple models finetuned from a single pretrained model has been not very effective; this could be due to the lack of diversity among ensemble members. This paper proposes Multi-Ticket Ensemble, which finetunes different subnetworks of a single pretrained model and ensembles them. We empirically demonstrated that winning-ticket subnetworks produced more diverse predictions than dense networks, and their ensemble outperformed the standard ensemble on some tasks. Repeat and use final Pruning mask pretrained model before each finetuning. We ex-088 pect that, during finetuning, each sub-network ac-089 quires different views using different sub-spaces of 090 the pretrained knowledge. This idea has two chal-091 lenges: the diversity and the accuracy of the sub-092 networks. Recent studies on the lottery ticket hy-093 pothesis (Frankle and Carbin, 2019) suggest that a 094 dense neural network at an initialization contains a 095 sub-network, called winning ticket, whose accuracy 096 becomes comparable with that of the dense network 097 after the same training steps. A pretrained BERT 098 also has sparse sub-networks (e.g., 50%), which 099 can achieve the same accuracy with the entire net-100 work when finetuning on downstream tasks (Chen 101 et al., 2020). However, it is still unclear how di-102 verse winning tickets exist and how to find them 103",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Ensembling (Levin et al., 1989; Domingos, 1997) has long been an easy and effective approach to improve model performance by averaging the outputs of multiple comparable but independent models. Allen-Zhu and Li (2020) explain that different models obtain different views for judgments, and the ensemble uses complementary views to make more robust decisions. A good ensemble requires diverse member models. However, how to encourage diversity without sacrificing the accuracy of each model is non-trivial (Liu and Yao, 1999; Kirillov et al., 2016; Rame and Cord, 2021) .",
"cite_spans": [
{
"start": 11,
"end": 31,
"text": "(Levin et al., 1989;",
"ref_id": "BIBREF17"
},
{
"start": 32,
"end": 47,
"text": "Domingos, 1997)",
"ref_id": "BIBREF9"
},
{
"start": 505,
"end": 524,
"text": "(Liu and Yao, 1999;",
"ref_id": "BIBREF20"
},
{
"start": 525,
"end": 547,
"text": "Kirillov et al., 2016;",
"ref_id": "BIBREF15"
},
{
"start": 548,
"end": 568,
"text": "Rame and Cord, 2021)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The pretrain-then-finetune paradigm has become another best practice for achieving state-of-the-art performance on NLP tasks (Devlin et al., 2019) . The cost of large-scale pretraining, however, is enormously high (Sharir et al., 2020) ; This often makes it difficult to independently pretrain multiple models. Therefore, most researchers and practitioners only use a single pretrained model, which is distributed by resource-rich organizations.",
"cite_spans": [
{
"start": 125,
"end": 146,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 214,
"end": 235,
"text": "(Sharir et al., 2020)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This situation brings up a novel question to ensemble learning: Can we make an effective ensemble from only a single pre-trained model? Although ensembles can be combined with the pretrain-then-finetune paradigm, an ensemble of Figure 1 : When finetuning from a single pretrained model (left), the models are less diverse (center). If we finetune different sparse subnetworks, they become more diverse and make the ensemble effective (right). models finetuned from a single pretrained model is much less effective than that using different pretrained models from scratch in many tasks (Raffel et al., 2020) . Na\u00efve ensemble offers limited improvements, possibly due to the lack of diversity of finetuning from the same initial parameters.",
"cite_spans": [
{
"start": 585,
"end": 606,
"text": "(Raffel et al., 2020)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 228,
"end": 236,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a simple yet effective method called Multi-Ticket Ensemble, ensembling finetuned winning-ticket subnetworks (Frankle and Carbin, 2019) in a single pretrained model. We empirically demonstrate that pruning a single pretrained model can make diverse models, and their ensemble can outperform the na\u00efve dense ensemble if winning-ticket subnetworks are found.",
"cite_spans": [
{
"start": 134,
"end": 160,
"text": "(Frankle and Carbin, 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we discuss the most standard way of ensemble, which averages the outputs of multiple neural networks; each has the same architecture but different parameters. That is, let f (x; \u03b8) be the output of a model with the parameter vector \u03b8 given the input x, the output of an ensemble is f M (x) = \u03b8\u2208M f (x; \u03b8)/|M|, where M = {\u03b8 1 , ..., \u03b8 |M| } is the member parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Diversity in a Single Pretrained Model",
"sec_num": "2"
},
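A minimal sketch of the output-averaging ensemble defined above, assuming PyTorch-style classifier members that return class logits; the member list, the softmax-averaging choice, and the helper name are illustrative assumptions, not the paper's released code.

# Sketch of output averaging f_M(x) = sum_{theta in M} f(x; theta) / |M|.
# Assumes each member is a torch.nn.Module returning class logits.
import torch

def ensemble_predict(members, x):
    """Average the members' softmax outputs for an input batch x."""
    probs = []
    with torch.no_grad():
        for model in members:
            model.eval()
            probs.append(torch.softmax(model(x), dim=-1))  # f(x; theta) for one member
    return torch.stack(probs).mean(dim=0)                  # average over |M| members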
{
"text": "As discussed, when constructing an ensemble f M by finetuning from a single pretrained model multiple times with different random seeds {s 1 , ..., s |M| }, the boost in performance tends to be only marginal. In the case of BERT (Devlin et al., 2019) and its variants, three sources of diversities can be considered: random initialization of the task-specific layer, dataset shuffling for stochastic gradient descent (SGD), and dropout. However, empirically, such finetuned parameters tend not to be largely different from the initial parameters, and they do not lead to diverse models (Radiya-Dixit and Wang, 2020) . Of course, if one adds significant noise to the parameters, it leads to diversity; however, it would also hurt accuracy.",
"cite_spans": [
{
"start": 229,
"end": 250,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 586,
"end": 615,
"text": "(Radiya-Dixit and Wang, 2020)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Diversity from Finetuning",
"sec_num": "2.1"
},
{
"text": "To make models ensuring both accuracy and diversity, we focus on subnetworks in the pretrained model. Different subnetworks employ different subspaces of the pre-trained knowledge (Radiya-Dixit and Wang, 2020; Zhao et al., 2020; Cao et al., 2021) ; this would help the subnetworks to acquire different views, which can be a source of desired diversity 1 . Also, in terms of accuracy, recent studies on the lottery ticket hypothesis (Frankle and Carbin, 2019) suggest that a dense network at initialization contains a subnetwork, called the winning ticket, whose accuracy becomes comparable to that of the dense one after the same training. Interestingly, the pretrained BERT also has a winning ticket for finetuning on downstream tasks (Chen et al., 2020) . Thus, if we can find diverse winning tickets, they can be good ensemble members with the two desirable properties: diversity and accuracy.",
"cite_spans": [
{
"start": 180,
"end": 209,
"text": "(Radiya-Dixit and Wang, 2020;",
"ref_id": "BIBREF24"
},
{
"start": 210,
"end": 228,
"text": "Zhao et al., 2020;",
"ref_id": "BIBREF43"
},
{
"start": 229,
"end": 246,
"text": "Cao et al., 2021)",
"ref_id": "BIBREF3"
},
{
"start": 432,
"end": 458,
"text": "(Frankle and Carbin, 2019)",
"ref_id": "BIBREF11"
},
{
"start": 736,
"end": 755,
"text": "(Chen et al., 2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Diversity from Pruning",
"sec_num": "2.2"
},
{
"text": "We propose a simple yet effective method, multiticket ensemble, which finetunes different subnetworks instead of dense networks. Because it could be a key how to find subnetworks, we explore three variants based on iterative magnitude pruning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subnetwork Exploration",
"sec_num": "3"
},
{
"text": "tasks using the BERT in this paper, the same problem happens in other settings generally. In the results by Raffel et al. (2020) , we found that this happened on almost all the tasks of GLUE (Wang et al., 2018) , SuperGLUE (Wang et al., 2019) , SQuAD (Rajpurkar et al., 2016) , summarization, and machine translations using the T5 models. also has sparse sub-networks (e.g., 50%), which 099 can achieve the same accuracy with the entire net-100 work when finetuning on downstream tasks (Chen 101 et al., 2020) . However, it is still unclear how di-102 verse winning tickets exist and how to find them 103 tasks using the BERT in this paper, the same problem happens in other settings generally. In the results by Raffel et al. (2020) , we found that this happened on almost all the tasks of GLUE (Wang et al., 2018) , SuperGLUE (Wang et al., 2019) , SQuAD (Rajpurkar et al., 2016) , summarization, and machine translations using the T5 models. on a random seed s. Also, let \u2713 s represent the parameter of the pretrained BERT and the task-specific layer, whose parameter is randomly initialized by random seed s. After finetuning \u2713 s to FINE(\u2713 s , s), we identify and prune the parameters with 10% lowest magnitudes in FINE(\u2713 s , s). We also get the corresponding binary mask of pruning, m s,10% 2 {0, 1} |\u2713s| , where the surviving positions have 1 otherwise 0. The pruning of parameters \u2713 by a mask m can be also represented as \u2713 m, where is the element-wise product. Next, we replay finetuning but from \u2713 s m s,10% and get FINE(\u2713 s m s,10% , s) as well as 20%-pruning mask m s,20% . By repeating iterative magnitude pruning, we obtain the parameter FINE(\u2713 s m s,P % , s). In our experiments, we set P = 30, i.e., evaluate ensemble of 30%-pruning sub-networks, where verse winning tickets exist and how to find them 103 tasks using the BERT in this paper, the same problem happens in other settings generally. In the results by Raffel et al. (2020) , we found that this happened on almost all the tasks of GLUE (Wang et al., 2018) , SuperGLUE (Wang et al., 2019) , SQuAD (Rajpurkar et al., 2016) , summarization, and machine translations using the T5 models. also has sparse sub-networks (e.g., 50%), which 099 can achieve the same accuracy with the entire net-100 work when finetuning on downstream tasks (Chen 101 et al., 2020). However, it is still unclear how di-102 verse winning tickets exist and how to find them 103 tasks using the BERT in this paper, the same problem happens in other settings generally. In the results by Raffel et al. (2020) , we found that this happened on almost all the tasks of GLUE (Wang et al., 2018) , SuperGLUE (Wang et al., 2019) , SQuAD (Rajpurkar et al., 2016) , summarization, and machine translations using the T5 models. the parameter of the pretrained BERT and the task-specific layer, whose parameter is randomly initialized by random seed s. After finetuning \u2713 s to FINE(\u2713 s , s), we identify and prune the parameters with 10% lowest magnitudes in FINE(\u2713 s , s). We also get the corresponding binary ",
"cite_spans": [
{
"start": 108,
"end": 128,
"text": "Raffel et al. (2020)",
"ref_id": "BIBREF25"
},
{
"start": 191,
"end": 210,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF35"
},
{
"start": 223,
"end": 242,
"text": "(Wang et al., 2019)",
"ref_id": "BIBREF34"
},
{
"start": 251,
"end": 275,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF26"
},
{
"start": 486,
"end": 509,
"text": "(Chen 101 et al., 2020)",
"ref_id": null
},
{
"start": 713,
"end": 733,
"text": "Raffel et al. (2020)",
"ref_id": "BIBREF25"
},
{
"start": 796,
"end": 815,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF35"
},
{
"start": 828,
"end": 847,
"text": "(Wang et al., 2019)",
"ref_id": "BIBREF34"
},
{
"start": 856,
"end": 880,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF26"
},
{
"start": 1928,
"end": 1948,
"text": "Raffel et al. (2020)",
"ref_id": "BIBREF25"
},
{
"start": 2011,
"end": 2030,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF35"
},
{
"start": 2043,
"end": 2062,
"text": "(Wang et al., 2019)",
"ref_id": "BIBREF34"
},
{
"start": 2071,
"end": 2095,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF26"
},
{
"start": 2306,
"end": 2315,
"text": "(Chen 101",
"ref_id": null
},
{
"start": 2532,
"end": 2552,
"text": "Raffel et al. (2020)",
"ref_id": "BIBREF25"
},
{
"start": 2615,
"end": 2634,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF35"
},
{
"start": 2647,
"end": 2666,
"text": "(Wang et al., 2019)",
"ref_id": "BIBREF34"
},
{
"start": 2675,
"end": 2699,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Subnetwork Exploration",
"sec_num": "3"
},
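The pruning step above keeps a binary mask over the finetuned weights and applies it with an element-wise product. Below is a minimal sketch, assuming a single PyTorch tensor per layer; the function name, the threshold handling, and the tensor shapes are illustrative assumptions, not the authors' code.

# Sketch: build a binary mask m in {0,1}^{|theta|} that zeroes the lowest-magnitude
# fraction of a finetuned tensor, then apply it as theta ⊙ m (element-wise product).
import torch

def magnitude_mask(theta: torch.Tensor, prune_fraction: float = 0.10) -> torch.Tensor:
    k = int(prune_fraction * theta.numel())
    if k == 0:
        return torch.ones_like(theta)
    threshold = theta.abs().flatten().kthvalue(k).values  # k-th smallest magnitude
    return (theta.abs() > threshold).to(theta.dtype)      # surviving positions = 1

theta = torch.randn(768, 768)          # stand-in for one finetuned weight matrix
mask = magnitude_mask(theta, 0.10)
pruned_theta = theta * mask            # theta ⊙ m, used before the next finetuning replay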
{
"text": "First, we describe iterative magnitude pruning, the Figure 2: Overview of iterative magnitude pruning (Section 3.1). We can also use regularizers during finetuning to diversify pruning (Section 3.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subnetwork Exploration",
"sec_num": "3"
},
{
"text": "We employ iterative magnitude pruning (Frankle and Carbin, 2019) to find winning tickets for simplicity. Other sophisticated options are left for future work. Here, we explain the algorithm (refer to the paper for details). The algorithm explores a good pruning mask via rehearsals of finetuning. First, it completes a finetuning procedure of an initialized dense network and identifies the parameters with the 10% lowest magnitudes as the targets of pruning. Then, it makes the pruned subnetwork and resets its parameters to the originally-initialized (sub-)parameters. This finetune-prune-reset process is repeated until reaching the desired pruning ratio. We used 30% as pruning ratio.",
"cite_spans": [
{
"start": 38,
"end": 64,
"text": "(Frankle and Carbin, 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Iterative Magnitude Pruning",
"sec_num": "3.1"
},
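A compact sketch of the finetune-prune-reset loop described above. It is an illustration of iterative magnitude pruning under stated assumptions, not the exact training script: `finetune` and `magnitude_mask` are hypothetical helpers supplied by the caller, and parameters are held in a plain dict of tensors.

# Sketch of iterative magnitude pruning: finetune, prune 10% more by magnitude,
# reset to the pretrained weights, and repeat until the target pruning ratio.
import copy
import torch

def iterative_magnitude_pruning(pretrained, finetune, magnitude_mask, seed,
                                target_ratio=0.30, step=0.10):
    mask = {name: torch.ones_like(w) for name, w in pretrained.items()}  # start dense
    ratio = 0.0
    while ratio < target_ratio:
        finetuned = finetune(copy.deepcopy(pretrained), mask, seed)      # rehearsal finetuning
        ratio = min(ratio + step, target_ratio)
        mask = {name: magnitude_mask(w, ratio) for name, w in finetuned.items()}
        # parameters are reset to the pretrained values before the next round
    return finetune(copy.deepcopy(pretrained), mask, seed), mask         # final winning-ticket run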
{
"text": "We discussed that finetuning with different random seeds did not lead to diverse parameters in Section 2.1. Therefore, iterative magnitude pruning with different seeds could also produce less diverse subnetworks. Thus, we also explore means of diversifying pruning patterns by enforcing different parameters to have lower magnitudes. Motivated by this, we experiment with a simple approach, applying an L 1 regularizer (i.e., magnitude decay) to different parameters selectively depending on the random seeds. Specifically, we explore two policies to determine which parameters are decayed and how strongly they are, i.e., the element-wise coefficients of the L 1 regularizer, l s \u2208 R \u22650 |\u03b8| . During finetuning (for pruning), we add a regularization term \u03c4 ||\u03b8 s \u2299 l s || 1 with a positive scalar coefficient \u03c4 into the loss of the task (e.g., cross entropy for classification), where \u2299 is element-wise product. This softly enforces various parameters to have a lower magnitude among a set of random seeds and could lead various parameters to be pruned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning with Regularizer",
"sec_num": "3.2"
},
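A minimal sketch of adding the seed-dependent penalty \u03c4 ||\u03b8 \u2299 l_s||_1 to the task loss. The coefficient masks l_s and the scalar \u03c4 are supplied by the caller; the function name and the dict-of-tensors layout are assumptions for illustration.

# Sketch: add the selective L1 penalty tau * ||theta ⊙ l_s||_1 to the task loss.
# `coef_masks` maps parameter names to element-wise coefficients l_s (assumed to
# have the same shapes as the corresponding parameters).
import torch

def loss_with_selective_l1(task_loss, named_params, coef_masks, tau):
    penalty = sum((p * coef_masks[name]).abs().sum()
                  for name, p in named_params if name in coef_masks)
    return task_loss + tau * penalty

# usage sketch (names are hypothetical):
# loss = loss_with_selective_l1(cross_entropy, model.named_parameters(), l_s, tau=1e-4)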
{
"text": "Active Masking To maximize the diversity of the surviving parameters of member models, it is necessary to prune the surviving parameters of the random seed s 1 when building a model with the next random seed s 2 . Thus, during finetuning with seed s 2 , we apply the L 1 regularizer on the first surviving parameters. Likewise, with the following seeds s 3 , s 4 , ..., s i , ..., s |M| , we cumulatively use the average of the surviving masks as the regularizer coefficient mask. Let m s j \u2208 {0, 1} |\u03b8| be the pruning mask indicating surviving parameters from seed s j , the coefficient mask with seed s i is l s i = j<i m s j /(i \u2212 1). We call this affirmative policy as active masking.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning with Regularizer",
"sec_num": "3.2"
},
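A sketch of the active-masking coefficients l_{s_i} = \u2211_{j<i} m_{s_j} / (i \u2212 1), i.e., the average of the surviving binary masks from the previously processed seeds; the dict-of-tensors mask layout is an assumption for illustration.

# Sketch of active masking: the coefficient mask for the i-th seed is the average of
# the surviving (binary) masks obtained with seeds s_1 .. s_{i-1}.
import torch

def active_coefficient_mask(previous_masks):
    """previous_masks: list of dicts {param_name: binary surviving mask} from earlier seeds."""
    if not previous_masks:
        return None  # first seed: nothing to regularize against yet
    return {name: torch.stack([m[name] for m in previous_masks]).mean(dim=0)
            for name in previous_masks[0]}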
{
"text": "Random Masking In active masking, each coefficient mask has a sequential dependence on the preceding random seeds. Thus, the training of ensemble members cannot be parallelized. Therefore, we also experiment with a simpler and parallelizable variant, random masking, where a mask is independently and randomly generated from a random seed. With a random seed s i , we generate the seed-dependent random binary mask, i.e.,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning with Regularizer",
"sec_num": "3.2"
},
{
"text": "l s = m rand s i \u2208 {0, 1} |\u03b8| ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning with Regularizer",
"sec_num": "3.2"
},
{
"text": "where each element is sampled from Bernoulli distribution and 0's probability equals to the target pruning ratio.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning with Regularizer",
"sec_num": "3.2"
},
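A sketch of random masking: a seed-dependent binary coefficient mask whose entries are Bernoulli draws with P(0) equal to the target pruning ratio. The generator usage and the shape dictionary are illustrative assumptions.

# Sketch: seed-dependent random coefficient mask; each element is 0 with probability
# equal to the target pruning ratio (e.g., 0.30) and 1 otherwise.
import torch

def random_coefficient_mask(shapes, seed, pruning_ratio=0.30):
    gen = torch.Generator().manual_seed(seed)
    return {name: torch.bernoulli(torch.full(shape, 1.0 - pruning_ratio), generator=gen)
            for name, shape in shapes.items()}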
{
"text": "We evaluate the performance of ensembles using four finetuning schemes: (1) finetuning without pruning (BASELINE), (2) finetuning of lotteryticket subnetworks found with the na\u00efve iterative magnitude pruning (BASE-LT), and (3) with L 1 regularizer by the active masking (ACTIVE-LT) or (4) random masking (RANDOM-LT). We also compare with (5) BAGGING-based ensemble, which trains dense models on different random 90% training subsets. We use the GLUE benchmark (Wang et al., 2018) as tasks. The implementation and settings follow Chen et al. (2020) 2 using the Transformers library (Wolf et al., 2020) and its bert-baseuncased pretrained model. We report the average performance using twenty different random seeds. Ensembles are evaluated using exhaustive combinations of five members. We also perform Student's t-test for validating statistical significance 3 Table 1 : The performances (single, ens.) and the improvements by ensembling (diff.). Italic indicates that the value is significantly larger than that of BASELINE. Bold-italic indicates significantly larger than that of both BASELINE and BASE-LT. Underline indicates the best. that, while the experiments focus on using BERT, we believe that the insights would be helpful to other pretrain-then-finetune settings in general 4 .",
"cite_spans": [
{
"start": 460,
"end": 479,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF35"
},
{
"start": 581,
"end": 600,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF28"
},
{
"start": 859,
"end": 860,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 861,
"end": 868,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We show the results on MRPC (Dolan and Brockett, 2005) and STS-B (Cer et al., 2017) in Table 1 . Multi-ticket ensembles (*-LT) outperform BASE-LINE and BAGGING significantly (p < 0.001). This result supports the effectiveness of multi-ticket ensemble. Note that the improvements of *-LT are attributable to ensembling (diff.) rather than to any performance gains of the individual models (single). We also plot the improvements (ens. values relative to BASELINE) as a function of the number of ensemble members on MRPC and STS-B in Figure 3. This also clearly shows that while the single models of *-LT have accuracy similar to BASE-LINE, the gains appear when ensembling them. While multi-ticket ensemble works well even with the naive pruning method (BASE-LT), RANDOM-LT and ACTIVE-LT achieve the better ensembling effect on average; this suggests the effectiveness of regularizers. Interestingly, RANDOM-LT is simpler but more effective than ACTIVE-LT.",
"cite_spans": [
{
"start": 65,
"end": 83,
"text": "(Cer et al., 2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 87,
"end": 94,
"text": "Table 1",
"ref_id": null
},
{
"start": 532,
"end": 538,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Accuracy",
"sec_num": "4.1"
},
{
"text": "When Winning Tickets are Less Accurate Does multi-ticket ensemble work well on any tasks? The answer is no. To enjoy the benefit from multi-ticket ensemble, we have to find diverse winning-ticket subnetworks sufficiently comparable to their dense network. When winning tickets are less accurate than the baseline, their ensembles often fail to outperform the baseline's ensemble. It happened to CoLA (Warstadt et al., 2019) , QNLI (Rajpurkar et al., 2016) , SST-2 (Socher et al., 2013) , MNLI (Williams et al., 2018) ; the naive iterative magnitude pruning did not find comparable winning-ticket subnetworks (with or sometimes even without regularizers) 567 . Note that, even in such a case, RANDOM-LT often yielded a higher effect of ensembling (diff.), while the degradation of single models canceled out the effect in total, and BAGGING also failed to improve. More sophisticated pruning methods (Blalock et al., 2020; Sanh et al., 2020) or tuning will find better winningticket subnetworks and maximize the opportunities for multi-ticket ensemble in future work.",
"cite_spans": [
{
"start": 400,
"end": 423,
"text": "(Warstadt et al., 2019)",
"ref_id": "BIBREF36"
},
{
"start": 431,
"end": 455,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF26"
},
{
"start": 458,
"end": 485,
"text": "SST-2 (Socher et al., 2013)",
"ref_id": null
},
{
"start": 493,
"end": 516,
"text": "(Williams et al., 2018)",
"ref_id": "BIBREF37"
},
{
"start": 899,
"end": 921,
"text": "(Blalock et al., 2020;",
"ref_id": "BIBREF2"
},
{
"start": 922,
"end": 940,
"text": "Sanh et al., 2020)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Accuracy",
"sec_num": "4.1"
},
{
"text": "As an auxiliary analysis of behaviors, we show that each subnetwork produces diverse predictions. Because any existing diversity scores do not completely explain or justify the ensemble performance 8 , we discuss only rough trends in five popular metrics of classification diversity; Q statistic (Yule, 1900) , ratio errors (Aksela, 2003) , negative double fault (Giacinto and Roli, 2001) , disagreement measure (Skalak, 1996) , and correlation coefficient (Kuncheva and Whitaker, 2003) . See Kuncheva and Whitaker (2003) ; Cruz et al. 2020for their summarized definitions. As shown in Table 2, in all the metrics, winning-ticket subnetworks (*-LT) produced more diverse predictions than the 5 Although some studies (Prasanna et al., 2020; Chen et al., 2020; Liang et al., 2021) reported that they found winningticket subnetworks on these tasks, our finding did not contradict it. Their subnetworks were often actually a little worse than their dense networks, as well as we found. Chen et al. (2020) defined winning tickets as subnetworks with performances within one standard deviation from the dense networks. Prasanna et al. (2020) considered subnetworks with even 90% performance as winning tickets.",
"cite_spans": [
{
"start": 296,
"end": 308,
"text": "(Yule, 1900)",
"ref_id": "BIBREF41"
},
{
"start": 324,
"end": 338,
"text": "(Aksela, 2003)",
"ref_id": "BIBREF0"
},
{
"start": 363,
"end": 388,
"text": "(Giacinto and Roli, 2001)",
"ref_id": "BIBREF13"
},
{
"start": 412,
"end": 426,
"text": "(Skalak, 1996)",
"ref_id": "BIBREF31"
},
{
"start": 457,
"end": 486,
"text": "(Kuncheva and Whitaker, 2003)",
"ref_id": "BIBREF16"
},
{
"start": 493,
"end": 521,
"text": "Kuncheva and Whitaker (2003)",
"ref_id": "BIBREF16"
},
{
"start": 692,
"end": 693,
"text": "5",
"ref_id": null
},
{
"start": 716,
"end": 739,
"text": "(Prasanna et al., 2020;",
"ref_id": "BIBREF23"
},
{
"start": 740,
"end": 758,
"text": "Chen et al., 2020;",
"ref_id": "BIBREF5"
},
{
"start": 759,
"end": 778,
"text": "Liang et al., 2021)",
"ref_id": "BIBREF18"
},
{
"start": 982,
"end": 1000,
"text": "Chen et al. (2020)",
"ref_id": "BIBREF5"
},
{
"start": 1113,
"end": 1135,
"text": "Prasanna et al. (2020)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Diversity of Predictions",
"sec_num": "4.2"
},
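A sketch of the pairwise diversity statistics discussed above, computed from two members' per-example correctness using the standard contingency-table definitions (following Kuncheva and Whitaker, 2003). The ratio-errors metric is omitted here, and the paper reports the negative of the double-fault rate; variable names and the nondegenerate-table assumption are illustrative.

# Sketch: pairwise diversity statistics from two members' correctness vectors
# (1 = correct, 0 = wrong). Assumes a nondegenerate contingency table.
import numpy as np

def pairwise_diversity(correct_a, correct_b):
    a, b = np.asarray(correct_a, bool), np.asarray(correct_b, bool)
    n11 = np.sum(a & b); n00 = np.sum(~a & ~b)
    n10 = np.sum(a & ~b); n01 = np.sum(~a & b)
    n = a.size
    q = (n11 * n00 - n01 * n10) / (n11 * n00 + n01 * n10)       # Q statistic (lower = more diverse)
    dis = (n01 + n10) / n                                        # disagreement (higher = more diverse)
    rho = (n11 * n00 - n01 * n10) / np.sqrt(
        (n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))  # correlation coefficient
    double_fault = n00 / n                                       # both-wrong rate; the paper uses its negative (ND)
    return {"Q": q, "disagreement": dis, "correlation": rho, "double_fault": double_fault}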
{
"text": "6 For example, comparing BASELINE with RANDOM-LT of pruning ratio 20%, their average values of single/ensemble/difference are 91.38/91.93/+0.55 vs. 91.09/91.90/+0.81 on This also happens to experiments with roberta-base while multi-ticket ensemble still works well on MRPC.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Diversity of Predictions",
"sec_num": "4.2"
},
{
"text": "8 Finding such a convenient diversity metric itself is still a challenge in the research community (Wu et al., 2021 Table 2 : Diversity metrics on MRPC. The signs, \u2193 and \u2191, indicate that the metric gets lower and higher when the predictions are diverse. Q = Q statistic, R = ratio errors, ND = negative double fault, D = disagreement measure, C = correlation coefficient.",
"cite_spans": [
{
"start": 99,
"end": 115,
"text": "(Wu et al., 2021",
"ref_id": "BIBREF40"
}
],
"ref_spans": [
{
"start": 116,
"end": 123,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Diversity of Predictions",
"sec_num": "4.2"
},
{
"text": "Overlap ratio of pruning masks m si between different seeds on MRPC. The lower (yellower) the value is, the more dissimilar the two masks are. baseline using the dense networks (BASELINE).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 4:",
"sec_num": null
},
{
"text": "We finally revealed the diversity of the subnetwork structures on MRPC. We calculated the overlap ratio of two pruning masks, which is defined as intersection over union, IoU = (Chen et al., 2020) . In Figure 4 , we show the overlap ratio between the pruning masks for the five random seeds, i.e., {m s 1 , ..., m s 5 }. At first, we can see that ACTIVE-LT and RANDOM-LT using the regularizers resulted in diverse pruning. This higher diversity could lead to the best improvements by ensembling, as discussed in Section 4.1. Secondly, BASE-LT produced surprisingly similar (99%) pruning masks with different random seeds. However, recall that even BASE-LT using the na\u00efve iterative magnitude pruning performed better than BASE-LINE. This result shows that even seemingly small changes in structure can improve the diversity of predictions and the performance of the ensemble.",
"cite_spans": [
{
"start": 177,
"end": 196,
"text": "(Chen et al., 2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 202,
"end": 210,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Diversity of Subnetwork Structures",
"sec_num": "4.3"
},
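A sketch of the mask-overlap ratio IoU = |m_i \u2229 m_j| / |m_i \u222a m_j| over the surviving (nonzero) positions of two binary pruning masks; the flat tensor layout is an assumption for illustration.

# Sketch: intersection-over-union of the surviving positions of two binary pruning masks.
import torch

def mask_overlap(mask_i: torch.Tensor, mask_j: torch.Tensor) -> float:
    a, b = mask_i.bool(), mask_j.bool()
    union = (a | b).sum().item()
    return (a & b).sum().item() / union if union else 1.0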
{
"text": "We raised a question on difficulty of ensembling large-scale pretrained models. As an efficient remedy, we explored methods to use subnetworks in a single model. We empirically demonstrated that ensembling winning-ticket subnetworks could outperform the dense ensembles via diversification and indicated a limitation too.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "We follow the setting of Chen et al. (2020) 's implementation; epoch: 3, initial learning rate: 2e-5 with linear decay, maximum sequence length: 128, batch size: 32, dropout probability: 0.1. This is one of the most-used settings for finetuning a BERT; e.g., the example of finetuning in the Transformers library (Wolf et al., 2020) uses the setting 9 .",
"cite_spans": [
{
"start": 25,
"end": 43,
"text": "Chen et al. (2020)",
"ref_id": "BIBREF5"
},
{
"start": 313,
"end": 332,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A The Setting of Fine-tuning",
"sec_num": null
},
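The hyperparameters above roughly correspond to the following Transformers TrainingArguments sketch; the argument names come from the library's public API, but the output path, warmup setting, and the mapping itself are assumptions rather than the authors' script.

# Sketch of the finetuning hyperparameters listed above as Transformers TrainingArguments;
# output_dir is a hypothetical path.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./mrpc-lt",            # hypothetical output location
    num_train_epochs=3,
    learning_rate=2e-5,
    lr_scheduler_type="linear",        # linear decay of the learning rate
    per_device_train_batch_size=32,
    # the maximum sequence length (128) and dropout (0.1) are set via the
    # tokenizer/preprocessing and the model config, not TrainingArguments
)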
{
"text": "We did not prune the embedding layer, following Chen et al. (2020) ; Prasanna et al. (2020) . The coefficient of L 1 regularizer, \u03c4 , is decayed using the same scheduler as the learning rate. We tuned it on MRPC and used it for other tasks.",
"cite_spans": [
{
"start": 48,
"end": 66,
"text": "Chen et al. (2020)",
"ref_id": "BIBREF5"
},
{
"start": 69,
"end": 91,
"text": "Prasanna et al. (2020)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A The Setting of Fine-tuning",
"sec_num": null
},
{
"text": "B The Learning Rate Scheduler of Chen et al. (2020) Our implementation used in the experiments are derived from Chen et al. (2020)'s implementation 10 . However, we found a bug in Chen et al. (2020) 's implementation on GitHub. Thus, we fixed it and experimented with the correct version. In their implementation, the learning rate schedule did not follow the common setting and the description mentioned in the paper; 'We use standard implementations and hyperparameters [49] . Learning rate decays linearly from initial value to zero'. Specifically, the learning rate with linear decay did not reach zero but was at significant levels even at the end of the finetuning. Our implementation corrected it so that it did reach zero as specified in their paper and in the common setting.",
"cite_spans": [
{
"start": 33,
"end": 51,
"text": "Chen et al. (2020)",
"ref_id": "BIBREF5"
},
{
"start": 180,
"end": 198,
"text": "Chen et al. (2020)",
"ref_id": "BIBREF5"
},
{
"start": 472,
"end": 476,
"text": "[49]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A The Setting of Fine-tuning",
"sec_num": null
},
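To illustrate the corrected behavior, the sketch below uses the library's get_linear_schedule_with_warmup so that the learning rate decays linearly from 2e-5 to exactly zero at the final step; the optimizer setup and the step count are illustrative, not the experiment's actual values.

# Sketch: a linear schedule that reaches exactly zero at the last training step,
# matching the corrected behavior described above. Step count is illustrative.
import torch
from transformers import get_linear_schedule_with_warmup

params = [torch.nn.Parameter(torch.zeros(1))]        # stand-in for model parameters
optimizer = torch.optim.AdamW(params, lr=2e-5)
num_training_steps = 1000                             # illustrative total number of optimizer steps
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=0,
                                            num_training_steps=num_training_steps)
for _ in range(num_training_steps):
    optimizer.step()
    scheduler.step()
print(scheduler.get_last_lr())                        # [0.0] at the end of finetuning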
{
"text": "In the experiments, we first prepared twenty random seeds and split them into two groups, each of which trained ten models. For stabilizing the measurement of the result, we exhaustively evaluated all the possible combinations of ensembles (i.e., depending on the number of members, 10 C 2 , 10 C 3 , 10 C 4 , 10 C 5 patterns, respectively) among the ten models for each group, and averaged the results with the two groups. The performance of the members is also averaged over all the seeds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C The Combinations of Ensembles",
"sec_num": null
},
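A sketch of the evaluation protocol above: enumerate all size-k combinations of the ten per-group models with itertools.combinations and average a scoring function over them; `score_ensemble` is a hypothetical evaluation function, not part of the released code.

# Sketch: exhaustively evaluate all size-k ensembles among ten finetuned models and average.
from itertools import combinations
from statistics import mean

def exhaustive_ensemble_eval(models, score_ensemble, k=5):
    scores = [score_ensemble(list(members)) for members in combinations(models, k)]
    return mean(scores)  # e.g., averaged over the C(10, 5) = 252 possible 5-member ensembles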
{
"text": "9 https://github.com/ huggingface/transformers/blob/ 7e406f4a65727baf8e22ae922f410224cde99ed6/ examples/pytorch/text-classification/ README.md#glue-tasks 10 https://github.com/VITA-Group/ BERT-Tickets Table 3 : The performances (single, ens.) and the improvements by ensembling (diff.) of RoBERTa-base models.",
"cite_spans": [],
"ref_spans": [
{
"start": 201,
"end": 208,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "C The Combinations of Ensembles",
"sec_num": null
},
{
"text": "We simply conducted supplementary experiments with RoBERTa (Liu et al., 2019) (robeta-base model), although optimal hyperparameters were not searched well. The results were similar to the cases of base-base-uncased. The patterns can be categorized into the three. First, multi-ticket ensembles worked well with roberta on MRPC, as shown in Table 3 . Secondly, accurate winningticket subnetworks were not found on CoLA and QNLI. Although the effect of ensembleing was improved after pruning, each single model got worse and the final ensemble accuracy did not outperform the dense baseline. Thirdly, although accurate winning-ticket subnetworks were found on STS-B and SST-2, regularizations worsened single-model performances. While this case also improved the effect of ensembling, the final accuracy did not outperform the baseline. These experiments further emphasized the importance of development of more sophisticated pruning methods without sacrifice of model performances in the context of the lottery ticket hypothesis.",
"cite_spans": [
{
"start": 59,
"end": 77,
"text": "(Liu et al., 2019)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 340,
"end": 347,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "D The Results with RoBERTa",
"sec_num": null
},
{
"text": "Some concurrent studies also investigate the usage of subnetworks for ensembles. Gal and Ghahramani (2016) is a pioneer to use subnetwork ensemble. A trained neural network with dropout can infer with many different subnetworks, and their ensemble can be used for uncertainty estimation, which is called MC-dropout. Durasov et al. (2021) improved the efficiency of MC-dropout by exploring subnetworks. Zhang et al. (2021) (unpublished) experimented with an ensemble of subnetworks of different structures and initialization when trained from scratch, while the improvements possibly could be due to regularization of each single model. Havasi et al. (2021) is a similar but more elegant approach, which does not explicitly identify subnetworks. Instead, it trains a single dense model",
"cite_spans": [
{
"start": 316,
"end": 337,
"text": "Durasov et al. (2021)",
"ref_id": "BIBREF10"
},
{
"start": 402,
"end": 435,
"text": "Zhang et al. (2021) (unpublished)",
"ref_id": null
},
{
"start": 636,
"end": 656,
"text": "Havasi et al. (2021)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "E Related Work",
"sec_num": null
},
{
"text": "Some concurrent and recent studies also investigate subnetworks for effective ensemble(Durasov et al., 2021;Havasi et al., 2021) for training-from-scratch settings of image recognition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We also did not prune the embedding layer, followingChen et al. (2020);Prasanna et al. (2020)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We found a bug inChen et al. (2020)'s implementation on GitHub, so we fixed it and experimented with the correct version.3 Note that not all evaluation samples satisfy independence assumption.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Raffel et al. (2020) reported that the same problem happened on almost all tasks (GLUE(Wang et al., 2018), Super-GLUE(Wang et al., 2019), SQuAD(Rajpurkar et al., 2016), summarization, and machine translation) using the T5 model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We appreciate the helpful comments from the anonymous reviewers. This work was supported by JSPS KAKENHI Grant Number JP19H04162.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "with training using multi-input multi-output inference; the optimization can implicitly find multiple disentangled subnetworks in the dense model during optimization from random initialization. These studies support our assumption that different subnetworks can improve ensemble by diversity.Some other directions for introducing diversity exist, while most are unstable. Promising directions are to use entropy (Pang et al., 2019) or adversarial training (Rame and Cord, 2021) . Although they required complex optimization processes, they improved the robustness or ensemble performance on small image recognition datasets.Recently, concurrent work (Sellam et al., 2022; Tay et al., 2022) provide multiple BERT or T5 models pretrained from different seeds or configurations for investigation of seed or configuration dependency using large-scale computational resources. Further research with the models and such computational resources will be helpful for more solid comparison and analysis.Note that no prior work tackled the problem of ensembles from a pre-trained model. Framing the problem is one of the contributions of this paper. Secondly, our multi-ticket ensemble based on random masking enables an independently parallelizable training while existing methods require a sequential processing or a grouped training procedure. Finally, multi-ticket ensemble can be combined with other methods, which can improve the total performance together.",
"cite_spans": [
{
"start": 412,
"end": 431,
"text": "(Pang et al., 2019)",
"ref_id": "BIBREF22"
},
{
"start": 456,
"end": 477,
"text": "(Rame and Cord, 2021)",
"ref_id": "BIBREF27"
},
{
"start": 650,
"end": 671,
"text": "(Sellam et al., 2022;",
"ref_id": null
},
{
"start": 672,
"end": 689,
"text": "Tay et al., 2022)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Comparison of classifier selection methods for improving committee performance",
"authors": [
{
"first": "Matti",
"middle": [],
"last": "Aksela",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 4th International Conference on Multiple Classifier Systems, MCS'03",
"volume": "",
"issue": "",
"pages": "84--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matti Aksela. 2003. Comparison of classifier selection methods for improving committee performance. In Proceedings of the 4th International Conference on Multiple Classifier Systems, MCS'03, page 84-93, Berlin, Heidelberg. Springer-Verlag.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Towards understanding ensemble, knowledge distillation and self-distillation in deep learning",
"authors": [
{
"first": "Zeyuan",
"middle": [],
"last": "Allen-Zhu",
"suffix": ""
},
{
"first": "Yuanzhi",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeyuan Allen-Zhu and Yuanzhi Li. 2020. To- wards understanding ensemble, knowledge distilla- tion and self-distillation in deep learning. CoRR, abs/2012.09816.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "What is the state of neural network pruning?",
"authors": [
{
"first": "Davis",
"middle": [],
"last": "Blalock",
"suffix": ""
},
{
"first": "Jose Javier Gonzalez",
"middle": [],
"last": "Ortiz",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Frankle",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Guttag",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of Second Machine Learning and Systems",
"volume": "",
"issue": "",
"pages": "129--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Davis Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, and John Guttag. 2020. What is the state of neural network pruning? In Proceedings of Second Machine Learning and Systems (MLSys 2020), pages 129-146.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Low-complexity probing via finding subnetworks",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2021)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Cao, Victor Sanh, and Alexander M. Rush. 2021. Low-complexity probing via finding subnetworks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies (NAACL-HLT 2021). Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "I\u00f1igo",
"middle": [],
"last": "Lopez-Gazpio",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "1--14",
"other_ids": {
"DOI": [
"10.18653/v1/S17-2001"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Mona Diab, Eneko Agirre, I\u00f1igo Lopez- Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval 2017), pages 1-14, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The lottery ticket hypothesis for pretrained bert networks",
"authors": [
{
"first": "Tianlong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Frankle",
"suffix": ""
},
{
"first": "Shiyu",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Sijia",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhangyang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Carbin",
"suffix": ""
}
],
"year": 2020,
"venue": "Advances in Neural Information Processing Systems",
"volume": "33",
"issue": "",
"pages": "15834--15846",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Zhangyang Wang, and Michael Carbin. 2020. The lottery ticket hypothesis for pre- trained bert networks. In Advances in Neural Infor- mation Processing Systems 33 (NeurIPS 2020), pages 15834-15846. Curran Associates, Inc.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Deslib: A dynamic ensemble selection library in python",
"authors": [
{
"first": "M",
"middle": [
"O"
],
"last": "Rafael",
"suffix": ""
},
{
"first": "Luiz",
"middle": [
"G"
],
"last": "Cruz",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Hafemann",
"suffix": ""
},
{
"first": "George",
"middle": [
"D C"
],
"last": "Sabourin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Cavalcanti",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of Machine Learning Research",
"volume": "21",
"issue": "8",
"pages": "1--5",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rafael M. O. Cruz, Luiz G. Hafemann, Robert Sabourin, and George D. C. Cavalcanti. 2020. Deslib: A dy- namic ensemble selection library in python. Journal of Machine Learning Research, 21(8):1-5.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2019)",
"volume": "",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2019), pages 4171-4186, Minneapolis, Minnesota. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Automatically constructing a corpus of sentential paraphrases",
"authors": [
{
"first": "B",
"middle": [],
"last": "William",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dolan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Brockett",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Third International Workshop on Paraphrasing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William B. Dolan and Chris Brockett. 2005. Automati- cally constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP 2005).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Why does bagging work? a bayesian account and its implications",
"authors": [
{
"first": "Pedro",
"middle": [],
"last": "Domingos",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the Third International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "155--158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pedro Domingos. 1997. Why does bagging work? a bayesian account and its implications. In Proceed- ings of the Third International Conference on Knowl- edge Discovery and Data Mining (KDD 1997), page 155-158. AAAI Press.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Masksembles for uncertainty estimation",
"authors": [
{
"first": "Nikita",
"middle": [],
"last": "Durasov",
"suffix": ""
},
{
"first": "Timur",
"middle": [],
"last": "Bagautdinov",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Baque",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Fua",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "13539--13548",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikita Durasov, Timur Bagautdinov, Pierre Baque, and Pascal Fua. 2021. Masksembles for uncertainty esti- mation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 13539-13548.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Frankle",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Carbin",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 7th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Frankle and Michael Carbin. 2019. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In Proceedings of the 7th Inter- national Conference on Learning Representations (ICLR 2019).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning",
"authors": [
{
"first": "Yarin",
"middle": [],
"last": "Gal",
"suffix": ""
},
{
"first": "Zoubin",
"middle": [],
"last": "Ghahramani",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of The 33rd International Conference on Machine Learning",
"volume": "48",
"issue": "",
"pages": "1050--1059",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a bayesian approximation: Representing model un- certainty in deep learning. In Proceedings of The 33rd International Conference on Machine Learn- ing, volume 48 of Proceedings of Machine Learning Research, pages 1050-1059, New York, New York, USA. PMLR.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Design of effective neural network ensembles for image classification purposes",
"authors": [
{
"first": "Giorgio",
"middle": [],
"last": "Giacinto",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Roli",
"suffix": ""
}
],
"year": 2001,
"venue": "Image and Vision Computing",
"volume": "19",
"issue": "9",
"pages": "699--707",
"other_ids": {
"DOI": [
"10.1016/S0262-8856(01)00045-2"
]
},
"num": null,
"urls": [],
"raw_text": "Giorgio Giacinto and Fabio Roli. 2001. Design of effective neural network ensembles for image clas- sification purposes. Image and Vision Computing, 19(9):699-707.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Training independent subnetworks for robust prediction",
"authors": [
{
"first": "Marton",
"middle": [],
"last": "Havasi",
"suffix": ""
},
{
"first": "Rodolphe",
"middle": [],
"last": "Jenatton",
"suffix": ""
},
{
"first": "Stanislav",
"middle": [],
"last": "Fort",
"suffix": ""
},
{
"first": "Jeremiah",
"middle": [
"Zhe"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Jasper",
"middle": [],
"last": "Snoek",
"suffix": ""
},
{
"first": "Balaji",
"middle": [],
"last": "Lakshminarayanan",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Mingbo"
],
"last": "Dai",
"suffix": ""
},
{
"first": "Dustin",
"middle": [],
"last": "Tran",
"suffix": ""
}
],
"year": 2021,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marton Havasi, Rodolphe Jenatton, Stanislav Fort, Jeremiah Zhe Liu, Jasper Snoek, Balaji Lakshmi- narayanan, Andrew Mingbo Dai, and Dustin Tran. 2021. Training independent subnetworks for robust prediction. In International Conference on Learning Representations.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "CVPR tutorial: Diversity meets deep networks -inference, ensemble learning, and applications",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Kirillov",
"suffix": ""
},
{
"first": "Bogdan",
"middle": [],
"last": "Savchynskyy",
"suffix": ""
},
{
"first": "Carsten",
"middle": [],
"last": "Rother",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Kirillov, Bogdan Savchynskyy, Carsten Rother, Stefan Lee, and Dhruv Batra. 2016. CVPR tutorial: Diversity meets deep networks -inference, ensemble learning, and applications.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy",
"authors": [
{
"first": "I",
"middle": [],
"last": "Ludmila",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"J"
],
"last": "Kuncheva",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Whitaker",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "51",
"issue": "",
"pages": "181--207",
"other_ids": {
"DOI": [
"10.1023/A:1022859003006"
]
},
"num": null,
"urls": [],
"raw_text": "Ludmila I. Kuncheva and Christopher J. Whitaker. 2003. Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. Ma- chine Learning, 51(2):181-207.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A statistical approach to learning and generalization in layered neural networks",
"authors": [
{
"first": "Esther",
"middle": [],
"last": "Levin",
"suffix": ""
},
{
"first": "Naftali",
"middle": [],
"last": "Tishby",
"suffix": ""
},
{
"first": "Sara",
"middle": [
"A"
],
"last": "Solla",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the Second Annual Workshop on Computational Learning Theory (COLT 1989)",
"volume": "",
"issue": "",
"pages": "245--260",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Esther Levin, Naftali Tishby, and Sara A. Solla. 1989. A statistical approach to learning and generalization in layered neural networks. In Proceedings of the Sec- ond Annual Workshop on Computational Learning Theory (COLT 1989), page 245-260, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Super tickets in pre-trained language models: From model compression to improving generalization",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Simiao",
"middle": [],
"last": "Zuo",
"suffix": ""
},
{
"first": "Minshuo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Haoming",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Pengcheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Tuo",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Weizhu",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2021.acl-long.510"
]
},
"num": null,
"urls": [],
"raw_text": "Chen Liang, Simiao Zuo, Minshuo Chen, Haoming Jiang, Xiaodong Liu, Pengcheng He, Tuo Zhao, and Weizhu Chen. 2021. Super tickets in pre-trained language models: From model compression to im- proving generalization. In Proceedings of the 59th",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing",
"authors": [],
"year": null,
"venue": "",
"volume": "1",
"issue": "",
"pages": "6524--6538",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Confer- ence on Natural Language Processing (Volume 1: Long Papers), pages 6524-6538, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Ensemble learning via negative correlation",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Yao",
"suffix": ""
}
],
"year": 1999,
"venue": "Neural Networks",
"volume": "12",
"issue": "10",
"pages": "1399--1404",
"other_ids": {
"DOI": [
"10.1016/S0893-6080(99)00073-8"
]
},
"num": null,
"urls": [],
"raw_text": "Y. Liu and X. Yao. 1999. Ensemble learn- ing via negative correlation. Neural Networks, 12(10):1399-1404.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Improving adversarial robustness via promoting ensemble diversity",
"authors": [
{
"first": "Tianyu",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Kun",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Chao",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Ning",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 36th International Conference on Machine Learning",
"volume": "97",
"issue": "",
"pages": "4970--4979",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianyu Pang, Kun Xu, Chao Du, Ning Chen, and Jun Zhu. 2019. Improving adversarial robustness via pro- moting ensemble diversity. In Proceedings of the 36th International Conference on Machine Learn- ing, volume 97 of Proceedings of Machine Learning Research, pages 4970-4979. PMLR.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "When BERT Plays the Lottery, All Tickets Are Winning",
"authors": [
{
"first": "Sai",
"middle": [],
"last": "Prasanna",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rogers",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020)",
"volume": "",
"issue": "",
"pages": "3208--3229",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.259"
]
},
"num": null,
"urls": [],
"raw_text": "Sai Prasanna, Anna Rogers, and Anna Rumshisky. 2020. When BERT Plays the Lottery, All Tickets Are Win- ning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), pages 3208-3229, Online. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "How fine can fine-tuning be? learning efficient language models",
"authors": [
{
"first": "Evani",
"middle": [],
"last": "Radiya-Dixit",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics (ICML 2020)",
"volume": "108",
"issue": "",
"pages": "2435--2443",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evani Radiya-Dixit and Xin Wang. 2020. How fine can fine-tuning be? learning efficient language models. In Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics (ICML 2020), volume 108 of Proceedings of Machine Learning Research, pages 2435-2443. PMLR.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Exploring the limits of transfer learning with a unified text-to-text transformer",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of Machine Learning Research",
"volume": "21",
"issue": "140",
"pages": "1--67",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Kather- ine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "SQuAD: 100,000+ questions for machine comprehension of text",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Lopyrev",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP 2016)",
"volume": "",
"issue": "",
"pages": "2383--2392",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1264"
]
},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP 2016), pages 2383-2392, Austin, Texas. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "{DICE}: Diversity in deep ensembles via conditional redundancy adversarial estimation",
"authors": [
{
"first": "Alexandre",
"middle": [],
"last": "Rame",
"suffix": ""
},
{
"first": "Matthieu",
"middle": [],
"last": "Cord",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 9th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandre Rame and Matthieu Cord. 2021. {DICE}: Diversity in deep ensembles via conditional redun- dancy adversarial estimation. In Proceedings of the 9th International Conference on Learning Represen- tations (ICLR 2021).",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Movement pruning: Adaptive sparsity by fine-tuning",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "Advances in Neural Information Processing Systems",
"volume": "33",
"issue": "",
"pages": "20378--20389",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Thomas Wolf, and Alexander Rush. 2020. Movement pruning: Adaptive sparsity by fine-tuning. In Advances in Neural Information Processing Sys- tems, volume 33, pages 20378-20389. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Dipanjan Das, and Ellie Pavlick. 2022. The multiB-ERTs: BERT reproductions for robustness analysis",
"authors": [
{
"first": "Thibault",
"middle": [],
"last": "Sellam",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Yadlowsky",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Tenney",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Naomi",
"middle": [],
"last": "Saphra",
"suffix": ""
},
{
"first": "Alexander D'",
"middle": [],
"last": "Amour",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "Jasmijn",
"middle": [],
"last": "Bastings",
"suffix": ""
},
{
"first": "Iulia",
"middle": [
"Raluca"
],
"last": "Turc",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
}
],
"year": null,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thibault Sellam, Steve Yadlowsky, Ian Tenney, Jason Wei, Naomi Saphra, Alexander D'Amour, Tal Linzen, Jasmijn Bastings, Iulia Raluca Turc, Jacob Eisenstein, Dipanjan Das, and Ellie Pavlick. 2022. The multiB- ERTs: BERT reproductions for robustness analysis. In International Conference on Learning Representa- tions (ICLR 2022).",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "The cost of training NLP models: A concise overview",
"authors": [
{
"first": "Or",
"middle": [],
"last": "Sharir",
"suffix": ""
},
{
"first": "Barak",
"middle": [],
"last": "Peleg",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Shoham",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Or Sharir, Barak Peleg, and Yoav Shoham. 2020. The cost of training NLP models: A concise overview. CoRR, abs/2004.08900.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "The sources of increased accuracy for two proposed boosting algorithms",
"authors": [
{
"first": "David",
"middle": [
"B"
],
"last": "Skalak",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc. American Association for Arti Intelligence, AAAI-96, Integrating Multiple Learned Models Workshop",
"volume": "",
"issue": "",
"pages": "120--125",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David B. Skalak. 1996. The sources of increased ac- curacy for two proposed boosting algorithms. In In Proc. American Association for Arti Intelligence, AAAI-96, Integrating Multiple Learned Models Work- shop, pages 120-125.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Recursive deep models for semantic compositionality over a sentiment treebank",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Perelygin",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP 2013)",
"volume": "",
"issue": "",
"pages": "1631--1642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP 2013), pages 1631-1642, Seattle, Washington, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, and Donald Metzler. 2022. Scale efficiently: Insights from pretraining and finetuning transformers",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Tay",
"suffix": ""
},
{
"first": "Mostafa",
"middle": [],
"last": "Dehghani",
"suffix": ""
},
{
"first": "Jinfeng",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Fedus",
"suffix": ""
},
{
"first": "Samira",
"middle": [],
"last": "Abnar",
"suffix": ""
},
{
"first": "Hyung",
"middle": [
"Won"
],
"last": "Chung",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Dani",
"middle": [],
"last": "Yogatama",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Donald",
"middle": [],
"last": "Metzler",
"suffix": ""
}
],
"year": null,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, and Donald Met- zler. 2022. Scale efficiently: Insights from pretrain- ing and finetuning transformers. In International Conference on Learning Representations.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "SuperGLUE: A stickier benchmark for general-purpose language understanding systems",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yada",
"middle": [],
"last": "Pruksachatkun",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Yada Pruksachatkun, Nikita Nangia, Aman- preet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. SuperGLUE: A stickier benchmark for general-purpose language understand- ing systems. In Advances in Neural Information Processing Systems 32 (NeurIPS 2019). Curran As- sociates, Inc.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "353--355",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5446"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for nat- ural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355, Brussels, Belgium. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Neural network acceptability judgments",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Warstadt",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "625--641",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00290"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Warstadt, Amanpreet Singh, and Samuel R. Bow- man. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625-641.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "A broad-coverage challenge corpus for sentence understanding through inference",
"authors": [
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1112--1122",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1101"
]
},
"num": null,
"urls": [],
"raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations (EMNLP 2020)",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transform- ers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations (EMNLP 2020), pages 38-45, On- line. Association for Computational Linguistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Boosting ensemble accuracy by revisiting ensemble diversity metrics",
"authors": [
{
"first": "Yanzhao",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Ling",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Zhongwei",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Ka-Ho",
"middle": [],
"last": "Chow",
"suffix": ""
},
{
"first": "Wenqi",
"middle": [],
"last": "Wei",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "16469--16477",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yanzhao Wu, Ling Liu, Zhongwei Xie, Ka-Ho Chow, and Wenqi Wei. 2021. Boosting ensemble accuracy by revisiting ensemble diversity metrics. In Pro- ceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 16469- 16477.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "On the association of attributes in statistics: With illustrations from the material of the childhood society, &c",
"authors": [
{
"first": "G",
"middle": [],
"last": "Udny Yule",
"suffix": ""
}
],
"year": 1900,
"venue": "Philosophical Transactions of the Royal Society of London",
"volume": "194",
"issue": "",
"pages": "257--319",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Udny Yule. 1900. On the association of attributes in statistics: With illustrations from the material of the childhood society, &c. Philosophical Transactions of the Royal Society of London, 194:257-319.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Ex uno plures: Splitting one model into an ensemble of subnetworks",
"authors": [
{
"first": "Zhilu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Vianne",
"middle": [
"R"
],
"last": "Gao",
"suffix": ""
},
{
"first": "Mert",
"middle": [
"R"
],
"last": "Sabuncu",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilu Zhang, Vianne R. Gao, and Mert R. Sabuncu. 2021. Ex uno plures: Splitting one model into an ensemble of subnetworks.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Masking as an efficient alternative to finetuning for pretrained language models",
"authors": [
{
"first": "Mengjie",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Mi",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Jaggi",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020)",
"volume": "",
"issue": "",
"pages": "2226--2241",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.174"
]
},
"num": null,
"urls": [],
"raw_text": "Mengjie Zhao, Tao Lin, Fei Mi, Martin Jaggi, and Hin- rich Sch\u00fctze. 2020. Masking as an efficient alterna- tive to finetuning for pretrained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), pages 2226-2241, Online. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "mask of pruning, m s,10% 2 {0, 1} |\u2713s| , where the surviving positions have 1 otherwise 0. The pruning of parameters \u2713 by a mask m can be also represented as \u2713 m, where is the element-wise product. Next, we replay finetuning but from \u2713 s m s,10% and get FINE(\u2713 s m s,10% , s) as well as 20%-pruning mask m s,20% . By repeating iterative magnitude pruning, we obtain the parameter FINE(\u2713 s m s,P % , s). In our experiments, we set P = 30, i.e., evaluate ensemble of 30%-pruning sub-networks, where M =",
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"uris": null,
"text": ";Prasanna et al. (2020) 2 tialization of the task-specific output layer, (2) 080 dataset shuffling for stochastic gradient descent 081 (SGD), and (3) dropout noise. However, empiri-082 cally, they do not necessarily lead to diverse mod-083 els, asRadiya-Dixit and Wang (2020) reported the 084 finetuned parameters could not be far away.085For diversifying models more, we propose to086 introduce a novel randomness, (4) pruning of the 087 pretrained model before each finetuning. We ex-088 pect that, during finetuning, each sub-network ac-089 quires different views using different sub-spaces of 090 the pretrained knowledge. This idea has two chal-091 lenges: the diversity and the accuracy of the sub-092 networks. Recent studies on the lottery ticket hy-093 pothesis (Frankle and Carbin, 2019) suggest that a 094 dense neural network at an initialization contains a 095 sub-network, called winning ticket, whose accuracy 096 becomes comparable with that of the dense network 097 after the same training steps. A pretrained BERT 098 also has sparse sub-networks (e.g., 50%), which 099 can achieve the same accuracy with the entire net-100 work when finetuning on downstream tasks (Chen 101 et al., 2020). However, it is still unclear how di-102",
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"uris": null,
"text": "'s procedure 2 . We experiment with several 118 variants on the basis of this method.119Let FINE(\u2713, s) be the new parameters, which 120 are finetuned from \u2713 using SGD depending121 on a random seed s. Also, let \u2713 s represent 122 the parameter of the pretrained BERT and the 123 task-specific layer, whose parameter is randomly 124 initialized by random seed s. After finetuning 125 \u2713 s to FINE(\u2713 s , s), we identify and prune the 126 parameters with 10% lowest magnitudes in 127 FINE(\u2713 s , s). We also get the corresponding binary 128 mask of pruning, m s,10% 2 {0, 1} |\u2713s| , where 129 the surviving positions have 1 otherwise 0. The 130 pruning of parameters \u2713 by a mask m can be also 131 represented as \u2713 m, where is the element-wise 132 product. Next, we replay finetuning but from 133 \u2713 s m s,10% and get FINE(\u2713 s m s,10% , s) as 134 well as 20%-pruning mask m s,20% . By repeating 135 iterative magnitude pruning, we obtain the 136 parameter FINE(\u2713 s m s,P % , s). In our exper-137 iments, we set P = 30, i.e., evaluate ensemble 138 of 30%-pruning sub-networks, where M = 139 {FINE(\u2713 s1 m s1,30% , s 1 ), ..., FINE(\u2713 s |M| 140",
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"uris": null,
"text": "Comparison of the performances and the number of ensemble members on MRPC (left) and STS-B (right). They are represented as the relative gain compared with BASELINE's accuracy.",
"type_str": "figure"
},
"TABREF0": {
"type_str": "table",
"num": null,
"html": null,
"text": "on a random seed s. Also, let \u2713 s represent 122 the parameter of the pretrained BERT and the 123 task-specific layer, whose parameter is randomly 124 initialized by random seed s. After finetuning 125 \u2713 s to FINE(\u2713 s , s), we identify and prune the , s). We also get the corresponding binary128 mask of pruning, m s,10% 2 {0, 1} |\u2713s| , where 129 the surviving positions have 1 otherwise 0. The 130 pruning of parameters \u2713 by a mask m can be also 131 represented as \u2713 m, where is the element-wise",
"content": "<table><tr><td>088</td><td colspan=\"3\">pretrained model before each finetuning. We ex-</td></tr><tr><td>089</td><td colspan=\"3\">pect that, during finetuning, each sub-network ac-</td></tr><tr><td>090</td><td colspan=\"2\">quires different views using different sub-spaces of</td><td/></tr><tr><td>091</td><td colspan=\"3\">the pretrained knowledge. This idea has two chal-</td></tr><tr><td>092</td><td colspan=\"3\">lenges: the diversity and the accuracy of the sub-</td></tr><tr><td>093</td><td colspan=\"3\">networks. Recent studies on the lottery ticket hy-</td><td>126</td></tr><tr><td>094</td><td colspan=\"4\">parameters with 10% lowest magnitudes in pothesis (Frankle and Carbin, 2019) suggest that a</td><td>127</td></tr><tr><td>095 096 097 098</td><td colspan=\"4\">dense neural network at an initialization contains a sub-network, called winning ticket, whose accuracy becomes comparable with that of the dense network FINE(\u2713 s 132 after the same training steps. A pretrained BERT</td></tr><tr><td/><td colspan=\"4\">product. Next, we replay finetuning but from</td><td>133</td></tr><tr><td/><td>\u2713 s</td><td colspan=\"2\">m s,10% and get FINE(\u2713 s</td><td>m s,10% , s) as</td><td>134</td></tr><tr><td/><td colspan=\"4\">well as 20%-pruning mask m s,20% . By repeating</td><td>135</td></tr><tr><td/><td colspan=\"4\">iterative magnitude pruning, we obtain the</td><td>136</td></tr><tr><td/><td colspan=\"2\">parameter FINE(\u2713 s</td><td colspan=\"2\">m s,P % , s). In our exper-</td><td>137</td></tr><tr><td/><td colspan=\"4\">iments, we set P = 30, i.e., evaluate ensemble</td><td>138</td></tr><tr><td/><td colspan=\"4\">of 30%-pruning sub-networks, where M = {FINE(\u2713 s1 m s1,30% , s 1 ), ..., FINE(\u2713 s |M|</td><td>139 140</td></tr><tr><td/><td/><td colspan=\"3\">2 We also did not prune the embedding layer, following</td></tr><tr><td/><td colspan=\"3\">Chen et al. (2020); Prasanna et al. (2020)</td></tr><tr><td/><td>2</td><td/><td/></tr></table>"
},
"TABREF1": {
"type_str": "table",
"num": null,
"html": null,
"text": "For diversifying models more, we propose to dense neural network at an initialization contains a 095 sub-network, called winning ticket, whose accuracy 096 becomes comparable with that of the dense network 097 after the same training steps. A pretrained BERT 098 also has sparse sub-networks (e.g., 50%), which 099 can achieve the same accuracy with the entire net-",
"content": "<table/>"
},
"TABREF2": {
"type_str": "table",
"num": null,
"html": null,
"text": "We also get the corresponding binary128 mask of pruning, m s,10% 2 {0, 1} |\u2713s| , where 129 the surviving positions have 1 otherwise 0. The 130 pruning of parameters \u2713 by a mask m can be also 131 represented as \u2713 m, where is the element-wise 132 product. Next, we replay finetuning but from 133 \u2713 s m s,10% and get FINE(\u2713 s m s,10% , s) as 134 well as 20%-pruning mask m s,20% . By repeating 135 iterative magnitude pruning, we obtain the 136 parameter FINE(\u2713 s m s,P % , s). In our exper-137 iments, we set P = 30, i.e., evaluate ensemble 30% , s 1 ), ..., FINE(\u2713 s |M| 140 2 We also did not prune the embedding layer, following Chen et al. (2020); Prasanna et al. (2020) 2 pect that, during finetuning, each sub-network ac-089 quires different views using different sub-spaces of 090 the pretrained knowledge. This idea has two chal-091 lenges: the diversity and the accuracy of the sub-092 networks. Recent studies on the lottery ticket hy-093 pothesis (Frankle and Carbin, 2019) suggest that a 094 dense neural network at an initialization contains a 095 sub-network, called winning ticket, whose accuracy 096 becomes comparable with that of the dense network 097 after the same training steps. A pretrained BERT",
"content": "<table><tr><td>of 30%-pruning sub-networks, where M = {FINE(\u2713 s1 m s1,098</td><td>138 139</td></tr></table>"
},
"TABREF3": {
"type_str": "table",
"num": null,
"html": null,
"text": ". Note MRPC STS-B single ens. diff. single ens. diff. 83.48 84.34 +0.86 88.35 89.04 +0.69 (BAGGING) 82.87 84.19 +1.32 88.17 88.84 +0.68 BASE-LT 83.84 84.98 +1.14 88.37 89.16 +0.79 ACTIVE-LT 83.22 84.60 +1.38 88.39 89.32 +0.94 RANDOM-LT 83.53 85.05 +1.52 88.49 89.35 +0.86",
"content": "<table/>"
},
"TABREF5": {
"type_str": "table",
"num": null,
"html": null,
"text": "MRPC STS-B single ens. diff. single ens. diff. BASELINE 87.77 88.47 +0.70 89.52 90.00 +0.48 (BAGGING) 87.64 88.12 +0.49 89.34 89.91 +0.54 BASE-LT 87.72 88.25 +0.53 89.71 90.07 +0.36 ACTIVE-LT 87.39 88.51 +1.12 88.46 89.50 +1.04 RANDOM-LT 87.86 89.26 +1.40 88.41 89.39 +0.98",
"content": "<table/>"
}
}
}
}
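
The FIGREF2 text above describes iterative magnitude pruning (IMP) with rewinding to the pretrained weights, repeated under several seeds, followed by ensembling the resulting winning-ticket sub-networks. The following is a minimal sketch of that idea, not the authors' implementation: the toy classifier, the names TinyClassifier, finetune, magnitude_masks, and imp_finetune, the synthetic data, and all hyperparameters are illustrative assumptions (a real run would use a pretrained BERT and task data). Following the footnote reproduced in the parse, only hidden ("body") weight matrices are pruned, while the task head and embedding-like layers stay dense.

```python
# Sketch of multi-ticket ensembling: IMP with rewinding on copies of one
# pretrained model under different seeds, then averaging the members' outputs.
import copy
import torch
import torch.nn.functional as F

class TinyClassifier(torch.nn.Module):
    """Stand-in for a pretrained encoder ("body") plus a task-specific head."""
    def __init__(self, d_in=16, d_hid=32, n_cls=2):
        super().__init__()
        self.body = torch.nn.Sequential(torch.nn.Linear(d_in, d_hid), torch.nn.ReLU())
        self.head = torch.nn.Linear(d_hid, n_cls)
    def forward(self, x):
        return self.head(self.body(x))

def apply_masks(model, masks):
    # Zero out pruned positions: theta * m (element-wise product).
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])

def finetune(model, xs, ys, masks=None, steps=200, lr=1e-2):
    """FINE(theta, s): plain SGD finetuning; pruned weights are kept at zero."""
    if masks:
        apply_masks(model, masks)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(model(xs), ys).backward()
        opt.step()
        if masks:
            apply_masks(model, masks)
    return model

def magnitude_masks(model, prune_frac):
    """Binary masks m_{s,P%}: prune the prune_frac lowest-magnitude body weights.
    The head (and embedding-like layers) is left dense."""
    weights = {n: p.detach().abs() for n, p in model.named_parameters()
               if n.startswith("body") and p.dim() > 1}
    threshold = torch.quantile(torch.cat([w.flatten() for w in weights.values()]), prune_frac)
    return {n: (w > threshold).float() for n, w in weights.items()}

def imp_finetune(pretrained, xs, ys, seed, prune_per_round=0.10, rounds=3):
    """One ensemble member: IMP with rewinding, 10% -> 20% -> 30% cumulative pruning."""
    torch.manual_seed(seed)
    theta_s = copy.deepcopy(pretrained)
    theta_s.head.reset_parameters()  # seed-specific task-head initialization
    masks = None
    for r in range(1, rounds + 1):
        model = finetune(copy.deepcopy(theta_s), xs, ys, masks=masks)  # rewind, then finetune
        masks = magnitude_masks(model, prune_frac=r * prune_per_round)
    return finetune(copy.deepcopy(theta_s), xs, ys, masks=masks)       # final 30%-pruned member

# Toy usage: ensemble three winning tickets by averaging their softmax outputs.
torch.manual_seed(0)
xs, ys = torch.randn(256, 16), torch.randint(0, 2, (256,))
pretrained = TinyClassifier()
members = [imp_finetune(pretrained, xs, ys, seed=s) for s in (1, 2, 3)]
with torch.no_grad():
    probs = torch.stack([F.softmax(m(xs), dim=-1) for m in members]).mean(dim=0)
print("toy ensemble accuracy:", (probs.argmax(-1) == ys).float().mean().item())
```

Averaging softmax outputs is one common ensembling choice; in this sketch the seed only changes the task-head initialization, so member diversity comes chiefly from which weights each seed's masks end up pruning.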