{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:38:40.437360Z"
},
"title": "How Emotionally Stable is ALBERT? Testing Robustness with Stochastic Weight Averaging on a Sentiment Analysis Task",
"authors": [
{
"first": "Urja",
"middle": [],
"last": "Khurana",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Eindhoven University of Technology",
"location": {}
},
"email": ""
},
{
"first": "Eric",
"middle": [],
"last": "Nalisnick",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Eindhoven University of Technology",
"location": {}
},
"email": ""
},
{
"first": "Antske",
"middle": [],
"last": "Fokkens",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Eindhoven University of Technology",
"location": {}
},
"email": "[email protected]@uva.nl"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Despite their success, modern language models are fragile. Even small changes in their training pipeline can lead to unexpected results. We study this phenomenon by examining the robustness of ALBERT (Lan et al., 2020) in combination with Stochastic Weight Averaging (SWA)-a cheap way of ensembling-on a sentiment analysis task (SST-2). In particular, we analyze SWA's stability via CheckList criteria (Ribeiro et al., 2020), examining the agreement on errors made by models differing only in their random seed. We hypothesize that SWA is more stable because it ensembles model snapshots taken along the gradient descent trajectory. We quantify stability by comparing the models' mistakes with Fleiss' Kappa (Fleiss, 1971) and overlap ratio scores. We find that SWA reduces error rates in general; yet the models still suffer from their own distinct biases (according to CheckList).",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Despite their success, modern language models are fragile. Even small changes in their training pipeline can lead to unexpected results. We study this phenomenon by examining the robustness of ALBERT (Lan et al., 2020) in combination with Stochastic Weight Averaging (SWA)-a cheap way of ensembling-on a sentiment analysis task (SST-2). In particular, we analyze SWA's stability via CheckList criteria (Ribeiro et al., 2020), examining the agreement on errors made by models differing only in their random seed. We hypothesize that SWA is more stable because it ensembles model snapshots taken along the gradient descent trajectory. We quantify stability by comparing the models' mistakes with Fleiss' Kappa (Fleiss, 1971) and overlap ratio scores. We find that SWA reduces error rates in general; yet the models still suffer from their own distinct biases (according to CheckList).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Current language models perform well on data that resemble the distribution they are trained on, but even a slight variation in the model training setup can lead to results that diverge from what is originally reported (Fokkens et al., 2013; Sellam et al., 2021) . Furthermore, when the model relies on spurious correlations for decision making, it then contains biases that are not represented by real world data. Ideally a model should be robust to data that has (slightly) different characteristics from the data it was trained on. Accuracy and related metrics, despite their popularity, are usually not sufficient to identify these frailties. This is known as underspecification (D'Amour et al., 2020) : different predictors can achieve similar results on a specific task, but exhibit diverging performance on other tasks due to different induced biases.",
"cite_spans": [
{
"start": 219,
"end": 241,
"text": "(Fokkens et al., 2013;",
"ref_id": "BIBREF7"
},
{
"start": 242,
"end": 262,
"text": "Sellam et al., 2021)",
"ref_id": "BIBREF17"
},
{
"start": 683,
"end": 705,
"text": "(D'Amour et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Stress tests are an increasingly popular method for exposing biases of a model. To test the linguis-tic capabilities and robustness of models, Ribeiro et al. (2020) introduce CheckList, an evaluation methodology that is comparable to the aforementioned stress tests for robustness. CheckList can be used to investigate which linguistic phenomena are fully captured by a model and for which the model is thus expected to be robust across datasets.",
"cite_spans": [
{
"start": 143,
"end": 164,
"text": "Ribeiro et al. (2020)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Robustness and generalization can be improved by ensembling multiple models. Training different models, however, is expensive. Stochastic Weight Averaging (SWA) (Izmailov et al., 2018 ) is a way of ensembling without the need to train different models. During training, the weights of the model at specific timepoints are averaged, avoiding the need to keep track of several models. The idea is that SWA explores different solutions close to a high performing minimum.",
"cite_spans": [
{
"start": 161,
"end": 183,
"text": "(Izmailov et al., 2018",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we study the effect of SWA on the robustness to both a standard sentiment analysis dataset and different CheckList capabilities. We investigate if models varying only in their random seeds still have different behavior on the same data when trained using SWA. Specifically, we train ALBERT-large (Lan et al., 2020) on SST-2 (Socher et al., 2013) , a sentiment analysis dataset, with 10 random seeds. We perform one run with SWA turned off (termed vanilla models) and repeat the procedure with SWA turned on (termed SWA models). We explore the robustness of the trained models using the CheckList methodology by looking at the stability of mistakes. We quantify this stability to measure the agreement in mistakes between the different models and compare the resulting values between the vanilla and SWA models.",
"cite_spans": [
{
"start": 311,
"end": 329,
"text": "(Lan et al., 2020)",
"ref_id": "BIBREF12"
},
{
"start": 339,
"end": 360,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our main hypothesis is that using SWA leads to more stable models. We therefore expect more overlap across random seeds in the results on the SST-2 evaluation data. We also expect SWA to lead to more overlap in mistakes for CheckList items that are captured by part of the vanilla models. We also anticipate (minor) improvements of general performance in both cases. For CheckList phenomena that are already largely captured or not at all, on the other hand, we do not expect to see major differences between vanilla models and SWA in terms of general performance or overlap.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We make the following contributions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We explore the effects of SWA on the stability and robustness of ALBERT-large that stem from underspecification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We perform the, to our knowledge, first joint study of SWA and CheckList.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We provide an in-depth analysis of results by going beyond accuracy to look at overlap and agreement between random seeds and Check-List.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We quantify agreement between different models by calculating overlap ratio and Fleiss' Kappa score on their mistakes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We find that SWA improves on error rates in general, but results on increased stability are mixed: models with different random seeds still hold onto their own distinct induced biases on linguistic information captured by part of the models in our CheckList evaluation. There is minor improvement in stability on the Fleiss' Kappa score on the development set of SST-2, but results are not conclusive. Finally, we observe a large error rate for one of the random seeds on both SST-2 and CheckList, which also leads to a less strong result on increasing agreement between models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To the best of our knowledge, we are the first to combine SWA with CheckList and apply it to a BERT-based model to understand its effect on robustness with different random seeds. The work closest to ours has used variations of SWA for investigating the differences in interpretability on CNNs and LSTMs among different random seeds (Madhyastha and Jain, 2019) . A similar method to Stochastic Weight Averaging was employed by Xu et al. (2020) with a different objective: improving the fine-tuning process of BERT. They propose to average the BERT model at each time-step and two types of knowledge distillation to improve fine-tuning of the model. The averaging receives slightly better results and their variant of knowledge distillation works the best. However, it is unclear what the effect of this is on different random seeds.",
"cite_spans": [
{
"start": 333,
"end": 360,
"text": "(Madhyastha and Jain, 2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Instead of looking at a form of ensembling, Hua et al. (2021) investigate the effect of injecting noise in BERT as a regularizer on the stability (sensitivity to input perturbation) of the models and show that fine-tuning performance improves. They point out that this improves generalizability as well, by looking at the difference in accuracy on the training and test set. However, training and test set might contain the same biases and hence might not reveal generalization issues (Elangovan et al., 2021) .",
"cite_spans": [
{
"start": 44,
"end": 61,
"text": "Hua et al. (2021)",
"ref_id": "BIBREF10"
},
{
"start": 485,
"end": 509,
"text": "(Elangovan et al., 2021)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Varying Performance Most work until now has focused on behavioral changes of models on train and test data when changing an arbitrary choice of the pipeline, such as the random seed (Zhong et al., 2021; Sellam et al., 2021) . Investigating the behavior of language models with different pre-training and fine-tuning random seeds on an instance-level, Zhong et al. (2021) find that the fine-tuning random seed is influential for the variation in performance on an instance-level. This contrast in performance is also highlighted by Sellam et al. (2021) ; they release multiple BERT checkpoints with a different weight initialization and show diverging performance between similarly trained models. Such behavior has also been observed for out-ofdistribution samples (McCoy et al., 2020; D'Amour et al., 2020; Amir et al., 2021) , where different induced biases are found when the random seed is modified and checkpoints behave differently on unseen data, even when evaluation performance is similar. (D'Amour et al., 2020; Amir et al., 2021) . Watson et al. (2021) show that outputs from explainability methods also vary when changing hyperparameters, e.g. the random seed.",
"cite_spans": [
{
"start": 182,
"end": 202,
"text": "(Zhong et al., 2021;",
"ref_id": null
},
{
"start": 203,
"end": 223,
"text": "Sellam et al., 2021)",
"ref_id": "BIBREF17"
},
{
"start": 531,
"end": 551,
"text": "Sellam et al. (2021)",
"ref_id": "BIBREF17"
},
{
"start": 765,
"end": 785,
"text": "(McCoy et al., 2020;",
"ref_id": "BIBREF15"
},
{
"start": 786,
"end": 807,
"text": "D'Amour et al., 2020;",
"ref_id": null
},
{
"start": 808,
"end": 826,
"text": "Amir et al., 2021)",
"ref_id": "BIBREF0"
},
{
"start": 999,
"end": 1021,
"text": "(D'Amour et al., 2020;",
"ref_id": null
},
{
"start": 1022,
"end": 1040,
"text": "Amir et al., 2021)",
"ref_id": "BIBREF0"
},
{
"start": 1043,
"end": 1063,
"text": "Watson et al. (2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Model Evaluation Evaluating models on a development set might not expose certain biases or weaknesses a model has acquired due to the possibility of the same biases occurring in the training set. Hence, scalable diagnostic methodologies are useful to investigate a model's capabilities (Wu et al., 2019; Ribeiro et al., 2020; Wu et al., 2021; Goel et al., 2021) . Even though these methodologies all focus on evaluation, the approach can vary between the methods. Wu et al. (2021) tackle evaluation from a counterfactual point of view. Wu et al. (2019) not only examine counterfactuals but also grouping queries to ensure that error analysis is scaled to all instances. Likewise, Goel et al. (2021) exploit such subpopulation grouping, in addition to adversarial attacks, perturbations, and evaluation sets. It is possible to be unaware of certain subpopulations for which the model is weak, and therefore d'Eon et al. (2021) introduce a method that looks for such weak groups. Ribeiro et al. 2020provide a methodology to analyze robustness toward basic capabilities and operationalize this with different test types (e.g. invariance to specific perturbations, basic capabilities). There are also more task specific efforts for evaluation, such as perturbations for robustness in task-oriented dialog (Liu et al., 2021) and evaluation of bias in a sentiment analysis setting (Asyrofi et al., 2021) .",
"cite_spans": [
{
"start": 286,
"end": 303,
"text": "(Wu et al., 2019;",
"ref_id": null
},
{
"start": 304,
"end": 325,
"text": "Ribeiro et al., 2020;",
"ref_id": "BIBREF16"
},
{
"start": 326,
"end": 342,
"text": "Wu et al., 2021;",
"ref_id": null
},
{
"start": 343,
"end": 361,
"text": "Goel et al., 2021)",
"ref_id": "BIBREF8"
},
{
"start": 680,
"end": 698,
"text": "Goel et al. (2021)",
"ref_id": "BIBREF8"
},
{
"start": 1301,
"end": 1319,
"text": "(Liu et al., 2021)",
"ref_id": "BIBREF13"
},
{
"start": 1375,
"end": 1397,
"text": "(Asyrofi et al., 2021)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "To examine Stochastic Weight Averaging's effect on model stability due to underspecification, we finetune a pretrained ALBERT-large version 2 on the SST-2 dataset. We train two types of models 10 times: 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "\u2022 Vanilla model: Model finetuned with the hyperparameter values from Lan et al. 2020\u2022 SWA model: Model finetuned for the first few epochs with the hyperparameter values from Lan et al. (2020) and then switching to a SWA training schedule For all models, we keep the training protocol the same except for the random seed. We train 10 models with a different random seed per model type. This gives us 20 different models: 10 vanilla models and 10 SWA models. 2 We then investigate the robustness of each model on CheckList tests and compare the performance of vanilla models with SWA models. Due to underspecification, the vanilla models are expected to have deviating performances on the tests across different random seeds, while the SWA models are expected to dampen this effect. We make a distinction between the following scenarios and what we expect:",
"cite_spans": [
{
"start": 174,
"end": 191,
"text": "Lan et al. (2020)",
"ref_id": "BIBREF12"
},
{
"start": 457,
"end": 458,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "1. Linguistic information captured by all of the models: We expect all of the models, regardless of the random seed, to be able to perform well on basic capabilities. Hence, we do not expect SWA to make much improvement, as there should not be a different behavior across random seeds. Stability will stay consistent here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "2. Linguistic information captured by a part of the models: This type of linguistic information is only captured by a part of the models due to their own induced biases. Hence, we expect that not all vanilla models behave similarly on such instances. With the introduction of SWA, more stability thus more overlap between mistakes is expected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "3. Linguistic information captured by none of the models: Some information cannot be captured by the model at all or it is unlikely that the model will be able to handle such information properly. In such cases, we do not expect SWA models to have an increase in performance, though that cannot be ruled out since it is possible that the weight space averaged by SWA is able to capture it. For the former, we do expect a large overlap of mistakes with SWA models since such information is not captured by any of the models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
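{
"text": "The set-up above (two model types, ten random seeds each) can be summarized in a short sketch. This is a minimal illustration rather than the released training script (the repository linked in the footnotes contains the actual code): run_experiment and its train_fn argument are placeholders for the vanilla or SWA fine-tuning routine, and only the seed handling follows standard PyTorch practice.

import random
import numpy as np
import torch

SEEDS = range(10)  # ten seeds per model type; Random Seed 0 later turned out to be an outlier

def set_seed(seed):
    # Fix every source of randomness so that runs differ only in the seed itself.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

def run_experiment(train_fn):
    # train_fn is a placeholder callable that fine-tunes ALBERT-large on SST-2
    # (vanilla or SWA schedule) and returns the resulting model.
    models = {}
    for seed in SEEDS:
        set_seed(seed)
        models[seed] = train_fn(seed)
    return models",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},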
{
"text": "Stochastic Weight Averaging (SWA) is a cheap approach to create ensembles by averaging over different snapshots over the SGD trajectory, in contrast to the widely used approach of training different models (Izmailov et al., 2018) . In essence, SWA ensembles in weight space instead of the usual model space. Due to the ensembling nature of correlated members from the same trajectory, we expect better generalization; a reduction in error rate and more stability in mistakes on unseen data. We employ a strategy where the SWA models are trained in the same manner as the vanilla models for the first two epochs. This cut-off epoch is chosen empirically, by observing that the vanilla models start converging around 2-3 epochs. We make use of the Adam optimizer instead of the SGD optimizer since the former optimizer is used for the training of ALBERT. From the third epoch, the learning rate drops to a constant learning rate and at every end of the epoch, the model weights are averaged with the running average weights. With a high constant learning rate, the model is able to explore other solutions that are close to the local minimum that was found after two epochs and close to convergence. The respective constant learning rates of each random seed can be found in Table 1 . The values for the learning rates are found empirically on the development set with the following candidate learning rates: {6e-06, 7.5e-06}. 3 ",
"cite_spans": [
{
"start": 206,
"end": 229,
"text": "(Izmailov et al., 2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 1273,
"end": 1280,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Stochastic Weight Averaging",
"sec_num": "3.1"
},
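{
"text": "As a concrete illustration of this schedule, the sketch below trains normally for the first two epochs, then switches to a constant learning rate and updates a running average of the weights at the end of every epoch. It is a minimal re-implementation of the idea from Izmailov et al. (2018), not the training script used here: the loop is schematic (a generic PyTorch module, optimizer, and loss), and torch.optim.swa_utils offers equivalent helpers such as AveragedModel.

import copy
import torch

def train_with_swa(model, optimizer, train_loader, loss_fn,
                   n_epochs=10, swa_start=2, swa_lr=7.5e-06, device='cuda'):
    swa_state, n_averaged = None, 0
    for epoch in range(n_epochs):
        if epoch == swa_start:
            for group in optimizer.param_groups:
                group['lr'] = swa_lr  # drop to the constant SWA learning rate
        model.train()
        for inputs, labels in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(inputs.to(device)), labels.to(device))
            loss.backward()
            optimizer.step()
        if epoch >= swa_start:
            # Average the current snapshot into the running SWA weights.
            current = copy.deepcopy(model.state_dict())
            if swa_state is None:
                swa_state = current
            else:
                for name, value in current.items():
                    if value.is_floating_point():
                        swa_state[name] = (swa_state[name] * n_averaged + value) / (n_averaged + 1)
                    else:
                        swa_state[name] = value  # integer buffers are simply overwritten
            n_averaged += 1
    model.load_state_dict(swa_state)  # the SWA model is the averaged snapshot
    return model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic Weight Averaging",
"sec_num": "3.1"
},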
{
"text": "We use the binary version of the Stanford Sentiment Treebank dataset 4 (Socher et al., 2013) , which consists of human-annotated sentences from movie reviews originating from rottentomatoes.com for a sentiment classification task. This version of the dataset is also included in the GLUE task (Wang et al., 2018) . We use this dataset since sentiment analysis is an interesting task to study underspecification as it is a more subjective task, making rigorous, multifaceted evaluation even more important. The training set consists of 67349 phrases, while the validation and test dataset consist of 872 and 1821 sentences respectively. We use the training and validation set for the training procedure, while the test set is used for the generation of specific CheckList items. 3 We looked at the learning rates in examples from the original paper at https://github.com/timgaripov/ swa#examples where some SWA learning rates are half of the original learning rate and explored close candidate learning rates. From previous initial experiments learning rate 5e \u2212 06 did not work and was thus left out in these sets of experiments.",
"cite_spans": [
{
"start": 71,
"end": 92,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF18"
},
{
"start": 293,
"end": 312,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 778,
"end": 779,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SST-2 Dataset",
"sec_num": "3.2"
},
{
"text": "4 https://nlp.stanford.edu/sentiment/ index.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SST-2 Dataset",
"sec_num": "3.2"
},
{
"text": "CheckList is a methodology to test basic and linguistic capabilities of a model, similar to behavioral testing in software engineering (Ribeiro et al., 2020) . They make a distinction between three types of tests: Minimum Functionality Test (MFT): Small examples to test for basic capabilities. We test if each instance has the specified label. Invariance Test (INV): Tests that apply perturbations to the input and expect the prediction to stay consistent, regardless of the correctness of the prediction. The original input together with its perturbations is seen as one test case. Directional Expectation Tests (DIR): Tests where the output is expected to change in a specific way, when the input is modified: the confidence is expected to change in a specific direction. Similar to INV tests, the original input with modifications is seen as a test case.",
"cite_spans": [
{
"start": 135,
"end": 157,
"text": "(Ribeiro et al., 2020)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Checklist Evaluation",
"sec_num": "3.3"
},
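{
"text": "The failure criteria of the three test types can be made explicit with a small sketch. The predict function is a stand-in for the fine-tuned classifier and is assumed to return a (label, positive-class confidence) pair; the 0.1 tolerance in the DIR check is an illustrative choice, and the logic mirrors the descriptions above rather than the internals of the CheckList library.

def mft_fails(predict, text, expected_label):
    # MFT: the instance simply has to receive the specified label.
    label, _ = predict(text)
    return label != expected_label

def inv_fails(predict, original, perturbations):
    # INV: the original input and its perturbations form one test case;
    # the case fails if any perturbation flips the predicted label.
    base_label, _ = predict(original)
    return any(predict(p)[0] != base_label for p in perturbations)

def dir_fails(predict, original, modified, expected_direction, tolerance=0.1):
    # DIR: the confidence is expected to move in a given direction
    # ('up' or 'down') when the input is modified.
    _, base_conf = predict(original)
    _, new_conf = predict(modified)
    delta = new_conf - base_conf
    return delta < -tolerance if expected_direction == 'up' else delta > tolerance",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Checklist Evaluation",
"sec_num": "3.3"
},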
{
"text": "In this paper, we consider different MFTs, INVs, and DIRs tests for sentiment analysis. We check for basic capabilities and robustness. Each trained model is evaluated on our CheckList set up and their performances are compared. We expect that vanilla models make more mistakes than SWA models and qualitatively make less overlapping mistakes due to each model having their own different induced biases. On the other hand, SWA models are expected to have more overlapping mistakes, due to its ensembling and explorative nature in the weight space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Checklist Evaluation",
"sec_num": "3.3"
},
{
"text": "We created 18 CheckList capability tests by adapting tests from the CheckList GitHub repository 5 to the use-case in this paper. For reasons of space, we refer to individual capability tests with transparent names followed by the test size, only using short explanations when the name by itself is not sufficiently clear. For tests that perturb the input and are not created from scratch, we use the test set from SST-2. Each original input can be augmented more than once, depending on the capability. These tests are followed by two numbers when introduced: the number of original items and total items with perturbations included. A full overview of the CheckList capabilities and their sizes can be found in Table 7 in Appendix D.",
"cite_spans": [],
"ref_spans": [
{
"start": 712,
"end": 719,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Checklist Evaluation",
"sec_num": "3.3"
},
{
"text": "This section presents the outcome of our experiments. We first provide results on the original dataset and then the results on CheckList items. Lastly we examine how stable vanilla and SWA models are by looking at the label agreement between models trained from different seeds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "As mentioned in the previous section, we originally ran our experiments on five random seeds and added five additional seeds after observing that one seed performed lower than all others. When we compare the accuracy of the vanilla models with the SWA models on the validation set of SST-2 in Table 2 , it is evident that most of the SWA models perform slightly better than the vanilla models. The only exceptions are Random Seed 0, 7, and 8. Upon running our experiments on five additional seeds, Random Seed 0 remains the only seed that has an accuracy around 0.90, confirming that it is an outlier. The SWA versions of the other two random seeds might not outperform their vanilla counterparts but achieve a close accuracy.",
"cite_spans": [],
"ref_spans": [
{
"start": 293,
"end": 300,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Stochastic Weight Averaging",
"sec_num": "4.1"
},
{
"text": "Due to the outlying behavior of Random Seed 0, we leave its results out of the rest of the analysis, to avoid noise from this model influencing the analysis. We present the complete results with Random Seed 0 included in Appendix C. Table 2 : Accuracy on the validation set of SST-2 for the vanilla and SWA models of the different random seeds.",
"cite_spans": [],
"ref_spans": [
{
"start": 233,
"end": 240,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Stochastic Weight Averaging",
"sec_num": "4.1"
},
{
"text": "Error Rates We show the failure rate for each capability per vanilla model in Figure 1a . For the Movie Sentiments (n=58), Single Positive Words (n=22), Single Negative Words (n=14), and Sentiment-laden Words in Context (n=1350) capabilities there are no mistakes made by any of the vanilla models. On Add Positive Phrases (n=500, m=5500), only Random Seed 8 makes mistakes with a very small error rate. Similarly, on Movie Industries Sentiments (n=1200) only Random Seed 8 and Random Seed 2 make mistakes, again with very small error rates that would not be visible on the plot. Hence for clarity, these capabilities are left out of the plot.",
"cite_spans": [],
"ref_spans": [
{
"start": 78,
"end": 87,
"text": "Figure 1a",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Vanilla Model Results",
"sec_num": "4.2.1"
},
{
"text": "There is not much variation in the error rate for most of the capabilities. The most variation in performance among the random seeds can be observed for the capability that tests negations of positive sentences, with a neutral sentiment in the middle of the sentence: Negation of Positive, neutral words in the middle (D) (n=500). Interestingly, it is evident that particular random seeds can deal with negation better than others: Random Seed 1, 4, and 5. These random seeds have the lowest error rate for both Negation of Positive Sentences (C) (n=1350) and Negation of Positive, neutral words in the middle.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vanilla Model Results",
"sec_num": "4.2.1"
},
{
"text": "Overlap Ratios A similar error rate, however, does not mean that the errors occur for the same instances. Hence, we analyze the overlap of errors of the vanilla models per capability. We calculate an overlap ratio by dividing the intersection of the failures of two random seeds by the union of those same failures. In contrast to the error rates, the overlap ratios are on an instance-level instead of case-level. There is no overlap of errors between the models for the capability Add Positive Phrases. The capability with the highest overlap ratio is Movie Genre Specific Sentiments (A) (n=736), which checks for sentiments that are fitting or not for specific genres: e.g. a scared feeling after watching a horror movie. This indicates that most of the models make similar mistakes for this capability. When looking at the mistakes, all the models misclassify sentences about horror movies being terrifying, scary, frightening or calming, a comedy movie being serious and a drama movie being funny instead of serious. In general, most of the vanilla models have a low overlap ratio, with the only exceptions being Negation of Positive, neutral words in the middle (D) and Temporal Sentiment Change (B) (n=2152). The latter capability contains sentences where the sentiment changes over time. These two contain certain random seeds that achieve a higher overlap ratio, as we can see in the spread of the box for these capabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vanilla Model Results",
"sec_num": "4.2.1"
},
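{
"text": "Concretely, the overlap ratio between two seeds is the Jaccard index of their sets of misclassified instances. A minimal sketch, assuming the failures of each model are available as sets of instance identifiers (returning 0.0 when neither model fails is a convention chosen for this sketch):

def overlap_ratio(failures_a, failures_b):
    # Jaccard index of the instances misclassified by two models:
    # size of the intersection divided by the size of the union.
    failures_a, failures_b = set(failures_a), set(failures_b)
    union = failures_a | failures_b
    if not union:
        return 0.0
    return len(failures_a & failures_b) / len(union)

# Example: two seeds fail on partly overlapping instances.
print(overlap_ratio({'sent_3', 'sent_7', 'sent_9'}, {'sent_7', 'sent_9', 'sent_11'}))  # 0.5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vanilla Model Results",
"sec_num": "4.2.1"
},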
{
"text": "Error Rates Error rates for the SWA models per capability can be found in Figure 1b . In general, we can observe a (slight) reduction in error rate with SWA models compared to vanilla models. On Add Positive Phrases, only Random Seed 5 and Random Seed 6 have a slight increase in error rate. The latter is also the only one to make a mistake on Movie Industries Sentiments 6 . The largest drop can be seen for Negation of Positive, neutral words in the middle (D), where the diverging performance seen for the vanilla models has been reduced for most random seeds, except for Random Seed 7 and Random Seed 9, whose error rates increase significantly. Similar behavior can be observed for Negation of Positive Sentences (C), where only the SWA versions of Random Seed 7 and 9 have an increase in error rate. This suggests that the SWA solution for these two random seeds is worse in handling negation than their corresponding vanilla versions. For other capabilities, the error rate mostly reduces slightly or stays the same. The only exceptions are Positive Names -Negative Instances (G) (n=123, m=1353) and Negative Names -Negative Instances (H) (n=123, m=1353), where Negative Names are names that tend to occur in negative reviews in the training data, similarly for positive names, and we Overlap Ratios The overlap ratio for most capabilities remains low. Notably, the spread of overlap ratio for Movie Genre Specific Sentiments (A) increases from the vanilla models. All of the models still struggle with understanding that horror movies being terrifying, scary or frightening is positive, and calming is negative. This is in line with the expectations of SWA not improving (much) on capabilities that are not captured by any of the models. We find increase in overlap for Change Names (E) (n=147, m=1617), Negative Names -Negative Instances (H), and Change Neutral Words (K) (n=500, m=3846), in accordance with our expectation of SWA bringing more stability. There is a different trend, against expectations, for Add Negative Phrases (L), Negation of Positive Sentences (C), and Temporal Sentiment Change (B), where the large variation of overlap of vanilla models is reduced significantly. For the rest of the capabilities, the overlap ratio appears to stay somewhat the same.",
"cite_spans": [],
"ref_spans": [
{
"start": 74,
"end": 83,
"text": "Figure 1b",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "SWA Model Results",
"sec_num": "4.2.2"
},
{
"text": "Overall, there are three different outcomes when comparing stability with SWA to vanilla models:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SWA Model Results",
"sec_num": "4.2.2"
},
{
"text": "(1) Good performance of vanilla models stays con-sistent for the SWA models. (2) Large variations in error rates with vanilla models are reduced with SWA, but the overlap of mistakes does not increase and might decrease for some cases. (3) Overlap ratio with SWA does not necessarily increase, when error rates of the vanilla models are somewhat similar and remain the same for the SWA models. As such, we do not find evidence to confirm our hypothesis based on overlap between the outcomes on CheckList items.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SWA Model Results",
"sec_num": "4.2.2"
},
{
"text": "To further investigate the stability of SWA models, we measure the inter-model agreement on the misclassifications with the use of Fleiss' Kappa (Fleiss, 1971) . This measure is used for inter-annotator agreement which can be related to the nine random seeds. In our case the annotators and the predictions being their annotations, used for both the vanilla and SWA models. Negative values or values close to zero are considered to indicate a rather low agreement, while the higher the value, the more agreement there is. The results on the development set in Table 3 illustrate a significant increase in agreement for SWA models, when considering the initial four random seeds, without outlier Random Seed 0. While the agreement is still on the lower side, hinting at the presence of induced biases, the increase indicates more agreement on errors between the models and lesser distinct mistakes. We hence look at the Fleiss' Kappa values with the additional five random seeds incorporated. The Fleiss' Kappa agreement increases significantly in general for both the vanilla and SWA models. We now only observe a small increase in agreement when applying SWA compared to the vanilla models. We calculate the Kappa measure on the predictions of all the random seeds on the CheckList items as well. For the tests that measure basic capabilities (MFTs), we look at the agreement on predictions of errors. With tests that perturb an input (INVs), the instances that flip the output prediction are considered as a failure, so we check for model agreement on flipping for an instance. Similarly, for capabilities that test a directional change in confidence (DIRs), instances that go against the expected direction are considered failures and we compare model agreement on if they change in the same direction.",
"cite_spans": [
{
"start": 145,
"end": 159,
"text": "(Fleiss, 1971)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 560,
"end": 567,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Fleiss' Kappa",
"sec_num": "4.3"
},
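{
"text": "For reference, Fleiss' Kappa can be computed from a matrix with one row per item and one column per category, where each cell counts how many of the models (acting as raters) assigned that item to that category. The sketch below implements the standard formula from Fleiss (1971); it is not the evaluation code behind the tables, and the toy example at the end is invented for illustration.

import numpy as np

def fleiss_kappa(counts):
    # counts[i, j]: number of raters (here: models with different seeds)
    # assigning item i to category j; every item must have the same number of raters.
    counts = np.asarray(counts, dtype=float)
    n_items, _ = counts.shape
    n_raters = counts[0].sum()
    # Per-item agreement P_i and its mean P_bar.
    p_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()
    # Chance agreement P_e from the marginal category proportions.
    p_j = counts.sum(axis=0) / (n_items * n_raters)
    p_e = np.square(p_j).sum()
    return (p_bar - p_e) / (1 - p_e)

# Toy example: four items judged correct (column 0) or wrong (column 1) by nine models.
print(fleiss_kappa([[9, 0], [2, 7], [8, 1], [9, 0]]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fleiss' Kappa",
"sec_num": "4.3"
},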
{
"text": "The Kappa values for the CheckList mistakes in Table 4 stay mostly unchanged with slight increases or decreases in agreement. This is in accordance with the results observed for the development set mistakes: it appears that SWA does not provide the stability across random seeds and still suffers from its own induced biases. Generally, the agreement is on the lower side. The Kappa values for Movie Industries Sentiments and Add Positive Phrases were 0.0 for both vanilla and SWA models and hence left out of the table. For Movie Genre Specific Sentiments we see a large agreement and the biggest increase in agreement with SWA. This corresponds to the high overlap ratio for the same capability.",
"cite_spans": [],
"ref_spans": [
{
"start": 47,
"end": 54,
"text": "Table 4",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Fleiss' Kappa",
"sec_num": "4.3"
},
{
"text": "While SWA globally cuts down on error rate, it appears that this does not necessarily translate to improvement in stability: there is still disagreement in the labels assigned by individual models. Even with SWA, the models appear to make different errors on CheckList as confirmed by the low Kappa values and overlap ratio. For some capabilities the spread of the overlap ratio is on the higher side, indicating that some random seed models are closer to each other in terms of decision making, but this does not hold for all.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fleiss' Kappa",
"sec_num": "4.3"
},
{
"text": "This research illustrates the potential impact of random seeds. First, our original sample of 5 seeds contained an outlier that performed far worse than the other seeds (as well as the original study). Second, while initial results on the SST-2 development set were promising when looking at the 4 random seeds that showed normal behavior, these results did not hold when adding 5 additional random seeds. This highlights the necessity for proper analysis and the fragility of deep language models. Possibly, the initial random seeds were closer to each other in the weight space and hence SWA appeared to increase the agreement significantly. The additional random seeds could lie farther away, thus subsiding the increased agreement. In the future, more comprehensive research on the proximity and behavior of different random seeds could therefore be useful.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Even though CheckList provides an easy way to investigate the capabilities of a model, automatizing some tests can be hard. There can be situations in which labels indicated for a specific capability might not hold for a certain test case. For instance, negating a negative sentence might not always lead to a positive sentence, it can also be neutral. Similarly, we applied negations on some instances from the test set but the label is not required to flip, depending on the placement of the negation. Therefore, we leave out the results in our conclusions as the labels did not always make sense upon investigation. In some instances, it is also unclear what the resulting label should be. We have added the results for these specific capabilities in Appendix B for completeness. For further experiments, we would like to manually generate some CheckList capabilities to ensure validity of the labels. This will also enable us to focus on the creation of more subjective tests, cases that are less black-and-white than the tests conducted in this research. We can then gain more insights into the fragility of models when it comes to border cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "We combine SWA with the CheckList methodology to explore the effects of SWA on the robustness of a BERT-based model (ALBERT-large) on different random seeds and apply it to a sentiment analysis task. To understand how SWA affects the stability amongst different random seeds, we analyze in-depth the results and mistakes made on the development set and CheckList test items and provide error rates, overlap ratios, and Fleiss' Kappa agreement values. While SWA is able to reduce the error rate in general amongst most of the random seeds, on the CheckList tests, there are still some capabilities that models make their own distinct mistakes on with SWA incorporated. The stability on the development set also improves only slightly. In the future, we would like to create more hand-crafted CheckList capabilities for further rigorous study. Furthermore, it could be useful to thoroughly investigate the impact of adjacency of random seeds on their error agreement. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "For model training, we make use of the Hugging-Face (Wolf et al., 2019) pipeline and train the models on a single GeForce RTX 2080 Ti. We use the same hyperparameter settings as reported by Lan et al. (2020) . The visualization of the learning rate schedules can be seen in Figure 3 . Figure 3 : All the different learning rate schedules start identically with 1256 warmup steps to 1e-5. For the vanilla models (the green line) the learning rate anneals linearly to 0 till the 20935th step. This is in accordance with the hyperparameters reported in Lan et al. (2020) . For the SWA models, after the second epoch, the learning rate drops to one of the specified learning rates (blue or orange lines) and stays constant.",
"cite_spans": [
{
"start": 190,
"end": 207,
"text": "Lan et al. (2020)",
"ref_id": "BIBREF12"
},
{
"start": 550,
"end": 567,
"text": "Lan et al. (2020)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 274,
"end": 282,
"text": "Figure 3",
"ref_id": null
},
{
"start": 285,
"end": 293,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Technical Details",
"sec_num": null
},
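{
"text": "The schedules in Figure 3 can also be written down explicitly. The sketch below follows the numbers stated above (1256 warmup steps to a peak of 1e-5, linear decay to zero at step 20935 for the vanilla runs, and a constant per-seed rate of 6e-06 or 7.5e-06 after the SWA switch); it is an illustration, not the exact HuggingFace scheduler configuration.

def vanilla_lr(step, peak=1e-05, warmup_steps=1256, total_steps=20935):
    # Linear warmup to the peak learning rate, then linear decay to zero.
    if step < warmup_steps:
        return peak * step / warmup_steps
    return peak * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

def swa_lr(step, swa_start_step, constant_lr=7.5e-06, **schedule_kwargs):
    # Identical to the vanilla schedule until the end of the second epoch,
    # then a constant learning rate for the averaging phase.
    if step < swa_start_step:
        return vanilla_lr(step, **schedule_kwargs)
    return constant_lr",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Technical Details",
"sec_num": null
},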
{
"text": "As the HuggingFace pipeline does not provide the labels for the test set of SST-2, we match the phrases of the test set in HuggingFace with the phrases in the SST-2 dataset from the dictionary.txt file, downloaded from GLUE, 7 to get their phrase IDs. Then we use those IDs to extract the labels from sentiment_labels.txt. Every label above 0.6 is mapped to positive and equal to or lower than 0.4 is mapped to negative, as mentioned in the instructions of the README.md file. Some sentences are matched manually as they differ only in British vs. American English spelling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Technical Details",
"sec_num": null
},
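{
"text": "The matching step can be sketched as follows. The pipe-separated formats assumed here for dictionary.txt and sentiment_labels.txt (and the header line that is skipped) follow our reading of the SST-2 release; the threshold mapping mirrors the README instructions quoted above, with scores between 0.4 and 0.6 treated as neutral and returned as None.

def load_sst2_label_fn(dictionary_path, labels_path):
    # Map each phrase to its phrase id (dictionary.txt: phrase|id).
    phrase_to_id = {}
    with open(dictionary_path, encoding='utf-8') as f:
        for line in f:
            phrase, _, phrase_id = line.rstrip().rpartition('|')
            phrase_to_id[phrase] = phrase_id
    # Map each phrase id to its sentiment score (sentiment_labels.txt: id|score).
    id_to_score = {}
    with open(labels_path, encoding='utf-8') as f:
        next(f)  # skip the header line
        for line in f:
            phrase_id, score = line.strip().split('|')
            id_to_score[phrase_id] = float(score)

    def label(sentence):
        score = id_to_score[phrase_to_id[sentence]]
        if score > 0.6:
            return 1  # positive
        if score <= 0.4:
            return 0  # negative
        return None   # neutral region, not used for the binary task

    return label",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Technical Details",
"sec_num": null
},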
{
"text": "For completeness, we also show the results for capabilities excluded from our analysis. For Add Negations and Negation of Negative Sentences we generated automatic test cases but the labels were not always correct upon investigation. Hence, we left these two capabilities out of the analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Results of Excluded Capabilities",
"sec_num": null
},
{
"text": "In Table 5 we show the Fleiss' Kappa values, the error rates per capability for the vanilla and SWA models can be found in Figure 4a and Figure 4b , respectively. The variation in error rates and overlap ratios between vanilla and SWA models can be found in the Figures 5a and 5b respectively. All the results are with the five initial random seeds, Random Seed 0 included. ",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 5",
"ref_id": "TABREF10"
},
{
"start": 123,
"end": 132,
"text": "Figure 4a",
"ref_id": "FIGREF4"
},
{
"start": 137,
"end": 146,
"text": "Figure 4b",
"ref_id": "FIGREF4"
},
{
"start": 262,
"end": 279,
"text": "Figures 5a and 5b",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "B Results of Excluded Capabilities",
"sec_num": null
},
{
"text": "We present our results on CheckList with Random Seed 0 as well for transparency. We again present the Fleiss' Kappa values for the CheckList capabilities in Table 6 . The error rates of each capability per vanilla and SWA models can be found in Figures 6a and 6b . We also plot the variation in error rates ( Figure 7a ) and overlap ratios (Figure 7b ). ",
"cite_spans": [],
"ref_spans": [
{
"start": 157,
"end": 164,
"text": "Table 6",
"ref_id": null
},
{
"start": 245,
"end": 262,
"text": "Figures 6a and 6b",
"ref_id": "FIGREF9"
},
{
"start": 309,
"end": 318,
"text": "Figure 7a",
"ref_id": "FIGREF10"
},
{
"start": 340,
"end": 350,
"text": "(Figure 7b",
"ref_id": "FIGREF10"
}
],
"eq_spans": [],
"section": "C CheckList Results with Random Seed 0",
"sec_num": null
},
{
"text": "In Table 7 we describe each CheckList capability that we test for. For perturbing capabilities such as Negative names, Positive instances and its other variants, we extract names from the SST-2 training set with Spacy (Honnibal et al., 2020). Due to false positives, we manually remove names that do not refer to a person, such as movie names and historical figures. Per name, we calculate the mean of labels of the instances it occurs in. This way, we can select positive and negative names to perturb test set instances with. As reviews were predominantly about Hollywood, we also perturbed instances talking specifically about it. We compile a list of around 10 other movie industries, 8 based on how many movies are produced 9 and revenue. (a) Error rates of each vanilla random seed for each CheckList capability. ",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "D CheckList Capabilities",
"sec_num": null
},
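{
"text": "The name-selection step can be sketched with spaCy's named entity recognizer. The en_core_web_sm model and the use of the PERSON entity type are choices made for this illustration, and the manual filtering of false positives described above is not shown.

from collections import defaultdict
import spacy

nlp = spacy.load('en_core_web_sm')

def name_polarity(training_examples):
    # training_examples: iterable of (sentence, label) pairs from the SST-2 training set.
    # Collect the label of every training instance in which each PERSON entity occurs.
    labels_per_name = defaultdict(list)
    for sentence, label in training_examples:
        for ent in nlp(sentence).ents:
            if ent.label_ == 'PERSON':
                labels_per_name[ent.text].append(label)
    # Mean label per name: values near 1 suggest a name that tends to occur in
    # positive reviews, values near 0 a name that tends to occur in negative ones.
    return {name: sum(labels) / len(labels) for name, labels in labels_per_name.items()}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D CheckList Capabilities",
"sec_num": null
},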
{
"text": "We provide all code at https://github.com/ cltl/robustness-albert 2 The experiments originally contained five random seeds, of which Random Seed 0 had exceptionally poor performance of 90.83 accuracy on the development set. This was far from the reported validation accuracy of 94.9 (https://github.com/google-research/ albert#albert). For the camera-ready version, we trained an additional five seeds, which confirmed that the anomalous one is indeed an outlier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/marcotcr/ checklist/blob/master/notebooks/ Sentiment.ipynb",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "These error rates are too low (0.2% -0.35%) to be visible in the plot, hence left out.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://gluebenchmark.com/tasks",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://en.wikipedia.org/wiki/List_ of_Hollywood-inspired_nicknames 9 https://en.wikipedia.org/wiki/Film_ industry#Statistics",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was (partially) funded by the Hybrid Intelligence Center, a 10-year programme funded by the Dutch Ministry of Education, Culture and Science through the Netherlands Organisation for Scientific Research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "SWA Difference : Overview of all the CheckList capabilities, with the test type, amount of examples, and a description of the capability provided. For clarity, we also provide the name of the capabilities originally used in the code, if they are not the same.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vanilla",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "On the impact of random seeds on the fairness of clinical classifiers",
"authors": [
{
"first": "Silvio",
"middle": [],
"last": "Amir",
"suffix": ""
},
{
"first": "Jan-Willem",
"middle": [],
"last": "Van De Meent",
"suffix": ""
},
{
"first": "Byron",
"middle": [],
"last": "Wallace",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "3808--3823",
"other_ids": {
"DOI": [
"10.18653/v1/2021.naacl-main.299"
]
},
"num": null,
"urls": [],
"raw_text": "Silvio Amir, Jan-Willem van de Meent, and Byron Wal- lace. 2021. On the impact of random seeds on the fairness of clinical classifiers. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, pages 3808-3823, On- line. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Biasfinder: Metamorphic test generation to uncover bias for sentiment analysis systems",
"authors": [
{
"first": "Muhammad",
"middle": [],
"last": "Hilmi Asyrofi",
"suffix": ""
},
{
"first": "Imam",
"middle": [],
"last": "Nur Bani",
"suffix": ""
},
{
"first": "Hong",
"middle": [
"Jin"
],
"last": "Yusuf",
"suffix": ""
},
{
"first": "Ferdian",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "Zhou",
"middle": [],
"last": "Thung",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lo",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2102.01859"
]
},
"num": null,
"urls": [],
"raw_text": "Muhammad Hilmi Asyrofi, Imam Nur Bani Yusuf, Hong Jin Kang, Ferdian Thung, Zhou Yang, and David Lo. 2021. Biasfinder: Metamorphic test gen- eration to uncover bias for sentiment analysis sys- tems. arXiv preprint arXiv:2102.01859.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Underspecification presents challenges for credibility in modern machine learning",
"authors": [
{
"first": "Christina",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Deaton",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Matthew",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hoffman",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2011.03395"
]
},
"num": null,
"urls": [],
"raw_text": "Christina Chen, Jonathan Deaton, Jacob Eisen- stein, Matthew D Hoffman, et al. 2020. Un- derspecification presents challenges for credibil- ity in modern machine learning. arXiv preprint arXiv:2011.03395.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The spotlight: A general method for discovering systematic errors in deep learning models",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Greg D'eon",
"suffix": ""
},
{
"first": "James",
"middle": [
"R"
],
"last": "Eon",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Wright",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Leyton-Brown",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Greg d'Eon, Jason d'Eon, James R. Wright, and Kevin Leyton-Brown. 2021. The spotlight: A general method for discovering systematic errors in deep learning models.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Memorization vs. generalization : Quantifying data leakage in NLP performance evaluation",
"authors": [
{
"first": "Aparna",
"middle": [],
"last": "Elangovan",
"suffix": ""
},
{
"first": "Jiayuan",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Karin",
"middle": [],
"last": "Verspoor",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "1325--1335",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aparna Elangovan, Jiayuan He, and Karin Verspoor. 2021. Memorization vs. generalization : Quantify- ing data leakage in NLP performance evaluation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Lin- guistics: Main Volume, pages 1325-1335, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Measuring nominal scale agreement among many raters",
"authors": [
{
"first": "L",
"middle": [],
"last": "Joseph",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fleiss",
"suffix": ""
}
],
"year": 1971,
"venue": "Psychological bulletin",
"volume": "76",
"issue": "5",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph L Fleiss. 1971. Measuring nominal scale agree- ment among many raters. Psychological bulletin, 76(5):378.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Offspring from reproduction problems: What replication failure teaches us",
"authors": [
{
"first": "Antske",
"middle": [],
"last": "Fokkens",
"suffix": ""
},
{
"first": "Marten",
"middle": [],
"last": "Marieke Van Erp",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Postma",
"suffix": ""
},
{
"first": "Piek",
"middle": [],
"last": "Pedersen",
"suffix": ""
},
{
"first": "Nuno",
"middle": [],
"last": "Vossen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Freire",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1691--1701",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antske Fokkens, Marieke van Erp, Marten Postma, Ted Pedersen, Piek Vossen, and Nuno Freire. 2013. Off- spring from reproduction problems: What replica- tion failure teaches us. In Proceedings of the 51st Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers), pages 1691-1701, Sofia, Bulgaria. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Robustness gym: Unifying the NLP evaluation landscape",
"authors": [
{
"first": "Karan",
"middle": [],
"last": "Goel",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Nazneen Fatema Rajani",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Vig",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Taschdjian",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Demonstrations",
"volume": "",
"issue": "",
"pages": "42--55",
"other_ids": {
"DOI": [
"10.18653/v1/2021.naacl-demos.6"
]
},
"num": null,
"urls": [],
"raw_text": "Karan Goel, Nazneen Fatema Rajani, Jesse Vig, Zachary Taschdjian, Mohit Bansal, and Christopher R\u00e9. 2021. Robustness gym: Unifying the NLP eval- uation landscape. In Proceedings of the 2021 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies: Demonstrations, pages 42-55, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "spaCy: Industrial-strength Natural Language Processing in Python",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.5281/zenodo.1212303"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew Honnibal, Ines Montani, Sofie Van Lan- deghem, and Adriane Boyd. 2020. spaCy: Industrial-strength Natural Language Processing in Python.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Noise stability regularization for improving BERT fine-tuning",
"authors": [
{
"first": "Hang",
"middle": [],
"last": "Hua",
"suffix": ""
},
{
"first": "Xingjian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Dejing",
"middle": [],
"last": "Dou",
"suffix": ""
},
{
"first": "Chengzhong",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jiebo",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "3229--3241",
"other_ids": {
"DOI": [
"10.18653/v1/2021.naacl-main.258"
]
},
"num": null,
"urls": [],
"raw_text": "Hang Hua, Xingjian Li, Dejing Dou, Chengzhong Xu, and Jiebo Luo. 2021. Noise stability regularization for improving BERT fine-tuning. In Proceedings of the 2021 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, pages 3229-3241, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Averaging weights leads to wider optima and better generalization",
"authors": [
{
"first": "Pavel",
"middle": [],
"last": "Izmailov",
"suffix": ""
},
{
"first": "Dmitrii",
"middle": [],
"last": "Podoprikhin",
"suffix": ""
},
{
"first": "Timur",
"middle": [],
"last": "Garipov",
"suffix": ""
},
{
"first": "Dmitry",
"middle": [],
"last": "Vetrov",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Gordon"
],
"last": "Wilson",
"suffix": ""
}
],
"year": 2018,
"venue": "34th Conference on Uncertainty in Artificial Intelligence 2018, UAI 2018, 34th Conference on Uncertainty in Artificial Intelligence 2018, UAI 2018",
"volume": "",
"issue": "",
"pages": "876--885",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. 2018. Averaging weights leads to wider optima and bet- ter generalization. In 34th Conference on Uncer- tainty in Artificial Intelligence 2018, UAI 2018, 34th Conference on Uncertainty in Artificial Intelligence 2018, UAI 2018, pages 876-885. Association For Uncertainty in Artificial Intelligence (AUAI).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "ALBERT: A lite BERT for self-supervised learning of language representations",
"authors": [
{
"first": "Zhenzhong",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Mingda",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2020,
"venue": "8th International Conference on Learning Representations",
"volume": "2020",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In 8th Inter- national Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Robustness testing of language understanding in task-oriented dialog",
"authors": [
{
"first": "Jiexi",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ryuichi",
"middle": [],
"last": "Takanobu",
"suffix": ""
},
{
"first": "Jiaxin",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Dazhen",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Hongguang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Weiran",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Cheng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "2467--2480",
"other_ids": {
"DOI": [
"10.18653/v1/2021.acl-long.192"
]
},
"num": null,
"urls": [],
"raw_text": "Jiexi Liu, Ryuichi Takanobu, Jiaxin Wen, Dazhen Wan, Hongguang Li, Weiran Nie, Cheng Li, Wei Peng, and Minlie Huang. 2021. Robustness testing of lan- guage understanding in task-oriented dialog. In Pro- ceedings of the 59th Annual Meeting of the Associa- tion for Computational Linguistics and the 11th In- ternational Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2467- 2480, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "On model stability as a function of random seed",
"authors": [
{
"first": "Pranava",
"middle": [],
"last": "Madhyastha",
"suffix": ""
},
{
"first": "Rishabh",
"middle": [],
"last": "Jain",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "929--939",
"other_ids": {
"DOI": [
"10.18653/v1/K19-1087"
]
},
"num": null,
"urls": [],
"raw_text": "Pranava Madhyastha and Rishabh Jain. 2019. On model stability as a function of random seed. In Pro- ceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 929- 939, Hong Kong, China. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance",
"authors": [
{
"first": "R.",
"middle": [
"Thomas"
],
"last": "McCoy",
"suffix": ""
},
{
"first": "Junghyun",
"middle": [],
"last": "Min",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "217--227",
"other_ids": {
"DOI": [
"10.18653/v1/2020.blackboxnlp-1.21"
]
},
"num": null,
"urls": [],
"raw_text": "R. Thomas McCoy, Junghyun Min, and Tal Linzen. 2020. BERTs of a feather do not generalize to- gether: Large variability in generalization across models with similar test set performance. In Pro- ceedings of the Third BlackboxNLP Workshop on An- alyzing and Interpreting Neural Networks for NLP, pages 217-227, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Beyond accuracy: Behavioral testing of NLP models with CheckList",
"authors": [
{
"first": "Marco",
"middle": [
"Tulio"
],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "Tongshuang",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4902--4912",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.442"
]
},
"num": null,
"urls": [],
"raw_text": "Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Be- havioral testing of NLP models with CheckList. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4902- 4912, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The multiberts: Bert reproductions for robustness analysis",
"authors": [
{
"first": "Thibault",
"middle": [],
"last": "Sellam",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Yadlowsky",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Naomi",
"middle": [],
"last": "Saphra",
"suffix": ""
},
{
"first": "Alexander D'",
"middle": [],
"last": "Amour",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "Jasmijn",
"middle": [],
"last": "Bastings",
"suffix": ""
},
{
"first": "Iulia",
"middle": [],
"last": "Turc",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2106.16163"
]
},
"num": null,
"urls": [],
"raw_text": "Thibault Sellam, Steve Yadlowsky, Jason Wei, Naomi Saphra, Alexander D'Amour, Tal Linzen, Jasmijn Bastings, Iulia Turc, Jacob Eisenstein, Dipanjan Das, et al. 2021. The multiberts: Bert repro- ductions for robustness analysis. arXiv preprint arXiv:2106.16163.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Recursive deep models for semantic compositionality over a sentiment treebank",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Perelygin",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "1631--1642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep mod- els for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631-1642.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop Black-boxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "353--355",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5446"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Fe- lix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis plat- form for natural language understanding. In Pro- ceedings of the 2018 EMNLP Workshop Black- boxNLP: Analyzing and Interpreting Neural Net- works for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Comparison of error rates per capability of vanilla and SWA models."
},
"FIGREF1": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Comparing the variation of error rates and overlap ratios per capability for vanilla and SWA models. Legend for the x-axis can be found inFigure 1insert these names in negative instances of the test set. More details are provided in Appendix D."
},
"FIGREF3": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Error rates of each SWA random seed for the excluded capabilities."
},
"FIGREF4": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Comparison of error rates of vanilla and SWA models for the excluded capabilities."
},
"FIGREF5": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Comparison of variation in error rates between vanilla (red boxes) and SWA models (blue boxes), for the excluded capabilities. Comparison of variation in overlap ratios between vanilla (red boxes) and SWA models (blue boxes), for the excluded capabilities."
},
"FIGREF6": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Comparing the variation of error rates and overlap ratios for the excluded capabilities."
},
"FIGREF7": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "A d d N e g a ti v e P h ra s e s N e g a ti o n o f P o s it iv e S e n te n c e s N e g a ti o n o f P o s it iv e , n e u tr a l w o rd s in th e m id d le M o v ie G e n re S p e c if ic S e n ti m e n ts C h a n g e N a m e s N e g a ti v e N a m e s -P o s it iv e In s ta n c e s P o s it iv e N a m e s -N e g a ti v e In s ta n c e s N e g a ti v e N a m e s -N e g a ti v e In s ta n c e s P o s it iv e N a m e s -P o s it iv e In s ta n c e s C h a n g e M o v ie In d u s tr ie s C h a n g e N e u tr a l W o rd s Te m p o ra l S e n ti m e n t C h a n"
},
"FIGREF8": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "d d N e g a ti v e P h ra s e s N e g a ti o n o f P o s it iv e S e n te n c e s N e g a ti o n o f P o s it iv e , n e u tr a l w o rd s in th e m id d le M o v ie G e n re S p e c if ic S e n ti m e n ts C h a n g e N a m e s N e g a ti v e N a m e s -P o s it iv e In s ta n c e s P o s it iv e N a m e s -N e g a ti v e In s ta n c e s N e g a ti v e N a m e s -N e g a ti v e In s ta n c e s P o s it iv e N a m e s -P o s it iv e In s ta n c e s C h a n g e M o v ie In d u s tr ie s C h a n g e N e u tr a l W o rd sTe m p o ra l S e n ti m e n t C h a n"
},
"FIGREF9": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Comparison of error rates per capability of vanilla and SWA models with all the 10 random seeds.29A d d N e g a ti v e P h ra s e s N e g a ti o n o f P o s it iv e S e n te n c e s N e g a ti o n o f P o s it iv e , n e u tr a l w o rd s in th e m id d le M o v ie G e n re S p e c if ic S e n ti m e n ts C h a n g e N a m e s N e g a ti v e N a m e s -P o s it iv e In s ta n c e s P o s it iv e N a m e s -N e g a ti v e In s ta n c e s N e g a ti v e N a m e s -N e g a ti v e In s ta n c e s P o s it iv e N a m e s -P o s it iv e In s ta n c e s C h a n g e M o v ie In d u s tr ie s C h a n g e N e u tr a l W o rd s Te m p o ra l S e n ti m e n t C h a n Comparison of variation in error rates between vanilla (red boxes) and SWA models (blue boxes), showcased per CheckList capability. Outliers are indicated with a circle.A d d N e g a ti v e P h ra s e s N e g a ti o n o f P o s it iv e S e n te n c e s N e g a ti o n o f P o s it iv e , n e u tr a l w o rd s in th e m id d le M o v ie G e n re S p e c if ic S e n ti m e n ts C h a n g e N a m e s N e g a ti v e N a m e s -P o s it iv e In s ta n c e s P o s it iv e N a m e s -N e g a ti v e In s ta n c e s N e g a ti v e N a m e s -N e g a ti v e In s ta n c e s P o s it iv e N a m e s -P o s it iv e In s ta n c e s C h a n g e M o v ie In d u s tr ie s C h a n g e N e u tr a l W o rd s Te m p o ra l S e n ti m e n t C h a n Errors per Capability: Vanilla vs. SWA Vanilla SWA (b) Comparison of variation in overlap ratios between vanilla (red boxes) and SWA models (blue boxes), showcased per CheckList capability. Outliers are indicated with a circle."
},
"FIGREF10": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Comparing the variation of error rates and overlap ratios per capability for vanilla and SWA models, including results from Random Seed 0."
},
"TABREF1": {
"content": "<table/>",
"num": null,
"text": "",
"html": null,
"type_str": "table"
},
"TABREF3": {
"content": "<table><tr><td/><td/><td/><td/><td/><td/><td colspan=\"4\">Error Rates of Capabilities per Model</td><td/><td/><td/></tr><tr><td>Error Rate</td><td>20 30 40 50</td><td/><td>Random Seed 9 Random Seed 8 Random Seed 7 Random Seed 6 Random Seed 5 Random Seed 4 Random Seed 3 Random Seed 2 Random Seed 1</td><td/><td/><td/><td/><td/><td colspan=\"5\">A -Movie Genre Specific Sentiments B -Temporal Sentiment Change C -Negation of Positive Sentences D -Negation of Positive, neutral words in the middle E -Change Names F -Negative Names -Positive Instances G -Positive Names -Negative Instances H -Negative Names -Negative Instances I -Positive Names -Positive Instances J -Change Movie Industries K -Change Neutral Words L -Add Negative Phrases</td></tr><tr><td/><td>10</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>0</td><td>A</td><td>B</td><td>C</td><td>D</td><td>E</td><td>F</td><td>G</td><td>H</td><td>I</td><td>J</td><td>K</td><td>L</td></tr><tr><td/><td/><td/><td colspan=\"8\">(a) Error rates of each vanilla random seed for each CheckList capability.</td><td/><td/></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"4\">Error Rates of Capabilities per Model</td><td/><td/><td/></tr><tr><td/><td>50</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td colspan=\"2\">Random Seed 9 SWA</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td colspan=\"2\">Random Seed 8 SWA</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td colspan=\"2\">Random Seed 7 SWA</td></tr><tr><td/><td>40</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td colspan=\"2\">Random Seed 6 SWA Random Seed 5 SWA</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td colspan=\"2\">Random Seed 4 SWA</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td colspan=\"2\">Random Seed 3 SWA</td></tr><tr><td>Error Rate</td><td>20 30</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td colspan=\"2\">Random Seed 2 SWA Random Seed 1 SWA</td></tr><tr><td/><td>10</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>0</td><td>A</td><td>B</td><td>C</td><td>D</td><td>E</td><td>F</td><td>G</td><td>H</td><td>I</td><td>J</td><td>K</td><td>L</td></tr><tr><td/><td/><td/><td>(b)</td><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"num": null,
"text": "Error rates of each SWA random seed for each CheckList capability.",
"html": null,
"type_str": "table"
},
"TABREF6": {
"content": "<table/>",
"num": null,
"text": "Fleiss' Kappa values of the vanilla and SWA models on the agreement on the misclassifications on the development set. The upper block is with the first five random seeds and the lower is with all 10.",
"html": null,
"type_str": "table"
},
"TABREF8": {
"content": "<table/>",
"num": null,
"text": "Fleiss' Kappa values of the vanilla and SWA models on the agreement on CheckList mistakes per capability. The first part of the table shows the MFT capabilities, the second part are the INV capabilities, and the third part are the DIR capabilities.",
"html": null,
"type_str": "table"
},
"TABREF9": {
"content": "<table><tr><td colspan=\"2\">Thomas Wolf, Lysandre Debut, Victor Sanh, Julien</td></tr><tr><td colspan=\"2\">Chaumond, Clement Delangue, Anthony Moi, Pier-</td></tr><tr><td colspan=\"2\">ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Fun-</td></tr><tr><td colspan=\"2\">towicz, et al. 2019. Huggingface's transformers:</td></tr><tr><td colspan=\"2\">State-of-the-art natural language processing. arXiv</td></tr><tr><td colspan=\"2\">preprint arXiv:1910.03771.</td></tr><tr><td colspan=\"2\">Tongshuang Wu, Marco Tulio Ribeiro, Jeffrey Heer,</td></tr><tr><td colspan=\"2\">and Daniel Weld. 2019. Errudite: Scalable, repro-</td></tr><tr><td colspan=\"2\">ducible, and testable error analysis. In Proceed-</td></tr><tr><td colspan=\"2\">ings of the 57th Annual Meeting of the Association</td></tr><tr><td colspan=\"2\">for Computational Linguistics, pages 747-763, Flo-</td></tr><tr><td colspan=\"2\">rence, Italy. Association for Computational Linguis-</td></tr><tr><td>tics.</td><td/></tr><tr><td colspan=\"2\">Tongshuang Wu, Marco Tulio Ribeiro, Jeffrey Heer,</td></tr><tr><td colspan=\"2\">and Daniel S Weld. 2021. Polyjuice: Generating</td></tr><tr><td colspan=\"2\">counterfactuals for explaining, evaluating, and im-</td></tr><tr><td colspan=\"2\">proving models. In Proceedings of the 59th Annual</td></tr><tr><td colspan=\"2\">Meeting of the Association for Computational Lin-</td></tr><tr><td>guistics.</td><td/></tr><tr><td colspan=\"2\">Yige Xu, Xipeng Qiu, Ligao Zhou, and Xuanjing</td></tr><tr><td>Huang. 2020.</td><td>Improving bert fine-tuning via</td></tr><tr><td colspan=\"2\">self-ensemble and self-distillation. arXiv preprint</td></tr><tr><td colspan=\"2\">arXiv:2002.10345.</td></tr><tr><td colspan=\"2\">Ruiqi Zhong, Dhruba Ghosh, Dan Klein, and Jacob</td></tr><tr><td colspan=\"2\">Steinhardt. 2021. Are larger pretrained language</td></tr><tr><td colspan=\"2\">models uniformly better? comparing performance</td></tr><tr><td colspan=\"2\">at the instance level. In Findings of the Association</td></tr><tr><td colspan=\"2\">for Computational Linguistics: ACL-IJCNLP 2021,</td></tr><tr><td colspan=\"2\">pages 3813-3827, Online. Association for Computa-</td></tr><tr><td>tional Linguistics.</td><td/></tr></table>",
"num": null,
"text": "Matthew Watson, Bashar Awwad Shiekh Hasan, and Noura Al Moubayed. 2021. Agree to disagree: When deep learning models with identical architectures produce distinct explanations. arXiv preprint arXiv:2105.06791.",
"html": null,
"type_str": "table"
},
"TABREF10": {
"content": "<table/>",
"num": null,
"text": "Fleiss' Kappa values of the excluded capabilities.",
"html": null,
"type_str": "table"
},
"TABREF11": {
"content": "<table><tr><td>Random Seed 9</td></tr><tr><td>Random Seed 8</td></tr><tr><td>Random Seed 7</td></tr><tr><td>Random Seed 6</td></tr><tr><td>Random Seed 5</td></tr><tr><td>Random Seed 4</td></tr><tr><td>Random Seed 3</td></tr><tr><td>Random Seed 2</td></tr><tr><td>Random Seed 1</td></tr><tr><td>Random Seed 0</td></tr></table>",
"num": null,
"text": "Error Rates of Capabilities per Model",
"html": null,
"type_str": "table"
}
}
}
}