{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:10:02.143378Z"
},
"title": "Measuring and Improving Model-Moderator Collaboration using Uncertainty Estimation",
"authors": [
{
"first": "Ian",
"middle": [
"D"
],
"last": "Kivlichan",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Zi",
"middle": [],
"last": "Lin",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jeremiah",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vasserman",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Content moderation is often performed by a collaboration between humans and machine learning models. However, it is not well understood how to design the collaborative process so as to maximize the combined moderator-model system performance. This work presents a rigorous study of this problem, focusing on an approach that incorporates model uncertainty into the collaborative process. First, we introduce principled metrics to describe the performance of the collaborative system under capacity constraints on the human moderator, quantifying how efficiently the combined system utilizes human decisions. Using these metrics, we conduct a large benchmark study evaluating the performance of state-of-the-art uncertainty models under different collaborative review strategies. We find that an uncertainty-based strategy consistently outperforms the widely used strategy based on toxicity scores, and moreover that the choice of review strategy drastically changes the overall system performance. Our results demonstrate the importance of rigorous metrics for understanding and developing effective moderator-model systems for content moderation, as well as the utility of uncertainty estimation in this domain. 1 * Equal contribution; authors listed alphabetically. \u2020 This work was done while Zi Lin was an AI resident at Google Research.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Content moderation is often performed by a collaboration between humans and machine learning models. However, it is not well understood how to design the collaborative process so as to maximize the combined moderator-model system performance. This work presents a rigorous study of this problem, focusing on an approach that incorporates model uncertainty into the collaborative process. First, we introduce principled metrics to describe the performance of the collaborative system under capacity constraints on the human moderator, quantifying how efficiently the combined system utilizes human decisions. Using these metrics, we conduct a large benchmark study evaluating the performance of state-of-the-art uncertainty models under different collaborative review strategies. We find that an uncertainty-based strategy consistently outperforms the widely used strategy based on toxicity scores, and moreover that the choice of review strategy drastically changes the overall system performance. Our results demonstrate the importance of rigorous metrics for understanding and developing effective moderator-model systems for content moderation, as well as the utility of uncertainty estimation in this domain. 1 * Equal contribution; authors listed alphabetically. \u2020 This work was done while Zi Lin was an AI resident at Google Research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Maintaining civil discussions online is a persistent challenge for online platforms. Due to the sheer scale of user-generated text, modern content moderation systems often employ machine learning algorithms to automatically classify user comments based on their toxicity, with the goal of flagging a collection of likely policy-violating content for human experts to review (Etim, 2017) . However, modern deep learning models have been shown to suffer from reliability and robustness issues, especially in the face of the rich and complex sociolinguistic phenomena in real-world online conversations. Examples include possibly generating confidently wrong predictions based on spurious lexical features (Wang and Culotta, 2020) , or exhibiting undesired biases toward particular social subgroups (Dixon et al., 2018) . This has raised questions about how current toxicity detection models will perform in realistic online environments, as well as the potential consequences for moderation systems (Rainie et al., 2017) .",
"cite_spans": [
{
"start": 374,
"end": 386,
"text": "(Etim, 2017)",
"ref_id": "BIBREF18"
},
{
"start": 703,
"end": 727,
"text": "(Wang and Culotta, 2020)",
"ref_id": "BIBREF43"
},
{
"start": 796,
"end": 816,
"text": "(Dixon et al., 2018)",
"ref_id": "BIBREF15"
},
{
"start": 997,
"end": 1018,
"text": "(Rainie et al., 2017)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we study an approach to address these questions by incorporating model uncertainty into the collaborative model-moderator system's decision-making process. The intuition is that by using uncertainty as a signal for the likelihood of model error, we can improve the efficiency and performance of the collaborative moderation system by prioritizing the least confident examples from the model for human review. Despite a plethora of uncertainty methods in the literature, there has been limited work studying their effectiveness in improving the performance of human-AI collaborative systems with respect to application-specific metrics and criteria (Awaysheh et al., 2019; Dusenberry et al., 2020; Jesson et al., 2020) . This is especially important for the content moderation task: real-world practice has unique challenges and constraints, including label imbalance, distributional shift, and limited resources of human experts; how these factors impact the collaborative system's effectiveness is not well understood.",
"cite_spans": [
{
"start": 662,
"end": 685,
"text": "(Awaysheh et al., 2019;",
"ref_id": null
},
{
"start": 686,
"end": 710,
"text": "Dusenberry et al., 2020;",
"ref_id": "BIBREF17"
},
{
"start": 711,
"end": 731,
"text": "Jesson et al., 2020)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we lay the foundation for the study of the uncertainty-aware collaborative content moderation problem. We first (1) propose rigorous met-rics Oracle-Model Collaborative Accuracy (OC-Acc) and AUC (OC-AUC) to measure the performance of the overall collaborative system under capacity constraints on a simulated human moderator. We also propose Review Efficiency, a intrinsic metric to measure a model's ability to improve the collaboration efficiency by selecting examples that need further review. Then, (2) we introduce a challenging data benchmark, Collaborative Toxicity Moderation in the Wild (CoToMoD), for evaluating the effectiveness of a collaborative toxic comment moderation system. CoToMoD emulates the realistic train-deployment environment of a moderation system, in which the deployment environment contains richer linguistic phenomena and a more diverse range of topics than the training data, such that effective collaboration is crucial for good system performance (Amodei et al., 2016) . Finally, (3) we present a large benchmark study to evaluate the performance of five classic and state-of-the-art uncertainty approaches on CoToMoD under two different moderation review approaches (based on the uncertainty score and on the toxicity score, respectively). We find that both the model's predictive and uncertainty quality contribute to the performance of the final system, and that the uncertainty-based review strategy outperforms the toxicity strategy across a variety of models and range of human review capacities.",
"cite_spans": [
{
"start": 995,
"end": 1016,
"text": "(Amodei et al., 2016)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our collaborative metrics draw on the idea of classification with a reject option, or learning with abstention (Bartlett and Wegkamp, 2008; Cortes et al., 2016 Cortes et al., , 2018 Kompa et al., 2021) . In this classification scenario, the model has the option to reject an example instead of predicting its label. The challenge in connecting learning with abstention to OC-Acc or OC-AUC is to account for how many examples have already been rejected. Specifically, the difficulty is that the metrics we present are all dataset-level metrics, i.e. the \"reject\" option is not at the level of individual examples, but rather a set capacity over the entire dataset. Moreover, this means OC-Acc and OC-AUC can be compared directly with traditional accuracy or AUC measures. This difference in focus enables us to consider human time as the limiting resource in the overall model-moderator system's performance.",
"cite_spans": [
{
"start": 111,
"end": 139,
"text": "(Bartlett and Wegkamp, 2008;",
"ref_id": "BIBREF6"
},
{
"start": 140,
"end": 159,
"text": "Cortes et al., 2016",
"ref_id": "BIBREF11"
},
{
"start": 160,
"end": 181,
"text": "Cortes et al., , 2018",
"ref_id": "BIBREF10"
},
{
"start": 182,
"end": 201,
"text": "Kompa et al., 2021)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "One key point for our work is that the best model (in isolation) may not yield the best performance in collaboration with a human (Bansal et al., 2021) . Our work demonstrates this for a case where the collaboration procedure is decided over the full dataset rather than per example: because of this, Bansal et al. (2021) 's expected team utility does not easily generalize to our setting. In particular, the user chooses which classifier predictions to accept after receiving all of them rather than per example.",
"cite_spans": [
{
"start": 130,
"end": 151,
"text": "(Bansal et al., 2021)",
"ref_id": "BIBREF4"
},
{
"start": 301,
"end": 321,
"text": "Bansal et al. (2021)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Robustness to distribution shift has been applied to toxicity classification in other works (Adragna et al., 2020; Koh et al., 2020) , emphasizing the connection between fairness and robustness. Our work focuses on how these methods connect to the human review process, and how uncertainty can lead to better decision-making for a model collaborating with a human. Along these lines, Dusenberry et al. (2020) analyzed how uncertainty affects optimal decisions in a medical context, though again at the level of individual examples rather than over the dataset. ",
"cite_spans": [
{
"start": 92,
"end": 114,
"text": "(Adragna et al., 2020;",
"ref_id": "BIBREF0"
},
{
"start": 115,
"end": 132,
"text": "Koh et al., 2020)",
"ref_id": null
},
{
"start": 384,
"end": 408,
"text": "Dusenberry et al. (2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "D = {y i , x i } N i=1 using a deep classi- fier f W (x).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Here the x i are example comments, y i \u223c p * (y|x i ) are toxicity labels drawn from a data generating process p * (e.g., the human annotation process), and W are the parameters of the deep neural network. There are two distinct types of uncertainty in this modeling process: data uncertainty and model uncertainty (Sullivan, 2015; Liu et al., 2019) . Data uncertainty arises from the stochastic variability inherent in the data generating process p * . For example, the toxicity label y i for a comment can vary between 0 and 1 depending on raters' different understandings of the comment or of the annotation guidelines. On the other hand, model uncertainty arises from the model's lack of knowledge about the world, commonly caused by insufficient coverage of the training data. For example, at evaluation time, the toxicity classifier may encounter neologisms or misspellings that did not appear in the training data, making it more likely to make a mistake (van Aken et al., 2018) . While the model uncertainty can be reduced by training on more data, the data uncertainty is inherent to the data generating process and is irreducible. Estimating Uncertainty A model that quantifies its uncertainty well should properly capture both the data and the model uncertainties. To this end, a learned deep classifier f W (x) describes the data uncertainty via its predictive probability, e.g.:",
"cite_spans": [
{
"start": 315,
"end": 331,
"text": "(Sullivan, 2015;",
"ref_id": "BIBREF41"
},
{
"start": 332,
"end": 349,
"text": "Liu et al., 2019)",
"ref_id": "BIBREF32"
},
{
"start": 962,
"end": 985,
"text": "(van Aken et al., 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "p(y|x, W ) = sigmoid(f W (x)),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "which is conditioned on the model parameter W , and is commonly learned by minimizing the Kullback-Leibler (KL) divergence between the model distribution p(y|x, W ) and the empirical distribution of the data (e.g. by minimizing the cross-entropy loss (Goodfellow et al., 2016) ). On the other hand, a deep classifier can quantify model uncertainty by using probabilistic methods to learn the posterior distribution of the model parameters:",
"cite_spans": [
{
"start": 251,
"end": 276,
"text": "(Goodfellow et al., 2016)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "W \u223c p(W ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "This distribution over W leads to a distribution over the predictive probabilities p(y|x, W ). As a result, at inference time, the model can sample model weights {W m } M m=1 from the posterior distribution p(W ), and then compute the posterior sample of predictive probabilities {p(y|x, W m )} M m=1 . This allows the model to express its model uncertainty through the variance of the posterior distribution Var p(y|x, W ) . Section 5 surveys popular probabilistic deep learning methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In practice, it is convenient to compute a single uncertainty score capturing both types of uncertainty. To this end, we can first compute the marginalized predictive probability:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "p(y|x) = p(y|x, W )p(W ) dW",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "which captures both types of uncertainty by marginalizing the data uncertainty p(y|x, W ) over the model uncertainty p(W ). We can thus quantify the overall uncertainty of the model by computing the predictive variance of this binary distribution:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "u unc (x) = p(y|x) \u00d7 (1 \u2212 p(y|x)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
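{
"text": "To make these quantities concrete, the following is a minimal sketch (not from the paper) of how the marginalized probability and the uncertainty score of Eq. (1) might be computed from posterior samples, assuming only NumPy; the array of probability samples stands in for {p(y|x, W_m)}_{m=1}^M however it is obtained.\nimport numpy as np\n\ndef predictive_uncertainty(prob_samples):\n    # prob_samples: shape (M, N); entry (m, i) is p(y|x_i, W_m) for a\n    # posterior sample W_m ~ p(W).\n    p_mean = prob_samples.mean(axis=0)    # marginalized p(y|x_i)\n    model_unc = prob_samples.var(axis=0)  # model uncertainty: variance over W\n    total_unc = p_mean * (1.0 - p_mean)   # Eq. (1): variance of Bernoulli(p)\n    return p_mean, model_unc, total_unc\n\n# Example with M = 10 posterior samples for N = 3 comments.\nrng = np.random.default_rng(0)\np, m_unc, u_unc = predictive_uncertainty(rng.uniform(size=(10, 3)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},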
{
"text": "Evaluating Uncertainty Quality A common approach to evaluate a model's uncertainty quality is to measure its calibration performance, i.e., whether the model's predictive uncertainty is indicative of the predictive error (Guo et al., 2017 ). As we shall see in experiments, traditional calibration metrics like the Brier score (Ovadia et al., 2019) do not correlate well with the model performance in collaborative prediction. One notable reason is that the collaborative systems use uncertainty as a ranking score (to identify possibly wrong predictions), while metrics like Brier score only measure the uncertainty's ranking performance indirectly.",
"cite_spans": [
{
"start": 221,
"end": 238,
"text": "(Guo et al., 2017",
"ref_id": "BIBREF21"
},
{
"start": 327,
"end": 348,
"text": "(Ovadia et al., 2019)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Inaccurate TP FN Accurate FP TN Figure 1 : Confusion matrix for evaluating uncertainty calibration. We describe the correspondence in the text.",
"cite_spans": [],
"ref_spans": [
{
"start": 32,
"end": 40,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Accuracy",
"sec_num": null
},
{
"text": "This motivates us to consider Calibration AUC, a new class of calibration metrics that focus on the uncertainty score u unc (x)'s ranking performance. This metric evaluates uncertainty estimation by recasting it as a binary prediction problem, where the binary label is the model's prediction error I(f (x i ) = y i ), and the predictive score is the model uncertainty. This formulation leads to a confusion matrix as shown in Figure 1 (Krishnan and Tickoo, 2020) . Here, the four confusion matrix variables take on new meanings: (1) True Positive (TP) corresponds to the case where the prediction is inaccurate and the model is uncertain, (2) True Negative (TN) to the accurate and certain case, (3) False Negative (FN) to the inaccurate and certain case (i.e., over-confidence), and finally (4) False Positive (FP) to the accurate and uncertain case (i.e., underconfidence). Now, consider having the model predict its testing error using model uncertainty. The precision (TP/(TP+FP)) measures the fraction of inaccurate examples where the model is uncertain, recall (TP/(TP+FN)) measures the fraction of uncertain examples where the model is inaccurate, and the false positive rate (FP/(FP+TN)) measures the fraction of under-confident examples among the correct predictions. Thus, the model's calibration performance can be measured by the area under the precision-recall curve (Calibration AUPRC) and under the receiver operating characteristic curve (Calibration AUROC) for this problem. It is worth noting that the calibration AUPRC is closely related to the intrinsic metrics for the model's collaborative effectiveness: we discuss this in greater detail for the Review Efficiency in Section 4.1 and Appendix A.2). This renders it especially suitable for evaluating model uncertainty in the context of collaborative content moderation.",
"cite_spans": [
{
"start": 436,
"end": 463,
"text": "(Krishnan and Tickoo, 2020)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 427,
"end": 435,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Accuracy",
"sec_num": null
},
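{
"text": "As an illustration, Calibration AUROC and AUPRC can be computed with standard AUC routines by treating prediction errors as binary labels and uncertainty as the ranking score; a sketch assuming scikit-learn (function and variable names are illustrative, not from the paper).\nimport numpy as np\nfrom sklearn.metrics import average_precision_score, roc_auc_score\n\ndef calibration_auc(y_true, y_prob, uncertainty, threshold=0.5):\n    # Binary 'label' is the prediction error I(f(x_i) != y_i); the\n    # predictive 'score' is the model uncertainty u_unc(x_i).\n    y_pred = (y_prob > threshold).astype(int)\n    error = (y_pred != y_true).astype(int)\n    return {'calibration_auroc': roc_auc_score(error, uncertainty),\n            'calibration_auprc': average_precision_score(error, uncertainty)}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Accuracy",
"sec_num": null
},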
{
"text": "Online content moderation is a collaborative process, performed by humans working in conjunction with machine learning models. For example, the model can select a set of likely policy-violating posts for further review by human moderators. In this work, we consider a setting where a neural model interacts with an \"oracle\" human moderator with limited capacity in moderating online comments. Given a large number of examples {x i } n i=1 , the model first generates the predictive probability p(y|x i ) and review score u(x i ) for each example. Then, the model sends a pre-specified number of these examples to human moderators according to the rankings of the review score u(x i ), and relies on its prediction p(y|x i ) for the rest of the examples. In this work, we make the simplifying assumption that the human experts act like an oracle, correctly labeling all comments sent by the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Collaborative Content Moderation Task",
"sec_num": "4"
},
{
"text": "Machine learning systems for online content moderation are typically evaluated using metrics like accuracy or area under the receiver operating characteristic curve (AUROC). These metrics reflect the origins of these systems in classification problems, such as for detecting / classifying online abuse, harassment, or toxicity (Yin et al., 2009; Dinakar et al., 2011; Cheng et al., 2015; Wulczyn et al., 2017) . However, they do not capture the model's ability to effectively collaborate with human moderators, or the performance of the resultant collaborative system. New metrics, both extrinsic and intrinsic (Moll\u00e1 and Hutchinson, 2003) , are one of the core contributions of this work. We introduce extrinsic metrics describing the performance of the overall modelmoderator collaborative system (Oracle-Model Collaborative Accuracy and AUC, analogous to the classic accuracy and AUC), and an intrinsic metric focusing on the model's ability to effectively collaborate with human moderators (Review Efficiency), i.e., how well the model selects the examples in need of further review.",
"cite_spans": [
{
"start": 327,
"end": 345,
"text": "(Yin et al., 2009;",
"ref_id": "BIBREF46"
},
{
"start": 346,
"end": 367,
"text": "Dinakar et al., 2011;",
"ref_id": "BIBREF14"
},
{
"start": 368,
"end": 387,
"text": "Cheng et al., 2015;",
"ref_id": "BIBREF9"
},
{
"start": 388,
"end": 409,
"text": "Wulczyn et al., 2017)",
"ref_id": "BIBREF44"
},
{
"start": 611,
"end": 639,
"text": "(Moll\u00e1 and Hutchinson, 2003)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring the Performance of the Collaborative Moderation System",
"sec_num": "4.1"
},
{
"text": "Extrinsic Metrics: Oracle-model Collaborative Accuracy and AUC To capture the collaborative interaction between human moderators and machine learning models, we first propose Oracle-Model Collaborative Accuracy (OC-Acc).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring the Performance of the Collaborative Moderation System",
"sec_num": "4.1"
},
{
"text": "OC-Acc measures the combined accuracy of this collaborative process, subject to a limited review capacity \u03b1 for the human oracle (i.e., the oracle can process at most \u03b1 \u00d7 100% of the total examples). Formally, given a dataset",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring the Performance of the Collaborative Moderation System",
"sec_num": "4.1"
},
{
"text": "D = {(x i , y i )} n i=1 , for a predictive model f (x i ) generating a review score u(x i ), the Oracle-Model Collaborative Accuracy for example x i is OC-Acc(x i |\u03b1) = 1 if u(x i ) > q 1\u2212\u03b1 I(f (x i ) = y i ) otherwise ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring the Performance of the Collaborative Moderation System",
"sec_num": "4.1"
},
{
"text": "Thus, over the whole dataset, OC-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring the Performance of the Collaborative Moderation System",
"sec_num": "4.1"
},
{
"text": "Acc(\u03b1) = 1 n n i=1 OC-Acc(x i |\u03b1). Here q 1\u2212\u03b1 is the (1 \u2212 \u03b1) th quantile of the model's review scores {u(x i )} n i=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring the Performance of the Collaborative Moderation System",
"sec_num": "4.1"
},
{
"text": "over the entire dataset. OC-Acc thus describes the performance of a collaborative system which defers to a human oracle when the review score u(x i ) is high, and relies on the model prediction otherwise, capturing the real-world usage and performance of the underlying model in a way that traditional metrics fail to.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring the Performance of the Collaborative Moderation System",
"sec_num": "4.1"
},
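{
"text": "A minimal sketch of OC-Acc under this definition, assuming NumPy and simulating the oracle by treating reviewed examples as correctly labeled (all names are illustrative):\nimport numpy as np\n\ndef oracle_collaborative_accuracy(y_true, y_pred, review_score, alpha):\n    # q_{1-alpha}: the (1 - alpha)th quantile of the review scores.\n    q = np.quantile(review_score, 1.0 - alpha)\n    reviewed = review_score > q\n    correct = (y_pred == y_true)\n    # Reviewed examples are corrected by the oracle; for the rest the\n    # system relies on the model prediction.\n    return float(np.mean(np.where(reviewed, True, correct)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring the Performance of the Collaborative Moderation System",
"sec_num": "4.1"
},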
{
"text": "However, as an accuracy-like metric, OC-Acc relies on a set threshold on the prediction score. This limits the metric's ability in describing model performance when compared to threshold-agnostic metrics like AUC. Moreover, OC-Acc can be sensitive to the intrinsic class imbalance in the toxicity datasets, appearing overly optimistic for model predictions that are biased toward negative class, similar to traditional accuracy metrics (Borkan et al., 2019) . Therefore in practice, we prefer the AUC analogue of Oracle-Model Collaborative Accuracy, which we term the Oracle-Model Collaborative AUC (OC-AUC). OC-AUC measures the same collaborative process as the OC-Acc, where the model sends the predictions with the top \u03b1 \u00d7 100% of review scores. Then, similar to the standard AUC computation, OC-AUC sets up a collection of classifiers with varying predictive score thresholds, each of which has access to the oracle exactly as for OC-Acc (Davis and Goadrich, 2006) . Each of these classifiers sends the same set of examples to the oracle (since the review score u(x) is threshold-independent), and the oracle corrects model predictions when they are incorrect given the threshold. The OC-AUC-both OC-AUROC and OC-AUPRC-can then be calculated over this set of classifiers following the standard AUC algorithms (Davis and Goadrich, 2006) .",
"cite_spans": [
{
"start": 436,
"end": 457,
"text": "(Borkan et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 942,
"end": 968,
"text": "(Davis and Goadrich, 2006)",
"ref_id": "BIBREF12"
},
{
"start": 1313,
"end": 1339,
"text": "(Davis and Goadrich, 2006)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring the Performance of the Collaborative Moderation System",
"sec_num": "4.1"
},
{
"text": "Intrinsic Metric: Review Efficiency The metrics so far measure the performance of the over-all collaborative system, which combines both the model's predictive accuracy and the model's effectiveness in collaboration. To understand the source of the improvement, we also introduce Review Efficiency, an intrinsic metric focusing solely on the model's effectiveness in collaboration. Specifically, Review Efficiency is the proportion of examples sent to the oracle for which the model prediction would otherwise have been incorrect. This can be thought of as the model's precision in selecting inaccurate examples for further review (TP/(TP+FP) in Figure 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 646,
"end": 654,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Measuring the Performance of the Collaborative Moderation System",
"sec_num": "4.1"
},
{
"text": "Note that the system's overall performance (measured by the oracle-model collaborative accuracy) can be rewritten as a weighted sum of the model's original predictive accuracy and the Review Efficiency (RE):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring the Performance of the Collaborative Moderation System",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "OC-Acc(\u03b1) = Acc + \u03b1 \u00d7 RE(\u03b1)",
"eq_num": "(2)"
}
],
"section": "Measuring the Performance of the Collaborative Moderation System",
"sec_num": "4.1"
},
{
"text": "where RE(\u03b1) is the model's review efficiency among all the examples whose review score u(x i ) are greater than q 1\u2212\u03b1 (i.e., those sent to human moderators). Thus, a model with better predictive performance and higher review efficiency yields better performance in the overall system. The benefits of review efficiency become more pronounced as the review fraction \u03b1 increases. We derive Eq. (2) in Appendix B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring the Performance of the Collaborative Moderation System",
"sec_num": "4.1"
},
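{
"text": "Review Efficiency, and the decomposition in Eq. (2), can be checked numerically with a sketch like the following (assuming NumPy and the oracle_collaborative_accuracy helper sketched above; OC-Acc(\u03b1) \u2248 Acc + \u03b1 \u00d7 RE(\u03b1) holds up to ties at the quantile q_{1-\u03b1}).\nimport numpy as np\n\ndef review_efficiency(y_true, y_pred, review_score, alpha):\n    q = np.quantile(review_score, 1.0 - alpha)\n    reviewed = review_score > q\n    if reviewed.sum() == 0:\n        return 0.0\n    # Fraction of reviewed examples the model would have gotten wrong.\n    return float(np.mean(y_pred[reviewed] != y_true[reviewed]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring the Performance of the Collaborative Moderation System",
"sec_num": "4.1"
},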
{
"text": "In a realistic industrial setting, toxicity detection models are often trained on a well-curated dataset with clean annotations, and then deployed to an environment that contains a more diverse range of sociolinguistic phenomena, and additionally exhibits systematic shifts in the lexical and topical distributions when compared to the training corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CoToMoD: An Evaluation Benchmark for Real-world Collaborative Moderation",
"sec_num": "4.2"
},
{
"text": "To this end, we introduce a challenging data benchmark, Collaborative Toxicity Moderation in the Wild (CoToMoD), to evaluate the performance of collaborative moderation systems in a realistic environment. CoToMoD consists of a set of train, test, and deployment environments: the train and test environments consist of 200k comments from Wikipedia discussion comments from 2004-2015 (the Wikipedia Talk Corpus (Wulczyn et al., 2017) ), and the deployment environment consists of one million public comments appeared on approximately 50 English-language news sites across the world from 2015-2017 (the CivilComments dataset (Borkan et al., 2019) ). This setup mirrors the real-world implementation of these methods, where robust performance under changing data is essential for proper deployment (Amodei et al., 2016) .",
"cite_spans": [
{
"start": 410,
"end": 432,
"text": "(Wulczyn et al., 2017)",
"ref_id": "BIBREF44"
},
{
"start": 623,
"end": 644,
"text": "(Borkan et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 795,
"end": 816,
"text": "(Amodei et al., 2016)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CoToMoD: An Evaluation Benchmark for Real-world Collaborative Moderation",
"sec_num": "4.2"
},
{
"text": "Notably, CoToMoD contains two data challenges often encountered in practice: (1) Distributional Shift, i.e. the comments in the training and deployment environments cover different time periods and surround different topics of interest (Wikipedia pages vs. news articles). As the Civil-Comments corpus is much larger in size, it contains a considerable collection of long-tail phenomena (e.g., neologisms, obfuscation, etc.) that appear less frequently in the training data. (2) Class Imbalance, i.e. the fact that most online content is not toxic (Cheng et al., 2017; Wulczyn et al., 2017) . This manifests in the datasets we use: roughly 2.5% (50,350 / 1,999,514) of the examples in the Civil-Comments dataset, and 9.6% (21,384 / 223,549) of the examples in Wikipedia Talk Corpus examples are toxic (Wulczyn et al., 2017; Borkan et al., 2019) . As we will show, failing to account for class imbalance can severely bias model predictions toward the majority (non-toxic) class, reducing the effectiveness of the collaborative system.",
"cite_spans": [
{
"start": 548,
"end": 568,
"text": "(Cheng et al., 2017;",
"ref_id": "BIBREF8"
},
{
"start": 569,
"end": 590,
"text": "Wulczyn et al., 2017)",
"ref_id": "BIBREF44"
},
{
"start": 801,
"end": 823,
"text": "(Wulczyn et al., 2017;",
"ref_id": "BIBREF44"
},
{
"start": 824,
"end": 844,
"text": "Borkan et al., 2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CoToMoD: An Evaluation Benchmark for Real-world Collaborative Moderation",
"sec_num": "4.2"
},
{
"text": "Moderation Review Strategy In measuring model-moderator collaborative performance, we consider two review strategies (i.e. using different review scores u(x)). First, we experiment with a common toxicity-based review strategy (Jigsaw, 2019; Salganik and Lee, 2020) . Specifically, the model sends comments for review in decreasing order of the predicted toxicity score (i.e., the predictive probability p(y|x)), equivalent to a review score u tox (x) = p(y|x). The second strategy is uncertainty-based: given p(y|x), we use uncertainty as the review score, u unc (x) = p(y|x)(1 \u2212 p(y|x)) (recall Eq. (1)), so that the review score is maximized at p(y|x) = 0.5, and decreases toward 0 as p(x) approaches 0 or 1. Which strategy performs best depends on the toxicity distribution in the dataset and the available review capacity \u03b1.",
"cite_spans": [
{
"start": 226,
"end": 240,
"text": "(Jigsaw, 2019;",
"ref_id": "BIBREF24"
},
{
"start": 241,
"end": 264,
"text": "Salganik and Lee, 2020)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "5"
},
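{
"text": "The two strategies differ only in how the review score u(x) is derived from the predicted probability; a minimal sketch:\ndef toxicity_review_score(p):\n    # u_tox(x) = p(y|x): review the most toxic-looking comments first.\n    return p\n\ndef uncertainty_review_score(p):\n    # u_unc(x) = p(y|x)(1 - p(y|x)): maximized at p = 0.5 (Eq. (1)).\n    return p * (1.0 - p)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "5"
},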
{
"text": "We evaluate the performance of classic and the latest state-of-the-art probabilistic deep learning methods on the Co-ToMoD benchmark. We consider BERT base as the base model (Devlin et al., 2019) , and select five methods based on their practical applicabil-ity for transformer models. Specifically, we consider (1) Deterministic which computes the sigmoid probability p(x) = sigmoid(logit(x)) of a vanilla BERT model (Hendrycks and Gimpel, 2017) , (2) Monte Carlo Dropout (MC Dropout) which estimates uncertainty using the Monte Carlo average of p(x) from 10 dropout samples (Gal and Ghahramani, 2016) , (3) Deep Ensemble which estimates uncertainty using the ensemble mean of p(x) from 10 BERT models trained in parallel (Lakshminarayanan et al., 2017) , (4) Spectralnormalized Neural Gaussian Process (SNGP), a recent state-of-the-art approach which improves a BERT model's uncertainty quality by transforming it into an approximate Gaussian process model (Liu et al., 2020) , and (5) SNGP Ensemble, which is the Deep Ensemble using SNGP as the base model.",
"cite_spans": [
{
"start": 174,
"end": 195,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 418,
"end": 446,
"text": "(Hendrycks and Gimpel, 2017)",
"ref_id": "BIBREF22"
},
{
"start": 576,
"end": 602,
"text": "(Gal and Ghahramani, 2016)",
"ref_id": "BIBREF19"
},
{
"start": 723,
"end": 754,
"text": "(Lakshminarayanan et al., 2017)",
"ref_id": "BIBREF29"
},
{
"start": 959,
"end": 977,
"text": "(Liu et al., 2020)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Uncertainty Models",
"sec_num": null
},
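{
"text": "Schematically, MC Dropout and Deep Ensemble both reduce to averaging predictive probabilities, either over stochastic forward passes of one network or over independently trained networks; a sketch under that assumption (the prob_fns argument is illustrative, not from the paper):\nimport numpy as np\n\ndef ensemble_mean_prob(prob_fns, x):\n    # prob_fns: callables, each returning p(y|x) for one dropout sample\n    # (MC Dropout) or one ensemble member (Deep Ensemble).\n    return np.stack([fn(x) for fn in prob_fns]).mean(axis=0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uncertainty Models",
"sec_num": null
},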
{
"text": "Learning Objective To address class imbalance, we consider combining the uncertainty methods with Focal Loss (Lin et al., 2017) . Focal loss reshapes the loss function to down-weight \"easy\" negatives (i.e. non-toxic examples), thereby focusing training on a smaller set of more difficult examples, and empirically leading to improved predictive and uncertainty calibration performance on class-imbalanced datasets (Lin et al., 2017; Mukhoti et al., 2020) . We focus our attention on focal loss (rather than other approaches to class imbalance) because of how this impact on calibration interacts with our moderation review strategies.",
"cite_spans": [
{
"start": 109,
"end": 127,
"text": "(Lin et al., 2017)",
"ref_id": "BIBREF30"
},
{
"start": 414,
"end": 432,
"text": "(Lin et al., 2017;",
"ref_id": "BIBREF30"
},
{
"start": 433,
"end": 454,
"text": "Mukhoti et al., 2020)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Uncertainty Models",
"sec_num": null
},
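{
"text": "For reference, a minimal binary focal loss in the form popularized by Lin et al. (2017); this sketch uses NumPy, takes gamma as the focusing parameter, and omits the optional class-balancing weight:\nimport numpy as np\n\ndef binary_focal_loss(p, y, gamma=2.0, eps=1e-7):\n    # p: predicted probability of the toxic class; y: 0/1 labels.\n    # The (1 - p_t)^gamma factor down-weights easy, confident examples.\n    p = np.clip(p, eps, 1.0 - eps)\n    p_t = np.where(y == 1, p, 1.0 - p)\n    return float(-np.mean((1.0 - p_t) ** gamma * np.log(p_t)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uncertainty Models",
"sec_num": null
},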
{
"text": "We first examine the prediction and calibration performance of the uncertainty models alone (Section 6.1). For prediction, we compute the predictive accuracy (Acc) and the predictive AUC (both AU-ROC and AUPRC). For uncertainty, we compute the Brier score (i.e., the mean squared error between true labels and predictive probabilities, a standard uncertainty metric), and also the Calibration AUPRC (Section 3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmark Experiments",
"sec_num": "6"
},
{
"text": "We then evaluate the models' collaboration performance under both the uncertainty-and the toxicity-based review strategies (Section 6.2). For each model-strategy combination, we measure the model's collaboration ability by computing Review Efficiency, and evaluate the performance of the overall collaborative system using Oracle-Model Collaborative AUROC (OC-AUROC). We evaluate all collaborative metrics over a range of human moderator review ca-pacities, with their review fractions (i.e., fraction of total examples the model sends to the moderator for further review) ranging over {0.001, 0.005, 0.01, 0.02, 0.05, 0.1, 0.15, 0.20}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmark Experiments",
"sec_num": "6"
},
{
"text": "Results on further uncertainty and collaboration metrics (Calibration AUROC, OC-Acc, OC-AUPRC, etc.) are in Appendix D. Table 1 shows the performance of all uncertainty methods evaluated on the testing (the Wikipedia Talk corpus) and the deployment environments (the CivilComments corpus).",
"cite_spans": [],
"ref_spans": [
{
"start": 120,
"end": 127,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Benchmark Experiments",
"sec_num": "6"
},
{
"text": "First, we compare the uncertainty methods based on the predictive and calibration AUC. As shown, for prediction, the ensemble models (both SNGP Ensemble and Deep Ensemble) provide the best performance, while the SNGP Ensemble and MC Dropout perform best for uncertainty calibration. Training with focal loss systematically improves the model prediction under class imbalance (improving the predictive AUC), while incurring a trade-off with the model's calibration quality (i.e. decreasing the calibration AUC).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction and Calibration",
"sec_num": "6.1"
},
{
"text": "Next, we turn to the model performance between the test and deployment environments. Across all methods, we observe a significant drop in predictive performance (\u223c 0.28 for AUROC and \u223c 0.13 for AUPRC), and a less pronounced, but still noticeable drop in uncertainty calibration (\u223c 0.05 for Calibration AUPRC). Interestingly, focal loss seems to mitigate the drop in predictive performance, but also slightly exacerbates the drop in uncertainty calibration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction and Calibration",
"sec_num": "6.1"
},
{
"text": "Lastly, we observe a counter-intuitive improvement in the non-AUC metrics (i.e., accuracy and Brier score) in the out-of-domain deployment environment. This is likely due to their sensitivity to class imbalance (recall that toxic examples are slightly less rare in CivilComments). As a result, these classic metrics tend to favor model predictions biased toward the negative class, and therefore are less suitable for evaluating model performance in the context of toxic comment moderation. Figure 2 and 3 show the Oracle-model Collaborative AUROC (OC-AUROC) of the overall collaborative system, and Figure 4 shows the Review Efficiency of uncertainty models. Both the toxicitybased (dashed line) and uncertainty-based review strategies (solid line) are included. Effect of Review Strategy For the AUC performance of the collaborative system, the uncertaintybased review strategy consistently outperforms the toxicity-based review strategy. For example, in the in-domain environment (Wikipedia Talk corpus), using the uncertainty-rather than toxicity-based review strategy yields larger OC-AUROC improvements than any modeling change; this holds across all measured review fractions. We see a similar trend for OC-AUPRC (Appendix Figure 7-8 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 491,
"end": 505,
"text": "Figure 2 and 3",
"ref_id": "FIGREF0"
},
{
"start": 600,
"end": 608,
"text": "Figure 4",
"ref_id": "FIGREF1"
},
{
"start": 1230,
"end": 1240,
"text": "Figure 7-8",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Prediction and Calibration",
"sec_num": "6.1"
},
{
"text": "The trend in Review Efficiency (Figure 4 ) provides a more nuanced view to this picture. As shown, the efficiency of the toxicity-based strategy starts to improve as the review fraction increases, leading to a cross-over with the uncertainty-based strategy at high fractions. This is likely caused by the fact that in toxicity classification, the false positive rate exceeds the false negative rate. Therefore sending a large number of positive predictions eventually leads the collaborative system to capture more errors, at the cost of a higher review load on human moderators. We notice that this transition occurs much earlier out-of-domain on CivilComments (Figure 4 right) . This highlights the impact of the toxicity distribution of the data on the best review strategy: because the proportion of toxic examples is much lower in CivilComments than in the Wikipedia Talk Corpus, the cross-over between the uncertainty and toxicity review strategies correspondingly occurs at lower review fractions. Finally, it is important to note that this advantage in review efficiency does not directly translate to improvements for the overall system. For example, the OC-AUCs using the toxicity strategy are still lower than those with the uncertainty strategy even for high review fractions.",
"cite_spans": [],
"ref_spans": [
{
"start": 31,
"end": 40,
"text": "(Figure 4",
"ref_id": "FIGREF1"
},
{
"start": 662,
"end": 678,
"text": "(Figure 4 right)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Collaboration Performance",
"sec_num": "6.2"
},
{
"text": "Recall that the performance of the overall collaborative system is the result of the model performance in both prediction and calibration, e.g. Eq. (2). As a result, the model performance in Section 6.1 translates to performance on the collaborative metrics. For example, the ensemble methods (SNGP Ensemble and Deep Ensemble) consistently outperform on the OC-AUC metrics due to their high performance in predictive AUC and decent performance in calibration (Table 1) . On the other hand, MC Dropout has Figure 3 : Semilog plot of oracle-model collaborative AUROC as a function of review fraction, trained with crossentropy (XENT, left) or focal loss (right) and evaluated on CivilComments corpus (i.e., the out-of-domain deployment environment). Solid line: uncertainty-based review strategy. Dashed line: toxicity-based review strategy.",
"cite_spans": [],
"ref_spans": [
{
"start": 459,
"end": 468,
"text": "(Table 1)",
"ref_id": "TABREF2"
},
{
"start": 505,
"end": 513,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effect of Modeling Approach",
"sec_num": null
},
{
"text": "Training with focal rather than cross-entropy loss yields a large improvement. The best performing method is the Deep Ensemble trained with focal loss and uses the uncertainty-based review strategy. good calibration performance but sub-optimal predictive AUC. As a result, it sometimes attains the best Review Efficiency (e.g., Figure 4 , right), but never achieves the best overall OC-AUC. Finally, comparing between training objectives, the focalloss-trained models tend to outperform their crossentropy-trained counterparts in OC-AUC, due to the fact that focal loss tends to bring significant benefits to the predictive AUC (albeit at a small cost to the calibration performance).",
"cite_spans": [],
"ref_spans": [
{
"start": 328,
"end": 336,
"text": "Figure 4",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Effect of Modeling Approach",
"sec_num": null
},
{
"text": "In this work, we presented the problem of collaborative content moderation, and introduced Co-ToMoD, a challenging benchmark for evaluating the practical effectiveness of collaborative (modelmoderator) content moderation systems. We proposed principled metrics to quantify how effectively a machine learning model and human (e.g. a moderator) can collaborate. These include Oracle-Model Collaborative Accuracy (OC-Acc) and AUC (OC-AUC), which measure analogues of the usual accuracy or AUC for interacting human-AI sys-tems subject to limited human review capacity. We also proposed Review Efficiency, which quantifies how effectively a model utilizes human decisions. These metrics are distinct from classic measures of predictive performance or uncertainty calibration, and enable us to evaluate the performance of the full collaborative system as a function of human attention, as well as to understand how efficiently the collaborative system utilizes human decision-making. Moreover, though we focused here on measuring the combined system's performance through metrics analogous to accuracy and AUC, it is trivial to extend these to other classic metrics like precision and recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Using these new metrics, we evaluated the performance of a variety of models on the collaborative content moderation task. We considered two canonical strategies for collaborative review: one based on the toxicity scores, and a new one using model uncertainty. We found that the uncertaintybased review strategy outperforms the toxicity strategy across a variety of models and range of human review capacities, yielding a > 30% absolute in-crease in how efficiently the model uses human decisions and \u223c 0.01 and \u223c 0.05 absolute increases in the collaborative system's AUROC and AUPRC, respectively. This merits further study and consideration of this strategy's use in content moderation. The interaction between the data distribution and best review strategy demonstrated by the crossover between the two strategies' performance out-ofdomain) emphasizes the implicit trade-off between false positives and false negatives in the two review strategies: because toxicity is rare, prioritizing comments for review in order of toxicity reduces the false positive rate while potentially increasing the false negative rate. By comparison, the uncertaintybased review strategy treats false positives and negatives more evenly. Further study is needed to clarify this interaction. Our work shows that the choice of review strategy drastically changes the collaborative system performance: evaluating and striving to optimize only the model yields much smaller improvements than changing the review strategy, and misses major opportunities to improve the overall system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Though the results presented in the current paper are encouraging, there remain important challenges for uncertainty modeling in the domain of toxic content moderation. In particular, dataset bias remains a significant issue: statistical correlation between the annotated toxicity labels and various surface-level cues may lead models to learn to overly rely on e.g. lexical or dialectal patterns (Zhou et al., 2021) . This could cause the model to produce high-confidence mispredictions for comments containing these cues (e.g., reclaimed words or counter-speech), resulting in a degradation in calibration performance in the deployment environment (cf. Table 1) . Surprisingly, the standard debiasing techniques we experimented in this work (specifically, focal loss (Karimi Mahabadi et al., 2020)) only exacerbated this decline in calibration performance. This suggests that naively applying debiasing techniques may incur unexpected negative impacts on other aspects of the moderation system. Further research is needed into modeling approaches that can achieve robust performance both in prediction and in uncertainty calibration under data bias and distributional shift (Nam et al., 2020; Utama et al., 2020; Du et al., 2021; Yaghoobzadeh et al., 2021; Bao et al., 2021; Karimi Mahabadi et al., 2020) .",
"cite_spans": [
{
"start": 397,
"end": 416,
"text": "(Zhou et al., 2021)",
"ref_id": "BIBREF47"
},
{
"start": 1176,
"end": 1194,
"text": "(Nam et al., 2020;",
"ref_id": "BIBREF36"
},
{
"start": 1195,
"end": 1214,
"text": "Utama et al., 2020;",
"ref_id": "BIBREF42"
},
{
"start": 1215,
"end": 1231,
"text": "Du et al., 2021;",
"ref_id": "BIBREF16"
},
{
"start": 1232,
"end": 1258,
"text": "Yaghoobzadeh et al., 2021;",
"ref_id": "BIBREF45"
},
{
"start": 1259,
"end": 1276,
"text": "Bao et al., 2021;",
"ref_id": "BIBREF5"
},
{
"start": 1277,
"end": 1306,
"text": "Karimi Mahabadi et al., 2020)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 655,
"end": 663,
"text": "Table 1)",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "There exist several important directions for fu-ture work. One key direction is to develop better review strategies than the ones discussed here: though the uncertainty-based strategy outperforms the toxicity-based one, there may be room for further improvement. Furthermore, constraints on the moderation process may necessitate different review strategies: for example, if content can only be removed with moderator approval, we could experiment with a hybrid strategy which sends a mixture of high toxicity and high uncertainty content for human review. A second direction is to study how these methods perform with real moderators: the experiments in this work are computational and there may exist further challenges in practice. For example, the difficulty of rating a comment can depend on the text itself in unexpected ways. Finally, a linked question is how to communicate uncertainty and different review strategies to moderators: simpler communicable strategies may be preferable to more complex ones with better theoretical performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "For completeness, we include a definition of the expected calibration error (ECE) (Naeini et al., 2015) here. We use the ECE as a comparison for the uncertainty calibration performance alongside the Brier score in the tables in Appendix D.",
"cite_spans": [
{
"start": 82,
"end": 103,
"text": "(Naeini et al., 2015)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Expected Calibration Error",
"sec_num": null
},
{
"text": "ECE can be computed by discretizes the probability range [0, 1] into a set of B bins, and computes the weighted average of the difference between confidence (the mean probability within each bin) and the accuracy (the fraction of predictions within each bin that are correct),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Expected Calibration Error",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "ECE = B b=1 n b N |conf(b) \u2212 acc(b)|,",
"eq_num": "(3)"
}
],
"section": "A.1 Expected Calibration Error",
"sec_num": null
},
{
"text": "where acc(b) and conf(b) denote the accuracy and confidence for bin b, respectively, n b is the number of examples in bin b, and N = b n b is the total number of examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Expected Calibration Error",
"sec_num": null
},
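{
"text": "A direct implementation of Eq. (3) for a binary classifier, sketched with NumPy under the assumption of equal-width bins over the confidence max(p, 1 - p):\nimport numpy as np\n\ndef expected_calibration_error(y_true, y_prob, n_bins=15):\n    conf = np.maximum(y_prob, 1.0 - y_prob)  # confidence of predicted class\n    correct = ((y_prob > 0.5).astype(int) == y_true)\n    edges = np.linspace(0.0, 1.0, n_bins + 1)\n    ece, n = 0.0, len(y_true)\n    for lo, hi in zip(edges[:-1], edges[1:]):\n        in_bin = (conf > lo) & (conf <= hi)\n        n_b = in_bin.sum()\n        if n_b > 0:\n            # (n_b / N) * |conf(b) - acc(b)|, as in Eq. (3).\n            ece += (n_b / n) * abs(conf[in_bin].mean() - correct[in_bin].mean())\n    return ece",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Expected Calibration Error",
"sec_num": null
},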
{
"text": "As discussed in Section 3, Calibration AUPRC is an especially suitable metric for measuring model uncertainty in the context of collaborative content moderation, due to its close connection with the intrinsic metrics for the model's collaboration effectiveness. Specifically, the Review Efficiency metric (introduced in Section 4.1) can be understood as the analog of precision for the calibration task. To see this, recall the four confusion matrix variables introduced in Figure 1: (1) True Positive (TP) corresponds to the case where the prediction is inaccurate and the model is uncertain, (2) True Negative (TN) to the accurate and certain case, (3) False Negative (FN) to the inaccurate and certain case (i.e., over-confidence), and finally (4) False Positive (FP) to the accurate and uncertain case (i.e., under-confidence).",
"cite_spans": [],
"ref_spans": [
{
"start": 474,
"end": 483,
"text": "Figure 1:",
"ref_id": null
}
],
"eq_spans": [],
"section": "A.2 Connection between Calibration AUPRC and Collaboration Metrics",
"sec_num": null
},
{
"text": "Then, given a review capacity constraint \u03b1, we see that",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Connection between Calibration AUPRC and Collaboration Metrics",
"sec_num": null
},
{
"text": "ReviewEfficiency(\u03b1) = T P \u03b1 T P \u03b1 + F P \u03b1 ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Connection between Calibration AUPRC and Collaboration Metrics",
"sec_num": null
},
{
"text": "which measures the proportion of examples that were sent to human moderator that would otherwise be classified incorrectly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Connection between Calibration AUPRC and Collaboration Metrics",
"sec_num": null
},
{
"text": "Similarly, we can also define the analog of recall for the calibration task, which we term Review Effectiveness:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Connection between Calibration AUPRC and Collaboration Metrics",
"sec_num": null
},
{
"text": "ReviewEffectiveness(\u03b1) = T P \u03b1 T P \u03b1 + F N \u03b1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Connection between Calibration AUPRC and Collaboration Metrics",
"sec_num": null
},
{
"text": "Review Effectiveness is also a valid intrinsic metric for the model's collaboration effectivess. It measures the proportion of incorrect model predictions that were successfully corrected using the review strategy. (We visualize model performance in Review Effectiveness in Section D.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Connection between Calibration AUPRC and Collaboration Metrics",
"sec_num": null
},
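{
"text": "Under the same confusion-matrix reading, Review Effectiveness is the recall analog; a sketch complementing the review_efficiency helper sketched earlier:\nimport numpy as np\n\ndef review_effectiveness(y_true, y_pred, review_score, alpha):\n    q = np.quantile(review_score, 1.0 - alpha)\n    reviewed = review_score > q\n    wrong = (y_pred != y_true)\n    if wrong.sum() == 0:\n        return 0.0\n    # Fraction of incorrect predictions that are caught by the review.\n    return float(np.mean(reviewed[wrong]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Connection between Calibration AUPRC and Collaboration Metrics",
"sec_num": null
},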
{
"text": "To this end, the calibration AUPRC can be understood as the area under the Review Efficiency v.s. Review Effectiveness curve, with the usual classification threshold replaced by the review capacity \u03b1. Therefore, calibration AUPRC serves as a threshold-agnostic metric that captures the model's intrinsic performance in collaboration effectiveness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Connection between Calibration AUPRC and Collaboration Metrics",
"sec_num": null
},
{
"text": "For the uncertainty-based review, an important question is whether classic uncertainty metrics like Brier score capture good model-moderator collaborative efficiency. The SNGP Ensemble's good performance contrasts with its poorer Brier score (Table 1) . By comparison, the calibration AUPRC successfully captures this good performance, and is highest for that model. More generally, the low-review fraction review efficiency with cross-entropy is exactly captured by the calibration AUPRC (same ordering for the two measures). This correspondence is not perfect: though the SNGP Ensemble with focal loss has the highest review efficiency overall, its calibration AUPRC is lower than the MC Dropout or SNGP models (models with next highest review efficiencies). This may reflect the reshaping effect of focal loss on SNGP's calibration (explored in Appendix C). Overall, calibration AUPRC much better captures the relationship between collaborative ability and calibration than do classic calibration metrics like Brier score (or ECE, see Appendix D). This is because classic calibration metrics are population-level averages, whereas calibration AUPRC measures the ranking of the predictions, and is thus more closely linked to the review order problem.",
"cite_spans": [],
"ref_spans": [
{
"start": 242,
"end": 251,
"text": "(Table 1)",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "A.3 Further Discussion",
"sec_num": null
},
{
"text": "In this appendix, we derive Eq. (2) from the main paper, which connects the Review Efficiency and Oracle-Collaborative Accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Connecting Review Efficiency and Collaborative Accuracy",
"sec_num": null
},
{
"text": "Given a trained toxicity model, a review policy and a dataset, let us denote r as the event that an example gets reviewed, and c as the event that model prediction is correct. Now, assuming the model sends \u03b1 \u00d7 100% of examples for human review, we have:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Connecting Review Efficiency and Collaborative Accuracy",
"sec_num": null
},
{
"text": "Acc = P (c), \u03b1 = P (r).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Connecting Review Efficiency and Collaborative Accuracy",
"sec_num": null
},
{
"text": "Also, we can write:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Connecting Review Efficiency and Collaborative Accuracy",
"sec_num": null
},
{
"text": "RE(\u03b1) = P (\u00acc|r)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Connecting Review Efficiency and Collaborative Accuracy",
"sec_num": null
},
{
"text": "i.e., review efficiency RE(\u03b1) is the percentage of incorrect predictions among reviewed examples. Finally:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Connecting Review Efficiency and Collaborative Accuracy",
"sec_num": null
},
{
"text": "OC-Acc(\u03b1) = P (c \u2229 \u00acr) + P (c \u2229 r) + P (\u00acc \u2229 r)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Connecting Review Efficiency and Collaborative Accuracy",
"sec_num": null
},
{
"text": "i.e., an example is predicted correctly by the collaborative system if either the model prediction itself is accurate (c\u2229\u00acr), or it was sent for human review (c \u2229 r or \u00acc \u2229 r).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Connecting Review Efficiency and Collaborative Accuracy",
"sec_num": null
},
{
"text": "The above expression of OC-Acc leads to two different decompositions of the OC-Acc. First, OC-Acc(\u03b1) = P (c \u2229 \u00acr) + P (r) = P (c|\u00acr)P (\u00acr) + P (r)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Connecting Review Efficiency and Collaborative Accuracy",
"sec_num": null
},
{
"text": "= Acc(1 \u2212 \u03b1) * (1 \u2212 \u03b1) + \u03b1,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Connecting Review Efficiency and Collaborative Accuracy",
"sec_num": null
},
{
"text": "where Acc(1 \u2212 \u03b1) is the accuracy among the (1 \u2212 \u03b1) \u00d7 100% examples that are not sent to human for review. Alternatively, we can write OC-Acc(\u03b1) = P (c) + P (\u00acc \u2229 r) = P (c) + P (\u00acc|r)P (r) = Acc + RE(\u03b1) * \u03b1, which coincides with the expression in Eq. (2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Connecting Review Efficiency and Collaborative Accuracy",
"sec_num": null
},
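{
"text": "Both decompositions are easy to verify numerically. The following self-contained check (our own sketch on synthetic data, not code from the paper) simulates a model with 92% accuracy, reviews the top 5% of examples under an arbitrary priority score, and confirms the two identities:\nimport numpy as np\n\nrng = np.random.default_rng(0)\nn = 10000\ncorrect = rng.random(n) < 0.92      # event c: the model prediction is correct\nscore = rng.random(n)               # stand-in review-priority score\nreviewed = np.zeros(n, dtype=bool)\nreviewed[np.argsort(-score)[: n // 20]] = True  # review the top 5%\n\nalpha = reviewed.mean()                     # P(r)\nacc = correct.mean()                        # Acc = P(c)\nre_alpha = (~correct)[reviewed].mean()      # RE(alpha) = P(\u00acc|r)\nacc_unreviewed = correct[~reviewed].mean()  # Acc(1 - alpha) = P(c|\u00acr)\noc_acc = (correct | reviewed).mean()        # OC-Acc(alpha) = P(c \u222a r)\n\nassert np.isclose(oc_acc, acc_unreviewed * (1 - alpha) + alpha)\nassert np.isclose(oc_acc, acc + re_alpha * alpha)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Connecting Review Efficiency and Collaborative Accuracy",
"sec_num": null
},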
{
"text": "We study the effect of focal loss on calibration quality for SNGP in further detail. We plot the reliability diagrams for the deterministic and SNGP models trained with cross-entropy and focal crossentropy. Figure 5 shows the reliability diagrams in-domain and Figure 6 shows them out-of-domain. We see that focal loss fundamentally changes the models' uncertainty behavior, systematically shifting the uncertainty curves from overconfidence (the lower right, below the diagonal) and toward the calibration line (the diagonal). However, the exact pattern of change is model dependent. We find that the deterministic model with focal loss is over-confident for predictions under 0.5, and under-confident above 0.5, while the SNGP models are still over-confident, although to a lesser degree compared to using cross-entropy loss.",
"cite_spans": [],
"ref_spans": [
{
"start": 207,
"end": 215,
"text": "Figure 5",
"ref_id": "FIGREF2"
},
{
"start": 261,
"end": 269,
"text": "Figure 6",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "C Reliability Diagrams for Deterministic and SNGP models",
"sec_num": null
},
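{
"text": "For reference, a reliability diagram of the kind shown in Figures 5 and 6 can be produced with a simple binning procedure (a minimal sketch of ours; sklearn.calibration.calibration_curve implements the same idea): bin predictions by predicted probability, then compare each bin's mean prediction with its empirical positive rate; bins below the diagonal indicate overconfidence.\nimport numpy as np\n\ndef reliability_curve(labels, probs, n_bins=10):\n    # Illustrative sketch; equal-width bins over [0, 1].\n    edges = np.linspace(0.0, 1.0, n_bins + 1)\n    bin_ids = np.clip(np.digitize(probs, edges) - 1, 0, n_bins - 1)\n    mean_pred, frac_pos = [], []\n    for b in range(n_bins):\n        mask = bin_ids == b\n        if mask.any():\n            mean_pred.append(probs[mask].mean())  # x-axis: confidence\n            frac_pos.append(labels[mask].mean())  # y-axis: empirical accuracy\n    return np.array(mean_pred), np.array(frac_pos)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Reliability Diagrams for Deterministic and SNGP models",
"sec_num": null
},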
{
"text": "We give the results for the remaining collaborative metrics not included in the main paper in this appendix. These give a comprehensive summary of the collaborative performance of the models evaluated in the paper. Table 2 and Table 3 give values for all review fraction-independent metrics, both in-and out-of-domain, respectively. We did not include the ECE and calibration AUROC in the corresponding table in the main paper (Table 1) for simplicity. Similarly, Figures 9 and 7 show the in-domain results (the OC-Acc and OC-AUPRC), and the out-of-domain plots (in the same order, followed by Review Efficiency) are Figures 10 through 12. The in-and out-of-domain OC-AUROC figures are included in the main paper as Figure 2 and Figure 3, respectively; the in-domain Review Efficiency is Figure 4 . Additionally, we also report results on the Review Effectiveness metric (introduced in Section A.2) in Figures 13-14 . Similiar to Review Efficiency, we find little difference in performance between different uncertainty models, and that the uncertainty-based policy outperforms toxicity-based policy especially in the low review capacity setting. The uncertainty-based review strategy uniformly outperforms toxicity-based review, though the difference is small when training with focal loss. Figure 9 : Oracle-model collaborative accuracy as a function of review fraction, trained with cross-entropy (left) or focal loss (right) and evaluated on Wikipedia Toxicity corpus (in-domain test environment). Solid Line: uncertainty-based strategy. Dashed Line: toxicity-based strategy. Focal loss yields a significant improvement, equivalent to using a 10% review fraction with cross-entropy. For most review fractions (below \u03b1 = 0.1), MC Dropout using the uncertainty review strategy performs trained with cross-entropy, while overall the Deep Ensemble with focal loss (again using the uncertainty review) performs best. For large review fractions (\u03b1 > 0.1), the toxicity-based review in fact outperforms the uncertainty review. Figure 10 : Oracle-model collaborative accuracy as a function of review fraction, trained with cross-entropy (left) or focal loss (right) and evaluated on CivilComments corpus (out-of-domain deployment environment). Solid Line: uncertainty-based strategy. Dashed Line: toxicity-based strategy. Training with cross-entropy, MC Dropout using uncertainty-based review performs best until the SNGP Ensemble using the toxicity-based review overtakes it at \u03b1 = 0.05. Training with focal loss gives significant baseline improvements (by mitigating the class imbalance problem); the Deep Ensemble is best for small \u03b1 while the SNGP Ensemble is best for large \u03b1. Despite these baseline improvements, they appear to come at a cost of collaborative accuracy in the intermediate region around \u03b1 \u2248 0.05, where the SNGP Ensemble trained with cross-entropy briefly performs best overall, apart from that region the models with focal loss and the uncertainty-based review perform best (Deep Ensemble for \u03b1 \u2264 0.02, SNGP Ensemble for \u03b1 \u2265 0.1). Figure 11 : Review efficiency as a function of review fraction, trained with cross-entropy (left) or focal loss (right) and evaluated on Wikipedia Toxicity corpus (in-domain test environment). Solid Line: uncertainty-based strategy. Dashed Line: toxicity-based strategy. 
This is the only plot for which we observe a major crossover: training with cross-entropy, the efficiency for toxicity-based review spikes above the uncertainty-based review efficiency at \u03b1 = 0.02 before converging back toward it with increasing \u03b1. There is no corresponding crossover when training with focal loss; rather, the efficiencies of the two strategies converge at \u03b1 = 0.02 instead. Figure 12 : Review efficiency as a function of review fraction, trained with cross-entropy (left) or focal loss (right) and evaluated on CivilComments corpus (out-of-domain deployment environment). Solid Line: uncertainty-based strategy. Dashed Line: toxicity-based strategy. This is the only plot for which we observe a major crossover: training with cross-entropy, the efficiency for toxicity-based review spikes above the uncertainty-based review efficiency at \u03b1 = 0.02 before converging back toward it with increasing \u03b1. There is no corresponding crossover when training with focal loss; rather, the efficiencies of the two strategies converge at \u03b1 = 0.02 instead.",
"cite_spans": [],
"ref_spans": [
{
"start": 215,
"end": 222,
"text": "Table 2",
"ref_id": "TABREF6"
},
{
"start": 227,
"end": 234,
"text": "Table 3",
"ref_id": "TABREF7"
},
{
"start": 427,
"end": 436,
"text": "(Table 1)",
"ref_id": "TABREF2"
},
{
"start": 464,
"end": 479,
"text": "Figures 9 and 7",
"ref_id": null
},
{
"start": 716,
"end": 724,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 788,
"end": 796,
"text": "Figure 4",
"ref_id": "FIGREF1"
},
{
"start": 902,
"end": 915,
"text": "Figures 13-14",
"ref_id": "FIGREF1"
},
{
"start": 1292,
"end": 1300,
"text": "Figure 9",
"ref_id": null
},
{
"start": 2024,
"end": 2033,
"text": "Figure 10",
"ref_id": null
},
{
"start": 3050,
"end": 3059,
"text": "Figure 11",
"ref_id": null
},
{
"start": 3714,
"end": 3723,
"text": "Figure 12",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "D Complete metric results",
"sec_num": null
}
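,
{
"text": "For completeness, the two review strategies compared throughout this appendix can be summarized in a few lines (a simplified sketch of ours, not the evaluation code used for the experiments). The toxicity-based strategy reviews the highest-scoring comments first; the uncertainty-based strategy reviews the comments whose scores are closest to the decision threshold; reviewed examples are assumed to receive the correct label from the oracle.\nimport numpy as np\n\ndef oc_accuracy(labels, probs, alpha, strategy):\n    # Illustrative sketch of collaborative accuracy under review.\n    preds = probs >= 0.5\n    if strategy == 'toxicity':\n        priority = probs                  # most toxic first\n    else:\n        priority = -np.abs(probs - 0.5)   # most uncertain first\n    n = len(labels)\n    reviewed = np.zeros(n, dtype=bool)\n    reviewed[np.argsort(-priority)[: int(np.ceil(alpha * n))]] = True\n    correct = preds == labels\n    # Reviewed examples are corrected by the oracle, hence the union.\n    return (correct | reviewed).mean()    # OC-Acc(alpha)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D Complete metric results",
"sec_num": null
}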
],
"back_matter": [
{
"text": "The authors would like to thank Jeffrey Sorensen for extensive feedback on the manuscript, and Nitesh Goyal, Aditya Gupta, Luheng He, Balaji Lakshminarayanan, Alyssa Lees, and Jie Ren for helpful comments and discussions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": " Figure 14 : Review effectiveness as a function of review fraction, trained with cross-entropy (left) or focal loss (right) and evaluated on CivilComments corpus (out-of-domain deployment environment). Solid Line: uncertaintybased strategy. Dashed Line: toxicity-based strategy. Here, the uncertainty review performs better until a crossover at \u03b1 \u2248 0.02, much lower than in Figure 4 . The SNGP Ensemble performs best with either cross-entropy or focal loss (slightly better with cross-entropy).",
"cite_spans": [],
"ref_spans": [
{
"start": 1,
"end": 10,
"text": "Figure 14",
"ref_id": null
},
{
"start": 374,
"end": 382,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Fairness and robustness in invariant learning: A case study in toxicity classification",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Adragna",
"suffix": ""
},
{
"first": "Elliot",
"middle": [],
"last": "Creager",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Madras",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zemel",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2011.06485"
]
},
"num": null,
"urls": [],
"raw_text": "Robert Adragna, Elliot Creager, David Madras, and Richard Zemel. 2020. Fairness and robustness in invariant learning: A case study in toxicity classi- fication. arXiv preprint arXiv:2011.06485.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Challenges for toxic comment classification: An in-depth error analysis",
"authors": [
{
"first": "Julian",
"middle": [],
"last": "Betty Van Aken",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Risch",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Krestel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "L\u00f6ser",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd Workshop on Abusive Language Online (ALW2)",
"volume": "",
"issue": "",
"pages": "33--42",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5105"
]
},
"num": null,
"urls": [],
"raw_text": "Betty van Aken, Julian Risch, Ralf Krestel, and Alexan- der L\u00f6ser. 2018. Challenges for toxic comment clas- sification: An in-depth error analysis. In Proceed- ings of the 2nd Workshop on Abusive Language On- line (ALW2), pages 33-42, Brussels, Belgium. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Concrete problems in ai safety",
"authors": [
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Olah",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Steinhardt",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Christiano",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Schulman",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Man\u00e9",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.06565"
]
},
"num": null,
"urls": [],
"raw_text": "Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Man\u00e9. 2016. Concrete problems in ai safety. arXiv preprint arXiv:1606.06565.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Is the most accurate ai the best teammate? optimizing ai for teamwork",
"authors": [
{
"first": "Gagan",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Besmira",
"middle": [],
"last": "Nushi",
"suffix": ""
},
{
"first": "Ece",
"middle": [],
"last": "Kamar",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Horvitz",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "35",
"issue": "",
"pages": "11405--11414",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gagan Bansal, Besmira Nushi, Ece Kamar, Eric Horvitz, and Daniel S. Weld. 2021. Is the most ac- curate ai the best teammate? optimizing ai for team- work. Proceedings of the AAAI Conference on Arti- ficial Intelligence, 35(13):11405-11414.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Predict then interpolate: A simple algorithm to learn stable classifiers",
"authors": [
{
"first": "Yujia",
"middle": [],
"last": "Bao",
"suffix": ""
},
{
"first": "Shiyu",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2021,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yujia Bao, Shiyu Chang, and Regina Barzilay. 2021. Predict then interpolate: A simple algorithm to learn stable classifiers. In International Conference on Machine Learning. PMLR.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Classification with a reject option using a hinge loss",
"authors": [
{
"first": "Peter",
"middle": [
"L"
],
"last": "Bartlett",
"suffix": ""
},
{
"first": "Marten",
"middle": [
"H"
],
"last": "Wegkamp",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of Machine Learning Research",
"volume": "9",
"issue": "59",
"pages": "1823--1840",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter L. Bartlett and Marten H. Wegkamp. 2008. Clas- sification with a reject option using a hinge loss. Journal of Machine Learning Research, 9(59):1823- 1840.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Nuanced metrics for measuring unintended bias with real data for text classification",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Borkan",
"suffix": ""
},
{
"first": "Lucas",
"middle": [],
"last": "Dixon",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Sorensen",
"suffix": ""
},
{
"first": "Nithum",
"middle": [],
"last": "Thain",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vasserman",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Borkan, Lucas Dixon, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2019. Nuanced metrics for measuring unintended bias with real data for text classification. CoRR, abs/1903.04561.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Anyone can become a troll: Causes of trolling behavior in online discussions",
"authors": [
{
"first": "Justin",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Bernstein",
"suffix": ""
},
{
"first": "Cristian",
"middle": [],
"last": "Danescu-Niculescu-Mizil",
"suffix": ""
},
{
"first": "Jure",
"middle": [],
"last": "Leskovec",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, CSCW '17",
"volume": "",
"issue": "",
"pages": "1217--1230",
"other_ids": {
"DOI": [
"10.1145/2998181.2998213"
]
},
"num": null,
"urls": [],
"raw_text": "Justin Cheng, Michael Bernstein, Cristian Danescu- Niculescu-Mizil, and Jure Leskovec. 2017. Any- one can become a troll: Causes of trolling behavior in online discussions. In Proceedings of the 2017 ACM Conference on Computer Supported Coopera- tive Work and Social Computing, CSCW '17, page 1217-1230, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Antisocial behavior in online discussion communities",
"authors": [
{
"first": "Justin",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Cristian",
"middle": [],
"last": "Danescu-Niculescu-Mizil",
"suffix": ""
},
{
"first": "Jure",
"middle": [],
"last": "Leskovec",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the International AAAI Conference on Web and Social Media",
"volume": "9",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Justin Cheng, Cristian Danescu-Niculescu-Mizil, and Jure Leskovec. 2015. Antisocial behavior in online discussion communities. Proceedings of the Interna- tional AAAI Conference on Web and Social Media, 9(1).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Online learning with abstention",
"authors": [
{
"first": "Corinna",
"middle": [],
"last": "Cortes",
"suffix": ""
},
{
"first": "Giulia",
"middle": [],
"last": "Desalvo",
"suffix": ""
},
{
"first": "Claudio",
"middle": [],
"last": "Gentile",
"suffix": ""
},
{
"first": "Mehryar",
"middle": [],
"last": "Mohri",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 35th International Conference on Machine Learning",
"volume": "80",
"issue": "",
"pages": "1059--1067",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Corinna Cortes, Giulia DeSalvo, Claudio Gentile, Mehryar Mohri, and Scott Yang. 2018. Online learning with abstention. In Proceedings of the 35th International Conference on Machine Learn- ing, volume 80 of Proceedings of Machine Learn- ing Research, pages 1059-1067, Stockholmsm\u00e4ssan, Stockholm Sweden. PMLR.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Learning with rejection",
"authors": [
{
"first": "Corinna",
"middle": [],
"last": "Cortes",
"suffix": ""
},
{
"first": "Giulia",
"middle": [],
"last": "Desalvo",
"suffix": ""
},
{
"first": "Mehryar",
"middle": [],
"last": "Mohri",
"suffix": ""
}
],
"year": 2016,
"venue": "ALT 2016, Proceedings, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
"volume": "",
"issue": "",
"pages": "19--29",
"other_ids": {
"DOI": [
"10.1007/978-3-319-46379-7_5"
]
},
"num": null,
"urls": [],
"raw_text": "Corinna Cortes, Giulia DeSalvo, and Mehryar Mohri. 2016. Learning with rejection. In Algorithmic Learning Theory -27th International Conference, ALT 2016, Proceedings, Lecture Notes in Computer Science (including subseries Lecture Notes in Artifi- cial Intelligence and Lecture Notes in Bioinformat- ics), pages 67-82. Springer Verlag. 27th Interna- tional Conference on Algorithmic Learning Theory, ALT 2016 ; Conference date: 19-10-2016 Through 21-10-2016.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The relationship between precision-recall and roc curves",
"authors": [
{
"first": "Jesse",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Goadrich",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 23rd International Conference on Machine Learning, ICML '06",
"volume": "",
"issue": "",
"pages": "233--240",
"other_ids": {
"DOI": [
"10.1145/1143844.1143874"
]
},
"num": null,
"urls": [],
"raw_text": "Jesse Davis and Mark Goadrich. 2006. The relation- ship between precision-recall and roc curves. In Proceedings of the 23rd International Conference on Machine Learning, ICML '06, page 233-240, New York, NY, USA. Association for Computing Machin- ery.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Modeling the detection of textual cyberbullying",
"authors": [
{
"first": "Karthik",
"middle": [],
"last": "Dinakar",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Lieberman",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the International AAAI Conference on Web and Social Media",
"volume": "5",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karthik Dinakar, Roi Reichart, and Henry Lieberman. 2011. Modeling the detection of textual cyberbully- ing. Proceedings of the International AAAI Confer- ence on Web and Social Media, 5(1).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Measuring and mitigating unintended bias in text classification",
"authors": [
{
"first": "Lucas",
"middle": [],
"last": "Dixon",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Sorensen",
"suffix": ""
},
{
"first": "Nithum",
"middle": [],
"last": "Thain",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vasserman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, AIES '18",
"volume": "",
"issue": "",
"pages": "67--73",
"other_ids": {
"DOI": [
"10.1145/3278721.3278729"
]
},
"num": null,
"urls": [],
"raw_text": "Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2018. Measuring and mitigat- ing unintended bias in text classification. In Pro- ceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, AIES '18, page 67-73, New York, NY, USA. Association for Computing Machin- ery.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Towards interpreting and mitigating shortcut learning behavior of NLU models",
"authors": [
{
"first": "Mengnan",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Varun",
"middle": [],
"last": "Manjunatha",
"suffix": ""
},
{
"first": "Rajiv",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Ruchi",
"middle": [],
"last": "Deshpande",
"suffix": ""
},
{
"first": "Franck",
"middle": [],
"last": "Dernoncourt",
"suffix": ""
},
{
"first": "Jiuxiang",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Xia",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mengnan Du, Varun Manjunatha, Rajiv Jain, Ruchi Deshpande, Franck Dernoncourt, Jiuxiang Gu, Tong Sun, and Xia Hu. 2021. Towards interpreting and mitigating shortcut learning behavior of NLU mod- els. CoRR, abs/2103.06922.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Analyzing the role of model uncertainty for electronic health records",
"authors": [
{
"first": "Michael",
"middle": [
"W"
],
"last": "Dusenberry",
"suffix": ""
},
{
"first": "Dustin",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Jonas",
"middle": [],
"last": "Kemp",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Nixon",
"suffix": ""
},
{
"first": "Ghassen",
"middle": [],
"last": "Jerfel",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Heller",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"M"
],
"last": "Dai",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the ACM Conference on Health, Inference, and Learning, CHIL '20",
"volume": "",
"issue": "",
"pages": "204--213",
"other_ids": {
"DOI": [
"10.1145/3368555.3384457"
]
},
"num": null,
"urls": [],
"raw_text": "Michael W. Dusenberry, Dustin Tran, Edward Choi, Jonas Kemp, Jeremy Nixon, Ghassen Jerfel, Kather- ine Heller, and Andrew M. Dai. 2020. Analyzing the role of model uncertainty for electronic health records. In Proceedings of the ACM Conference on Health, Inference, and Learning, CHIL '20, pages 204-213, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The times sharply increases articles open for comments, using google's technology. The New York Times",
"authors": [
{
"first": "Bassey",
"middle": [],
"last": "Etim",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bassey Etim. 2017. The times sharply increases arti- cles open for comments, using google's technology. The New York Times, 13.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning",
"authors": [
{
"first": "Yarin",
"middle": [],
"last": "Gal",
"suffix": ""
},
{
"first": "Zoubin",
"middle": [],
"last": "Ghahramani",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of The 33rd International Conference on Machine Learning",
"volume": "48",
"issue": "",
"pages": "1050--1059",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a bayesian approximation: Representing model un- certainty in deep learning. In Proceedings of The 33rd International Conference on Machine Learn- ing, volume 48 of Proceedings of Machine Learning Research, pages 1050-1059, New York, New York, USA. PMLR.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Deep Learning",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep Learning. MIT Press. http://www. deeplearningbook.org.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "On calibration of modern neural networks",
"authors": [
{
"first": "Chuan",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Geoff",
"middle": [],
"last": "Pleiss",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Kilian",
"middle": [
"Q"
],
"last": "Weinberger",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "1321--1330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Wein- berger. 2017. On calibration of modern neural net- works. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1321-1330. PMLR.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A baseline for detecting misclassified and out-of-distribution examples in neural networks",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Hendrycks",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
}
],
"year": 2017,
"venue": "5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Hendrycks and Kevin Gimpel. 2017. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Confer- ence Track Proceedings. OpenReview.net.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Identifying Causal-Effect Inference Failure with Uncertainty-Aware Models",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Jesson",
"suffix": ""
},
{
"first": "S\u00f6ren",
"middle": [],
"last": "Mindermann",
"suffix": ""
},
{
"first": "Uri",
"middle": [],
"last": "Shalit",
"suffix": ""
},
{
"first": "Yarin",
"middle": [],
"last": "Gal",
"suffix": ""
}
],
"year": 2020,
"venue": "Advances in Neural Information Processing Systems",
"volume": "33",
"issue": "",
"pages": "11637--11649",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Jesson, S\u00f6ren Mindermann, Uri Shalit, and Yarin Gal. 2020. Identifying Causal-Effect Infer- ence Failure with Uncertainty-Aware Models. In Advances in Neural Information Processing Systems, volume 33, pages 11637-11649. Curran Associates, Inc.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "How latin america's second largest social platform moderates more than 150k comments a month",
"authors": [
{
"first": "",
"middle": [],
"last": "Jigsaw",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "2021--2025",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jigsaw. 2019. How latin america's second largest so- cial platform moderates more than 150k comments a month. https://medium.com/jigsaw/how- latin-americas-second-largest-social- platform-moderates-more-than-150k- comments-a-month-df0d8a3ac242. Accessed: 2021-04-26.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "End-to-end bias mitigation by modelling biases in corpora",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Rabeeh Karimi Mahabadi",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Henderson",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8706--8716",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.769"
]
},
"num": null,
"urls": [],
"raw_text": "Rabeeh Karimi Mahabadi, Yonatan Belinkov, and James Henderson. 2020. End-to-end bias mitiga- tion by modelling biases in corpora. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8706-8716, On- line. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Wilds: A benchmark of in-the-wild distribution shifts",
"authors": [
{
"first": "Pang",
"middle": [],
"last": "Wei Koh",
"suffix": ""
},
{
"first": "Shiori",
"middle": [],
"last": "Sagawa",
"suffix": ""
},
{
"first": "Henrik",
"middle": [],
"last": "Marklund",
"suffix": ""
},
{
"first": "Sang",
"middle": [
"Michael"
],
"last": "Xie",
"suffix": ""
},
{
"first": "Marvin",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Akshay",
"middle": [],
"last": "Balsubramani",
"suffix": ""
},
{
"first": "Weihua",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Michihiro",
"middle": [],
"last": "Yasunaga",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2012.07421"
]
},
"num": null,
"urls": [],
"raw_text": "Pang Wei Koh, Shiori Sagawa, Henrik Mark- lund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Sara Beery, et al. 2020. Wilds: A benchmark of in-the-wild distribution shifts. arXiv preprint arXiv:2012.07421.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Second opinion needed: communicating uncertainty in medical machine learning",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Kompa",
"suffix": ""
},
{
"first": "Jasper",
"middle": [],
"last": "Snoek",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"L"
],
"last": "Beam",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "4",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1038/s41746-020-00367-3"
]
},
"num": null,
"urls": [],
"raw_text": "Benjamin Kompa, Jasper Snoek, and Andrew L. Beam. 2021. Second opinion needed: communicating un- certainty in medical machine learning. npj Digital Medicine, 4(1):4.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Improving model calibration with accuracy versus uncertainty optimization",
"authors": [
{
"first": "Ranganath",
"middle": [],
"last": "Krishnan",
"suffix": ""
},
{
"first": "Omesh",
"middle": [],
"last": "Tickoo",
"suffix": ""
}
],
"year": 2020,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ranganath Krishnan and Omesh Tickoo. 2020. Im- proving model calibration with accuracy versus un- certainty optimization. Advances in Neural Informa- tion Processing Systems.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Simple and scalable predictive uncertainty estimation using deep ensembles",
"authors": [
{
"first": "Balaji",
"middle": [],
"last": "Lakshminarayanan",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Pritzel",
"suffix": ""
},
{
"first": "Charles",
"middle": [],
"last": "Blundell",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. 2017. Simple and scalable predic- tive uncertainty estimation using deep ensembles. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Focal loss for dense object detection",
"authors": [
{
"first": "Tsung-Yi",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Priya",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Ross",
"middle": [],
"last": "Girshick",
"suffix": ""
},
{
"first": "Kaiming",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Dollar",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IEEE International Conference on Computer Vision (ICCV)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollar. 2017. Focal loss for dense ob- ject detection. In Proceedings of the IEEE Interna- tional Conference on Computer Vision (ICCV).",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Simple and principled uncertainty estimation with deterministic deep learning via distance awareness",
"authors": [
{
"first": "Jeremiah",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Zi",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Shreyas",
"middle": [],
"last": "Padhy",
"suffix": ""
},
{
"first": "Dustin",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Tania",
"middle": [
"Bedrax"
],
"last": "Weiss",
"suffix": ""
},
{
"first": "Balaji",
"middle": [],
"last": "Lakshminarayanan",
"suffix": ""
}
],
"year": 2020,
"venue": "Advances in Neural Information Processing Systems",
"volume": "33",
"issue": "",
"pages": "7498--7512",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeremiah Liu, Zi Lin, Shreyas Padhy, Dustin Tran, Tania Bedrax Weiss, and Balaji Lakshminarayanan. 2020. Simple and principled uncertainty estimation with deterministic deep learning via distance aware- ness. In Advances in Neural Information Processing Systems, volume 33, pages 7498-7512. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Accurate uncertainty estimation and decomposition in ensemble learning",
"authors": [
{
"first": "Jeremiah",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Paisley",
"suffix": ""
},
{
"first": "Marianthi-Anna",
"middle": [],
"last": "Kioumourtzoglou",
"suffix": ""
},
{
"first": "Brent",
"middle": [],
"last": "Coull",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeremiah Liu, John Paisley, Marianthi-Anna Kioumourtzoglou, and Brent Coull. 2019. Ac- curate uncertainty estimation and decomposition in ensemble learning. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Intrinsic versus extrinsic evaluations of parsing systems",
"authors": [
{
"first": "Diego",
"middle": [],
"last": "Moll\u00e1",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Hutchinson",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the EACL 2003 Workshop on Evaluation Initiatives in Natural Language Processing: Are Evaluation Methods, Metrics and Resources Reusable?, Evalinitiatives '03",
"volume": "",
"issue": "",
"pages": "43--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diego Moll\u00e1 and Ben Hutchinson. 2003. Intrinsic ver- sus extrinsic evaluations of parsing systems. In Proceedings of the EACL 2003 Workshop on Eval- uation Initiatives in Natural Language Processing: Are Evaluation Methods, Metrics and Resources Reusable?, Evalinitiatives '03, page 43-50, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Calibrating deep neural networks using focal loss",
"authors": [
{
"first": "Jishnu",
"middle": [],
"last": "Mukhoti",
"suffix": ""
},
{
"first": "Viveka",
"middle": [],
"last": "Kulharia",
"suffix": ""
},
{
"first": "Amartya",
"middle": [],
"last": "Sanyal",
"suffix": ""
},
{
"first": "Stuart",
"middle": [],
"last": "Golodetz",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Torr",
"suffix": ""
},
{
"first": "Puneet",
"middle": [],
"last": "Dokania",
"suffix": ""
}
],
"year": 2020,
"venue": "Advances in Neural Information Processing Systems",
"volume": "33",
"issue": "",
"pages": "15288--15299",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jishnu Mukhoti, Viveka Kulharia, Amartya Sanyal, Stu- art Golodetz, Philip Torr, and Puneet Dokania. 2020. Calibrating deep neural networks using focal loss. In Advances in Neural Information Processing Sys- tems, volume 33, pages 15288-15299. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Obtaining well calibrated probabilities using bayesian binning",
"authors": [
{
"first": "Mahdi",
"middle": [],
"last": "Pakdaman Naeini",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"F"
],
"last": "Cooper",
"suffix": ""
},
{
"first": "Milos",
"middle": [],
"last": "Hauskrecht",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, AAAI'15",
"volume": "",
"issue": "",
"pages": "2901--2907",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mahdi Pakdaman Naeini, Gregory F. Cooper, and Mi- los Hauskrecht. 2015. Obtaining well calibrated probabilities using bayesian binning. In Proceed- ings of the Twenty-Ninth AAAI Conference on Artifi- cial Intelligence, AAAI'15, page 2901-2907. AAAI Press.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Learning from failure: De-biasing classifier from biased classifier",
"authors": [
{
"first": "Junhyun",
"middle": [],
"last": "Nam",
"suffix": ""
},
{
"first": "Hyuntak",
"middle": [],
"last": "Cha",
"suffix": ""
},
{
"first": "Sungsoo",
"middle": [],
"last": "Ahn",
"suffix": ""
},
{
"first": "Jaeho",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Jinwoo",
"middle": [],
"last": "Shin",
"suffix": ""
}
],
"year": 2020,
"venue": "Advances in Neural Information Processing Systems",
"volume": "33",
"issue": "",
"pages": "20673--20684",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junhyun Nam, Hyuntak Cha, Sungsoo Ahn, Jaeho Lee, and Jinwoo Shin. 2020. Learning from fail- ure: De-biasing classifier from biased classifier. In Advances in Neural Information Processing Systems, volume 33, pages 20673-20684. Curran Associates, Inc.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Can you trust your model's uncertainty? evaluating predictive uncertainty under dataset shift",
"authors": [
{
"first": "Yaniv",
"middle": [],
"last": "Ovadia",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Fertig",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Nado",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Sculley",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Nowozin",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Dillon",
"suffix": ""
},
{
"first": "Balaji",
"middle": [],
"last": "Lakshminarayanan",
"suffix": ""
},
{
"first": "Jasper",
"middle": [],
"last": "Snoek",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, D. Sculley, Sebastian Nowozin, Joshua Dillon, Bal- aji Lakshminarayanan, and Jasper Snoek. 2019. Can you trust your model's uncertainty? evaluating predictive uncertainty under dataset shift. In Ad- vances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "The Future of Free Speech, Trolls, Anonymity and Fake News Online",
"authors": [
{
"first": "Lee",
"middle": [],
"last": "Rainie",
"suffix": ""
},
{
"first": "Janna",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Albright",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lee Rainie, Janna Anderson, and Jonathan Albright. 2017. The Future of Free Speech, Trolls, Anonymity and Fake News Online.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "To apply machine learning responsibly, we use it in moderation",
"authors": [
{
"first": "Matthew",
"middle": [
"J"
],
"last": "Salganik",
"suffix": ""
},
{
"first": "Robin",
"middle": [
"C"
],
"last": "Lee",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew J. Salganik and Robin C. Lee. 2020. To apply machine learning responsibly, we use it in moderation. https://open.nytimes.com/to- apply-machine-learning-responsibly- we-use-it-in-moderation-d001f49e0644/.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Introduction to uncertainty quantification",
"authors": [
{
"first": "T",
"middle": [
"J"
],
"last": "Sullivan",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1007/978-3-319-23395-6"
]
},
"num": null,
"urls": [],
"raw_text": "T. J. Sullivan. 2015. Introduction to uncertainty quan- tification. Springer.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Towards debiasing NLU models from unknown biases",
"authors": [
{
"first": "Nafise Sadat",
"middle": [],
"last": "Prasetya Ajie Utama",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Moosavi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "7597--7610",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.613"
]
},
"num": null,
"urls": [],
"raw_text": "Prasetya Ajie Utama, Nafise Sadat Moosavi, and Iryna Gurevych. 2020. Towards debiasing NLU models from unknown biases. In Proceedings of the 2020 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 7597-7610, On- line. Association for Computational Linguistics.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Identifying spurious correlations for robust text classification",
"authors": [
{
"first": "Zhao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Aron",
"middle": [],
"last": "Culotta",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "3431--3440",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.308"
]
},
"num": null,
"urls": [],
"raw_text": "Zhao Wang and Aron Culotta. 2020. Identifying spu- rious correlations for robust text classification. In Findings of the Association for Computational Lin- guistics: EMNLP 2020, pages 3431-3440, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Ex machina: Personal attacks seen at scale",
"authors": [
{
"first": "Ellery",
"middle": [],
"last": "Wulczyn",
"suffix": ""
},
{
"first": "Nithum",
"middle": [],
"last": "Thain",
"suffix": ""
},
{
"first": "Lucas",
"middle": [],
"last": "Dixon",
"suffix": ""
}
],
"year": 2017,
"venue": "Republic and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee",
"volume": "",
"issue": "",
"pages": "1391--1399",
"other_ids": {
"DOI": [
"10.1145/3038912.3052591"
]
},
"num": null,
"urls": [],
"raw_text": "Ellery Wulczyn, Nithum Thain, and Lucas Dixon. 2017. Ex machina: Personal attacks seen at scale. In Pro- ceedings of the 26th International Conference on World Wide Web, WWW '17, pages 1391-1399, Re- public and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Increasing robustness to spurious correlations using forgettable examples",
"authors": [
{
"first": "Yadollah",
"middle": [],
"last": "Yaghoobzadeh",
"suffix": ""
},
{
"first": "Soroush",
"middle": [],
"last": "Mehri",
"suffix": ""
},
{
"first": "Remi",
"middle": [],
"last": "Tachet Des Combes",
"suffix": ""
},
{
"first": "T",
"middle": [
"J"
],
"last": "Hazen",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Sordoni",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "3319--3332",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yadollah Yaghoobzadeh, Soroush Mehri, Remi Ta- chet des Combes, T. J. Hazen, and Alessandro Sor- doni. 2021. Increasing robustness to spurious cor- relations using forgettable examples. In Proceed- ings of the 16th Conference of the European Chap- ter of the Association for Computational Linguistics: Main Volume, pages 3319-3332, Online. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Detection of harassment on web 2.0. Proceedings of the Content Analysis in the WEB",
"authors": [
{
"first": "Dawei",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Zhenzhen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Liangjie",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "Brian",
"middle": [
"D"
],
"last": "Davison",
"suffix": ""
},
{
"first": "April",
"middle": [],
"last": "Kontostathis",
"suffix": ""
},
{
"first": "Lynne",
"middle": [],
"last": "Edwards",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "2",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dawei Yin, Zhenzhen Xue, Liangjie Hong, Brian D Davison, April Kontostathis, and Lynne Edwards. 2009. Detection of harassment on web 2.0. Pro- ceedings of the Content Analysis in the WEB, 2:1-7.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Challenges in automated debiasing for toxic language detection",
"authors": [
{
"first": "Xuhui",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "3143--3155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuhui Zhou, Maarten Sap, Swabha Swayamdipta, Yejin Choi, and Noah Smith. 2021. Challenges in au- tomated debiasing for toxic language detection. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Lin- guistics: Main Volume, pages 3143-3155, Online. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "Semilog plot of oracle-model collaborative AUROC as a function of review fraction (the proportion of comments the model can send for human/oracle review), trained with cross-entropy (XENT, left) or focal loss (right) and evaluated on the Wikipedia Talk corpus (i.e., the in-domain testing environment). Solid line: uncertainty-based review strategy. Dashed line: toxicity-based review strategy. The best performing method is the SNGP Ensemble trained with focal loss and uses the uncertainty-based strategy.",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "Semilog plot of review efficiency as a function of review fraction, trained with cross-entropy and evaluated on the Wikipedia Talk corpus (i.e., the in-domain testing environment, left) and CivilComments (i.e., the out-of-domain deployment environment, right). Solid line: uncertainty-based review strategy. Dashed line: toxicity-based review strategy.",
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"text": "In-domain reliability diagrams for deterministic models and SNGP models with cross-entropy (XENT) and focal cross-entropy.",
"uris": null
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"text": "Reliability diagrams for deterministic models and SNGP models with cross-entropy (XENT) and focal cross-entropy on the CivilComments dataset.",
"uris": null
},
"FIGREF4": {
"type_str": "figure",
"num": null,
"text": "Oracle-model collaborative AUPRC as a function of review fraction, trained with cross-entropy (left) or focal loss (right) and evaluated on CivilComments corpus (out-of-domain deployment environment). Solid Line: uncertainty-based strategy. Dashed Line: toxicity-based strategy. Similar to the out-of-domain OC-AUROC results inFigure 3, of the models trained with cross-entropy loss the Deep Ensemble performs best. Training with focal loss yields a small baseline improvement, but surprisingly results in the SNGP Ensemble performing best.",
"uris": null
},
"TABREF1": {
"type_str": "table",
"num": null,
"content": "<table><tr><td/><td/><td/><td colspan=\"3\">TESTING ENV (WIKIPEDIA TALK)</td><td/><td/><td colspan=\"4\">DEPLOYMENT ENV (CIVILCOMMENTS)</td></tr><tr><td colspan=\"3\">MODEL DETERMINISTIC SNGP MC DROPOUT AUROC \u2191 XENT 0.9734 0.9741 0.9729 DEEP ENSEMBLE 0.9738</td><td>0.8019 0.8029 0.8006 0.8074</td><td colspan=\"2\">0.9231 0.9233 0.9274 0.0508 0.0548 0.0548 0.9231 0.0544</td><td>0.4053 0.4063 0.4020 0.4045</td><td>0.7796 0.7695 0.7806 0.7849</td><td>0.6689 0.6665 0.6727 0.6741</td><td colspan=\"2\">0.9628 0.9640 0.9671 0.0241 0.0246 0.0253 0.9625 0.0242</td><td>0.3581 0.3660 0.3707 0.3484</td></tr><tr><td/><td>SNGP ENSEMBLE</td><td>0.9741</td><td>0.8045</td><td>0.9226</td><td>0.0549</td><td>0.4158</td><td>0.7749</td><td>0.6719</td><td>0.9633</td><td>0.0248</td><td>0.3655</td></tr><tr><td/><td>DETERMINISTIC</td><td>0.9730</td><td>0.8036</td><td>0.9476</td><td>0.0628</td><td>0.3804</td><td>0.8013</td><td>0.6766</td><td colspan=\"2\">0.9795 0.0377</td><td>0.3018</td></tr><tr><td>FOCAL</td><td>SNGP MC DROPOUT DEEP ENSEMBLE</td><td>0.9736 0.9741 0.9735</td><td>0.8076 0.8076 0.8077</td><td colspan=\"2\">0.9455 0.9472 0.9479 0.0639 0.0388 0.0622</td><td>0.3885 0.3890 0.3840</td><td>0.8003 0.8009 0.8041</td><td>0.6820 0.6790 0.6814</td><td colspan=\"2\">0.9784 0.9790 0.9795 0.0381 0.0264 0.0360</td><td>0.3181 0.3185 0.3035</td></tr><tr><td/><td>SNGP ENSEMBLE</td><td>0.9742</td><td>0.8122</td><td colspan=\"2\">0.9467 0.0379</td><td>0.3846</td><td>0.8002</td><td>0.6827</td><td>0.9790</td><td>0.0266</td><td>0.3212</td></tr></table>",
"html": null,
"text": "AUPRC \u2191 ACC. \u2191 BRIER \u2193 CALIB. AUPRC \u2191 AUROC \u2191 AUPRC \u2191 ACC. \u2191 BRIER \u2193 CALIB. AUPRC \u2191"
},
"TABREF2": {
"type_str": "table",
"num": null,
"content": "<table><tr><td/><td>1.000</td><td colspan=\"3\">Oracle-Model Collaborative AUROC (XENT In-Domain) Deterministic</td><td>1.000</td><td>Oracle-Model Collaborative AUROC (Focal In-Domain) Deterministic</td></tr><tr><td/><td>0.995</td><td>SNGP MC Dropout</td><td/><td/><td>0.995</td><td>SNGP MC Dropout</td></tr><tr><td>OC-AUROC</td><td>0.985 0.990</td><td colspan=\"2\">Deep Ensemble SNGP Ensemble</td><td>OC-AUROC</td><td>0.985 0.990</td><td>Deep Ensemble SNGP Ensemble</td></tr><tr><td/><td>0.980</td><td/><td/><td/><td>0.980</td></tr><tr><td/><td>0.975</td><td/><td/><td/><td>0.975</td></tr><tr><td/><td/><td>10 \u22123</td><td>10 \u22122 Revie\u2212 fraction \u03b1</td><td>10 \u22121</td><td/><td>10 \u22123</td><td>10 \u22122 Review raction \u03b1</td><td>10 \u22121</td></tr></table>",
"html": null,
"text": "Metrics for models evaluated on the testing environment (the Wikipedia Talk corpus, left) and deployment environment (the CivilComments corpus, right). XENT (top) and Focal (bottom) indicate models trained with cross-entropy and focal losses, respectively. The best metric values for each loss function are shown in bold."
},
"TABREF5": {
"type_str": "table",
"num": null,
"content": "<table><tr><td/><td>DETERMINISTIC</td><td>0.9734</td><td>0.8019</td><td>0.9231 0.0245</td><td>0.0548</td><td>0.9230</td><td>0.4053</td></tr><tr><td>XENT</td><td>SNGP MC DROPOUT DEEP ENSEMBLE</td><td>0.9741 0.9729 0.9738</td><td>0.8029 0.8006 0.8074</td><td colspan=\"2\">0.9233 0.0280 0.9274 0.0198 0.0508 0.0548 0.9231 0.0235 0.0544</td><td>0.9238 0.9282 0.9245</td><td>0.4063 0.4020 0.4045</td></tr><tr><td/><td>SNGP ENSEMBLE</td><td>0.9741</td><td>0.8045</td><td>0.9226 0.0281</td><td>0.0549</td><td>0.9249</td><td>0.4158</td></tr><tr><td/><td>DETERMINISTIC</td><td>0.9730</td><td>0.8036</td><td>0.9476 0.1486</td><td>0.0628</td><td>0.9405</td><td>0.3804</td></tr><tr><td>FOCAL</td><td>SNGP MC DROPOUT DEEP ENSEMBLE</td><td>0.9736 0.9741 0.9735</td><td>0.8076 0.8076 0.8077</td><td>0.9455 0.0076 0.9472 0.1442 0.9479 0.1536</td><td>0.0388 0.0622 0.0639</td><td>0.9385 0.9425 0.9418</td><td>0.3885 0.3890 0.3840</td></tr><tr><td/><td>SNGP ENSEMBLE</td><td>0.9742</td><td>0.8122</td><td colspan=\"2\">0.9467 0.0075 0.0379</td><td>0.9400</td><td>0.3846</td></tr></table>",
"html": null,
"text": "MODEL (TEST)AUROC \u2191 AUPRC \u2191 ACC. \u2191 ECE \u2193 BRIER \u2193 CALIB. AUROC \u2191 CALIB. AUPRC \u2191"
},
"TABREF6": {
"type_str": "table",
"num": null,
"content": "<table><tr><td/><td>DETERMINISTIC</td><td>0.7796</td><td>0.6689</td><td>0.9628 0.0128</td><td>0.0246</td><td>0.9412</td><td>0.3581</td></tr><tr><td>XENT</td><td>SNGP MC DROPOUT DEEP ENSEMBLE</td><td>0.7695 0.7806 0.7849</td><td>0.6665 0.6727 0.6741</td><td colspan=\"2\">0.9640 0.0070 0.9671 0.0136 0.0241 0.0253 0.9625 0.0141 0.0242</td><td>0.9457 0.9502 0.9420</td><td>0.3660 0.3707 0.3484</td></tr><tr><td/><td>SNGP ENSEMBLE</td><td>0.7749</td><td>0.6719</td><td>0.9633 0.0076</td><td>0.0248</td><td>0.9463</td><td>0.3655</td></tr><tr><td/><td>DETERMINISTIC</td><td>0.8013</td><td>0.6766</td><td>0.9795 0.1973</td><td>0.0377</td><td>0.9444</td><td>0.3018</td></tr><tr><td>FOCAL</td><td>SNGP MC DROPOUT DEEP ENSEMBLE</td><td>0.8003 0.8009 0.8041</td><td>0.6820 0.6790 0.6814</td><td colspan=\"2\">0.9784 0.0182 0.0264 0.9790 0.1896 0.0360 0.9795 0.1998 0.0381</td><td>0.9465 0.9481 0.9461</td><td>0.3181 0.3185 0.3035</td></tr><tr><td/><td>SNGP ENSEMBLE</td><td>0.8002</td><td>0.6827</td><td colspan=\"2\">0.9790 0.0176 0.0266</td><td>0.9481</td><td>0.3212</td></tr></table>",
"html": null,
"text": "Metrics for models on the Wikipedia Talk corpus (in-domain testing environment), all numbers are averaged over 10 model runs. XENT and Focal indicate models trained with the cross-entropy and focal losses, respectively. The best metric values for each loss function are shown in bold.MODEL (DEPLOYMENT) AUROC \u2191 AUPRC \u2191 ACC. \u2191 ECE \u2193 BRIER \u2193 CALIB. AUROC \u2191 CALIB. AUPRC \u2191"
},
"TABREF7": {
"type_str": "table",
"num": null,
"content": "<table><tr><td/><td>0.975 1.000</td><td colspan=\"3\">Oracle-Model Collaborative AUPRC (XENT In-Domain) Deterministic SNGP</td><td/><td>1.000 0.975</td><td>Oracle-Model Collaborative AUPRC (Focal In-Domain) Deterministic SNGP</td></tr><tr><td>OC-AUPRC</td><td>0.875 0.900 0.925 0.950</td><td/><td>MC Dropout Deep Ensemble SNGP Ensemble</td><td/><td>OC-AUPRC</td><td>0.875 0.900 0.950 0.925</td><td>MC Dropout Deep Ensemble SNGP Ensemble</td></tr><tr><td/><td>0.850</td><td/><td/><td/><td/><td>0.850</td></tr><tr><td/><td>0.825</td><td/><td/><td/><td/><td>0.825</td></tr><tr><td/><td>0.800</td><td/><td/><td/><td/><td>0.800</td></tr><tr><td/><td/><td>10 \u22123</td><td>10 \u22122 Revie\u2212 fraction \u03b1</td><td>10 \u22121</td><td/><td>10 \u22123</td><td>10 \u22122 Review raction \u03b1</td><td>10 \u22121</td></tr><tr><td colspan=\"5\">10 \u22122 Review raction \u03b1 Oracle-Model Collaborative AUPRC (XENT OO-Domain) 10 \u22121 Deterministic SNGP MC Dropout Figure 7: 10 \u22123 0.66 0.68 0.70 0.78 0.80 0.72 0.74 0.76 OC-AUPRC Deep Ensemble SNGP Ensemble</td><td colspan=\"2\">OC-AUPRC</td><td>0.66 0.68 0.70 0.78 0.80 0.72 0.74 0.76</td><td>10 \u22123 Oracle-Model Collaborative AUPRC (Focal OO-Domain) 10 \u22122 10 \u22121 Review fraction \u03b1 Deterministic SNGP MC Dropout Deep Ensemble SNGP Ensemble</td></tr></table>",
"html": null,
"text": "Metrics for models on the CivilComments corpus (out-of-domain deployment environment), all numbers are averaged over 10 model runs. XENT and Focal indicate models trained with the cross-entropy and focal losses, respectively. The best metric values for each loss function are shown in bold. Oracle-model collaborative AUPRC as a function of review fraction, trained with cross-entropy (left) or focal loss (right) and evaluated on Wikipedia Toxicity corpus (in-domain test environment). Solid Line: uncertainty-based strategy. Dashed Line: toxicity-based strategy. Overall, the SNGP Ensemble with focal loss using the uncertainty review performs best across all \u03b1. Restricted to cross-entropy loss, the Deep Ensemble using uncertainty-based review performs best until \u03b1 \u2248 0.1, when some of the toxicity-based reviews (e.g. SNGP Ensemble) begin to outperform it."
}
}
}
}