{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T11:25:33.765485Z"
},
"title": "Calibrated Language Model Fine-Tuning for In-and Out-of-Distribution Data",
"authors": [
{
"first": "Lingkai",
"middle": [],
"last": "Kong",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Georgia Institute of Technology",
"location": {
"settlement": "Atlanta",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Haoming",
"middle": [],
"last": "Jiang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Georgia Institute of Technology",
"location": {
"settlement": "Atlanta",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Yuchen",
"middle": [],
"last": "Zhuang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Georgia Institute of Technology",
"location": {
"settlement": "Atlanta",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Jie",
"middle": [],
"last": "Lyu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Georgia Institute of Technology",
"location": {
"settlement": "Atlanta",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Tuo",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Georgia Institute of Technology",
"location": {
"settlement": "Atlanta",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Chao",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Georgia Institute of Technology",
"location": {
"settlement": "Atlanta",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Fine-tuned pre-trained language models can suffer from severe miscalibration for both in-distribution and out-of-distribution (OOD) data due to over-parameterization. To mitigate this issue, we propose a regularized fine-tuning method. Our method introduces two types of regularization for better calibration: (1) On-manifold regularization, which generates pseudo on-manifold samples through interpolation within the data manifold. Augmented training with these pseudo samples imposes a smoothness regularization to improve in-distribution calibration. (2) Off-manifold regularization, which encourages the model to output uniform distributions for pseudo off-manifold samples to address the over-confidence issue for OOD data. Our experiments demonstrate that the proposed method outperforms existing calibration methods for text classification in terms of expectation calibration error, misclassification detection, and OOD detection on six datasets. Our code can be found at https://github.com/Lingkai-Kong/ Calibrated-BERT-Fine-Tuning.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Fine-tuned pre-trained language models can suffer from severe miscalibration for both in-distribution and out-of-distribution (OOD) data due to over-parameterization. To mitigate this issue, we propose a regularized fine-tuning method. Our method introduces two types of regularization for better calibration: (1) On-manifold regularization, which generates pseudo on-manifold samples through interpolation within the data manifold. Augmented training with these pseudo samples imposes a smoothness regularization to improve in-distribution calibration. (2) Off-manifold regularization, which encourages the model to output uniform distributions for pseudo off-manifold samples to address the over-confidence issue for OOD data. Our experiments demonstrate that the proposed method outperforms existing calibration methods for text classification in terms of expectation calibration error, misclassification detection, and OOD detection on six datasets. Our code can be found at https://github.com/Lingkai-Kong/ Calibrated-BERT-Fine-Tuning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Pre-trained language models have recently brought the natural language processing (NLP) community into the transfer learning era. The transfer learning framework consists of two stages, where we first pre-train a large-scale language model (e.g., BERT (Devlin et al., 2019) , RoBERTa , ALBERT (Lan et al., 2020) and T5 (Raffel et al., 2019) ) on a large text corpus and then fine-tune it on downstream tasks. Such a fine-tuning approach has achieved state-of-the-art performance on many NLP benchmarks (Wang et al., 2018, 2019) .",
"cite_spans": [
{
"start": 253,
"end": 274,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 294,
"end": 312,
"text": "(Lan et al., 2020)",
"ref_id": "BIBREF17"
},
{
"start": 320,
"end": 341,
"text": "(Raffel et al., 2019)",
"ref_id": "BIBREF26"
},
{
"start": 490,
"end": 508,
"text": "(Wang et al., 2018",
"ref_id": "BIBREF34"
},
{
"start": 509,
"end": 529,
"text": "(Wang et al., , 2019",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Many applications, however, require trustworthy predictions that need to be not only accurate but also well calibrated. In particular, a well-calibrated model should produce reliable confident estimates for both in-distribution and out-of-distribution (OOD) data: (1) For in-distribution data, a model should produce predictive probabilities close to the true likelihood for each class, i.e., confidence \u2248 true likelihood. (2) For OOD data, which do not belong to any class of the training data, the model should output high uncertainty to say 'I don't know', i.e., confidence \u2248 random guess, instead of producing absurdly wrong yet wildly confident predictions. Providing such calibrated output probabilities can help us to achieve better model robustness (Lee et al., 2018) , model fairness (Chouldechova, 2017) and improve label efficiency via uncertainty driven learning (Gal et al., 2017; Siddhant and Lipton, 2018; Shen et al., 2018) . Figure 1 : The reliability diagrams on in-distribution data (the first row) and the histograms of the model confidence on out-of-distribution (OOD) data (the second row) of CNN (Kim, 2014) and fine-tuned BERT-MLP classifier (Devlin et al., 2019) . Though BERT improves classification accuracy, it makes over-confident predictions for both in-distribution and OOD data.",
"cite_spans": [
{
"start": 366,
"end": 377,
"text": "(Kim, 2014)",
"ref_id": "BIBREF12"
},
{
"start": 413,
"end": 434,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 1134,
"end": 1152,
"text": "(Lee et al., 2018)",
"ref_id": "BIBREF18"
},
{
"start": 1170,
"end": 1190,
"text": "(Chouldechova, 2017)",
"ref_id": "BIBREF2"
},
{
"start": 1252,
"end": 1270,
"text": "(Gal et al., 2017;",
"ref_id": "BIBREF5"
},
{
"start": 1271,
"end": 1297,
"text": "Siddhant and Lipton, 2018;",
"ref_id": "BIBREF28"
},
{
"start": 1298,
"end": 1316,
"text": "Shen et al., 2018)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 189,
"end": 197,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unfortunately, prior work has shown that, due to over-parameterization, deep convolutional neural networks are often miscalibrated. Our experimental investigation further corroborates that fine-tuned language models can suffer from miscalibration even more for NLP tasks. As shown in Figure 1, we present the calibration of a BERT-MLP model for a text classification task on the 20NG dataset. Specifically, we train a TextCNN (Kim, 2014) and a BERT-MLP using 20NG 15 (the first 15 categories of 20NG) and then evaluate them on both in-distribution and OOD data. The first row plots their reliability diagrams (Niculescu-Mizil and Caruana, 2005) on the test set of 20NG 15 . Though BERT improves the classification accuracy from 83.9% to 87.4%, it also increases the expected calibration error (ECE, see more details in Section 2) from 4.0% to 9.5%. This indicates that BERT-MLP is much more miscalibrated for in-distribution data. The second row plots the histograms of the model confidence, i.e., the maximum output probability, on the test set of 20NG 5 (the unseen 5 categories of 20NG). While it is desirable to produce low probabilities for these unseen classes, BERT-MLP produces wrong yet overconfident predictions for such OOD data.",
"cite_spans": [
{
"start": 415,
"end": 426,
"text": "(Kim, 2014)",
"ref_id": "BIBREF12"
},
{
"start": 598,
"end": 633,
"text": "(Niculescu-Mizil and Caruana, 2005)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 273,
"end": 279,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Such an aggravation of miscalibration is due to the even more significant over-parameterization of these language models. At the pre-training stage, they are trained on a huge amount of unlabeled data in an unsupervised manner, e.g., T5 is pre-trained on 745 GB text. To capture rich semantic and syntactic information from such a large corpus, the language models are designed to have enormous capacity, e.g., T5 has about 11 billion parameters. At the fine-tuning stage, however, only limited labeled data are available in the downstream tasks. With the extremely high capacity, these models can easily overfit training data likelihood and be over-confident in their predictions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To fight against miscalibration, a natural option is to apply a calibration method such as temperature scaling in a post-processing step. However, temperature scaling only learns a single parameter to rescale all the logits, which is inflexible and insufficient. Moreover, it cannot improve out-of-distribution calibration. A second option is to mitigate miscalibration during training using regularization. For example, Pereyra et al. (2017) propose an entropy regularizer to prevent over-confidence, but it can needlessly hurt legitimate high-confidence predictions. A third option is to use Bayesian neural networks (Blundell et al., 2015; Louizos and Welling, 2017) , which treat model parameters as probability distributions to represent model uncertainty explicitly. However, these Bayesian approaches are often prohibitive, as the priors of the model parameters are difficult to specify, and exact inference is intractable, which can also lead to unreliable uncertainty estimates.",
"cite_spans": [
{
"start": 423,
"end": 444,
"text": "Pereyra et al. (2017)",
"ref_id": "BIBREF25"
},
{
"start": 620,
"end": 643,
"text": "(Blundell et al., 2015;",
"ref_id": "BIBREF0"
},
{
"start": 644,
"end": 670,
"text": "Louizos and Welling, 2017)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
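To make the temperature-scaling limitation above concrete, here is a minimal NumPy sketch (not from the paper; `temperature_scale` and its signature are illustrative): a single scalar T divides every logit, so all confidences are softened or sharpened by the same amount and the predicted class never changes.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax.
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def temperature_scale(logits, T):
    # Temperature scaling: one scalar T > 0 rescales all logits.
    # T > 1 uniformly softens confidences; T < 1 sharpens them.
    # Because every logit is divided by the same T, argmax is unchanged.
    return softmax(np.asarray(logits, dtype=float) / T)
```

For example, with logits [4, 1, 1], raising T from 1 to 2 lowers the top confidence while leaving the prediction itself intact, which is why this single parameter cannot selectively fix OOD over-confidence.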
{
"text": "We propose a regularization approach to addressing miscalibration for fine-tuning pre-trained language models from a data augmentation perspective. We propose two new regularizers using pseudo samples both on and off the data manifold to mitigate data scarcity and prevent over-confident predictions. Specifically, our method imposes two types of regularization for better calibration during fine-tuning: (1) On-manifold regularization: We first generate on-manifold samples by interpolating the training data and their corresponding labels along the direction learned from hidden feature space; training over such augmented on-manifold data introduces a smoothness constraint within the data manifold to improve the model calibration for in-distribution data. (2) Off-manifold regularization: We generate off-manifold samples by adding relatively large perturbations along the directions that point outward from the data manifold; we penalize the negative entropy of the output distribution for such off-manifold samples to address the overconfidence issue for OOD data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We evaluate our proposed model calibration method on six text classification datasets. For indistribution data, we measure ECE and the performance of misclassification detection. For outof-distribution data, we measure the performance of OOD detection. Our experiments show that our method outperforms existing state-of-the-art methods in both settings, and meanwhile maintains competitive classification accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We summarize our contribution as follows: (1) We propose a general calibration framework, which can be applied to pre-trained language model fine-tuning, as well as other deep neural network-based prediction problems. (2) The proposed method adopts on- and off-manifold regularization from a data augmentation perspective to improve calibration for both in-distribution and OOD data. (3) We conduct comprehensive experiments showing that our method outperforms existing calibration methods in terms of ECE, misclassification detection and OOD detection on six text classification datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We describe model calibration for both in-distribution and out-of-distribution data. Calibration for In-distribution Data: For in-distribution data, a well-calibrated model is expected to output prediction confidence comparable to its classification accuracy. For example, given 100 data points with their prediction confidence 0.6, we expect 60 of them to be correctly classified. More precisely, for a data point X, we denote by Y (X) the ground truth label, \u0176 (X) the label predicted by the model, and P\u0302 (X) the output probability associated with the predicted label. The calibration error of the predictive model for a given confidence p \u2208 (0, 1) is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "E p = |P(\u0176 (X) = Y (X) | P\u0302 (X) = p) \u2212 p|. (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "As (1) involves population quantities, we usually adopt empirical approximations (Guo, 2017) to estimate the calibration error. Specifically, we partition all data points into M bins of equal size according to their prediction confidences. Let B m denote the bin with prediction confidences bounded between \u2113 m and u m . Then, for any p \u2208 [\u2113 m , u m ), we define the empirical calibration error as:",
"cite_spans": [
{
"start": 81,
"end": 92,
"text": "(Guo, 2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "E p = \u00ca m = | (1/|B m |) \u2211 i\u2208B m (1(\u0177 i = y i ) \u2212 p\u0302 i ) |, (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "where y i , \u0177 i and p\u0302 i are the true label, predicted label and confidence for sample i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "To evaluate the overall calibration error of the predictive model, we can further take a weighted average of the calibration errors of all bins, which is also known as the expected calibration error (ECE) (Naeini et al., 2015) defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "ECE = \u2211_{m=1}^{M} (|B m |/n) \u00ca m ,",
"eq_num": "(3)"
}
],
"section": "Preliminaries",
"sec_num": "2"
},
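The empirical ECE in Eqs. (2)-(3) can be computed by straightforward binning. The sketch below assumes equal-width confidence bins; the function name and defaults are illustrative, not from the paper.

```python
import numpy as np

def expected_calibration_error(conf, pred, label, n_bins=10):
    # conf: confidences p_i, pred: predicted labels, label: true labels.
    conf = np.asarray(conf, dtype=float)
    pred, label = np.asarray(pred), np.asarray(label)
    n = conf.shape[0]
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)  # bin B_m = {i : p_i in (lo, hi]}
        if mask.any():
            acc = (pred[mask] == label[mask]).mean()  # accuracy of B_m
            avg_conf = conf[mask].mean()              # confidence of B_m
            ece += (mask.sum() / n) * abs(acc - avg_conf)  # |B_m|/n * E_m
    return ece
```

A model that is 90% confident on ten points but right on only five of them gets an ECE of 0.4 from this estimator.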
{
"text": "where n is the sample size. We remark that the goal of calibration is to minimize the calibration error without significantly sacrificing prediction accuracy. Otherwise, a random guess classifier can achieve zero calibration error. Calibration for Out-of-distribution Data: In real applications, a model can encounter test data that significantly differ from the training data. For example, they come from other unseen classes, or they are potential outliers. A well-calibrated model is expected to produce an output with high uncertainty for such out-of-distribution (OOD) data, formally,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "P (Y = j) = 1/K \u2200j = 1, ..., K,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "where K is the number of classes of the training data. As such, we can detect OOD data by setting up an uncertainty threshold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
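The thresholding idea reads as follows in a short sketch (the function name and cutoff are illustrative): an input is flagged as OOD when its confidence, the maximum class probability, falls below a chosen cutoff; a perfectly uncertain K-class output has confidence 1/K.

```python
import numpy as np

def flag_ood(probs, threshold=0.5):
    # probs: (n, K) array of predicted class distributions.
    # Confidence = max class probability; low confidence => likely OOD.
    probs = np.asarray(probs, dtype=float)
    confidence = probs.max(axis=-1)
    return confidence < threshold
```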
{
"text": "We consider N data points of the target task S =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calibrated Fine-Tuning via Manifold Smoothing",
"sec_num": "3"
},
{
"text": "{(x i , y i )} N i=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calibrated Fine-Tuning via Manifold Smoothing",
"sec_num": "3"
},
{
"text": ", where x i 's denote the input embedding of the sentence and y i 's are the associated onehot labels. Let f (\u2022) denote the feature extraction layers (e.g., BERT); let g(\u2022) denote the task-specific layer; and let \u03b8 denote all parameters of f and g. We propose to optimize the following objective at the fine-tuning stage:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calibrated Fine-Tuning via Manifold Smoothing",
"sec_num": "3"
},
{
"text": "min \u03b8 F(\u03b8) = E x,y\u223cS \u2113(g \u2022 f (x), y) + \u03bb on R on (g \u2022 f ) + \u03bb off R off (g \u2022 f ), (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calibrated Fine-Tuning via Manifold Smoothing",
"sec_num": "3"
},
{
"text": "where \u2113 is the cross entropy loss, and \u03bb on , \u03bb off are two hyper-parameters. The regularizers R on and R off are for on- and off-manifold calibration, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calibrated Fine-Tuning via Manifold Smoothing",
"sec_num": "3"
},
{
"text": "The on-manifold regularizer R on exploits the interpolation of training data within the data manifold to improve the in-distribution calibration. Specifically, given two training samples (x, y) and (x,\u1ef9) and the feature extraction layers f , we generate an on-manifold pseudo sample (x , y ) as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "On-manifold Regularization",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x * = arg min x' \u2208 B(x, \u03b4 on ) D x (f (x'), f (x\u0303)),",
"eq_num": "(5)"
}
],
"section": "On-manifold Regularization",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y' = (1 \u2212 \u03b4 y ) y + \u03b4 y \u1ef9 ,",
"eq_num": "(6)"
}
],
"section": "On-manifold Regularization",
"sec_num": "3.1"
},
{
"text": "where \u03b4 on and \u03b4 y are small interpolation parameters for data and label, and D x is a proper distance for features extracted by f such as the cosine distance, i.e., D x (a, b) = \u27e8 a/\u2016a\u2016 2 , b/\u2016b\u2016 2 \u27e9, and B(x, \u03b4 on ) denotes an \u2113 \u221e ball centered at x with a radius \u03b4 on , i.e.,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "On-manifold Regularization",
"sec_num": "3.1"
},
{
"text": "B(x, \u03b4 on ) = {x' | \u2016x' \u2212 x\u2016 \u221e \u2264 \u03b4 on }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "On-manifold Regularization",
"sec_num": "3.1"
},
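A minimal sketch of one on-manifold interpolation step in the spirit of Eqs. (5)-(6), assuming the caller supplies `grad_dist`, the gradient of the manifold distance D x with respect to the perturbed input (all names here are illustrative, not the paper's code): a random start inside B(x, \u03b4 on), one signed-gradient step, projection back onto the \u2113\u221e ball, and the matching soft label.

```python
import numpy as np

def on_manifold_sample(x, y, y_tilde, grad_dist, delta_on, delta_y, rng):
    # Random initialization inside the l_inf ball B(x, delta_on).
    x_prime = x + rng.uniform(-delta_on, delta_on, size=x.shape)
    # One signed-gradient step on the manifold distance D_x, then
    # projection (clipping) back onto B(x, delta_on).
    x_prime = x_prime - delta_on * np.sign(grad_dist(x_prime))
    x_prime = np.clip(x_prime, x - delta_on, x + delta_on)
    # Interpolated soft label: y' = (1 - delta_y) * y + delta_y * y_tilde.
    y_prime = (1.0 - delta_y) * y + delta_y * y_tilde
    return x_prime, y_prime
```

In a real fine-tuning loop `grad_dist` would come from automatic differentiation through the feature extractor f; here it is a placeholder assumption.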
{
"text": "As can be seen, x * is essentially interpolating between x and x\u0303 on the data manifold, and D x (f (\u2022), f (\u2022)) can be viewed as a metric over such a manifold. However, as f (\u2022) is learnt from finite training data, it can recover the actual data manifold only up to a certain statistical error.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "On-manifold Regularization",
"sec_num": "3.1"
},
{
"text": "Figure 2 : The on-manifold and off-manifold samples generated by our calibration procedure. Mixup adopts a coarse linear interpolation and the generated data point may deviate from the data manifold.",
"cite_spans": [],
"ref_spans": [
{
"start": 110,
"end": 118,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training data",
"sec_num": null
},
{
"text": "Therefore, we constrain x * to stay in a small neighborhood of x, which ensures that x * stays close to the actual data manifold. This is different from existing interpolation methods such as Mixup (Zhang et al., 2018; Verma et al., 2019) . These methods adopt coarse linear interpolations either in the input space or latent feature space, and the generated data may significantly deviate from the data manifold.",
"cite_spans": [
{
"start": 184,
"end": 204,
"text": "(Zhang et al., 2018;",
"ref_id": "BIBREF38"
},
{
"start": 205,
"end": 224,
"text": "Verma et al., 2019)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training data",
"sec_num": null
},
{
"text": "Note that our method not only interpolates x but also y. This can yield a soft label for x * when x and x\u0303 belong to different classes. Such an interpolation is analogous to semi-supervised learning, where soft pseudo labels are generated for the unlabelled data. These soft-labelled data essentially induce a smoothing effect, and prevent the model from making overconfident predictions toward one single class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training data",
"sec_num": null
},
{
"text": "We remark that our method is more adaptive than the label smoothing method (M\u00fcller et al., 2019) . As each interpolated data point involves at most two classes, it is unnecessary to distribute probability mass to other classes in the soft label. In contrast, label smoothing is more rigid and enforces all classes to have equally nonzero probability mass in the soft label.",
"cite_spans": [
{
"start": 75,
"end": 96,
"text": "(M\u00fcller et al., 2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training data",
"sec_num": null
},
{
"text": "We then define the on-manifold regularizer as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training data",
"sec_num": null
},
{
"text": "R on (g \u2022 f ) = E (x', y') \u223c S on D KL (y' \u2016 g \u2022 f (x')),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training data",
"sec_num": null
},
{
"text": "where S on denotes the set of all pseudo labelled data generated by our interpolation method, and D KL denotes the KL-divergence between two probability distributions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training data",
"sec_num": null
},
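R on is then just an average KL-divergence between the soft labels y' and the model outputs on the pseudo samples. A small sketch with illustrative names (the real loss would be computed on autograd tensors, not NumPy arrays):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) for two discrete distributions; eps avoids log(0).
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    return float(np.sum(p * np.log(p / q)))

def r_on(soft_labels, model_probs):
    # R_on = E_{(x', y') ~ S_on} KL(y' || g(f(x'))), estimated by a mean
    # over the generated pseudo-labelled pairs.
    return float(np.mean([kl_divergence(y, p)
                          for y, p in zip(soft_labels, model_probs)]))
```

The regularizer is zero when the model reproduces each soft label exactly and grows as its outputs drift toward a single hard class.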
{
"text": "The off-manifold regularizer, R off , encourages the model to yield low confidence outputs for samples outside the data manifold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Off-manifold Regularization",
"sec_num": "3.2"
},
{
"text": "// Generate on-manifold samples:\nFor each x i \u2208 B: randomly select (x\u0303 i , \u1ef9 i ) from B; initialize x' i \u2190 x i + v i with v i \u223c UNIF[\u2212\u03b4 on , \u03b4 on ] d ; \u2206 i \u2190 sign(\u2207 x' i D x (f (x' i ), f (x\u0303 i ))); x' i \u2190 \u03a0 \u2016x' i \u2212 x i \u2016 \u221e \u2264 \u03b4 on (x' i \u2212 \u03b4 on \u2206 i ); y' i \u2190 (1 \u2212 \u03b4 y ) y i + \u03b4 y \u1ef9 i .\n// Generate off-manifold samples:\nFor each x i \u2208 B: initialize x* i \u2190 x i + v i with v i \u223c UNIF[\u2212\u03b4 off , \u03b4 off ] d ; \u2206 i \u2190 sign(\u2207 x* i \u2113(g \u2022 f (x* i ), y i )); x* i \u2190 \u03a0 \u2016x* i \u2212 x i \u2016 \u221e = \u03b4 off (x* i + \u03b4 off \u2206 i ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Off-manifold Regularization",
"sec_num": "3.2"
},
{
"text": "Update \u03b8 using ADAM\nend for\n\nThis mitigates the over-confidence issue for out-of-distribution (OOD) data. Specifically, given a training sample (x, y), we generate an off-manifold pseudo sample x * by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Off-manifold Regularization",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x * = max x \u2208S(x,\u03b4 off ) (g \u2022 f (x ), y),",
"eq_num": "(7)"
}
],
"section": "Off-manifold Regularization",
"sec_num": "3.2"
},
{
"text": "where S(x, \u03b4 off ) denotes an \u221e sphere centered at x with a radius \u03b4 off . Since we expect x * to mimic OOD data, we first need to choose a relatively large \u03b4 off such that the sphere S(x, \u03b4 off ) can reach outside the data manifold. Then, we generate the pseudo off-manifold sample from the sphere along the adversarial direction. Existing literature (Stutz et al., 2019; Gilmer et al., 2018) has shown that such an adversarial direction points outward the data manifold.",
"cite_spans": [
{
"start": 352,
"end": 372,
"text": "(Stutz et al., 2019;",
"ref_id": "BIBREF30"
},
{
"start": 373,
"end": 393,
"text": "Gilmer et al., 2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Off-manifold Regularization",
"sec_num": "3.2"
},
{
"text": "By penalizing the prediction confidence for these off-manifold samples, we are able to encourage low prediction confidence for OOD data. Hence, we define the off-manifold regularizer as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Off-manifold Regularization",
"sec_num": "3.2"
},
{
"text": "R off (g \u2022 f ) = E x \u223cS off \u2212 H(g \u2022 f (x )), (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Off-manifold Regularization",
"sec_num": "3.2"
},
{
"text": "where S off denotes the set of all generated offmanifold samples, and H(\u2022) denotes the entropy of the probability simplex.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Off-manifold Regularization",
"sec_num": "3.2"
},
{
"text": "We can adopt stochastic gradient-type algorithms such as ADAM (Kingma and Ba, 2014) to optimize (4). At each iteration, we need to first solve two inner optimization problems in (5) and 7, and then plug x and x into (4) to compute the stochastic gradient. The two inner problems can be solved using the projected sign gradient update for multiple steps. In practice, we observe that one single update step with random initialization is already sufficient to efficiently optimize \u03b8. Such a phenomenon has also been observed in existing literature on adversarial training (Wong et al., 2019) . We summarize the overall training procedure in Algorithm 1.",
"cite_spans": [
{
"start": 570,
"end": 589,
"text": "(Wong et al., 2019)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training",
"sec_num": "3.3"
},
{
"text": "To evaluate calibration performance for indistribution data, we measure the expected calibration error (ECE) and the misclassification detection score. For out-of-distribution data, we measure the OOD detection score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We detect the misclassified and OOD samples by model confidence, which is the output probability associated with the predicted labelP (X). Specifically, we setup a confidence threshold \u03c4 \u2208 [0, 1], and take the samples with confidence below the threshold, i.e.,P (X) < \u03c4 , as the misclassified or OOD samples. We can compute the detection F 1 score for every \u03c4 : F 1 (\u03c4 ), and obtain a calibration curve (F 1 (\u03c4 ) vs. \u03c4 ). Then, we set \u03c4 upper as the upper bound of the confidence threshold, since a well calibrated model should provide probabilities that reflect the true likelihood and it is not reasonable to use a large \u03c4 to detect them. We use the empirical Normalized Bounded Area Under the Calibration Curve (NBAUCC) as the overall detection score:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "NBAUCC \u03c4upper = 1 M M i=1 F 1 \u03c4 upper M i ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "where M is the number of sub-intervals for the numerical integration. We set M = 50 through-out the following experiments. Note that the traditional binary classification metrics, e.g., AUROC and AUPR, cannot measure the true calibration because the model can still achieve high scores even though it has high confidences for the misclassified and OOD samples. We provide more explanations of the metrics in Appendix C. We report the performance when \u03c4 upper = 0.5 here and the results when \u03c4 upper = 0.7 and 1 in Appendix D.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "For each dataset, we construct an in-distribution training set, an in-distribution testing set, and an OOD testing set. Specifically, we use the following datasets: 20NG 1 . The 20 Newsgroups dataset (20NG) contains news articles with 20 categories. We use Stanford Sentiment Treebank (SST-2) (Socher et al., 2012) as the OOD data. 20NG 15 . We take the first 15 categories of 20NG as the in-distribution data and the other 5 categories (20NG 5 ) as the OOD data. WOS (Kowsari et al., 2017) . Web of Science (WOS) dataset contains scientific articles with 134 categories. We use AGnews (Zhang et al., 2015) as the OOD data. WOS 100 . We use the first 100 classes of WOS as the in-distribution data and the other 34 classes (WOS 34 ) as the OOD data.",
"cite_spans": [
{
"start": 293,
"end": 314,
"text": "(Socher et al., 2012)",
"ref_id": "BIBREF29"
},
{
"start": 468,
"end": 490,
"text": "(Kowsari et al., 2017)",
"ref_id": "BIBREF14"
},
{
"start": 586,
"end": 606,
"text": "(Zhang et al., 2015)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "Yahoo (Chang et al., 2008) . This dataset contains questions with 10 categories posted to 'Yahoo! Answers'. We randomly draw 2000 from 140, 000 samples for each category as the training set. We use Yelp (Zhang et al., 2015) as the OOD data. Yahoo 8 . We use the first 8 classes of Yahoo as the in-distribution data and the other 2 classes (Yahoo 2 ) as the OOD data. The testing set of OOD detection consists of the in-distribution testing set and the OOD data. More dataset details can be found in Appendix A. We remark that 20NG 15 , WOS 100 , and Yahoo 8 are included to make OOD detection more challenging, as the OOD data and the training data come from similar data sources.",
"cite_spans": [
{
"start": 6,
"end": 26,
"text": "(Chang et al., 2008)",
"ref_id": "BIBREF1"
},
{
"start": 203,
"end": 223,
"text": "(Zhang et al., 2015)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "We consider the following baselines:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.2"
},
{
"text": "\u2022 BERT (Devlin et al., 2019) is a pre-trained base BERT model stacked with one linear layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.2"
},
{
"text": "\u2022 Temperature Scaling (TS) (Guo, 2017) is a postprocessing calibration method that learns a single parameter to rescale the logits on the development set after the model is fine-tuned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.2"
},
{
"text": "\u2022 Monte Carlo Dropout (MCDP) (Gal and Ghahramani, 2016) applies dropout at testing time for multiple times and then averages the outputs.",
"cite_spans": [
{
"start": 29,
"end": 55,
"text": "(Gal and Ghahramani, 2016)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.2"
},
{
"text": "\u2022 Label Smoothing (LS) (M\u00fcller et al., 2019) smoothes the one-hot label by distributing a certain probability mass to other non ground-truth classes.",
"cite_spans": [
{
"start": 23,
"end": 44,
"text": "(M\u00fcller et al., 2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.2"
},
{
"text": "\u2022 Entropy Regularized Loss (ERL) (Pereyra et al., 2017 ) adds a entropy penalty term to prevent DNNs from being over-confident.",
"cite_spans": [
{
"start": 33,
"end": 54,
"text": "(Pereyra et al., 2017",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.2"
},
{
"text": "\u2022 Virtual Adversarial Training (VAT) (Miyato et al., 2018) introduces a smoothness-inducing adversarial regularizer to encourage the local Lipschitz continuity of DNNs.",
"cite_spans": [
{
"start": 37,
"end": 58,
"text": "(Miyato et al., 2018)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.2"
},
{
"text": "\u2022 Mixup (Zhang et al., 2018; Thulasidasan et al., 2019) augments training data by linearly interpolating training samples in the input space.",
"cite_spans": [
{
"start": 8,
"end": 28,
"text": "(Zhang et al., 2018;",
"ref_id": "BIBREF38"
},
{
"start": 29,
"end": 55,
"text": "Thulasidasan et al., 2019)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.2"
},
{
"text": "\u2022 Manifold-mixup (M-mixup) (Verma et al., 2019) is an extension of Mixup, which interpolates training samples in the hidden feature space.",
"cite_spans": [
{
"start": 27,
"end": 47,
"text": "(Verma et al., 2019)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.2"
},
{
"text": "We use ADAM (Kingma and Ba, 2014) with \u03b2 1 = 0.9 and \u03b2 2 = 0.999 as the optimizer. For our method, we simply set \u03bb on = \u03bb off = 1, \u03b4 on = 10 \u22124 , \u03b4 off = 10 \u22123 , and \u03b4 y = 0.1 for all the experiments. We also conduct an extensive hyperparameter search for the baselines. See more details in Appendix B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.3"
},
{
"text": "Our main results are summarized as follows: Expected Calibration Error: Table 1 reports the ECE and predictive accuracy of all the methods. Our method outperforms all the baselines on all the datasets in terms of ECE except for Yahoo, where only ERL is slightly better. Meanwhile, our method does not sacrifice the predictive accuracy. Misclassification Detection: Table 2 compares the NBAUCC 0.5 on misclassification detection of different methods. As shown, our method outperforms all the baselines on all the six datasets. Out-of-distribution Detection: Table 2 reports the NBAUCC 0.5 on OOD detection of different methods. Again, our method achieves the best performance on all the six datasets. The improvement is particularly remarkable on the 20NG dataset, where NBAUCC 0.5 increases from 47.00 to 63.92 compared with the strongest baseline. We also find Figure 3 : Calibration curves of OOD detection and misclassification detection on WOS. Our method can achieve high F 1 scores starting from a small threshold which indicates that it indeed provides low confidences for misclassified and OOD samples; the F 1 scores of the baselines peak at high thresholds which indicates that they are poorly calibrated.",
"cite_spans": [],
"ref_spans": [
{
"start": 72,
"end": 79,
"text": "Table 1",
"ref_id": null
},
{
"start": 365,
"end": 372,
"text": "Table 2",
"ref_id": null
},
{
"start": 557,
"end": 564,
"text": "Table 2",
"ref_id": null
},
{
"start": 862,
"end": 870,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Main Results",
"sec_num": "4.4"
},
{
"text": "that detecting the unseen classes from the original dataset is much more challenging than detecting OOD samples from a totally different dataset. Significance Test: We perform the Wilcoxon signed rank test (Wilcoxon, 1992) for significance test. For each dataset, we conduct experiments using 5 different random seeds with significance level \u03b1 = 0.5. We find that our model outperforms other baselines on all the datasets significantly, with only exceptions of ERL in ECE on Yahoo and ERL in misclassification detection on 20NG.",
"cite_spans": [
{
"start": 206,
"end": 222,
"text": "(Wilcoxon, 1992)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Main Results",
"sec_num": "4.4"
},
{
"text": "We investigate the effects of the interpolation parameters for on-manifold data, i.e., \u03b4 on and \u03b4 y , and the perturbation size for off-manifold samples, i.e., \u03b4 off . The default values are \u03b4 on = 10 \u22124 , \u03b4 off = 10 \u22123 and \u03b4 y = 0.1. Figure 4 shows the reuslts on 20NG 15 , 20NG, WOS 100 , and WOS datasets. Our results are summarized as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 235,
"end": 243,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parameter Study",
"sec_num": "4.5"
},
{
"text": "\u2022 The performance of all metrics versus \u03b4 on is stable within a large range from 10 \u22125 to 10 \u22122 . When \u03b4 on is larger than 10 \u22121 , the predictive accuracy begins to drop.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Study",
"sec_num": "4.5"
},
{
"text": "\u2022 The performance versus \u03b4 off is more sensitive:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Study",
"sec_num": "4.5"
},
{
"text": "(1) when \u03b4 off is too small, ECE increases dramatically becasue the generated off-manifold samples are too close to the manifold and make the model under-confident. (2) when \u03b4 off is too large, the off-manifold regularization is too weak and OOD detection performance drops.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Study",
"sec_num": "4.5"
},
{
"text": "\u2022 In general, \u03b4 on should be small to let x stay on the data manifold while \u03b4 off should be large to let x leave the data manifold. However, the regularization effect of R on (R off ) depends on both \u03bb on Table 2 : NBAUCC 0.5 on misclassification detection and OOD detection. We report the average performance of 5 random initializations.",
"cite_spans": [],
"ref_spans": [
{
"start": 205,
"end": 212,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parameter Study",
"sec_num": "4.5"
},
{
"text": "(\u03bb off ) and \u03b4 on (\u03b4 off ). Therefore, it is not necessary to let \u03b4 on be smaller than \u03b4 off . We can also tune \u03bb on and \u03bb off to achieve better performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Study",
"sec_num": "4.5"
},
{
"text": "\u2022 The performance versus \u03b4 y is relatively stable except for the metric of ECE. When \u03b4 y is larger than 0.2, ECE begins to increase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Study",
"sec_num": "4.5"
},
{
"text": "We investigate the effectiveness of the on-manifold regularizer R on and the off-manifold regularizer R off via ablation studies. Table 3 shows the results on the 20NG 15 and 20NG datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 130,
"end": 137,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.6"
},
{
"text": "\u2022 As expected, removing either component in our method would result in a performance drop. It demonstrates that these two components complement each other. All the ablation models outperform the BERT baseline model, which demonstrates the effectiveness of each module.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.6"
},
{
"text": "\u2022 We observe that the optimal \u03b4 on is different when using only R on . This indicates that the hyperparameters of R on and R off should be jointly tuned, due to the joint effect of both components.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.6"
},
{
"text": "\u2022 By removing R off , we observe a severe OOD performance degradation on the 20NG dataset (from 63.92 to 43.87). This indicates that R off is vital to out-of-distribution calibration. Meanwhile, the performance degradation is less severe on 20NG 15 (from 9.69 to 7.94). It is because R on can also help detect the OOD samples from similar data sources. (20NG 5 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.6"
},
{
"text": "\u2022 By removing R on , the in-distribution calibration performance drops as expected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.6"
},
{
"text": "Other Related Works: Lakshminarayanan et al. (2017) propose a model ensembling approach to improve model calibration. They first train multiple models with different initializations and then average their predictions. However, fine-tuning multiple language models requires extremely intensive computing resources. Kumar et al. (2018) propose a differentiable surrogate for the expected calibration error, called Table 3 : Ablation study on the 20NG 15 and 20NG datasets. For OOD detection and misclassification detection, we report BAUCC 0.5 . We set \u03b4 y = 0.1 and \u03b4 off = 10 \u22123 . maximum mean calibration error (MMCE), using kernel embedding. However, such a kernel embedding method is computationally expensive and not scalable to the large pre-trained language models.",
"cite_spans": [
{
"start": 21,
"end": 51,
"text": "Lakshminarayanan et al. (2017)",
"ref_id": "BIBREF16"
},
{
"start": 314,
"end": 333,
"text": "Kumar et al. (2018)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 412,
"end": 419,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Works and Discussion",
"sec_num": "5"
},
{
"text": "Accelerating Optimization: To further improve the calibration performance of our method, we can leverage some recent minimax optimization techniques to better solve the two inner optimization problems in (5) and (7) without increasing the computational complexity. For example, Zhang et al. (2019) propose an efficient approximation algorithm based on Pontryagin's Maximal Principle to replace the multi-step projected gradient update for the inner optimization problem. Another option is the learning-to-learn framework (Jiang et al., 2018) , where the inner problem is solved by a learnt optimizer. These techniques can help us obtain x and x more efficiently.",
"cite_spans": [
{
"start": 278,
"end": 297,
"text": "Zhang et al. (2019)",
"ref_id": "BIBREF37"
},
{
"start": 521,
"end": 541,
"text": "(Jiang et al., 2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works and Discussion",
"sec_num": "5"
},
{
"text": "Connection to Robustness: The interpolated training samples can naturally promote the local Lipschitz continuity of our model. Such a local smoothness property has several advantages: (1) It makes the model more robust to the inherent noise in the data, e.g., noisy labels;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works and Discussion",
"sec_num": "5"
},
{
"text": "(2) it is particularly helpful to prevent overfitting and improve generalization, especially for low-resource tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works and Discussion",
"sec_num": "5"
},
{
"text": "Extensions: Our method is quite general and can be applied to other deep neural network-based problems besides language model fine-tuning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works and Discussion",
"sec_num": "5"
},
{
"text": "We have proposed a regularization method to mitigate miscalibration of fine-tuned language models from a data augmentation perspective. Our method imposes two new regularizers using generated on-and off-manifold samples to improve both in-distribution and out-of-distribution calibration. All the data are publicly available. We also offer the links to the data as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "http://qwone.com/\u02dcjason/ 20Newsgroups/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "20NG:",
"sec_num": "1."
},
{
"text": "https://nlp.stanford.edu/ sentiment/index.html.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SST-2:",
"sec_num": "2."
},
{
"text": "https://data.mendeley.com/ datasets/9rw3vkcfy4/2. 4. AGnews: https://github.com/yumeng5/ WeSTClass.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WOS:",
"sec_num": "3."
},
{
"text": "https://www.kaggle.com/ soumikrakshit/yahoo-answers-dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Yahoo:",
"sec_num": "5."
},
{
"text": "https://github.com/yumeng5/ WeSTClass.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Yelp:",
"sec_num": "6."
},
{
"text": "We use ADAM (Kingma and Ba, 2014) with \u03b2 1 = 0.9 and \u03b2 2 = 0.999 as the optimizer in all the datasets. We use the learning rate of 5 \u00d7 10 \u22125 and batch size 32 except 1 \u00d7 10 \u22125 and 16 for Yahoo 8 and Yahoo. We set the maximum number of epochs to 5 in Yahoo 8 and Yahoo and 10 in the other datasets. We use the dropout rate of 0.1 as in (Devlin et al., 2019) . The documents are tokenized using wordpieces and are chopped to spans no longer than 150 tokens on 20NG 15 and 20NG and 256 on other datasets.. Hyper-parameters: For our method, we use \u03bb on = \u03bb off = 1, \u03b4 on = 10 \u22124 , \u03b4 off = 10 \u22123 and \u03b4 y = 0.1 for all the datasets. We then conduct an extensive hyper-parameter search for the baselines: for label smoothing, we search the smoothing parameter from {0.05, 0.1} as in (M\u00fcller et al., 2019) ; for ERL, the penalty weight is chosen from {0.05, 0.1, 0.25, 0.5, 1, 2.5, 5}; for VAT, we search the perturbation size in {10 \u22123 , 10 \u22124 , 10 \u22125 } as in (Jiang et al., 2020) ; for Mixup, we search the interpolation parameter from {0.1, 0.2, 0.3, 0.4} as suggested in (Zhang et al., 2018; Thulasidasan et al., 2019) ; for Manifold-mixup, we search from {0.2, 0.4, 1, 2, 4}. We perform 10 stochastic forward passes for MCDP at test time. For hyperparameter tuning, we run all the methods 5 times and then take the average. The hyper-parameters are selected to get the best ECE on the development set of each dataset. The interpolation of Mixup is performed on the input embeddings obtained from the first layer of the language model; the interpolation of Manifold-mixup is performed on the features obtained from the last layer of the language model.",
"cite_spans": [
{
"start": 335,
"end": 356,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 776,
"end": 797,
"text": "(M\u00fcller et al., 2019)",
"ref_id": "BIBREF22"
},
{
"start": 953,
"end": 973,
"text": "(Jiang et al., 2020)",
"ref_id": "BIBREF11"
},
{
"start": 1067,
"end": 1087,
"text": "(Zhang et al., 2018;",
"ref_id": "BIBREF38"
},
{
"start": 1088,
"end": 1114,
"text": "Thulasidasan et al., 2019)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "B Experiment Details",
"sec_num": null
},
{
"text": "Existing works on out-of-distribution (OOD) detection and misclassification detection (Hendrycks and Gimpel, 2016) use traditional binary classification metrics, e.g., AUPR and AUROC. As we discussed in Section 1 and 2, the output probability of a calibrated model should reflect the true likelihood. However, AUROC and AUPR cannot reflect true model calibration because the model can still achieve high scores even though it has high confidences for misclassified and OOD samples. We argue that it is more reasonable to use the Normalized Bounded Area Under the Calibration Curve (NBAUCC) defined as in Section 4. Table 5 shows an illustrative example. As can be seen, h 1 is better calibrated than h 2 , since h 1 can detect OOD samples under a wide range of threshold (0.15 < \u03c4 < 0.9) while h 2 requires an absurdly large threshold (0.85 < \u03c4 < 0.9). However, if we use the traditional AUPR and AUROC metrics, we will conclude that h 1 is as well calibrated as h 2 since AUPR h 1 = AUPR h 2 = 0.417 and AUROC h 1 = AUROC h 2 = 1. On the other hand, if we use NBAUCC, we will have NBAUCC h 1 1 = 0.845 > NBAUCC h 1 1 = 0.145, or NBAUCC h 1 0.5 = 0.773 > NBAUCC h 1 0.5 = 0 which can reflect the true calibration of the two classifiers.",
"cite_spans": [
{
"start": 86,
"end": 114,
"text": "(Hendrycks and Gimpel, 2016)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 615,
"end": 622,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "C Metrics of Misclassification and Out-of-distribution detection",
"sec_num": null
},
{
"text": "We remark that it is more appropriate to use NBAUCC 0.5 than NBAUCC 1 since a calibrated model should provide low confidences for the misclassified and OOD samples and it is unreasonable to use a large threshold to detect them. Table 6 and 7 report the NBAUCCs of all the methods on misclassification and OOD detection when \u03c4 upper = 0.7 and \u03c4 upper = 1. Table 8 and 9 report the ablation study results of all the methods when \u03c4 upper = 0.7 and \u03c4 upper = 1. Figure 5 and 6 report the parameter study results of all the methods when \u03c4 upper = 0.7 and \u03c4 upper = 1. Table 8 : Ablation study on the 20NG 15 and 20NG datasets. For OOD detection and misclassification detection, we report NBAUCC 1 . We set \u03b4 y = 0.1 and \u03b4 off = 10 \u22123 . Table 9 : Ablation study on the 20NG 15 and 20NG datasets. For OOD detection and misclassification detection, we report NBAUCC 0.7 . We set \u03b4 y = 0.1 and \u03b4 off = 10 \u22123 . Figure 5: Parameter study of \u03b4 on , \u03b4 off and \u03b4 y . We use NBAUCC 1 for OOD and misclassification detection.",
"cite_spans": [],
"ref_spans": [
{
"start": 228,
"end": 235,
"text": "Table 6",
"ref_id": "TABREF9"
},
{
"start": 355,
"end": 362,
"text": "Table 8",
"ref_id": null
},
{
"start": 458,
"end": 466,
"text": "Figure 5",
"ref_id": null
},
{
"start": 563,
"end": 570,
"text": "Table 8",
"ref_id": null
},
{
"start": 731,
"end": 738,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "C Metrics of Misclassification and Out-of-distribution detection",
"sec_num": null
},
{
"text": "We use the 20 Newsgroups dataset from: http:// qwone.com/\u02dcjason/20Newsgroups/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported in part by the National Science Foundation award III-2008334, Amazon Faculty Award, and Google Faculty Award.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
},
{
"text": " Figure 6 : Parameter study of \u03b4 on , \u03b4 off and \u03b4 y . We use NBAUCC 0.7 for OOD and misclassification detection.",
"cite_spans": [],
"ref_spans": [
{
"start": 1,
"end": 9,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Weight uncertainty in neural network",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Blundell",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Cornebise",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Daan",
"middle": [],
"last": "Wierstra",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1613--1622",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. 2015. Weight uncertainty in neural network. In International Con- ference on Machine Learning, pages 1613-1622.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Importance of semantic representation: Dataless classification",
"authors": [
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Lev",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Vivek",
"middle": [],
"last": "Srikumar",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "830--835",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ming-Wei Chang, Lev Ratinov, Dan Roth, and Vivek Srikumar. 2008. Importance of semantic representa- tion: Dataless classification. In Proceedings of the Twenty-Third AAAI Conference on Artificial Intelli- gence, page 830-835.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Fair prediction with disparate impact: A study of bias in recidivism prediction instruments",
"authors": [
{
"first": "Alexandra",
"middle": [],
"last": "Chouldechova",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "5",
"issue": "",
"pages": "153--163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandra Chouldechova. 2017. Fair prediction with disparate impact: A study of bias in recidivism pre- diction instruments. Big data, 5(2):153-163.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning",
"authors": [
{
"first": "Yarin",
"middle": [],
"last": "Gal",
"suffix": ""
},
{
"first": "Zoubin",
"middle": [],
"last": "Ghahramani",
"suffix": ""
}
],
"year": 2016,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1050--1059",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a bayesian approximation: Representing model uncer- tainty in deep learning. In International Conference on Machine Learning, pages 1050-1059.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Deep bayesian active learning with image data",
"authors": [
{
"first": "Yarin",
"middle": [],
"last": "Gal",
"suffix": ""
},
{
"first": "Riashat",
"middle": [],
"last": "Islam",
"suffix": ""
},
{
"first": "Zoubin",
"middle": [],
"last": "Ghahramani",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1183--1192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yarin Gal, Riashat Islam, and Zoubin Ghahramani. 2017. Deep bayesian active learning with image data. In International Conference on Machine Learning, pages 1183-1192.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The relationship between high-dimensional geometry and adversarial examples",
"authors": [
{
"first": "Justin",
"middle": [],
"last": "Gilmer",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Metz",
"suffix": ""
},
{
"first": "Fartash",
"middle": [],
"last": "Faghri",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"S"
],
"last": "Schoenholz",
"suffix": ""
},
{
"first": "Maithra",
"middle": [],
"last": "Raghu",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Wattenberg",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1801.02774"
]
},
"num": null,
"urls": [],
"raw_text": "Justin Gilmer, Luke Metz, Fartash Faghri, Samuel S Schoenholz, Maithra Raghu, Martin Wattenberg, Ian Goodfellow, and G Brain. 2018. The relationship between high-dimensional geometry and adversarial examples. arXiv preprint arXiv:1801.02774.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "On calibration of modern neural networks",
"authors": [
{
"first": "Chuan",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Geoff",
"middle": [],
"last": "Pleiss",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Kilian Q",
"middle": [],
"last": "Weinberger",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1321--1330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Wein- berger. 2017. On calibration of modern neural net- works. In International Conference on Machine Learning, pages 1321-1330.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A deep network with visual text composition behavior",
"authors": [
{
"first": "Hongyu",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "372--377",
"other_ids": {
"DOI": [
"10.18653/v1/P17-2059"
]
},
"num": null,
"urls": [],
"raw_text": "Hongyu Guo. 2017. A deep network with visual text composition behavior. In Proceedings of the 55th Annual Meeting of the Association for Computa- tional Linguistics (Volume 2: Short Papers), pages 372-377, Vancouver, Canada. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A baseline for detecting misclassified and out-of-distribution examples in neural networks",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Hendrycks",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
}
],
"year": 2016,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Hendrycks and Kevin Gimpel. 2016. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In International Con- ference on Learning Representations.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Learning to defense by learning to attack",
"authors": [
{
"first": "Haoming",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Zhehui",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yuyang",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Tuo",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1811.01213"
]
},
"num": null,
"urls": [],
"raw_text": "Haoming Jiang, Zhehui Chen, Yuyang Shi, Bo Dai, and Tuo Zhao. 2018. Learning to defense by learning to attack. arXiv preprint arXiv:1811.01213.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "SMART: Robust and efficient fine-tuning for pretrained natural language models through principled regularized optimization",
"authors": [
{
"first": "Haoming",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Pengcheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Weizhu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Tuo",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2177--2190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haoming Jiang, Pengcheng He, Weizhu Chen, Xi- aodong Liu, Jianfeng Gao, and Tuo Zhao. 2020. SMART: Robust and efficient fine-tuning for pre- trained natural language models through principled regularized optimization. In Proceedings of the 58th Annual Meeting of the Association for Computa- tional Linguistics, pages 2177-2190.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1746--1751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 1746-1751.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Hdltex: Hierarchical deep learning for text classification",
"authors": [
{
"first": "Kamran",
"middle": [],
"last": "Kowsari",
"suffix": ""
},
{
"first": "Donald",
"middle": [
"E"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Mojtaba",
"middle": [],
"last": "Heidarysafa",
"suffix": ""
},
{
"first": "Kiana",
"middle": [
"Jafari"
],
"last": "Meimandi",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"S"
],
"last": "Gerber",
"suffix": ""
},
{
"first": "Laura",
"middle": [
"E"
],
"last": "Barnes",
"suffix": ""
}
],
"year": 2017,
"venue": "IEEE International Conference on Machine Learning and Applications (ICMLA)",
"volume": "",
"issue": "",
"pages": "364--371",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kamran Kowsari, Donald E Brown, Mojtaba Hei- darysafa, Kiana Jafari Meimandi, , Matthew S Ger- ber, and Laura E Barnes. 2017. Hdltex: Hierarchical deep learning for text classification. In IEEE Inter- national Conference on Machine Learning and Ap- plications (ICMLA), pages 364-371.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Trainable calibration measures for neural networks from kernel mean embeddings",
"authors": [
{
"first": "Aviral",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Sunita",
"middle": [],
"last": "Sarawagi",
"suffix": ""
},
{
"first": "Ujjwal",
"middle": [],
"last": "Jain",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "2805--2814",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aviral Kumar, Sunita Sarawagi, and Ujjwal Jain. 2018. Trainable calibration measures for neural networks from kernel mean embeddings. In International Conference on Machine Learning, pages 2805- 2814.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Simple and scalable predictive uncertainty estimation using deep ensembles",
"authors": [
{
"first": "Balaji",
"middle": [],
"last": "Lakshminarayanan",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Pritzel",
"suffix": ""
},
{
"first": "Charles",
"middle": [],
"last": "Blundell",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "6402--6413",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. 2017. Simple and scalable predic- tive uncertainty estimation using deep ensembles. In Advances in Neural Information Processing Systems, pages 6402-6413.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Albert: A lite bert for self-supervised learning of language representations",
"authors": [
{
"first": "Zhenzhong",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Mingda",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. Albert: A lite bert for self-supervised learning of language representations. In International Con- ference on Learning Representations.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A simple unified framework for detecting outof-distribution samples and adversarial attacks",
"authors": [
{
"first": "Kimin",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kibok",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Honglak",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Jinwoo",
"middle": [],
"last": "Shin",
"suffix": ""
}
],
"year": 2018,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "7167--7177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. 2018. A simple unified framework for detecting out- of-distribution samples and adversarial attacks. In Advances in Neural Information Processing Systems, pages 7167-7177.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "RoBERTa: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Multiplicative normalizing flows for variational Bayesian neural networks",
"authors": [
{
"first": "Christos",
"middle": [],
"last": "Louizos",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "2218--2227",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christos Louizos and Max Welling. 2017. Multiplica- tive normalizing flows for variational Bayesian neu- ral networks. In International Conference on Ma- chine Learning, pages 2218-2227.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Virtual adversarial training: a regularization method for supervised and semisupervised learning",
"authors": [
{
"first": "Takeru",
"middle": [],
"last": "Miyato",
"suffix": ""
},
{
"first": "Shin-ichi",
"middle": [],
"last": "Maeda",
"suffix": ""
},
{
"first": "Masanori",
"middle": [],
"last": "Koyama",
"suffix": ""
},
{
"first": "Shin",
"middle": [],
"last": "Ishii",
"suffix": ""
}
],
"year": 2018,
"venue": "IEEE transactions on pattern analysis and machine intelligence",
"volume": "41",
"issue": "",
"pages": "1979--1993",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. 2018. Virtual adversarial training: a regularization method for supervised and semi- supervised learning. IEEE transactions on pat- tern analysis and machine intelligence, 41(8):1979- 1993.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "When does label smoothing help?",
"authors": [
{
"first": "Rafael",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Kornblith",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "4696--4705",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rafael M\u00fcller, Simon Kornblith, and Geoffrey E Hin- ton. 2019. When does label smoothing help? In Advances in Neural Information Processing Systems, pages 4696-4705.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Obtaining well calibrated probabilities using bayesian binning",
"authors": [
{
"first": "Mahdi",
"middle": [
"Pakdaman"
],
"last": "Naeini",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"F"
],
"last": "Cooper",
"suffix": ""
},
{
"first": "Milos",
"middle": [],
"last": "Hauskrecht",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "2901--2907",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mahdi Pakdaman Naeini, Gregory F Cooper, and Milos Hauskrecht. 2015. Obtaining well calibrated proba- bilities using bayesian binning. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial In- telligence, page 2901-2907.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Predicting good probabilities with supervised learning",
"authors": [
{
"first": "Alexandru",
"middle": [],
"last": "Niculescu-Mizil",
"suffix": ""
},
{
"first": "Rich",
"middle": [],
"last": "Caruana",
"suffix": ""
}
],
"year": 2005,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "625--632",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandru Niculescu-Mizil and Rich Caruana. 2005. Predicting good probabilities with supervised learn- ing. In International Conference on Machine Learn- ing, page 625-632.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Regularizing neural networks by penalizing confident output distributions",
"authors": [
{
"first": "Gabriel",
"middle": [],
"last": "Pereyra",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Tucker",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Chorowski",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1701.06548"
]
},
"num": null,
"urls": [],
"raw_text": "Gabriel Pereyra, George Tucker, Jan Chorowski, \u0141ukasz Kaiser, and Geoffrey Hinton. 2017. Regular- izing neural networks by penalizing confident output distributions. arXiv preprint arXiv:1701.06548.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Exploring the limits of transfer learning with a unified text-to-text transformer",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter J",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.10683"
]
},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text trans- former. arXiv preprint arXiv:1910.10683.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Deep active learning for named entity recognition",
"authors": [
{
"first": "Yanyao",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Hyokun",
"middle": [],
"last": "Yun",
"suffix": ""
},
{
"first": "Zachary",
"middle": [
"C"
],
"last": "Lipton",
"suffix": ""
},
{
"first": "Yakov",
"middle": [],
"last": "Kronrod",
"suffix": ""
},
{
"first": "Animashree",
"middle": [],
"last": "Anandkumar",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yanyao Shen, Hyokun Yun, Zachary C. Lipton, Yakov Kronrod, and Animashree Anandkumar. 2018. Deep active learning for named entity recogni- tion. In International Conference on Learning Rep- resentations.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Deep Bayesian active learning for natural language processing: Results of a large-scale empirical study",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Siddhant",
"suffix": ""
},
{
"first": "Zachary",
"middle": [
"C"
],
"last": "Lipton",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2904--2909",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1318"
]
},
"num": null,
"urls": [],
"raw_text": "Aditya Siddhant and Zachary C. Lipton. 2018. Deep Bayesian active learning for natural language pro- cessing: Results of a large-scale empirical study. In Proceedings of the 2018 Conference on Em- pirical Methods in Natural Language Processing, pages 2904-2909, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Deep learning for NLP (without magic)",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Yoshua Bengio, and Christopher D. Manning. 2012. Deep learning for NLP (without magic). In Proceedings of the 50th Annual Meet- ing of the Association for Computational Linguistics: Tutorial Abstracts, page 5, Jeju Island, Korea. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Disentangling adversarial robustness and generalization",
"authors": [
{
"first": "David",
"middle": [],
"last": "Stutz",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Hein",
"suffix": ""
},
{
"first": "Bernt",
"middle": [],
"last": "Schiele",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "6976--6987",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Stutz, Matthias Hein, and Bernt Schiele. 2019. Disentangling adversarial robustness and generaliza- tion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6976-6987.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "On mixup training: Improved calibration and predictive uncertainty for deep neural networks",
"authors": [
{
"first": "Sunil",
"middle": [],
"last": "Thulasidasan",
"suffix": ""
},
{
"first": "Gopinath",
"middle": [],
"last": "Chennupati",
"suffix": ""
},
{
"first": "Jeff",
"middle": [
"A"
],
"last": "Bilmes",
"suffix": ""
},
{
"first": "Tanmoy",
"middle": [],
"last": "Bhattacharya",
"suffix": ""
},
{
"first": "Sarah",
"middle": [],
"last": "Michalak",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "13888--13899",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sunil Thulasidasan, Gopinath Chennupati, Jeff A Bilmes, Tanmoy Bhattacharya, and Sarah Michalak. 2019. On mixup training: Improved calibration and predictive uncertainty for deep neural networks. In Advances in Neural Information Processing Systems, pages 13888-13899.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Manifold mixup: Better representations by interpolating hidden states",
"authors": [
{
"first": "Vikas",
"middle": [],
"last": "Verma",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Lamb",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Beckham",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Najafi",
"suffix": ""
},
{
"first": "Ioannis",
"middle": [],
"last": "Mitliagkas",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Lopez-Paz",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "6438--6447",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, David Lopez-Paz, and Yoshua Bengio. 2019. Manifold mixup: Better rep- resentations by interpolating hidden states. In In- ternational Conference on Machine Learning, pages 6438-6447.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Superglue: A stickier benchmark for general-purpose language understanding systems",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yada",
"middle": [],
"last": "Pruksachatkun",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3266--3280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. Superglue: A stickier benchmark for general-purpose language un- derstanding systems. In Advances in Neural Infor- mation Processing Systems, pages 3266-3280.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop Black-boxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "353--355",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5446"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Fe- lix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis plat- form for natural language understanding. In Pro- ceedings of the 2018 EMNLP Workshop Black- boxNLP: Analyzing and Interpreting Neural Net- works for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Individual comparisons by ranking methods",
"authors": [
{
"first": "Frank",
"middle": [],
"last": "Wilcoxon",
"suffix": ""
}
],
"year": 1992,
"venue": "Breakthroughs in statistics",
"volume": "",
"issue": "",
"pages": "196--202",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank Wilcoxon. 1992. Individual comparisons by ranking methods. In Breakthroughs in statistics, pages 196-202. Springer.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Fast is better than free: Revisiting adversarial training",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Wong",
"suffix": ""
},
{
"first": "Leslie",
"middle": [],
"last": "Rice",
"suffix": ""
},
{
"first": "J Zico",
"middle": [],
"last": "Kolter",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Wong, Leslie Rice, and J Zico Kolter. 2019. Fast is better than free: Revisiting adversarial training. In International Conference on Learning Represen- tations.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "You only propagate once: Accelerating adversarial training via maximal principle",
"authors": [
{
"first": "Dinghuai",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Tianyuan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yiping",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Zhanxing",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Dong",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "227--238",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dinghuai Zhang, Tianyuan Zhang, Yiping Lu, Zhanx- ing Zhu, and Bin Dong. 2019. You only propagate once: Accelerating adversarial training via maximal principle. In Advances in Neural Information Pro- cessing Systems, pages 227-238.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "mixup: Beyond empirical risk minimization",
"authors": [
{
"first": "Hongyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Moustapha",
"middle": [],
"last": "Cisse",
"suffix": ""
},
{
"first": "Yann",
"middle": [
"N"
],
"last": "Dauphin",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Lopez-Paz",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. 2018. mixup: Beyond empir- ical risk minimization. In International Conference on Learning Representations.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Character-level convolutional networks for text classification",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Junbo",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "649--657",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text clas- sification. In Advances in neural information pro- cessing systems, pages 649-657. * :26100 :26",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "o r a W y E e I 4 W w s W H V b A j + + s u b p N d s + D e N 5 u N t v d 0 q 4 6 i C C 3 A J r o E P 7 k A b P I A O 6 A I M p u A Z v I I 3 p 3 B e n H f n Y 9 V a c c q Z c / A H z u c P l n m T m A = = < / l a t e x i t >Mixup sampleInterpolation path",
"num": null
},
"TABREF2": {
"type_str": "table",
"text": "Model ECE Accuracy 20NG 15 20NG WOS 100 WOS Yahoo 8 Yahoo 20NG 15 20NG WOS 100 WOS Yahoo 8 Yahoo BERT 9.24 11.61 6.81 6.74 10.11 10.54 87.42 84.55 81.94 79.40 73.58 71.89 TS 4.42 8.17 3.63 4.43 5.18 4.24 87.42 84.55 81.94 79.40 73.58 71.89 MCDP 6.88 9.17 4.00 3.55 6.54 6.72 87.45 84.55 82.09 79.67 73.67 71.99 LS 4.35 6.15 4.35 4.67 4.89 3.61 87.54 85.02 81.95 79.47 73.66 71.54 ERL 7.16 6.10 3.74 3.35 3.42 2.96 87.67 84.83 81.96 79.48 73.63 72.01 VAT 9.07 11.28 7.27 6.76 10.96 7.92 87.61 85.20 81.65 79.71 73.71 72.08 Mixup 5.98 9.02 4.72 4.21 4.60 5.18 87.49 84.86 81.97 79.51 73.88 71.82 M-mixup 5.04 7.78 6.48 6.68 7.01 6.07 87.40 84.45 81.77 79.57 73.67 72.03 Ours 3.69 4.43 3.24 3.04 3.03 3.42 87.44 84.53 81.59 79.06 73.71 72.17 .51 20.65 24.80 10.75 11.29 5.86 31.84 26.77 58.02 11.62 19.84 M-mixup 2.16 3.16 16.94 19.39 9.09 11.79 2.36 26.08 24.08 51.39 10.08 22.41 Ours 9.10 10.76 26.93 30.80 14.34 17.88 9.69 63.92 35.60 71.13 14.94 29.40",
"content": "<table><tr><td colspan=\"3\">Table 1: ECE and accuracy (in percentage). We report the average performance of 5 random initializations.</td></tr><tr><td/><td>Misclassification Detection</td><td>OOD Detection</td></tr><tr><td>Data ( OOD )</td><td>20NG 15 20NG WOS 100 WOS Yahoo 8 Yahoo</td><td>20NG 15 20NG WOS 100 WOS Yahoo 8 Yahoo 20NG 5 SST-2 WOS 34 AGnews Yahoo 2 Yelp</td></tr><tr><td>BERT</td><td>2.30 2.86 16.53 20.52 7.47 8.43</td><td>2.66 21.65 23.12 49.84 8.35 13.88</td></tr><tr><td>TS</td><td colspan=\"2\">6.08 5.74 21.20 23.76 10.48 12.74 6.62 32.64 28.12 53.32 11.55 20.27</td></tr><tr><td>MCDP</td><td colspan=\"2\">4.37 5.28 20.44 24.16 10.12 10.75 3.99 25.10 27.28 53.52 9.98 15.93</td></tr><tr><td>LS</td><td colspan=\"2\">4.72 6.75 20.37 23.56 11.19 16.15 5.70 41.08 27.12 58.48 12.02 19.81</td></tr><tr><td>ERL</td><td colspan=\"2\">8.54 10.35 20.49 25.13 12.89 15.47 8.78 47.00 27.73 56.67 13.78 23.47</td></tr><tr><td>VAT</td><td colspan=\"2\">2.52 3.36 18.70 19.96 6.54 10.37 2.96 29.62 23.41 54.60 7.42 17.65</td></tr><tr><td>Mixup</td><td>4.99 4</td><td/></tr></table>",
"html": null,
"num": null
},
"TABREF3": {
"type_str": "table",
"text": "Parameter study of \u03b4 on , \u03b4 off and \u03b4 y .",
"content": "<table><tr><td>$FFXUDF\\</td><td>1*15 1* :26100 :26</td><td/><td>(&amp;(</td><td/><td>1*15 1* :26100 :26</td><td>0.5 0LVFODVVLILFDWLRQ1%$8&amp;&amp;</td><td>1*15 1* :26100 :26</td><td>22'1%$8&amp;&amp; 0.5</td><td>1*15 1* :26100 :26</td></tr><tr><td/><td>on</td><td/><td/><td/><td>on</td><td/><td>on</td><td>on</td></tr><tr><td>$FFXUDF\\</td><td>1*15 1* :26100 :26</td><td/><td>(&amp;(</td><td/><td>1*15 1* :26100 :26</td><td>0.5 0LVFODVVLILFDWLRQ1%$8&amp;&amp;</td><td/><td>1*15 1* :26100 :26</td><td>0.5 22'1%$8&amp;&amp;</td><td>1*15 1* :26100 :26</td></tr><tr><td/><td>off</td><td/><td/><td/><td>off</td><td/><td>off</td><td>off</td></tr><tr><td>$FFXUDF\\</td><td/><td>1*15 1* :26100 :26</td><td>(&amp;(</td><td>1*15 1* :26100 :26</td><td/><td>0.5 0LVFODVVLILFDWLRQ1%$8&amp;&amp;</td><td>1*15 1* :26100 :26</td><td>22'1%$8&amp;&amp; 0.5</td><td>1*15 1* :26100 :26</td></tr><tr><td/><td>y</td><td/><td/><td/><td>y</td><td/><td>y</td><td>y</td></tr><tr><td/><td colspan=\"5\">Figure 4: Dataset 20NG 15</td><td/><td/><td>20NG</td></tr><tr><td/><td>Model</td><td>\u03b4 on</td><td colspan=\"6\">Accuracy ECE OOD Mis Accuracy ECE OOD</td><td>Mis</td></tr><tr><td/><td>BERT</td><td>-</td><td/><td>87.42</td><td colspan=\"2\">9.24 2.66 2.30</td><td>84.55</td><td>11.61 21.65 2.86</td></tr><tr><td/><td>w/ R off</td><td>-</td><td/><td>86.48</td><td colspan=\"2\">6.51 6.22 6.09</td><td>83.90</td><td>7.98 55.40 7.12</td></tr><tr><td/><td colspan=\"2\">w/ R on 10 \u22122</td><td/><td>88.73</td><td colspan=\"2\">2.77 7.94 8.08</td><td>85.60</td><td>5.00 35.80 8.66</td></tr><tr><td/><td colspan=\"2\">w/ R on 10 \u22123</td><td/><td>88.29</td><td colspan=\"2\">3.52 7.39 6.83</td><td>85.69</td><td>4.43 38.00 9.01</td></tr><tr><td/><td colspan=\"2\">w/ R on 10 \u22124</td><td/><td>87.93</td><td colspan=\"2\">4.48 5.33 4.83</td><td>85.12</td><td>6.76 43.87 5.95</td></tr><tr><td/><td colspan=\"2\">w/ R on 10 \u22125</td><td/><td>87.61</td><td colspan=\"2\">4.69 3.83 
4.73</td><td>85.39</td><td>6.35 35.70 5.30</td></tr><tr><td/><td colspan=\"2\">w/ Both 10 \u22124</td><td/><td>87.44</td><td colspan=\"2\">3.69 9.69 9.10</td><td>84.53</td><td>4.43 63.92 10.76</td></tr></table>",
"html": null,
"num": null
},
"TABREF5": {
"type_str": "table",
"text": "",
"content": "<table><tr><td>: Dataset statistics and dataset split. '-' denotes</td></tr><tr><td>that this part is not used. The original Yahoo dataset</td></tr><tr><td>contains 140, 000 training samples for each class which</td></tr><tr><td>is too large; we randomly draw 2, 000 and 500 samples</td></tr><tr><td>for each class as our training and development set.</td></tr></table>",
"html": null,
"num": null
},
"TABREF7": {
"type_str": "table",
"text": "NBAUCC vs. AUROC/AUPR",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF8": {
"type_str": "table",
"text": "WOS 34 AGnews Yahoo 2 Yelp BERT 17.86 18.48 35.84 39.08 28.83 29.67 13.52 42.86 40.04 59.42 26.63 38.30 TS 23.74 23.58 38.34 40.76 31.10 32.63 19.74 50.00 42.96 60.70 28.30 42.07 MCDP 23.58 24.58 38.54 41.20 31.43 32.57 16.82 44.96 42.74 60.72 27.47 39.83 LS 21.22 23.24 37.22 40.12 30.93 34.30 18.76 55.24 42.54 63.62 27.87 40.77 ERL 24.04 25.68 37.87 41.17 32.27 33.90 22.10 54.20 42.67 62.10 28.73 43.37 VAT 17.80 17.50 35.90 38.80 27.87 31.13 13.00 49.00 40.30 62.50 25.80 40.63 Mixup 21.42 21.86 37.72 40.92 30.97 32.97 16.70 50.94 42.13 62.98 28.00 44.57 M-mixup 17.86 19.24 36.48 38.33 29.67 31.50 14.06 44.56 41.51 61.30 27.43 44.20 Ours 26.50 28.10 40.93 43.70 33.07 35.13 23.20 66.36 46.73 68.10 29.70 46.43",
"content": "<table><tr><td/><td>Misclassification Detection</td><td>OOD Detection</td></tr><tr><td>Data ( OOD )</td><td>20NG 15 20NG WOS 100 WOS Yahoo 8 Yahoo</td><td>20NG 15 20NG WOS 100 WOS Yahoo 8 Yahoo 20NG 5 SST-2</td></tr></table>",
"html": null,
"num": null
},
"TABREF9": {
"type_str": "table",
"text": "NBAUCC 1 on misclassification detection and OOD detection. We report the average performance of 5 random initializations. WOS 34 AGnews Yahoo 2 Yelp BERT 8.26 8.70 26.95 31.18 18.52 19.46 7.05 33.24 32.97 57.45 18.86 27.68 TS 14.60 13.72 31.73 33.89 22.32 24.61 12.91 43.55 37.84 59.86 22.17 34.03 MCDP 13.14 14.21 31.05 34.74 21.41 22.62 9.85 36.96 36.97 60.06 19.99 29.45 LS 12.45 14.24 30.92 33.51 22.94 27.52 11.63 49.60 36.04 65.28 22.38 33.00 ERL 17.92 20.04 30.83 35.26 25.07 27.34 15.43 55.69 36.69 61.93 24.07 36.74 VAT 8.44 9.66 29.39 30.57 17.23 21.74 7.26 41.35 32.56 60.81 17.64 31.17 Mixup 13.33 11.87 31.71 35.24 22.62 22.80 11.50 43.60 37.09 65.51 22.19 33.66 M-mixup 8.67 9.89 27.33 29.61 20.33 23.05 7.18 37.10 33.57 58.13 20.66 36.42 Ours 18.35 20.18 36.63 40.01 25.94 29.15 16.55 68.72 43.40 72.62 25.03 41.11",
"content": "<table><tr><td/><td>Misclassification Detection</td><td>OOD Detection</td></tr><tr><td>Data ( OOD )</td><td>20NG 15 20NG WOS 100 WOS Yahoo 8 Yahoo</td><td>20NG 15 20NG WOS 100 WOS Yahoo 8 Yahoo 20NG 5 SST-2</td></tr></table>",
"html": null,
"num": null
},
"TABREF10": {
"type_str": "table",
"text": "NBAUCC 0.7 on misclassification detection and OOD detection. We report the average performance of 5 random initializations.",
"content": "<table><tr><td colspan=\"2\">Dataset</td><td/><td>20NG 15</td><td/><td>20NG</td></tr><tr><td>Model</td><td>\u03b4 on</td><td colspan=\"4\">Accuracy ECE OOD Mis Accuracy ECE OOD Mis</td></tr><tr><td>BERT</td><td>-</td><td>87.42</td><td>9.24 13.52 17.86</td><td>84.55</td><td>11.61 42.86 18.48</td></tr><tr><td>w/ R off</td><td>-</td><td>86.48</td><td>6.51 18.10 24.53</td><td>83.90</td><td>7.98 63.73 25.40</td></tr><tr><td colspan=\"2\">w/ R on 10 \u22122</td><td>88.73</td><td>2.77 22.83 27.40</td><td>85.60</td><td>5.00 51.53 27.40</td></tr><tr><td colspan=\"2\">w/ R on 10 \u22123</td><td>88.29</td><td>3.52 21.03 24.13</td><td>85.69</td><td>4.43 53.87 26.30</td></tr><tr><td colspan=\"2\">w/ R on 10 \u22124</td><td>87.93</td><td>4.48 17.43 21.63</td><td>85.12</td><td>6.76 57.47 21.93</td></tr><tr><td colspan=\"2\">w/ R on 10 \u22125</td><td>87.61</td><td>4.69 15.73 21.43</td><td>85.39</td><td>6.35 52.07 21.63</td></tr><tr><td colspan=\"2\">w/ Both 10 \u22124</td><td>87.44</td><td>3.69 23.20 26.50</td><td>84.53</td><td>4.43 66.36 28.10</td></tr></table>",
"html": null,
"num": null
}
}
}
}