|
{ |
|
"paper_id": "2022", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T01:02:38.187814Z" |
|
}, |
|
"title": "Why Knowledge Distillation Amplifies Gender Bias and How to Mitigate -from the Perspective of DistilBERT", |
|
"authors": [ |
|
{

"first": "Jaimeen",

"middle": [],

"last": "Ahn",

"suffix": "",

"affiliation": {

"laboratory": "",

"institution": "Danggeun Market Inc.",

"location": {}

},

"email": ""

},

{

"first": "Hwaran",

"middle": [],

"last": "Lee",

"suffix": "",

"affiliation": {},

"email": ""

},

{

"first": "Jinhwa",

"middle": [],

"last": "Kim",

"suffix": "",

"affiliation": {},

"email": ""

},

{

"first": "Alice",

"middle": [],

"last": "Oh",

"suffix": "",

"affiliation": {

"laboratory": "",

"institution": "KAIST",

"location": {}

},

"email": ""

}
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Knowledge distillation is widely used to transfer the language understanding of a large model to a smaller model. However, after knowledge distillation, it was found that the smaller model is more biased by gender compared to the source large model. This paper studies what causes gender bias to increase after the knowledge distillation process. Moreover, we suggest applying a variant of the mixup on knowledge distillation, which is used to increase generalizability during the distillation process, not for augmentation. By doing so, we can significantly reduce the gender bias amplification after knowledge distillation. We also conduct an experiment on the GLUE benchmark to demonstrate that even if the mixup is applied, it does not have a significant adverse effect on the model's performance.", |
|
"pdf_parse": { |
|
"paper_id": "2022", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Knowledge distillation is widely used to transfer the language understanding of a large model to a smaller model. However, after knowledge distillation, it was found that the smaller model is more biased by gender compared to the source large model. This paper studies what causes gender bias to increase after the knowledge distillation process. Moreover, we suggest applying a variant of the mixup on knowledge distillation, which is used to increase generalizability during the distillation process, not for augmentation. By doing so, we can significantly reduce the gender bias amplification after knowledge distillation. We also conduct an experiment on the GLUE benchmark to demonstrate that even if the mixup is applied, it does not have a significant adverse effect on the model's performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Knowledge distillation (Hinton et al., 2015) is one way to use the knowledge of a large language model under the limited resources by transferring the knowledge of a larger model to a smaller model. Under the supervision of the teacher model, the small model is trained to produce the same result as that of the teacher model. By doing so, small models can leverage the knowledge of larger models (Sanh et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 23, |
|
"end": 44, |
|
"text": "(Hinton et al., 2015)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 397, |
|
"end": 416, |
|
"text": "(Sanh et al., 2019)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To maintain the performance of the model trained by knowledge distillation, the distilled model focuses more on the majority appearing in the data (Hooker et al., 2020) . Recent studies have described that pre-trained language model also results in a more biased representation when distillation proceeds (Silva et al., 2021) . However, only the issue is reported, and what part of knowledge distillation causes an increase in bias is not explored, and no solution is provided.", |
|
"cite_spans": [ |
|
{ |
|
"start": 147, |
|
"end": 168, |
|
"text": "(Hooker et al., 2020)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 305, |
|
"end": 325, |
|
"text": "(Silva et al., 2021)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This paper studies which part of knowledge distillation causes the increase of social bias and how to alleviate the problem in terms of Dis-tilBERT (Sanh et al., 2019) . We first examine what part that contributes to knowledge distillation brings social bias amplification. There is no difference between the distilled and original models except for size and training loss. Thus, we check from two perspectives: (1) the capacity of the model being distilled and (2) the loss used in knowledge distillation. Then we suggest leveraging mixup (Zhang et al., 2018) on the knowledge distillation loss to mitigate this amplification by giving generalizability during the training.", |
|
"cite_spans": [ |
|
{ |
|
"start": 148, |
|
"end": 167, |
|
"text": "(Sanh et al., 2019)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 540, |
|
"end": 560, |
|
"text": "(Zhang et al., 2018)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We conduct the experiments from two measurements: social bias with the Sentence Embedding Test (SEAT) (May et al., 2019 ) and downstream task performance with the GLUE Benchmark . We report that the factors that increase the social bias are the student model's limited capacity and the cross-entropy loss term between the logit distribution of the student model and that of the teacher model. We also demonstrate that applying the mixup to knowledge distillation can reduce this increase without significant effect on the downstream task performance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 102, |
|
"end": 119, |
|
"text": "(May et al., 2019", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our contributions can be summarized as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We reveal the capacity of the model and crossentropy loss in knowledge distillation have a negative effect on social bias.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We suggest mixup as a mitigation technique if it is applied during the knowledge distillation proceeds.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Knowledge distillation is trained so that a student model outputs the same output as a teacher model's for one input. It makes the student model have the problem-solving ability of the large model, even though the student model has a smaller structure.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "DistilBERT, the model this study is mainly about, is trained with three loss terms. First, cross-entropy loss (L ce ) forces the logit distribution between the student model and the teacher model to be similar. Next, the student model learns language understanding itself with masked language modeling loss (L mlm ). Lastly, cosine loss between two model's output (L cos ) makes the direction of output embeddings between the student model and the teacher model closer (Sanh et al., 2019) . In total, the loss term of DistilBERT is as follows:", |
|
"cite_spans": [ |
|
{ |
|
"start": 469, |
|
"end": 488, |
|
"text": "(Sanh et al., 2019)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Loss = L ce + L mlm + L cos .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
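
{

"text": "A minimal PyTorch-style sketch of this combined objective is given below for illustration only; it is not the released DistilBERT training code, and the tensor names, the softmax temperature, and the equal weighting of the three terms are assumptions on our part. \n\nimport torch\nimport torch.nn.functional as F\n\ndef distilbert_loss(student_logits, teacher_logits, mlm_labels, student_hidden, teacher_hidden, temperature=2.0):\n    # L_ce: soft cross-entropy (KL) between the softened student and teacher logit distributions\n    l_ce = F.kl_div(\n        F.log_softmax(student_logits / temperature, dim=-1),\n        F.softmax(teacher_logits / temperature, dim=-1),\n        reduction='batchmean',\n    ) * temperature ** 2\n    # L_mlm: standard masked language modeling loss on the student (labels are -100 at unmasked positions)\n    l_mlm = F.cross_entropy(student_logits.view(-1, student_logits.size(-1)), mlm_labels.view(-1), ignore_index=-100)\n    # L_cos: pull the directions of student and teacher output embeddings together\n    target = torch.ones(student_hidden.size(0) * student_hidden.size(1), device=student_hidden.device)\n    l_cos = F.cosine_embedding_loss(student_hidden.view(-1, student_hidden.size(-1)), teacher_hidden.view(-1, teacher_hidden.size(-1)), target)\n    return l_ce + l_mlm + l_cos",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Background",

"sec_num": "2"

},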
|
{ |
|
"text": "In this paper, we investigate stereotypical associations between male and female gender and attribute pairs, particularly from the perspective of sentence embeddings in knowledge distillation language models. For the attribute pairs, we consider Careers and Family, Math and Arts, and Science and Arts. If there exists a correlation between a certain gender and an attribute, the language model intrinsically and perpetually causes representational harm (Blodgett et al., 2020) through improper preconceptions. Additionally, when the language model is trained for other downstream tasks, such as occupation prediction (De-Arteaga et al., 2019; McGuire et al., 2021) , it may lead to an additional risk of genderstereotyped biases.", |
|
"cite_spans": [ |
|
{ |
|
"start": 618, |
|
"end": 643, |
|
"text": "(De-Arteaga et al., 2019;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 644, |
|
"end": 665, |
|
"text": "McGuire et al., 2021)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bias Statement", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Since knowledge distillation (KD) has become a prevalent technique to efficiently train smaller models, it is vital to figure out to what extent the gender biases are amplified after knowledge distillations and which loss terms exacerbate the biases during the training. Our work firstly conducts the in-depth analysis and then proposes mitigation methods for the gender bias amplification during the KD process.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bias Statement", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We measure the streotypical associations with the Sentence Embedding Association Test (SEAT) (May et al., 2019) 1 . The SEAT uses semantically bleached sentence templates such as \"This is a [attribute-word]\" or \"Here is [gender-word]\". Then the associations between a gender and an attribute are calculated by cosine similarities of sentence encoded embeddings. We leave the detailed equations to calculate the SEAT scores in Appendix B.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bias Statement", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "There are several tests in SEAT. This study focuses on C6, C7, and C8 categories related to gender bias. C6 tests similarity between embedding of Male/Female Names, and Career/Family attribute words. C7 and C8 measure the similarity between embeddings of male and female pronouns and embeddings of Math/Arts related words and Math/Science related words, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bias Statement", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In this section, we conduct in-depth analyses about what brings gender bias amplification after knowledge distillation from the perspective of (1) the student model's capacity and (2) the loss used in the knowledge distillation process.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Gender Bias Amplification after KD", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We use 30% of the corpus constructed by two datasets, the Wikipedia dataset and Bookcorpus (Zhu et al., 2015) dataset that were used to create DistillBERT 2 . The distillation is trained for three epochs using four V100 GPUs. All other settings remain the same following the way Distil-BERT is trained. We list the settings in Appendix D.", |
|
"cite_spans": [ |
|
{ |
|
"start": 91, |
|
"end": 109, |
|
"text": "(Zhu et al., 2015)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "To figure out whether and to what extent the student model's parameter capacity affects the gender biases, we varied the number of layers of the student model (DistilBERT). Note that BERT and DistilBERT have the same architecture parameters except the number of layers. Figure 1 shows that the average SEAT scores are increasing as the number of layers is decreasing. Quantitatively, the number of layers has a strong negative correlation with the SEAT score (Pearson r = \u22120.82), which means that the smaller the capacity, the more severe the gender bias. This result also aligns with the previous study that reveals the models with limited capacity tend to exploit the biases in the dataset (Sanh et al., 2021) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 692, |
|
"end": 711, |
|
"text": "(Sanh et al., 2021)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 270, |
|
"end": 278, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Does the capacity of the student model matter?", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "SEAT Loss Term L mlm + L cos + L ce L mlm + L ce L mlm + L", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Does the capacity of the student model matter?", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "To ascertain how each loss term contributes to the increase in SEAT scores in the knowledge distillation process, we conducted an ablation study against each loss term. As shown in Table 1 , the model trained without the distillation loss L ce results in the lowest average SEAT score (0.670) among the three loss functions. However, this model shows the lowest performance (75.2%) in the GLUE benchmark, whereas the model trained with all loss terms results the best with 76.7%. This implies that the transfer of the teacher's knowledge is helpful for general language understanding tasks while exacerbating gender bias simultaneously. Consequently, it can be concluded that the current knowledge distillation technique itself is also a factor in increasing gender biases.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 181, |
|
"end": 188, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Does the knowledge distillation process matter itself?", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "This section describes how to improve the distillation process to make gender bias not amplified even after knowledge distillation. We found two causes (capacity, loss term) in the previous section. Among them, we decide to modify the loss term because this study is targeting the fixed size model, DistillBERT.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proposed method", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "According to the ablation study in Section 4.3, we ascertain distillation loss (L ce ) hurts gender bias scores in a huge portion. Our intuition to alleviate this amplification is to give supervision as fair as possible during the knowledge distillation is proceeded. One way is to reduce the SEAT score of the teacher model first and give its supervision to the student model. However, most of the existing methods (Liang et al., 2020b; Cheng et al., 2021) for the teacher are designed to work only on the special token ([CLS] ). It is not suitable for knowledge distillation that is trained with logits and embeddings on a token-by-token basis.", |
|
"cite_spans": [ |
|
{ |
|
"start": 416, |
|
"end": 437, |
|
"text": "(Liang et al., 2020b;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 438, |
|
"end": 457, |
|
"text": "Cheng et al., 2021)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 521, |
|
"end": 527, |
|
"text": "([CLS]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proposed method", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "In this paper, we use mixup (Zhang et al., 2018) on knowledge distillation to increase genderrelated generalization ability by using mixup. Specifically, when a gender-related word appears, we use the values generalized by a mixup in the knowledge distillation process. First, we employ the pre-defined gender word pair (D) set (w male : w female ) from the previous work (Bolukbasi et al., 2016) 3 . We next make the teacher's output logit (y) and student's input embedding (x) same or similar between two corresponding gendered terms with \u03bb drawn from Beta(\u03b1, \u03b1) when words in D appear:", |
|
"cite_spans": [ |
|
{ |
|
"start": 28, |
|
"end": 48, |
|
"text": "(Zhang et al., 2018)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 372, |
|
"end": 396, |
|
"text": "(Bolukbasi et al., 2016)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proposed method", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "x = \u03bbx w male + (1 \u2212 \u03bb)x w femal\u0113 y = \u03bby w male + (1 \u2212 \u03bb)y w female ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proposed method", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": ". We train DistilBERT with the mixup applied instances (x,\u0233) for words in D and with the original instances (x, y) for the rest of words. Notice that we do not use mixup as a data augmentation technique but rather employ its idea in the knowledge distillation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proposed method", |
|
"sec_num": "5.1" |
|
}, |
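
{

"text": "A minimal sketch of this interpolation, for illustration only (the helper name, tensor shapes, and the toy usage below are our assumptions, not the authors' released code): \n\nimport torch\n\ndef mix_gendered(x_male, x_female, y_male, y_female, alpha=0.4):\n    # lambda is drawn from Beta(alpha, alpha), as in mixup (Zhang et al., 2018)\n    lam = torch.distributions.Beta(alpha, alpha).sample()\n    x_bar = lam * x_male + (1.0 - lam) * x_female  # mixed student input embedding\n    y_bar = lam * y_male + (1.0 - lam) * y_female  # mixed teacher output logits\n    return x_bar, y_bar\n\n# toy usage with assumed sizes (hidden size 768, vocabulary size 30522)\nx_m, x_f = torch.randn(768), torch.randn(768)\ny_m, y_f = torch.randn(30522), torch.randn(30522)\nx_bar, y_bar = mix_gendered(x_m, x_f, y_m, y_f)\n\nIn the training loop, (x_bar, y_bar) would replace (x, y) only at token positions whose word is in D; all other positions keep their original embeddings and logits.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Proposed method",

"sec_num": "5.1"

},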
|
{ |
|
"text": "We view the mixup as being worked as a regularizer rather than as a learning objective when knowledge distillation takes place (Chuang and Mroueh, 2021; Liang et al., 2020a) . Because the student model learns masked language modeling itself, the generalized gender information by the mixup will act as a regularizer not to be trapped in the information commonly appearing in the pretraining corpus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 127, |
|
"end": 152, |
|
"text": "(Chuang and Mroueh, 2021;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 153, |
|
"end": 173, |
|
"text": "Liang et al., 2020a)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proposed method", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Dataset We only use the same dataset in knowledge distillation used in Section 4. Also, we lever- age GLUE Benchmark to assess model performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental setup", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Baseline We set a baseline as the distilled model from a teacher model that was trained with a debiasing method (Kaneko and Bollegala, 2021) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 112, |
|
"end": 140, |
|
"text": "(Kaneko and Bollegala, 2021)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental setup", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "In Table 2 , we report the scores for each SEAT test and the average. It shows that mixup (Zhang et al., 2018) applied in the distillation process outperforms in terms of the average SEAT score. Compared to the baseline, distilled model under the supervision of the debiased teacher, mixup scores lower in four out of six tests (C6b, C7, C8, C8b). Table 2 also shows the results according to the part where the mixup is applied. We experimented with applying mixup to many different levels of representations in the distillation process: logits, teacher's output embeddings, and student's input embeddings. The proposed method that applies the mixup to inputs (input embeddings) and labels (logits) showed the best results.", |
|
"cite_spans": [ |
|
{ |
|
"start": 90, |
|
"end": 110, |
|
"text": "(Zhang et al., 2018)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 348, |
|
"end": 355, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "We also measure SEAT after applying the teacher's output embeddings. It is because, although not included in the original distillation, the cosine loss for embedding is included in the learning process of DistilBERT. However, Table 2 reports that the mixup on output embeddings increases the SEAT score in most tests and is even higher than the original distillation process.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 226, |
|
"end": 233, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "We also checked the performance on downstream tasks when mixup is applied in knowledge distillation. Table 3 summarizes the results on GLUE benchmark. Compared to the model using the original distillation, the average performance remains the same.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 101, |
|
"end": 108, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "In this paper, we study what causes gender bias amplification in the knowledge distillation process and how to alleviate the amplification by applying mixup in the knowledge distillation process. We confirmed that both the cross-entropy loss between the logits and the model capacity affects the increase of gender bias. Since this study focused on the DistilBERT, we alleviated the problem by modifying the knowledge distillation loss. We reported that the SEAT score decreased when the mixup was applied to the student's input embedding and the teacher's output logit in the distillation method when gender-related words appeared. We also showed that this method does not have a significant adverse effect on downstream tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "There are limitations in this study. First, we used sub-samples of the pre-training corpus. Although we checked that there was no significant differences when trained with a fraction of data in terms of the SEAT score and the GLUE score, the experimental results for the entire data should be explored. Second, we do not yet know why the SEAT score increases when the mixup is applied to the output embedding. The embeddings between the two genders are expected to be close, but we do not yet figure out why the scores are reversed contrary to expectations. We leave these as our future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We check the DistilBERT with 30% of the corpus preserves 98.73% of the performance of DistilBERT with the entire dataset on GLUE.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We list the pairs in Appendix C", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work has been financially supported by KAIST-NAVER Hypercreative AI Center.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgement", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "There were several attempts to apply mixup in knowledge distillation. Du et al. (2021) uses a fair representation created by the medium of the embeddings of two sensitive attributes (the neutralization) in distillation. Students are trained with the neutralized embeddings created in this way so that the student's input is dependent on the teacher's output. MixKD (Liang et al., 2020a) applies mixup during knowledge distillation to get better performance on the GLUE benchmark. Notably, MixKD takes the method of training the teacher model as well as the student model when distillation proceeds. Our suggestion guarantees independence between student and teacher model inputs in this work, as DistilBERT is trained. Moreover, we train a taskagnostic model by applying a mixup to distillation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 70, |
|
"end": 86, |
|
"text": "Du et al. (2021)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Related Work", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Let X and Y be target embeddings, the embedding of sentence template with gender word in our case, and A and B as attribute words. The SEAT basically measures similarity difference between attribute words and target word w. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Sentence Embedding Association Test (SEAT)", |
|
"sec_num": null |
|
}, |
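
{

"text": "For completeness, these quantities can be written in the standard WEAT/SEAT form adopted by May et al. (2019); we restate that standard form here, so the notation may differ slightly from the original appendix: s(w, A, B) = \\mathrm{mean}_{a \\in A} \\cos(w, a) - \\mathrm{mean}_{b \\in B} \\cos(w, b), and the reported effect size is d = \\frac{\\mathrm{mean}_{x \\in X} s(x, A, B) - \\mathrm{mean}_{y \\in Y} s(y, A, B)}{\\mathrm{std}_{w \\in X \\cup Y} s(w, A, B)}, where a larger |d| indicates a stronger stereotypical association between the targets and the attributes.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "B Sentence Embedding Association Test (SEAT)",

"sec_num": null

},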
|
{ |
|
"text": "[[\"woman\", \"man\"], [\"girl\", \"boy\"], [\"she\", \"he\"], [\"mother\", \"father\"], [\"daughter\", \"son\"], [\"gal\", \"guy\"], [\"female\", \"male\"], [\"her\", \"his\"], [\"herself\", \"himself\"], [\"Mary\", \"John\"]] \u2022 learning_rate = 5e-4\u2022 max_grad_norm = 5\u2022 adam_epsilon= 1e-6\u2022 initializer_range= 0.02\u2022 \u03b1 = 0.4", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C Gender Word Pairs", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 max_seq_length = 128\u2022 batch_size = 32\u2022 learning_rate = 2e-5\u2022 n_epochs = 3", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "D.2 GLUE Experiment Hyperparameters", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Language (technology) is power: A critical survey of\" bias", |
|
"authors": [ |
|
{

"first": "Su Lin",

"middle": [],

"last": "Blodgett",

"suffix": ""

},

{

"first": "Solon",

"middle": [],

"last": "Barocas",

"suffix": ""

},

{

"first": "Hal",

"middle": [],

"last": "Daum\u00e9",

"suffix": "III"

},

{

"first": "Hanna",

"middle": [],

"last": "Wallach",

"suffix": ""

}
|
], |
|
"year": 2020, |
|
"venue": "nlp", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2005.14050" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Su Lin Blodgett, Solon Barocas, Hal Daum\u00e9 III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of\" bias\" in nlp. arXiv preprint arXiv:2005.14050.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Man is to computer programmer as woman is to homemaker? debiasing word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Tolga", |
|
"middle": [], |
|
"last": "Bolukbasi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Zou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Venkatesh", |
|
"middle": [], |
|
"last": "Saligrama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Kalai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS'16", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4356--4364", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS'16, page 4356-4364, Red Hook, NY, USA. Curran Associates Inc.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Fairfil: Contrastive neural debiasing method for pretrained text encoders", |
|
"authors": [ |
|
{ |
|
"first": "Pengyu", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Weituo", |
|
"middle": [], |
|
"last": "Hao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Siyang", |
|
"middle": [], |
|
"last": "Yuan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shijing", |
|
"middle": [], |
|
"last": "Si", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lawrence", |
|
"middle": [], |
|
"last": "Carin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pengyu Cheng, Weituo Hao, Siyang Yuan, Shijing Si, and Lawrence Carin. 2021. Fairfil: Contrastive neu- ral debiasing method for pretrained text encoders. In International Conference on Learning Representa- tions.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Fair mixup: Fairness via interpolation", |
|
"authors": [ |
|
{ |
|
"first": "Ching-Yao", |
|
"middle": [], |
|
"last": "Chuang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Youssef", |
|
"middle": [], |
|
"last": "Mroueh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2103.06503" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ching-Yao Chuang and Youssef Mroueh. 2021. Fair mixup: Fairness via interpolation. arXiv preprint arXiv:2103.06503.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Bias in bios: A case study of semantic representation bias in a high-stakes setting", |
|
"authors": [ |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "De-Arteaga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexey", |
|
"middle": [], |
|
"last": "Romanov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hanna", |
|
"middle": [], |
|
"last": "Wallach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jennifer", |
|
"middle": [], |
|
"last": "Chayes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Borgs", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Chouldechova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sahin", |
|
"middle": [], |
|
"last": "Geyik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Krishnaram", |
|
"middle": [], |
|
"last": "Kenthapadi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam Tauman", |
|
"middle": [], |
|
"last": "Kalai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Conference on Fairness, Accountability, and Transparency, FAT* '19", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "120--128", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/3287560.3287572" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maria De-Arteaga, Alexey Romanov, Hanna Wal- lach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, and Adam Tauman Kalai. 2019. Bias in bios: A case study of semantic representation bias in a high-stakes setting. In Proceedings of the Conference on Fair- ness, Accountability, and Transparency, FAT* '19, page 120-128, New York, NY, USA. Association for Computing Machinery.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Ruixiang Tang, Ahmed Awadallah, and Xia Hu. 2021. Fairness via representation neutralization", |
|
"authors": [ |
|
{ |
|
"first": "Mengnan", |
|
"middle": [], |
|
"last": "Du", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Subhabrata", |
|
"middle": [], |
|
"last": "Mukherjee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guanchu", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
},

{

"first": "Ruixiang",

"middle": [],

"last": "Tang",

"suffix": ""

},

{

"first": "Ahmed",

"middle": [],

"last": "Awadallah",

"suffix": ""

},

{

"first": "Xia",

"middle": [],

"last": "Hu",

"suffix": ""

}
|
], |
|
"year": null, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mengnan Du, Subhabrata Mukherjee, Guanchu Wang, Ruixiang Tang, Ahmed Awadallah, and Xia Hu. 2021. Fairness via representation neutralization. Advances in Neural Information Processing Systems, 34.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Distilling the knowledge in a neural network", |
|
"authors": [ |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Hinton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1503.02531" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Characterising bias in compressed models", |
|
"authors": [ |
|
{ |
|
"first": "Sara", |
|
"middle": [], |
|
"last": "Hooker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nyalleng", |
|
"middle": [], |
|
"last": "Moorosi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gregory", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samy", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emily", |
|
"middle": [], |
|
"last": "Denton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2010.03058" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sara Hooker, Nyalleng Moorosi, Gregory Clark, Samy Bengio, and Emily Denton. 2020. Characteris- ing bias in compressed models. arXiv preprint arXiv:2010.03058.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Debiasing pre-trained contextualised embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Masahiro", |
|
"middle": [], |
|
"last": "Kaneko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danushka", |
|
"middle": [], |
|
"last": "Bollegala", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1256--1266", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2021.eacl-main.107" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Masahiro Kaneko and Danushka Bollegala. 2021. De- biasing pre-trained contextualised embeddings. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Lin- guistics: Main Volume, pages 1256-1266, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Mixkd: Towards efficient distillation of large-scale language models", |
|
"authors": [ |
|
{

"first": "Kevin",

"middle": [

"J"

],

"last": "Liang",

"suffix": ""

},

{

"first": "Weituo",

"middle": [],

"last": "Hao",

"suffix": ""

},

{

"first": "Dinghan",

"middle": [],

"last": "Shen",

"suffix": ""

},

{

"first": "Yufan",

"middle": [],

"last": "Zhou",

"suffix": ""

},

{

"first": "Weizhu",

"middle": [],

"last": "Chen",

"suffix": ""

},

{

"first": "Changyou",

"middle": [],

"last": "Chen",

"suffix": ""

},

{

"first": "Lawrence",

"middle": [],

"last": "Carin",

"suffix": ""

}
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2011.00593" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kevin J Liang, Weituo Hao, Dinghan Shen, Yufan Zhou, Weizhu Chen, Changyou Chen, and Lawrence Carin. 2020a. Mixkd: Towards efficient distilla- tion of large-scale language models. arXiv preprint arXiv:2011.00593.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Towards debiasing sentence representations", |
|
"authors": [ |
|
{

"first": "Paul",

"middle": [

"Pu"

],

"last": "Liang",

"suffix": ""

},

{

"first": "Irene",

"middle": [

"Mengze"

],

"last": "Li",

"suffix": ""

},

{

"first": "Emily",

"middle": [],

"last": "Zheng",

"suffix": ""

},

{

"first": "Yao",

"middle": [

"Chong"

],

"last": "Lim",

"suffix": ""

},

{

"first": "Ruslan",

"middle": [],

"last": "Salakhutdinov",

"suffix": ""

},

{

"first": "Louis-Philippe",

"middle": [],

"last": "Morency",

"suffix": ""

}
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5502--5515", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.488" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paul Pu Liang, Irene Mengze Li, Emily Zheng, Yao Chong Lim, Ruslan Salakhutdinov, and Louis- Philippe Morency. 2020b. Towards debiasing sen- tence representations. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 5502-5515, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "On measuring social biases in sentence encoders", |
|
"authors": [ |
|
{ |
|
"first": "Chandler", |
|
"middle": [], |
|
"last": "May", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shikha", |
|
"middle": [], |
|
"last": "Bordia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Bowman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rachel", |
|
"middle": [], |
|
"last": "Rudinger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "622--628", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1063" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On measuring social biases in sentence encoders. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 622-628, Minneapolis, Min- nesota. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Science and math interest and gender stereotypes: The role of educator gender in informal science learning sites", |
|
"authors": [ |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Mcguire", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tina", |
|
"middle": [], |
|
"last": "Monzavi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Hoffman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fidelia", |
|
"middle": [], |
|
"last": "Law", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Irvin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Winterbottom", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Hartstone-Rose", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Rutland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karen", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Burns", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laurence", |
|
"middle": [], |
|
"last": "Butler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc", |
|
"middle": [], |
|
"last": "Drews", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Grace", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Fields", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kelly", |
|
"middle": [ |
|
"Lynn" |
|
], |
|
"last": "Mulvey", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Frontiers in Psychology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3389/fpsyg.2021.503237" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Luke McGuire, Tina Monzavi, Adam J. Hoffman, Fi- delia Law, Matthew J. Irvin, Mark Winterbottom, Adam Hartstone-Rose, Adam Rutland, Karen P. Burns, Laurence Butler, Marc Drews, Grace E. Fields, and Kelly Lynn Mulvey. 2021. Science and math in- terest and gender stereotypes: The role of educator gender in informal science learning sites. Frontiers in Psychology, 12.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter", |
|
"authors": [ |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Sanh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lysandre", |
|
"middle": [], |
|
"last": "Debut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Chaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1910.01108" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Learning from others' mistakes: Avoiding dataset biases without modeling them", |
|
"authors": [ |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Sanh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yonatan", |
|
"middle": [], |
|
"last": "Belinkov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander M", |
|
"middle": [], |
|
"last": "Rush", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Victor Sanh, Thomas Wolf, Yonatan Belinkov, and Alexander M Rush. 2021. Learning from others' mistakes: Avoiding dataset biases without modeling them. In International Conference on Learning Rep- resentations.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Towards a comprehensive understanding and accurate evaluation of societal biases in pre-trained transformers", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Silva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pradyumna", |
|
"middle": [], |
|
"last": "Tambwekar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Gombolay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2383--2389", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2021.naacl-main.189" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew Silva, Pradyumna Tambwekar, and Matthew Gombolay. 2021. Towards a comprehensive under- standing and accurate evaluation of societal biases in pre-trained transformers. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2383-2389, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amanpreet", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julian", |
|
"middle": [], |
|
"last": "Michael", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Hill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Bowman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis plat- form for natural language understanding. In Interna- tional Conference on Learning Representations.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "mixup: Beyond empirical risk minimization", |
|
"authors": [ |
|
{ |
|
"first": "Hongyi", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Moustapha", |
|
"middle": [], |
|
"last": "Cisse", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yann", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Dauphin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Lopez-Paz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. 2018. mixup: Beyond empirical risk minimization. In International Conference on Learning Representations.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books", |
|
"authors": [ |
|
{ |
|
"first": "Yukun", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Kiros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rich", |
|
"middle": [], |
|
"last": "Zemel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raquel", |
|
"middle": [], |
|
"last": "Urtasun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antonio", |
|
"middle": [], |
|
"last": "Torralba", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sanja", |
|
"middle": [], |
|
"last": "Fidler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the IEEE international conference on computer vision", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "19--27", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhut- dinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE in- ternational conference on computer vision, pages 19-27.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "1 https://github.com/W4ngatang/sent-bias/ Number of Layers vs. SEAT score trendline Figure 1: SEAT score by adjusting the number of layers of DistillBERT. The SEAT score and the number of layers in DistillBERT are negatively correlated (Pearson r = \u22120.82).", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"text": "SEAT and GLUE scores obtained by ablation of each part in distillation loss. C6 is tested with the names and C7 and C8 are gender pronouns. Thus, for each test, C6b is tested with a gender pronoun, and C7 and C8 are also tested with names.", |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"text": "Kaneko and Bollegala, 2021) 0.889 0.294 0.509 1.192 0.838 1.292 0.836", |
|
"content": "<table><tr><td>Supervision</td><td>C6</td><td>C6b</td><td>C7</td><td>C7b</td><td>C8</td><td>C8b</td><td>Avg.</td></tr><tr><td colspan=\"8\">Original Teacher Output embeddings Debiased Teacher (Mixup Supervision Original Supervision Input embeddings Logits + Output embeddings Logits + Output embeddings + Input embeddings 1.246 0.049 0.566 1.367 0.407 1.144 0.796 1.236 0.499 0.907 1.428 0.534 1.347 0.992 1.215 0.460 0.761 1.541 0.650 1.420 1.008 1.305 0.049 0.460 1.334 0.465 1.342 0.830 1.310 0.397 1.325 0.989 0.863 1.321 1.034 Logits + Input embeddings (proposed) 1.176 0.062 0.447 1.218 0.310 1.211 0.738</td></tr></table>", |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"text": "The result of applying mixup on distillation process in terms of SEAT score (lower scores indicate less social bias). The lowest score on each tests are marked in bold.", |
|
"content": "<table><tr><td>Task</td><td>Original Teacher</td><td>Mixup in distillation</td></tr><tr><td>MNLI QQP QNLI SST-2 CoLA STS-B MRPC RTE</td><td>80.6 85.9 86.5 90.4 44.8 83.2 82.2 59.9</td><td>80.4 85.3 86.2 90.7 43.6 83.2 81.7 62.1</td></tr><tr><td>Avg.</td><td>76.7</td><td>76.7</td></tr></table>", |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"text": "The performance on the GLUE benchmark after applying the proposed mixup (Logits + Input Embeddings) in the knowledge distillation.", |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |