|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T05:10:23.617510Z" |
|
}, |
|
"title": "Mitigating Biases in Toxic Language Detection through Invariant Rationalization", |
|
"authors": [ |
|
{ |
|
"first": "Yung-Sung", |
|
"middle": [], |
|
"last": "Chuang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Taiwan University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Mingye", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Hongyin", |
|
"middle": [], |
|
"last": "Luo", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Glass", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Hung-Yi", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Taiwan University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Yun-Nung", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Taiwan University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Shang-Wen", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Automatic detection of toxic language plays an essential role in protecting social media users, especially minority groups, from verbal abuse. However, biases toward some attributes, including gender, race, and dialect, exist in most training datasets for toxicity detection. The biases make the learned models unfair and can even exacerbate the marginalization of people. Considering that current debiasing methods for general natural language understanding tasks cannot effectively mitigate the biases in the toxicity detectors, we propose to use invariant rationalization (INVRAT), a game-theoretic framework consisting of a rationale generator and predictors, to rule out the spurious correlation of certain syntactic patterns (e.g., identity mentions, dialect) to toxicity labels. We empirically show that our method yields lower false positive rate in both lexical and dialectal attributes than previous debiasing methods. 1 * * Work is not related to employment at Amazon.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Automatic detection of toxic language plays an essential role in protecting social media users, especially minority groups, from verbal abuse. However, biases toward some attributes, including gender, race, and dialect, exist in most training datasets for toxicity detection. The biases make the learned models unfair and can even exacerbate the marginalization of people. Considering that current debiasing methods for general natural language understanding tasks cannot effectively mitigate the biases in the toxicity detectors, we propose to use invariant rationalization (INVRAT), a game-theoretic framework consisting of a rationale generator and predictors, to rule out the spurious correlation of certain syntactic patterns (e.g., identity mentions, dialect) to toxicity labels. We empirically show that our method yields lower false positive rate in both lexical and dialectal attributes than previous debiasing methods. 1 * * Work is not related to employment at Amazon.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "As social media becomes more and more popular in recent years, many users, especially the minority groups, suffer from verbal abuse and assault. To protect these users from online harassment, it is necessary to develop a tool that can automatically detect the toxic language in social media. In fact, many toxic language detection (TLD) systems have been proposed in these years based on different models, such as support vector machines (SVM) (Gaydhani et al., 2018) , bi-directional long shortterm memory (BiLSTM) (Bojkovsk\u1ef3 and Pikuliak, 2019) , logistic regression (Davidson et al., 2017) and fine-tuning BERT (d'Sa et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 444, |
|
"end": 467, |
|
"text": "(Gaydhani et al., 2018)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 516, |
|
"end": 546, |
|
"text": "(Bojkovsk\u1ef3 and Pikuliak, 2019)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 569, |
|
"end": 592, |
|
"text": "(Davidson et al., 2017)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 609, |
|
"end": 633, |
|
"text": "BERT (d'Sa et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "However, the existing TLD systems exhibit some problematic and discriminatory behaviors (Zhou et al., 2021) . Experiments show that the tweets containing certain surface markers, such as identity terms and expressions in African American English (AAE), are more likely to be classified as hate speech by the current TLD systems (Davidson et al., 2017; Xia et al., 2020) , although some of them are not actually hateful. Such an issue is predominantly attributed to the biases in training datasets for the TLD models; when the models are trained on the biased datasets, these biases are inherited by the models and further exacerbated during the learning process (Zhou et al., 2021) . The biases in TLD systems can make the opinions from the members of minority groups more likely to be removed by the online platform, which may significantly hinder their experience as well as exacerbate the discrimination against them in real life.", |
|
"cite_spans": [ |
|
{ |
|
"start": 88, |
|
"end": 107, |
|
"text": "(Zhou et al., 2021)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 328, |
|
"end": 351, |
|
"text": "(Davidson et al., 2017;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 352, |
|
"end": 369, |
|
"text": "Xia et al., 2020)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 662, |
|
"end": 681, |
|
"text": "(Zhou et al., 2021)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "So far, many debiasing methods have been developed to mitigate biases in learned models, such as data re-balancing (Dixon et al., 2018) , residual fitting (He et al., 2019; Clark et al., 2019) , adversarial training (Xia et al., 2020) and data filtering approach (Bras et al., 2020; Zhou et al., 2021) . While most of these works are successful on other natural language processing (NLP) tasks, their performance on debasing the TLD tasks are unsatisfactory (Zhou et al., 2021) . A possible reason is that the toxicity of language is more subjective and nuanced than general NLP tasks that often have unequivocally correct labels (Zhou et al., 2021) . As current debiasing techniques reduce the biased behaviors of models by correcting the training data or measuring the difficulty of modeling them, which prevents models from capturing spurious and nonlinguistic correlation between input texts and labels, the nuance of toxicity annotation can make such techniques insufficient for the TLD task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 115, |
|
"end": 135, |
|
"text": "(Dixon et al., 2018)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 155, |
|
"end": 172, |
|
"text": "(He et al., 2019;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 173, |
|
"end": 192, |
|
"text": "Clark et al., 2019)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 216, |
|
"end": 234, |
|
"text": "(Xia et al., 2020)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 263, |
|
"end": 282, |
|
"text": "(Bras et al., 2020;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 283, |
|
"end": 301, |
|
"text": "Zhou et al., 2021)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 458, |
|
"end": 477, |
|
"text": "(Zhou et al., 2021)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 630, |
|
"end": 649, |
|
"text": "(Zhou et al., 2021)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we address the challenge by combining the TLD classifier with the selective rationalization method, which is widely used to inter-pret the predictions of complex neural networks. Specifically, we use the framework of Invariant Rationalization (INVRAT) (Chang et al., 2020) to rule out the syntactic and semantic patterns in input texts that are highly but spuriously correlated with the toxicity label, and mask such parts during inference. Experimental results show that INVRAT successfully reduce the lexical and dialectal biases in the TLD model with little compromise on overall performance. Our method avoids superficial correlation at the level of syntax and semantics, and makes the toxicity detector learn to use generalizable features for prediction, thus effectively reducing the impact of dataset biases and yielding a fair TLD model.", |
|
"cite_spans": [ |
|
{ |
|
"start": 267, |
|
"end": 287, |
|
"text": "(Chang et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Debiasing the TLD Task Researchers have proposed a range of debiasing methods for the TLD task. Some of them try to mitigate the biases by processing the training dataset. For example, Dixon et al. (2018) add additional non-toxic examples containing the identity terms highly correlated to toxicity to balance their distribution in the training dataset. Park et al. (2018) use the combination of debiased word2vec and gender swap data augmentation to reduce the gender bias in TLD task. Badjatiya et al. (2019) apply the strategy of replacing the bias sensitive words (BSW) in training data based on multiple knowledge generalization.", |
|
"cite_spans": [ |
|
{ |
|
"start": 185, |
|
"end": 204, |
|
"text": "Dixon et al. (2018)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 487, |
|
"end": 510, |
|
"text": "Badjatiya et al. (2019)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous works", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Some researchers pay more attention to modifying the models and learning less biased features. Xia et al. (2020) use adversarial training to reduce the tendency of the TLD system to misclassify the AAE texts as toxic speech. Mozafari et al. (2020) propose a novel re-weighting mechanism to alleviate the racial bias in English tweets. Vaidya et al. (2020) implement a multi-task learning framework with an attention layer to prevent the model from picking up the spurious correlation between the certain trigger-words and toxicity labels.", |
|
"cite_spans": [ |
|
{ |
|
"start": 95, |
|
"end": 112, |
|
"text": "Xia et al. (2020)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 225, |
|
"end": 247, |
|
"text": "Mozafari et al. (2020)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous works", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Debiasing Other NLP Task There are many methods proposed to mitigate the biases in NLP tasks other than TLD. Clark et al. (2019) train a robust classifier in an ensemble with a bias-only model to learn the more generalizable patterns in training dataset, which are difficult to be learned by the naive bias-only model. Bras et al. (2020) develop AFLITE, an iterative greedy algorithm that can adversarially filter the biases from the training dataset, as well as the framework to support it. Utama et al. (2020) introduce a novel approach of regularizing the confidence of models on the biased examples, which successfully makes the models perform well on both in-distribution and out-of-distribution data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous works", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "3 Invariant Rationalization", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous works", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We propose TLD debiasing based on INVRAT in this paper. The goal of rationalization is to find a subset of inputs that 1) suffices to yield the same outcome 2) is human interpretable. Normally, we would prefer to find rationale in unsupervised ways because the lack of such annotations in the data. A typical formulation to find rationale is as following: Given the input-output pairs (X, Y ) from a text classification dataset, we use a classifier f to predict the labels f (X). To extract the rationale here, an intermediate rationale generator g is introduced to find a rationale Z = g(X), a masked version of X that can be used to predict the output Y, i.e. maximize mutual information between Z and Y . 2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic Formulation for Rationalization", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "max m\u2208S I(Y ; Z) s.t. Z = m X (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic Formulation for Rationalization", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Regularization loss L reg is often applied to keep the rationale sparse and contiguous:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic Formulation for Rationalization", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Lreg = \u03bb1 1 N E [ m 1] \u2212 \u03b1 + \u03bb2E N n=2 |mn \u2212 mn\u22121|", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic Formulation for Rationalization", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "(2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic Formulation for Rationalization", |
|
"sec_num": "3.1" |
|
}, |
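
{

"text": "To make the regularizer in Equation 2 concrete, the following is a minimal PyTorch sketch of the sparsity and continuity terms for a soft rationale mask. It is an illustration written for this description rather than the authors' released code; the function and argument names are our own, and the default values follow the hyperparameters in Appendix B.\n\nimport torch\n\ndef rationale_regularizer(mask, alpha=0.2, lambda1=1.0, lambda2=5.0):\n    # mask: (batch, seq_len) soft rationale mask m with values in [0, 1]\n    # Sparsity: keep the selected fraction of tokens close to alpha.\n    sparsity = torch.abs(mask.mean(dim=1) - alpha).mean()\n    # Continuity: discourage switching the mask on and off between neighboring tokens.\n    continuity = torch.abs(mask[:, 1:] - mask[:, :-1]).sum(dim=1).mean()\n    return lambda1 * sparsity + lambda2 * continuity",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Basic Formulation for Rationalization",

"sec_num": "3.1"

},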
|
{ |
|
"text": "INVRAT (Chang et al., 2020) introduces the idea of environment to rationalization. We assume that the data are collected from different environments with different prior distributions. Among these environments, the predictive power of spurious correlated features will be variant, while the genuine causal explanations always have invariant predictive power to Y . Thus, the desired rationale should satisfy the following invariant constraint:", |
|
"cite_spans": [ |
|
{ |
|
"start": 7, |
|
"end": 27, |
|
"text": "(Chang et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The INVRAT Framework", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "H(Y |Z, E) = H(Y |Z),", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "The INVRAT Framework", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where E is the given environment and H is the cross-entropy between the prediction and the ground truth Y . We can use a three-player framework to find the solution for the above equation: an environment-agnostic predictor f i (Z), an environment-aware predictor f e (Z, E), and a rationale generator g(X). The learning objective of the two predictors are:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The INVRAT Framework", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L * i = min f i (\u2022) E [L (Y ; f i (Z))]", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "The INVRAT Framework", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L * e = min fe(\u2022,\u2022) E [L (Y ; f e (Z, E))]", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "The INVRAT Framework", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "In addition to minimizing the invariant prediction loss L * i and the regularization loss L reg , the other objective of the rationale generator is to minimize the gap between L * i and L * e , that is: min", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The INVRAT Framework", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "g(\u2022) L * i + L reg + \u03bb diff \u2022 ReLU (L * i \u2212 L * e ) ,", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "The INVRAT Framework", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where ReLU is applied to prevent the penalty when L * i has been lower than L * e .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The INVRAT Framework", |
|
"sec_num": "3.2" |
|
}, |
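
{

"text": "To illustrate how the three players interact, the following is a minimal PyTorch sketch of the rationale generator objective in Equation 6. It is not the authors' implementation; the function and variable names are our own, and lambda_diff defaults to the value reported in Appendix B.\n\nimport torch.nn.functional as F\n\ndef generator_loss(logits_inv, logits_env, labels, reg_loss, lambda_diff=10.0):\n    # logits_inv: outputs of the environment-agnostic predictor f_i(Z)\n    # logits_env: outputs of the environment-aware predictor f_e(Z, E)\n    # reg_loss:   the sparsity/continuity regularizer L_reg from Equation 2\n    loss_inv = F.cross_entropy(logits_inv, labels)  # estimates L_i*\n    loss_env = F.cross_entropy(logits_env, labels)  # estimates L_e*\n    # Penalize the generator only while the invariant predictor is still\n    # worse than the environment-aware predictor.\n    gap = F.relu(loss_inv - loss_env)\n    return loss_inv + reg_loss + lambda_diff * gap",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The INVRAT Framework",

"sec_num": "3.2"

},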
|
{ |
|
"text": "We apply INVRAT to debiasing TLD task. For clarity, we seed our following description with a specific TLD dataset where we conducted experiment on, hate speech in Twitter created by Founta et al. (2018) and modified by Zhou et al. 2021, and we will show how to generalize our approach. The dataset contains 32K toxic and 54K non-toxic tweets. Following works done by Zhou et al. 2021, we focus on two types of biases in the dataset: lexical biases and dialectal biases. Lexical biases contain the spurious correlation of toxic language with attributes including Non-offensive minority identity (NOI), Offensive minority identity (OI), and Offensive non-identity (ONI); dialectal biases are relating African-American English (AAE) attribute directly to toxicity. All these attributes are tagged at the document level. We provide more details for the four attributes (NOI, OI, ONI, and AAE) in Appendix A.", |
|
"cite_spans": [ |
|
{ |
|
"start": 182, |
|
"end": 202, |
|
"text": "Founta et al. (2018)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INVRAT for TLD Debiasing 4.1 TLD Dataset and its Biases", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We directly use the lexical and dialectal attributes as the environments in INVRAT for debiasing TLD 3 . Under these different environments, the predictive power of spurious correlation between original input texts X and output labels Y will change. Thus, in INVRAT, the rationale generator will learn to exclude the biased phrases that are spurious correlated to toxicity labels from the rationale Z. On the other hand, the predictive power for the genuine linguistic clues will be generalizable across environments, so the rationale generator attempts to keep them in the rationale Z.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Use INVRAT for Debiasing", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Since there is no human labeling for the attributes in the original dataset, we infer the labels following Zhou et al. (2021) . We match X with TOXTRIG, a handcrafted word bank collected for NOI, OI, and ONI; for dialectal biases, we use the topic model from Blodgett et al. (2016) to classify X into four dialects: AAE, white-aligned English (WAE), Hispanic, and other.", |
|
"cite_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 125, |
|
"text": "Zhou et al. (2021)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Use INVRAT for Debiasing", |
|
"sec_num": "4.2" |
|
}, |
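
{

"text": "As a concrete illustration of how the inferred attribute labels can be mapped to environment ids, the sketch below assigns one of the four lexical environments to a tweet. The function name, the precedence among overlapping attributes, and the word-list arguments standing in for the TOXTRIG word bank are our own assumptions, not the released preprocessing code.\n\ndef lexical_environment(tokens, noi_words, oi_words, oni_words):\n    # tokens: a tokenized tweet; the three word lists stand in for the\n    # NOI / OI / ONI entries of the TOXTRIG word bank.\n    # Returns an environment id: 0 = NOI, 1 = OI, 2 = ONI, 3 = none of the above.\n    token_set = {t.lower() for t in tokens}\n    if token_set & set(oi_words):\n        return 1\n    if token_set & set(oni_words):\n        return 2\n    if token_set & set(noi_words):\n        return 0\n    return 3",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Use INVRAT for Debiasing",

"sec_num": "4.2"

},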
|
{ |
|
"text": "We build two debiasing variants with the obtained attribute labels, INVRAT (lexical) and IN-VRAT (dialect). The former is learned with the compound loss function in Equation 6and four lexical-related environment subsets (NOI, OI, ONI, and none of the above); we train the latter using the same loss function but along with four dialectal environments (AAE, WAE, Hispanic, and other). In both variants, the learned f i (Z) is our environmentagnostic TLD predictor that classifies toxic languages based on generalizable clues. Also, in the INVRAT framework, the environment-aware predictor f e (Z, E) needs to access the environment information. We use an additional embedding layer Emb env to embed the environment id e into a ndimensional vector Emb env (e), where n is the input dimension of the pretrained language model. Word embeddings and Emb env (e) are summed to construct the input representation for f e .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Use INVRAT for Debiasing", |
|
"sec_num": "4.2" |
|
}, |
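
{

"text": "The following PyTorch sketch shows one way to realize the environment-aware predictor f_e(Z, E) described above, summing an environment embedding with the word embeddings of a RoBERTa encoder. It is a minimal illustration under our own naming assumptions, not the authors' code.\n\nimport torch.nn as nn\nfrom transformers import RobertaModel\n\nclass EnvAwarePredictor(nn.Module):\n    def __init__(self, num_envs=4, num_labels=2):\n        super().__init__()\n        self.encoder = RobertaModel.from_pretrained('roberta-base')\n        hidden = self.encoder.config.hidden_size  # n, the input dimension\n        self.env_emb = nn.Embedding(num_envs, hidden)  # Emb_env\n        self.classifier = nn.Linear(hidden, num_labels)\n\n    def forward(self, input_ids, attention_mask, env_ids):\n        word_emb = self.encoder.embeddings.word_embeddings(input_ids)\n        # Add the environment embedding Emb_env(e) to every token position.\n        inputs_embeds = word_emb + self.env_emb(env_ids).unsqueeze(1)\n        out = self.encoder(inputs_embeds=inputs_embeds, attention_mask=attention_mask)\n        return self.classifier(out.last_hidden_state[:, 0])",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Use INVRAT for Debiasing",

"sec_num": "4.2"

},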
|
{ |
|
"text": "We leverage RoBERTa-base (Liu et al., 2019) as the backbone of our TLD models in experiments. F 1 scores and false positive rate (FPR) when specific attributes exist in texts are used to quantify TLD and debiasing performance, respectively. The positive label is \"toxic\" and the negative label is \"non-toxic\" for computing F 1 scores. When evaluating models debiased by INVRAT, we use the following strategy to balance F 1 and FPR, and have a stable performance measurement. We first select all checkpoints with F 1 scores no less than the best TLD performance in dev set by 3%. Then, we pick the checkpoint with the lowest dev set FPR among these selected ones to evaluate on the test set. We describe more training details and used hyperparameters in Appendix B.", |
|
"cite_spans": [ |
|
{ |
|
"start": 25, |
|
"end": 43, |
|
"text": "(Liu et al., 2019)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment Settings", |
|
"sec_num": "5.1" |
|
}, |
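
{

"text": "The checkpoint selection strategy above can be summarized with the following short sketch; the data structure and field names are hypothetical and only illustrate the rule of keeping checkpoints within 3% F1 of the best dev-set score and then minimizing the dev-set FPR.\n\ndef select_checkpoint(checkpoints, f1_margin=3.0):\n    # checkpoints: list of dicts with dev-set 'f1' and 'fpr' scores (in percent).\n    best_f1 = max(c['f1'] for c in checkpoints)\n    # Keep checkpoints whose dev F1 is no more than f1_margin below the best.\n    candidates = [c for c in checkpoints if c['f1'] >= best_f1 - f1_margin]\n    # Among them, the one with the lowest dev FPR is evaluated on the test set.\n    return min(candidates, key=lambda c: c['fpr'])",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiment Settings",

"sec_num": "5.1"

},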
|
{ |
|
"text": "In the left four columns of Zhou et al. (2021) . The bottom section contains scores of our methods. When FPR is lower, the model is less biased by lexical associations for toxicity. We used RoBERTa-base, while RoBERTa-large is used in Zhou et al. (2021) . Thus, our Vanilla F 1 score is slightly lower than that of Zhou et al. (2021) by 0.5%.", |
|
"cite_spans": [ |
|
{ |
|
"start": 28, |
|
"end": 46, |
|
"text": "Zhou et al. (2021)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 235, |
|
"end": 253, |
|
"text": "Zhou et al. (2021)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 315, |
|
"end": 333, |
|
"text": "Zhou et al. (2021)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Quantitative Debiasing Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "bias. In addition to Vanilla, we include lexical removal, a naive baseline that simply removes all words existing in TOXTRIG before training and testing. For our INVRAT (lexical/dialect) model, we can see a significant reduction in the FPR of NOI, OI, and ONI over Vanilla (RoBERTa without debiasing). Our approach also yields consistent and usually more considerable bias reduction in all three attributes, compared to the ensemble and data filtering debiasing baselines discussed in Zhou et al. (2021) , where no approach improves in more than two attributes (e.g., LMIXIN-ONI reduces bias in ONI but not the rest two; DataMaps-Easy improves in NOI and ONI but has similar FPR to Vanilla in OI). The result suggests that INVRAT can effectively remove the spurious correlation between mentioning words in three lexical attributes and toxicity. Moreover, our INVRAT debiasing sacrifices little TLD performance 4 , which can sometimes be a concern for debiasing (e.g., the overall performance of LMIXIN). It is worth noting that the lexical removal baseline does not get as much bias reduction as our method, even inducing more bias in NOI. We surmise that the weak result arises from the limitation of TOXTRIG, since a word bank cannot enumerate all biased words, and there are always other terms that can carry the bias to the model.", |
|
"cite_spans": [ |
|
{ |
|
"start": 485, |
|
"end": 503, |
|
"text": "Zhou et al. (2021)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Quantitative Debiasing Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We summarize the debiasing results for the dialectal attribute in the rightmost column of Table 1 . Compared with the Vanilla model, our method effectively reduces the FPR of AAE, suggesting the consistent benefit of INVRAT in debiasing dialect biases. Although the results from data relabeling (Zhou et al., 2021) and some data filtering approaches are better than INVRAT, these approaches are complementary to INVRAT, and combining them presumably improves debiasing performance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 295, |
|
"end": 314, |
|
"text": "(Zhou et al., 2021)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 90, |
|
"end": 97, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Quantitative Debiasing Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We demonstrate how INVRAT removes biases and keeps detectors focusing on genuine toxic clues by showing examples of generated rationales in Table 2 . Part (a) of Table 2 shows two utterances where both the baseline and our INVRAT debiasing predict the correct labels. We can see that when toxic terms appear in the sentence, the rationale generator will capture them. In part (b), we show three examples where the baseline model incorrectly predicts the sentences as toxic, presumably due to some biased but not toxic words (depend on the context) like #sexlife, Shits, bullshit. However, our rationale generator rules out these words and allows the TLD model to focus on main verbs in the sentences like keeps, blame, have. In part (c), we show some examples that our INVRAT model Other than #kids, what keeps you from the #sexlife you want? \u0236 \u0236 Shits crazy but bet they'll blame us... wait for it \u0236 \u0236 @user @user You don't have to pay for their bullshit read your rights read the law I don't pay fo. . .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 140, |
|
"end": 147, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 162, |
|
"end": 169, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Qualitative Study", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "(c) RT @user: my ex so ugly to me now like...i'll beat that hoe ass \u0236 @user Stop that, it's not your fault a scumbag decided to steal otems which were obviously meant for someone i. . . Table 2 : Examples from the test set with the predictions from vanilla and our models. denotes toxic labels, and \u0236 denotes non-toxic labels. The underlined words are selected as the rationale by our ratinoale generator. fails to generate the true answer, while the baseline model can do it correctly. In these two examples, we observe that our rationale generator remove the offensive words, probably due to the small degree of toxicity, while the annotator marked them as toxic sentences. Part (d) of Table 2 shows another common case that when the sentence can be easily classified as non-toxic, the rationale generator tends not to output any words, and the TLD model will output non-toxic label. It is probably caused by the non-stable predictive power of these non-toxic words (they are variant), so the rationale generator choose to rule them out and keep rationale clean and invariant.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 186, |
|
"end": 193, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 688, |
|
"end": 695, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "\u0236 \u0236", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this paper, we propose to use INVRAT to reduce the biases in the TLD models effectively. By separately using lexical and dialectal attributes as the environments in INVRAT framework, the rationale generator can learn to generate genuine linguistic clues and rule out spurious correlations. Experimental results show that our method can better mitigate both lexical and dialectal biases without sacrificing much overall accuracy. Furthermore, our method does not rely on complicated data filtering or relabeling process, so it can be applied to new datasets without much effort, showing the potential of being applied to practical scenarios.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Real examples of X, Z can be found inTable 2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "To generalize our method for any other attributes or datasets, one can simply map environments to the attributes in consideration for debiasing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "There is some degradation in NOI, which may result from some performance fluctuation in the small dataset and the labeling issues mentioned inZhou et al. (2021). We see the degradation as an opportunity for future dive deep rather than concerns.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We follow Zhou et al. (2021) to define four attributes (NOI, OI, ONI, and AAE) that are often falsy related to toxic language. NOI is mention of minoritized identities (e.g., gay, female, Muslim); OI mentions offensive words about minorities (e.g., queer, n*gga); ONI is mention of swear words (e.g., f*ck, sh*t). NOI should not be correlated with toxic language but is often found in hateful speech towards minorities (Dixon et al., 2018) . Although OI and ONI can be toxic sometimes, they are used to simply convey closeness or emphasize the emotion in specific contexts (Dynel, 2012) . AAE contains dialectal markers that are commonly used among African Americans. Even though AAE simply signals a cultural identity in the US (Green, 2002) , AAE markers are often falsy related to toxicity and cause content by Black authors to mean suppressed more often than non-Black authors (Sap et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 28, |
|
"text": "Zhou et al. (2021)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 419, |
|
"end": 439, |
|
"text": "(Dixon et al., 2018)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 573, |
|
"end": 586, |
|
"text": "(Dynel, 2012)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 729, |
|
"end": 742, |
|
"text": "(Green, 2002)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 881, |
|
"end": 899, |
|
"text": "(Sap et al., 2019)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Bias attributes", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We use a single NVIDIA TESLA V100 (32G) for each experiment. The average runtime of experiments for Vanilla model in Table 1 are 2 hours. The INVRAT model in Table 1 need about 9 hours for a single experiment.The main hyperparameters are listed in Table 3 . More details can be found in our released code. We did not conduct hyperparameter search, but follow all settings in the official implementation of Zhou et al. (2021) 5 . One difference is that because INVRAT framework needs three RoBERTa models to run at the same time, we choose to use RoBERTabase, while Zhou et al. (2021) uses RoBERTa-large. As a result, our F 1 score for the Vanilla model is about 0.5 less than the score in Zhou et al. (2021).5 https://github.com/XuhuiZhou/Toxic_ Debias hyperparameter value optimizer AdamW adam epsilon 1.0 \u00d7 10 \u22128 learning rate 1.0 \u00d7 10 \u22125 training epochs 10 batch size 8 max gradient norm 1.0 weight decay 0.0 sparsity percentage (\u03b1) 0.2 sparsity lambda (\u03bb1) 1.0 continuity lambda (\u03bb2) 5.0 diff lambda (\u03bbdiff) 10.0 Table 3 : The main hyperparameters in the experiment. Sparsity percentage is the value of \u03b1 in L reg mentioned in equation 2; sparsity lambda and continuity lambda are \u03bb 1 and \u03bb 2 in equation 2; diff lambda is \u03bb diff in equation 6.", |
|
"cite_spans": [ |
|
{ |
|
"start": 406, |
|
"end": 426, |
|
"text": "Zhou et al. (2021) 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 565, |
|
"end": 583, |
|
"text": "Zhou et al. (2021)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 117, |
|
"end": 124, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 158, |
|
"end": 165, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 248, |
|
"end": 255, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1017, |
|
"end": 1024, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "B Training Details", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Stereotypical bias removal for hate speech detection task using knowledge-based generalizations", |
|
"authors": [ |
|
{ |
|
"first": "Pinkesh", |
|
"middle": [], |
|
"last": "Badjatiya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manish", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vasudeva", |
|
"middle": [], |
|
"last": "Varma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "The World Wide Web Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "49--59", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pinkesh Badjatiya, Manish Gupta, and Vasudeva Varma. 2019. Stereotypical bias removal for hate speech detection task using knowledge-based gen- eralizations. In The World Wide Web Conference, pages 49-59.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Demographic dialectal variation in social media: A case study of african-american english", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Su Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lisa", |
|
"middle": [], |
|
"last": "Blodgett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brendan O'", |
|
"middle": [], |
|
"last": "Green", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Connor", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1119--1130", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Su Lin Blodgett, Lisa Green, and Brendan O'Connor. 2016. Demographic dialectal variation in social media: A case study of african-american english. In Proceedings of the 2016 Conference on Empiri- cal Methods in Natural Language Processing, pages 1119-1130.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Stufiit at semeval-2019 task 5: Multilingual hate speech detection on twitter with muse and elmo embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Michal", |
|
"middle": [], |
|
"last": "Bojkovsk\u1ef3", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mat\u00fa\u0161", |
|
"middle": [], |
|
"last": "Pikuliak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "464--468", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michal Bojkovsk\u1ef3 and Mat\u00fa\u0161 Pikuliak. 2019. Stufiit at semeval-2019 task 5: Multilingual hate speech de- tection on twitter with muse and elmo embeddings. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 464-468.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Adversarial filters of dataset biases", |
|
"authors": [ |
|
{ |
|
"first": "Swabha", |
|
"middle": [], |
|
"last": "Ronan Le Bras", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chandra", |
|
"middle": [], |
|
"last": "Swayamdipta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rowan", |
|
"middle": [], |
|
"last": "Bhagavatula", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Zellers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Matthew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yejin", |
|
"middle": [], |
|
"last": "Sabharwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1078--1088", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ronan Le Bras, Swabha Swayamdipta, Chandra Bha- gavatula, Rowan Zellers, Matthew E Peters, Ashish Sabharwal, and Yejin Choi. 2020. Adversarial fil- ters of dataset biases. In International Conference on Machine Learning, pages 1078-1088. PMLR.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "2020. Invariant rationalization", |
|
"authors": [ |
|
{ |
|
"first": "Shiyu", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mo", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tommi", |
|
"middle": [], |
|
"last": "Jaakkola", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1448--1458", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shiyu Chang, Yang Zhang, Mo Yu, and Tommi Jaakkola. 2020. Invariant rationalization. In Inter- national Conference on Machine Learning, pages 1448-1458. PMLR.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Don't take the easy way out: Ensemble based methods for avoiding known dataset biases", |
|
"authors": [ |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Yatskar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4060--4073", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher Clark, Mark Yatskar, and Luke Zettle- moyer. 2019. Don't take the easy way out: En- semble based methods for avoiding known dataset biases. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 4060-4073.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Automated hate speech detection and the problem of offensive language", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Davidson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dana", |
|
"middle": [], |
|
"last": "Warmsley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Macy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ingmar", |
|
"middle": [], |
|
"last": "Weber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the International AAAI Conference on Web and Social Media", |
|
"volume": "11", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Proceedings of the International AAAI Conference on Web and Social Media, volume 11(1).", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Measuring and mitigating unintended bias in text classification", |
|
"authors": [ |
|
{ |
|
"first": "Lucas", |
|
"middle": [], |
|
"last": "Dixon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Sorensen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nithum", |
|
"middle": [], |
|
"last": "Thain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucy", |
|
"middle": [], |
|
"last": "Vasserman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "67--73", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2018. Measuring and mitigat- ing unintended bias in text classification. In Pro- ceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pages 67-73.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Bert and fasttext embeddings for automatic detection of toxic speech", |
|
"authors": [ |
|
{ |
|
"first": "Ashwin", |
|
"middle": [], |
|
"last": "Geet D'sa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Irina", |
|
"middle": [], |
|
"last": "Illina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dominique", |
|
"middle": [], |
|
"last": "Fohr", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Organization of Knowledge and Advanced Technologies\"(OCTA)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--5", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashwin Geet d'Sa, Irina Illina, and Dominique Fohr. 2020. Bert and fasttext embeddings for automatic detection of toxic speech. In 2020 International Multi-Conference on:\"Organization of Knowledge and Advanced Technologies\"(OCTA), pages 1-5. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Swearing methodologically: the (im) politeness of expletives in anonymous commentaries on youtube", |
|
"authors": [ |
|
{ |
|
"first": "Marta", |
|
"middle": [], |
|
"last": "Dynel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Journal of English studies", |
|
"volume": "10", |
|
"issue": "", |
|
"pages": "25--50", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marta Dynel. 2012. Swearing methodologically: the (im) politeness of expletives in anonymous commen- taries on youtube. Journal of English studies, 10:25- 50.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Large scale crowdsourcing and characterization of twitter abusive behavior", |
|
"authors": [ |
|
{ |
|
"first": "Antigoni", |
|
"middle": [], |
|
"last": "Founta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Constantinos", |
|
"middle": [], |
|
"last": "Djouvas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Despoina", |
|
"middle": [], |
|
"last": "Chatzakou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilias", |
|
"middle": [], |
|
"last": "Leontiadis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeremy", |
|
"middle": [], |
|
"last": "Blackburn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gianluca", |
|
"middle": [], |
|
"last": "Stringhini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Athena", |
|
"middle": [], |
|
"last": "Vakali", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the International AAAI Conference on Web and Social Media", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Antigoni Founta, Constantinos Djouvas, Despoina Chatzakou, Ilias Leontiadis, Jeremy Blackburn, Gi- anluca Stringhini, Athena Vakali, Michael Siriv- ianos, and Nicolas Kourtellis. 2018. Large scale crowdsourcing and characterization of twitter abu- sive behavior. In Proceedings of the International AAAI Conference on Web and Social Media, volume 12(1).", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Detecting hate speech and offensive language on twitter using machine learning: An n-gram and tfidf based approach", |
|
"authors": [ |
|
{ |
|
"first": "Aditya", |
|
"middle": [], |
|
"last": "Gaydhani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vikrant", |
|
"middle": [], |
|
"last": "Doma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shrikant", |
|
"middle": [], |
|
"last": "Kendre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laxmi", |
|
"middle": [], |
|
"last": "Bhagwat", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1809.08651" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aditya Gaydhani, Vikrant Doma, Shrikant Kendre, and Laxmi Bhagwat. 2018. Detecting hate speech and offensive language on twitter using machine learn- ing: An n-gram and tfidf based approach. arXiv preprint arXiv:1809.08651.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "African American English: a linguistic introduction", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Lisa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Green", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lisa J Green. 2002. African American English: a lin- guistic introduction. Cambridge University Press.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Unlearn dataset bias in natural language inference by fitting the residual", |
|
"authors": [ |
|
{ |
|
"first": "He", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sheng", |
|
"middle": [], |
|
"last": "Zha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haohan", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "EMNLP-IJCNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "He He, Sheng Zha, and Haohan Wang. 2019. Unlearn dataset bias in natural language inference by fitting the residual. EMNLP-IJCNLP 2019, page 132.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Roberta: A robustly optimized bert pretraining approach", |
|
"authors": [ |
|
{ |
|
"first": "Yinhan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingfei", |
|
"middle": [], |
|
"last": "Du", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mandar", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1907.11692" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Hate speech detection and racial bias mitigation in social media based on bert model", |
|
"authors": [ |
|
{ |
|
"first": "Marzieh", |
|
"middle": [], |
|
"last": "Mozafari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Reza", |
|
"middle": [], |
|
"last": "Farahbakhsh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "No\u00ebl", |
|
"middle": [], |
|
"last": "Crespi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "PloS one", |
|
"volume": "15", |
|
"issue": "8", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marzieh Mozafari, Reza Farahbakhsh, and No\u00ebl Crespi. 2020. Hate speech detection and racial bias mitiga- tion in social media based on bert model. PloS one, 15(8):e0237861.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Reducing gender bias in abusive language detection", |
|
"authors": [ |
|
{ |
|
"first": "Ji", |
|
"middle": [], |
|
"last": "Ho Park", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jamin", |
|
"middle": [], |
|
"last": "Shin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pascale", |
|
"middle": [], |
|
"last": "Fung", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2799--2804", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ji Ho Park, Jamin Shin, and Pascale Fung. 2018. Re- ducing gender bias in abusive language detection. In Proceedings of the 2018 Conference on Empiri- cal Methods in Natural Language Processing, pages 2799-2804.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "The risk of racial bias in hate speech detection", |
|
"authors": [ |
|
{ |
|
"first": "Maarten", |
|
"middle": [], |
|
"last": "Sap", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dallas", |
|
"middle": [], |
|
"last": "Card", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Saadia", |
|
"middle": [], |
|
"last": "Gabriel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yejin", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah A", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th annual meeting of the association for computational linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1668--1678", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A Smith. 2019. The risk of racial bias in hate speech detection. In Proceedings of the 57th annual meeting of the association for computational linguistics, pages 1668-1678.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Mind the trade-off: Debiasing nlu models without degrading the in-distribution performance", |
|
"authors": [ |
|
{ |
|
"first": "Nafise Sadat", |
|
"middle": [], |
|
"last": "Prasetya Ajie Utama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iryna", |
|
"middle": [], |
|
"last": "Moosavi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Gurevych", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "8717--8729", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Prasetya Ajie Utama, Nafise Sadat Moosavi, and Iryna Gurevych. 2020. Mind the trade-off: Debiasing nlu models without degrading the in-distribution perfor- mance. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 8717-8729.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Empirical analysis of multi-task learning for reducing identity bias in toxic comment detection", |
|
"authors": [ |
|
{ |
|
"first": "Ameya", |
|
"middle": [], |
|
"last": "Vaidya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Feng", |
|
"middle": [], |
|
"last": "Mai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yue", |
|
"middle": [], |
|
"last": "Ning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the International AAAI Conference on Web and Social Media", |
|
"volume": "14", |
|
"issue": "", |
|
"pages": "683--693", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ameya Vaidya, Feng Mai, and Yue Ning. 2020. Em- pirical analysis of multi-task learning for reducing identity bias in toxic comment detection. In Pro- ceedings of the International AAAI Conference on Web and Social Media, volume 14, pages 683-693.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Demoting racial bias in hate speech detection", |
|
"authors": [ |
|
{ |
|
"first": "Mengzhou", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anjalie", |
|
"middle": [], |
|
"last": "Field", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yulia", |
|
"middle": [], |
|
"last": "Tsvetkov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the Eighth International Workshop on Natural Language Processing for Social Media", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7--14", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mengzhou Xia, Anjalie Field, and Yulia Tsvetkov. 2020. Demoting racial bias in hate speech detec- tion. In Proceedings of the Eighth International Workshop on Natural Language Processing for So- cial Media, pages 7-14.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Challenges in automated debiasing for toxic language detection", |
|
"authors": [ |
|
{ |
|
"first": "Xuhui", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maarten", |
|
"middle": [], |
|
"last": "Sap", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Swabha", |
|
"middle": [], |
|
"last": "Swayamdipta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yejin", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah A", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3143--3155", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xuhui Zhou, Maarten Sap, Swabha Swayamdipta, Yejin Choi, and Noah A Smith. 2021. Challenges in automated debiasing for toxic language detection. In Proceedings of the 16th Conference of the Euro- pean Chapter of the Association for Computational Linguistics: Main Volume, pages 3143-3155.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "(d) A shark washed up in the street after a cyclone in Australia \u0236 \u0236 \u0236" |
|
}, |
|
"TABREF0": { |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td/><td/><td>Test</td><td>NOI</td><td/><td>OI</td><td/><td/><td>ONI</td><td>AAE</td></tr><tr><td/><td/><td>F1 \u2191</td><td>F1 \u2191</td><td>FPR \u2193</td><td>F1 \u2191</td><td>FPR \u2193</td><td>F1 \u2191</td><td>FPR \u2193</td><td>F1 \u2191</td><td>FPR \u2193</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td>51.5</td><td>-</td><td>-</td></tr><tr><td/><td>LMIXIN-AAE</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>92.30.1 16.10.4</td></tr><tr><td>33% train</td><td>Random AFLite DataMaps-Ambig. DataMaps-Hard DataMaps-Easy</td><td colspan=\"8\">92.20.1 89.50.4 91.90.1 90.20.4 11.31.1 98.90.0 85.70.0 97.30.1 68.03.4 91.90.1 16.80.8 9.30.7 98.90.0 83.33.4 97.40.1 67.20.6 92.20.1 16.70.6 92.50.1 89.20.7 7.41.0 98.90.0 85.70.0 97.50.0 64.41.4 92.50.1 16.00.4 92.60.1 89.50.4 6.30.9 98.80.0 85.70.0 97.40.0 62.01.1 92.60.1 13.70.2 91.90.2 86.80.6 5.90.7 98.90.0 83.33.4 97.20.1 60.33.8 91.90.2 19.52.8</td></tr><tr><td colspan=\"2\">Ours (RoBERTa-base)</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>Vanilla</td><td colspan=\"2\">91.70.1 90.10.3</td><td colspan=\"6\">8.40.4 98.60.0 81.03.4 97.00.0 63.41.4 95.90.2 16.91.0</td></tr><tr><td/><td>lexical removal</td><td colspan=\"8\">90.90.0 86.00.7 18.31.5 98.10.1 78.60.0 96.40.0 61.70.2 95.10.1 18.70.6</td></tr><tr><td/><td>InvRat (lexical)</td><td colspan=\"2\">91.00.5 85.51.6</td><td colspan=\"6\">3.40.6 97.51.0 76.23.4 97.20.2 61.11.5 95.00.5 19.61.0</td></tr><tr><td/><td>InvRat (dialect)</td><td colspan=\"2\">91.00.1 85.90.7</td><td colspan=\"6\">3.40.5 97.60.5 71.45.8 97.10.1 57.92.2 93.11.0 14.01.2</td></tr></table>", |
|
"type_str": "table", |
|
"text": "we show the F 1 scores and FPR in the entire dataset and in the NOI, OI, and ONI attributes for measuring lexical Vanilla 92.30.0 89.80.3 10.21.3 98.80.1 85.70.0 97.30.1 64.70.8 92.30.0 16.80.3 LMIXIN-ONI 85.62.5 87.01.1 14.01.5 98.90.0 85.70.0 87.94.5 43.73.1 --LMIXIN-TOXTRIG 86.91.1 85.50.3 11.21.7 97.60.3 71.40.0 90.41.8 44." |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "Evaluation of all debiasing methods on the Founta et al. (2018) test set. We show the mean and s.d. (subscript) of F 1 and FPR across 3 runs. The top two sections contain the scores reported in" |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td/><td colspan=\"3\">Gold Vanilla Ours</td></tr><tr><td>(a)</td><td>\u0236</td><td>\u0236</td><td>\u0236</td></tr><tr><td>(b)</td><td/><td/><td/></tr></table>", |
|
"type_str": "table", |
|
"text": "Oh my god there's a f**king STINKBUG and it's in my ASS @user yes I hear that it's great for a relationship to try and change your partner.." |
|
} |
|
} |
|
} |
|
} |