{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:10:31.381382Z"
},
"title": "Targeted Identity Group Prediction in Hate Speech Corpora",
"authors": [
{
"first": "Pratik",
"middle": [
"S"
],
"last": "Sachdeva",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California",
"location": {
"settlement": "Berkeley"
}
},
"email": "[email protected]"
},
{
"first": "Renata",
"middle": [],
"last": "Barreto",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Claudia",
"middle": [],
"last": "Von Vacano",
"suffix": "",
"affiliation": {
"laboratory": "D-Lab University of California",
"institution": "",
"location": {
"settlement": "Berkeley"
}
},
"email": ""
},
{
"first": "Chris",
"middle": [
"J"
],
"last": "Kennedy",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The past decade has seen an abundance of work seeking to detect, characterize, and measure online hate speech. A related, but less studied problem, is the specification of identity groups targeted by that hate speech. Predictive accuracy on this task can supplement additional analyses beyond hate speech detection, motivating its study. Using the Measuring Hate Speech corpus, which provided annotations for targeted identity groups on roughly 50,000 social media comments, we create neural network models to perform multi-label binary prediction of identity groups targeted by a social media comment. Specifically, we study 8 broad identity groups and 12 identity subgroups within race and gender identity. We find that these networks exhibited good predictive performance, achieving ROC AUCs of greater than 0.9 and PR AUCs of greater than 0.7 on several identity groups. At the same time, we find performance suffered on identity groups less represented in the dataset. We validate model performance on the HateCheck and Gab Hate Corpora, finding that predictive performance generalizes in most settings. We additionally examine the performance of the model on comments targeting multiple identity groups. Lastly, we discuss issues with a standardized conceptualization of a \"target\" in hate speech corpora, and its relation to intersectionality. Our results demonstrate the feasibility of simultaneously detecting a broad range of targeted groups in social media comments, and offer suggestions for future work on modeling and dataset annotation for this task.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "The past decade has seen an abundance of work seeking to detect, characterize, and measure online hate speech. A related, but less studied problem, is the specification of identity groups targeted by that hate speech. Predictive accuracy on this task can supplement additional analyses beyond hate speech detection, motivating its study. Using the Measuring Hate Speech corpus, which provided annotations for targeted identity groups on roughly 50,000 social media comments, we create neural network models to perform multi-label binary prediction of identity groups targeted by a social media comment. Specifically, we study 8 broad identity groups and 12 identity subgroups within race and gender identity. We find that these networks exhibited good predictive performance, achieving ROC AUCs of greater than 0.9 and PR AUCs of greater than 0.7 on several identity groups. At the same time, we find performance suffered on identity groups less represented in the dataset. We validate model performance on the HateCheck and Gab Hate Corpora, finding that predictive performance generalizes in most settings. We additionally examine the performance of the model on comments targeting multiple identity groups. Lastly, we discuss issues with a standardized conceptualization of a \"target\" in hate speech corpora, and its relation to intersectionality. Our results demonstrate the feasibility of simultaneously detecting a broad range of targeted groups in social media comments, and offer suggestions for future work on modeling and dataset annotation for this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The proliferation of hate speech on online platforms continues to be a significant human rights issue, associated with a host a negative consequences (Tsesis, 2002; Wilson, 2017) . Hate speech distinguishes itself from other types of toxic or offensive content in that it specifically targets an individual or group on the basis of their membership in an identity group, such as race, religion, gender, sexual orientation, etc. (Sellars, 2016) . Thus, developing methods that can identify and characterize hate speech, and its targets, is of paramount importance.",
"cite_spans": [
{
"start": 150,
"end": 164,
"text": "(Tsesis, 2002;",
"ref_id": "BIBREF36"
},
{
"start": 165,
"end": 178,
"text": "Wilson, 2017)",
"ref_id": "BIBREF42"
},
{
"start": 428,
"end": 443,
"text": "(Sellars, 2016)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given the scale of online hate speech, much effort has been made toward the development of automated approaches to classify or measure it given raw text (Fortuna and Nunes, 2018; Tontodimamma et al., 2021) . While initial efforts used binary labels, subsequent work has introduced additional labels that more finely characterize or measure hate speech (Kennedy et al., 2020; Kennedy et al., 2022) . These include studies that implicitly specify the targeted identity group, such as labeling speech as racism or sexism (Waseem and Hovy, 2016) .",
"cite_spans": [
{
"start": 153,
"end": 178,
"text": "(Fortuna and Nunes, 2018;",
"ref_id": "BIBREF12"
},
{
"start": 179,
"end": 205,
"text": "Tontodimamma et al., 2021)",
"ref_id": "BIBREF34"
},
{
"start": 352,
"end": 374,
"text": "(Kennedy et al., 2020;",
"ref_id": "BIBREF16"
},
{
"start": 375,
"end": 396,
"text": "Kennedy et al., 2022)",
"ref_id": "BIBREF15"
},
{
"start": 518,
"end": 541,
"text": "(Waseem and Hovy, 2016)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Predicting the identity group targeted by social media content is useful beyond hate speech detection. Such algorithms could identify comments that target groups of interest for secondary analyses. These analyses include evaluating the impacts, such as adverse health outcomes, of social media targeting specific communities . Furthermore, leveraging knowledge of the target identity can better inform interventions or moderation of hateful content (Tekiroglu et al., 2020) . Thus, automated approaches to targeted identity prediction could serve these analyses by streamlining the process of labeling new corpora for study.",
"cite_spans": [
{
"start": 449,
"end": 473,
"text": "(Tekiroglu et al., 2020)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While some efforts have been made to develop algorithms that predict targeted identity groups, they have largely focused on classifying individual vs. group targets (Zampieri et al., 2019) or implicitly characterizing the target (Waseem and Hovy, 2016) . Predictive models capable of identifying a broad range of targeted protected classes have been less studied (Chiril et al., 2022) . Hate speech corpora that include the requisite range of targeted identity annotations have been limited until recently, opening the door to a full examination of this problem (Kennedy et al., 2020; Mathew et al., 2020; Kennedy et al., 2022) .",
"cite_spans": [
{
"start": 165,
"end": 188,
"text": "(Zampieri et al., 2019)",
"ref_id": "BIBREF44"
},
{
"start": 229,
"end": 252,
"text": "(Waseem and Hovy, 2016)",
"ref_id": "BIBREF41"
},
{
"start": 363,
"end": 384,
"text": "(Chiril et al., 2022)",
"ref_id": "BIBREF3"
},
{
"start": 562,
"end": 584,
"text": "(Kennedy et al., 2020;",
"ref_id": "BIBREF16"
},
{
"start": 585,
"end": 605,
"text": "Mathew et al., 2020;",
"ref_id": "BIBREF22"
},
{
"start": 606,
"end": 627,
"text": "Kennedy et al., 2022)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we developed models to predict identity groups targeted by social media comments. Using the Measuring Hate Speech (MHS) corpus (Kennedy et al., 2020) , we trained neural networks to predict 8 identity group and 12 sub-group targets of hate speech. We demonstrated that these models exhibited good predictive performance, validating them within the MHS corpus and on external datasets. Lastly, we examined model performance on comments with multiple targets, finding that performance depended highly on those targets.",
"cite_spans": [
{
"start": 141,
"end": 163,
"text": "(Kennedy et al., 2020)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Hate Speech Detection and Measurement. This work builds on the long line of work investigating automated hate speech detection (Waseem and Hovy, 2016; Waseem, 2016; Del Vigna et al., 2017) . Currently, the state-ofthe-art approaches utilize large-scale transformer models with transfer learning to detect hate speech (Koufakou et al., 2020; Tran et al., 2020) . We use similar approaches in this work.",
"cite_spans": [
{
"start": 127,
"end": 150,
"text": "(Waseem and Hovy, 2016;",
"ref_id": "BIBREF41"
},
{
"start": 151,
"end": 164,
"text": "Waseem, 2016;",
"ref_id": "BIBREF39"
},
{
"start": 165,
"end": 188,
"text": "Del Vigna et al., 2017)",
"ref_id": "BIBREF8"
},
{
"start": 317,
"end": 340,
"text": "(Koufakou et al., 2020;",
"ref_id": "BIBREF18"
},
{
"start": 341,
"end": 359,
"text": "Tran et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Targeted Identity Detection. Most work investigating the identification of identity targets in hate speech has viewed it as a sub-task of hate speech detection (Waseem et al., 2017) . Several works focused on hate speech detection have implicitly considered target identity via labels that contain information about the target of the speech, such as \"racism\", \"sexism\", and others (Kwok and Wang, 2013; Waseem and Hovy, 2016; Indurthi et al., 2019; Grimminger and Klinger, 2021) . Other work has considered hate speech targets in the context of \"single\" or \"group\" targets. Notably, the shared task OffensEval 2019 (Zampieri et al., 2019) included single vs. group target identification, which has been used in subsequent multi-task frameworks (Plaza-del Arco et al., 2021) . Lastly, Mossie and Wang (2020) consider the identification of ethnic groups in Ehtiopian social media comments.",
"cite_spans": [
{
"start": 160,
"end": 181,
"text": "(Waseem et al., 2017)",
"ref_id": "BIBREF40"
},
{
"start": 381,
"end": 402,
"text": "(Kwok and Wang, 2013;",
"ref_id": "BIBREF19"
},
{
"start": 403,
"end": 425,
"text": "Waseem and Hovy, 2016;",
"ref_id": "BIBREF41"
},
{
"start": 426,
"end": 448,
"text": "Indurthi et al., 2019;",
"ref_id": "BIBREF14"
},
{
"start": 449,
"end": 478,
"text": "Grimminger and Klinger, 2021)",
"ref_id": "BIBREF13"
},
{
"start": 615,
"end": 638,
"text": "(Zampieri et al., 2019)",
"ref_id": "BIBREF44"
},
{
"start": 744,
"end": 773,
"text": "(Plaza-del Arco et al., 2021)",
"ref_id": "BIBREF25"
},
{
"start": 784,
"end": 806,
"text": "Mossie and Wang (2020)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Several works have sought to define the notion of \"targeting\" while providing analysis on what groups are targeted (ElSherief et al., 2018; Silva et al., 2016) . These works largely used rules or lexica based approaches for detection. explicitly define a \"target\" and corresponding \"aspects\", while developing neural networks to extract text matching these concepts in comments.",
"cite_spans": [
{
"start": 115,
"end": 139,
"text": "(ElSherief et al., 2018;",
"ref_id": "BIBREF10"
},
{
"start": 140,
"end": 159,
"text": "Silva et al., 2016)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The creation of corpora that provide labels on targeted identity groups have allowed further analysis of targeted identity prediction (Mathew et al., 2020; Kennedy et al., 2020 Kennedy et al., , 2022 . Most relevant to this work is an analysis by Chiril et al. (2022) examining multi-task target identity prediction on a wide range of past corpora. Our study builds on these works by examining the performance on a thorough range of both broad target identity groups and more specific sub-groups.",
"cite_spans": [
{
"start": 134,
"end": 155,
"text": "(Mathew et al., 2020;",
"ref_id": "BIBREF22"
},
{
"start": 156,
"end": 176,
"text": "Kennedy et al., 2020",
"ref_id": "BIBREF16"
},
{
"start": 177,
"end": 199,
"text": "Kennedy et al., , 2022",
"ref_id": "BIBREF15"
},
{
"start": 247,
"end": 267,
"text": "Chiril et al. (2022)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "All code used in this work is available on the hate_measure repository 1 , which contains a codebase of various models applicable to the MHS dataset, and the hate_target repository 2 , which contains the code used for the analyses and figures described in this paper. All datasets were obtained as described by their corresponding entries on the Hate Speech Data website (Vidgen and Derczynski, 2020) .",
"cite_spans": [
{
"start": 371,
"end": 400,
"text": "(Vidgen and Derczynski, 2020)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "We trained and evaluated all models on the Measuring Hate Speech (MHS) corpus created by Kennedy et al. (2020) . We performed additional generalization evaluations on two other corpora: the Hate-Check Corpus (R\u00f6ttger et al., 2021) and Gab Hate Corpus (GHS) (Kennedy et al., 2022) . We chose to train on the MHS corpus because it was the largest dataset that covered a diverse range of platforms.",
"cite_spans": [
{
"start": 89,
"end": 110,
"text": "Kennedy et al. (2020)",
"ref_id": "BIBREF16"
},
{
"start": 208,
"end": 230,
"text": "(R\u00f6ttger et al., 2021)",
"ref_id": "BIBREF26"
},
{
"start": 257,
"end": 279,
"text": "(Kennedy et al., 2022)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "Measuring Hate Speech. The MHS corpus was constructed to facilitate the measurement of hate speech with item response theory. It consists of 50,070 hate speech comments obtained from Reddit, Youtube, and Twitter, labeled by 11,143 annotators. Annotations consisted of 10 survey items spanning a theorized spectrum of hatefulness. Additional annotations, of main interest for this work, included the target of the comment. Specifically, annotators were asked \"Is the [comment] directed at or about any individuals or groups based on...\", with the option to select among the following eight identity groups: race/ethnicity, religion, national origin or citizenship status, gender, sexual orientation, age, disability status, political identity; or \"none of the above\". Annotators could select more than one identity group. We note that the MHS corpus allows target identity annotations to include those that are the subject of supportive speech. Thus, \"target\" within the scope of this dataset can be understood to mean the identity group a comment speaks to, whether it is hateful or supportive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "For each identity group selected (if any), the annotator was prompted to select identity sub-groups. For example, if the annotator indicated a target based on race/ethnicity, they were asked to specify racial/ethnic sub-group identities, including: Black/African American, Hispanic/Latino, Asian, Middle Eastern, Native American or Alaska Native, Pacific Islander, Non-hispanic White, or an \"Other\" category with the option to provide written text. As another example, the possible sub-groups for gender identity included Men, Women, Non-binary, Transgender Men, Transgender Women, or Transgender unspecified (along with an \"Other\" category allowing for annotator specification). See Appendix B for all identity sub-groups.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "HateCheck Corpus. The HateCheck Corpus is comprised of a set of functional tests for hate speech detection models. The samples in Hate-Check are synthetically constructed to allow diagnostic assessment of model performance. These synthetic expressions generally make apparent who the target is, e.g., \"I hate [IDENTITY GROUP]\". Thus, they serve as a useful sanity check for validating the performance of a model. The HateCheck Corpus contains 3,901 comments, of which 3,606 have a labeled target. These targets are specifically labeled as \"gay people\", \"women\", \"disabled people\", \"Muslims\", \"black people\", \"trans people\", and \"immigrants\". To evaluate generalization performance, we recast these labels as follows: \"gay people\"\u2192 Sexual Orientation, \"women\" \u2192 Gender Identity, \"disabled people\" \u2192 Disability, \"Muslims\" \u2192 Religion, \"black people\" \u2192 Race, \"trans people\" \u2192 Gender Identity, and \"immigrants\" \u2192 National Origin.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
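The label recasting described above amounts to a fixed mapping from HateCheck target names onto the broad MHS identity groups. A minimal sketch follows; the MHS-style column names are hypothetical and only illustrate the mapping.

```python
# Hypothetical MHS-style label names; the actual column names in the MHS corpus may differ.
HATECHECK_TO_MHS = {
    "gay people": "target_sexuality",
    "women": "target_gender",
    "disabled people": "target_disability",
    "Muslims": "target_religion",
    "black people": "target_race",
    "trans people": "target_gender",
    "immigrants": "target_origin",
}

def recast_hatecheck_label(hatecheck_target: str) -> str:
    """Map a HateCheck target label onto the corresponding broad MHS identity group."""
    return HATECHECK_TO_MHS[hatecheck_target]
```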
{
"text": "Gab Hate Corpus. The Gab Hate Corpus (GHC) is comprised of 27,665 posts from the social media platform Gab (Kennedy et al., 2022) . Using a hierarchical coding typology, The posts were annotated for \"the presence of hate-based rhetoric.\" The corresponding identity group targets include nationality/regionalism, race/ethnicity, gender identity, religious/spiritual identity, sexual orientation, ideology, political identification, and mental/physical health status. We recast the ideology and political identification labels as a single \"political ideology\" label and map the remaining groups directly onto those of the MHS corpus.",
"cite_spans": [
{
"start": 107,
"end": 129,
"text": "(Kennedy et al., 2022)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "The GHC only includes target identity labels if the comment expressed hate toward those target identities. Since the MHS corpus includes target identity labels for either hateful or supportive speech, we omitted samples in the GHC which lacked target identity labels, resulting in a subcorpus of 7,801 comments. We did this since a model trained on the MHS may predict targets for the GHC that would have no corresponding label, since annotators would not have identified targets if they did not deem the comment hateful.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "We performed minimal preprocessing on each data sample, including normalizing blank space and replacing URLs, phone numbers, and emails with respective tokens. We then passed each comment through a tokenizer corresponding to the base model architecture being trained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "3.2"
},
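As an illustration of this preprocessing step, the sketch below normalizes whitespace and swaps URLs, emails, and phone numbers for placeholder tokens. The regular expressions and token strings are assumptions for illustration; the authors' exact patterns may differ.

```python
import re

# Illustrative patterns and placeholder tokens (assumptions, not the authors' exact choices).
URL_RE = re.compile(r"https?://\S+")
EMAIL_RE = re.compile(r"\S+@\S+\.\S+")
PHONE_RE = re.compile(r"\+?\d[\d\-\s()]{7,}\d")

def preprocess(text: str) -> str:
    text = URL_RE.sub("[URL]", text)        # replace URLs with a token
    text = EMAIL_RE.sub("[EMAIL]", text)    # replace emails with a token
    text = PHONE_RE.sub("[PHONE]", text)    # replace phone numbers with a token
    return " ".join(text.split())           # normalize blank space

print(preprocess("Call  555-123-4567 or visit https://example.com now!"))
# -> "Call [PHONE] or visit [URL] now!"
```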
{
"text": "We formulated the task of predicting targeted identities as a multi-label binary prediction. However, each comment was annotated by more than one annotator. Annotators expressed moderate agreement on identifying the targeted groups, with Krippendorff's alphas ranging from 0.6 \u2212 0.75 (see Appendix C). We used soft labeling for training, where the proportion of annotators identifying an identity group as a target served as the \"label\". When calculating evaluation metrics, we only used binary labels by majority voting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "3.2"
},
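A minimal sketch of this labeling scheme, assuming annotations are stored in a long-format table with one row per (comment, annotator) pair and hypothetical binary columns per identity group: soft training labels are per-comment annotator proportions, and evaluation labels come from a majority vote.

```python
import pandas as pd

# Hypothetical long-format annotations: one row per (comment, annotator) rating.
annotations = pd.DataFrame({
    "comment_id":    [1, 1, 1, 2, 2],
    "target_race":   [1, 1, 0, 0, 0],
    "target_gender": [0, 1, 0, 1, 1],
})

# Soft training labels: proportion of annotators marking each group as targeted.
soft_labels = annotations.groupby("comment_id").mean()

# Hard evaluation labels: majority vote over annotators.
hard_labels = (soft_labels >= 0.5).astype(int)

print(soft_labels, hard_labels, sep="\n")
```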
{
"text": "Following Kennedy et al. (2020) , we removed annotators according to two quality checks revolving around the infit mean-square statistic (Linacre et al., 2002) , and satisfactory identification of target identities. Filtering annotators according to these quality checks resulted in 8,472 annotators remaining, with 39,565 accompanying comments.",
"cite_spans": [
{
"start": 10,
"end": 31,
"text": "Kennedy et al. (2020)",
"ref_id": "BIBREF16"
},
{
"start": 137,
"end": 159,
"text": "(Linacre et al., 2002)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "3.2"
},
{
"text": "We tested various pre-trained transformer architectures in predicting the multi-label binary outcome. Specifically, we used Universal Sentence Encoder (Cer et al., 2018) , BERT (Devlin et al., 2018) , and RoBERTa (Liu et al., 2019) as base models. We Error bars denote the standard deviation across the test folds. a. Precision, recall, and F1 score on test set data according to a 0.5 threshold, for each target group identity. b. ROC and PR AUC on test set data. Black lines denote the incidence rate (proportion of positive labels) of the corresponding target identity group. Identity groups are sorted in order of decreasing incidence rate.",
"cite_spans": [
{
"start": 151,
"end": 169,
"text": "(Cer et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 177,
"end": 198,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF9"
},
{
"start": 213,
"end": 231,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "3.3"
},
{
"text": "stacked a feedforward layer on top of the model embeddings, and then placed M binary output layers, where M is the number of output groups under consideration. We applied dropout to the feedforward layer, with the specific rate chosen as a hyperparameter. We used pre-trained models obtained from HuggingFace (Wolf et al., 2020) .",
"cite_spans": [
{
"start": 309,
"end": 328,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "3.3"
},
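A minimal PyTorch-style sketch of the architecture described above: a pre-trained transformer encoder, a dropout-regularized dense layer, and one sigmoid output per identity group. This is an illustration under assumptions, not the authors' hate_measure implementation, which may differ in framework and detail (e.g., it may use separate output layers per group).

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class TargetIdentityClassifier(nn.Module):
    """Pre-trained transformer encoder with a dense head and M binary (sigmoid) outputs."""

    def __init__(self, base_name="roberta-large", hidden_size=256,
                 n_groups=8, dropout=0.1):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(base_name)
        dim = self.encoder.config.hidden_size
        self.dense = nn.Linear(dim, hidden_size)
        self.dropout = nn.Dropout(dropout)
        self.out = nn.Linear(hidden_size, n_groups)   # one logit per identity group

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        pooled = hidden[:, 0]                          # first (<s>/[CLS]) token embedding
        feats = self.dropout(torch.relu(self.dense(pooled)))
        return torch.sigmoid(self.out(feats))          # per-group target probabilities
```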
{
"text": "We considered a variety of hyperparameter configurations when training models, varying the size of the dense layer, the batch size, and the dropout rate. The full set of configurations is listed in Appendix A. We used a validation set to determine the number of epochs to train on, as described below. We additionally weighted each sample by the square root of the number of annotators. Lastly, we used cross-entropy as the loss function for each output, and used the sum of individual losses as the loss for the entire network. We performed 5-fold cross validation to train and evaluate models. After shuffling the data across samples, we split the dataset into 5 folds. For each architecture, we trained 5 models, each using 4 folds for training and the remaining fold for evaluation. Each training fold was further split into training and validation sets. We then trained the model using the training set data with early stopping on the validation loss. When validation performance decreased past epoch E, we halted training, and retrained the model on the entire training fold for E epochs. We then evaluated the model performance on the test fold. Model evaluation metrics were reported across the 5 test folds. For out-of-corpus generalization tasks, we applied a model trained on the entire dataset, using the average number of epochs across folds during cross-validation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Procedure",
"sec_num": "3.4"
},
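The cross-validation loop described above can be sketched as follows. The model interface (build_model, fit_early_stopping, fit, evaluate) is hypothetical and stands in for whatever training API is used; only the fold structure, the square-root-of-annotators sample weights, and the early-stop-then-retrain logic mirror the procedure in the text.

```python
import numpy as np
from sklearn.model_selection import KFold, train_test_split

def cross_validate(X, Y, n_annotators, build_model, max_epochs=20, seed=0):
    """5-fold sketch: early-stop on a validation split to pick the epoch count E,
    then retrain on the full training fold for E epochs and score the test fold."""
    weights = np.sqrt(n_annotators)                     # weight samples by sqrt of annotator count
    kf = KFold(n_splits=5, shuffle=True, random_state=seed)
    scores = []
    for train_idx, test_idx in kf.split(X):
        tr, val = train_test_split(train_idx, test_size=0.1, random_state=seed)
        model = build_model()
        best_epoch = model.fit_early_stopping(          # hypothetical helper: returns the epoch
            X[tr], Y[tr], sample_weight=weights[tr],    # at which validation loss stops improving
            validation_data=(X[val], Y[val]), max_epochs=max_epochs,
        )
        model = build_model()                           # retrain from scratch on the full fold
        model.fit(X[train_idx], Y[train_idx],
                  sample_weight=weights[train_idx], epochs=best_epoch)
        scores.append(model.evaluate(X[test_idx], Y[test_idx]))
    return scores
```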
{
"text": "Since most labels we considered were imbalanced, we evaluated an array of complementary metrics. As is commonly done, we focused on a set of threshold-dependent metrics (precision, recall, F1 score) and threshold-agnostic metrics (ROC AUC and PR AUC) in the main text. We report two additional metrics-the accuracy over chance and log-odds difference-in the Appendix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "3.5"
},
{
"text": "We used traditional threshold-dependent metrics capturing false positive/false negative rates, including the precision, recall, and F1 score. We calculated these metrics using predictions at a threshold of 0.5, unless otherwise specified. We supplement the traditional metrics with threshold-agnostic metrics, including the area under the receiver operater characteristic curve (ROC AUC), and the area under the precision-recall curve (PR AUC). Importantly, we use the PR AUC in addition to ROC AUC as it may be more informative in imbalanced datasets (Davis and Goadrich, 2006) . We used macro-averaging to summarize a metric across labels. This process consisted of weighting each label's performance metric by their incidence rate when calculating an overall average. We considered two additional metrics: accu-racy over chance and the log-odds difference. For brevity, we describe them here, but report their values in Appendix A. We considered accuracy divided by chance performance in order to confirm that models did in fact generalize beyond that of a naive classifier which could artificially achieve high accuracy in imbalanced settings. In highly imbalanced settings (i.e., fewer than 1% of the labels in the positive class), accuracy over chance may not sufficiently capture the performance of a predictive model. This stems from the difficulty in improving performance in highly accurate regimes (e.g., it is more difficult to improve from 99% to 99.5% than 90% to 90.5% accuracy). Thus, we additionally turn to the log-odds difference:",
"cite_spans": [
{
"start": 552,
"end": 578,
"text": "(Davis and Goadrich, 2006)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "3.5"
},
{
"text": "LOD = log a 1 \u2212 a \u2212 log b 1 \u2212 b (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "3.5"
},
{
"text": "where a is the test set accuracy and b is the baseline accuracy (e.g., chance). The log-odds difference more effectively weights the difficulty in achieving performance gains when the dataset is heavily imbalanced (e.g., the second term is very large).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "3.5"
},
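A direct implementation of Eq. (1), illustrating why gains near ceiling count for more: a gain from 0.99 to 0.995 yields a larger log-odds difference than a gain from 0.90 to 0.905.

```python
import math

def log_odds_difference(accuracy: float, baseline: float) -> float:
    """Eq. (1): difference in log-odds between test accuracy a and baseline accuracy b."""
    return math.log(accuracy / (1 - accuracy)) - math.log(baseline / (1 - baseline))

print(log_odds_difference(0.995, 0.99))   # ~0.70
print(log_odds_difference(0.905, 0.90))   # ~0.06
```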
{
"text": "Our main goal was the multi-label binary prediction of target identity groups. We first trained and evaluated models to predict the targeting of the broad identity groups. We repeated these experiments, but on identity sub-group predictions. We then evaluated the performance of the model on two additional datasets: the HateCheck and Gab Hate Corpora. Lastly, we evaluated the performance of the model on samples which had multiple targets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "We first considered the task of predicting the identity group(s) targeted by a comment. We constructed a multi-label binary prediction task, with the binary outcomes corresponding to gender, race/ethnicity, sexual orientation, religion, national origin, politics, disability, and age (ordered in decreasing incidence rate). We then trained a variety of transformer-based neural networks to predict the targeting of each identity group in parallel. Each model consisted of a base network (pre-trained transformer model) stacked with a dense layer mapping onto the 8 identity groups, with variations on the hyperparameter configuration and data preparation. The full set of experiments and architectures, along with their performance, is listed in Appendix A. For brevity, we show results using a RoBERTa-Large base network with soft labels and training samples weighted by number of annotators (see Methods), which exhibited the best performance of the models we considered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Targeted Identity Group Prediction",
"sec_num": "4.1"
},
{
"text": "We found that the model generally excelled at predicting the target of the comment, with performance varying according to the incidence rate of the label. We first evaluated model performance using threshold-dependent metrics such as precision, recall, and the F1 score ( Fig. 1a) . At a threshold of 0.5, the model achieved F1 scores from 0.7 \u2212 0.85 for the gender, race, sexual orientation, and religion labels. For national origin, politics, disability, and age, the F1 score decreased. This likely corresponds to the decrease in incidence rate for these labels ( Fig. 1b: black lines) . Additionally, precision generally exceeded recall, indicating that the model generally suffered from false negatives more often than false positives. This implies that the model could fail to identify comments which targeted identity groups, particularly for the national origin and political ideology labels.",
"cite_spans": [],
"ref_spans": [
{
"start": 272,
"end": 280,
"text": "Fig. 1a)",
"ref_id": "FIGREF0"
},
{
"start": 567,
"end": 588,
"text": "Fig. 1b: black lines)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Targeted Identity Group Prediction",
"sec_num": "4.1"
},
{
"text": "We examined the threshold-agnostic labels-ROC AUC and PR AUC-similarly finding that they indicated high predictive accuracy (Fig. 1b) . The ROC AUC values for all identity groups were above 0.90. Meanwhile, PR AUC values were above 0.80 for the gender, race, sexual orientation, and religion labels, above 0.60 for the politics and disability labels, and below 0.30 for age. The performance of the PR AUC roughly tracked with the incidence rate ( Fig. 1b) , as we might expect. We note that the PR AUC may be a better indicator of performance than the ROC AUC due to the imabalanced nature of the dataset (Davis and Goadrich, 2006) . Together, these results demonstrate that the model can simultaneously predict several targeted identity groups. However, this performance suffers on identity groups that are less represented in the dataset (e.g., age and disability).",
"cite_spans": [
{
"start": 605,
"end": 631,
"text": "(Davis and Goadrich, 2006)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 124,
"end": 133,
"text": "(Fig. 1b)",
"ref_id": "FIGREF0"
},
{
"start": 447,
"end": 455,
"text": "Fig. 1b)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Targeted Identity Group Prediction",
"sec_num": "4.1"
},
{
"text": "We next considered the prediction of specific identity sub-groups. For example, secondary analyses on social media comments may be interested in comments targeting a specific gender identity (e.g., comments targeting women). To this end, we evaluated the performance of a similar taskmulti-label binary prediction-but the identity subgroups. We specifically focus on racial/ethnic iden- Figure 2 : Model performance on identity sub-groups varies strongly across sub-groups. The performance on target sub-group identity prediction across test folds of the MHS corpus as quantified by threshold-dependent and threshold-agnostic metrics. a-b. Precision, recall, and F1 score on the test set data according to a 0.5 threshold (a) and ROC/PR AUCs (b) for the racial sub-groups. c-d. Same as top row, but for the gender identity groups. Black lines denote the incidence rate (number of positive labels) of the corresponding target identity group. Identity groups are sorted in order of decreasing incidence rate.",
"cite_spans": [],
"ref_spans": [
{
"start": 387,
"end": 395,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Targeted Identity Sub-Group Prediction",
"sec_num": "4.2"
},
{
"text": "tity sub-groups (Black, White, Latinx, Asian, Middle Eastern, Pacific Islander, Native American, or some other group; listed in decreasing order of incidence rate) and gender identity sub-groups (women, men, non-binary; listed in decreasing order of incidence rate) because these groups were the most well-represented in the corpus. Within the gender identity sub-group task, we added an additional transgender label. As in the case of the broader identity groups, we found that the best performing model was a network with a RoBERTa-Large base with soft labels and weighted samples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Targeted Identity Sub-Group Prediction",
"sec_num": "4.2"
},
{
"text": "We found that the best performing model exhibited high predictive performance on some racial identities (Fig. 2) . However, predictive performance was generally lower than that of the group identity prediction. We first evaluated thresholddependent metrics, finding that the model exhibited the best performance on Black-targeting speech, a median F1 score of 0.72. Similar to the target identity models, precision generally exceeded that of recall, implying the presence of false negatives. These discrepancies were most strongly observed in the racial groups which had the lowest incidence rate, including Middle Eastern, Pacific Islander, Native American, and the Other category (Fig. 2b: black lines) . Among the threshold-agnostic metrics, ROC AUC generally indicated superior predictive performance, though this may be a product of label imbalance (Davis and Goadrich, 2006) . PR AUC generally tracked with the F1 score (and the incidence rate). A notable exception is Asian identity, which exhibited higher PR AUC than Latinx identity, despite having a lower indicidence rate.",
"cite_spans": [
{
"start": 855,
"end": 881,
"text": "(Davis and Goadrich, 2006)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 104,
"end": 112,
"text": "(Fig. 2)",
"ref_id": null
},
{
"start": 682,
"end": 705,
"text": "(Fig. 2b: black lines)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Targeted Identity Sub-Group Prediction",
"sec_num": "4.2"
},
{
"text": "Meanwhile, for the gender sub-groups, we observed worse performance relative to race. The best predictive performance was observed on identifying comments targeting women, with an F1 score of roughly 0.65. Interestingly, we observed substantially better predictive performance in identifying comments targeting transgender people compared to men, despite comparable incidence rates. Overall, we found that the reduced number of samples resulted in decreased predictive performance for many identity sub-groups.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Targeted Identity Sub-Group Prediction",
"sec_num": "4.2"
},
{
"text": "Thus far, we have examined model performance on held-out data within the MHS corpus, which consists of comments from Reddit, Twitter, and YouTube. However, past work has found that hate speech models exhibit a drop in performance on external corpora, particularly when those corpora are sourced from other platforms (Koufakou et al., 2020; Arango et al., 2019) . Therefore, we sought to assess out-of-corpus/platform performance of the trained model by evaluating it on two corpora: the HateCheck corpus and Gab Hate Corpus (GHC).",
"cite_spans": [
{
"start": 316,
"end": 339,
"text": "(Koufakou et al., 2020;",
"ref_id": "BIBREF18"
},
{
"start": 340,
"end": 360,
"text": "Arango et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models Generalize to External Corpora",
"sec_num": "4.3"
},
{
"text": "We first considered the HateCheck corpus because it served as a sanity check for model validation. The HateCheck corpus consists of functional tests for hate speech, which often clearly make apparent the targeted identity group (R\u00f6ttger et al., 2021) . Due to the relatively simple syntactic structure, we should expect a trained model to perform well at identifying targeted identities. We relabeled the HateCheck identity groups to align with the trained model, matching to 6 of its 8 identity groups (see Methods). We applied our model to all samples in the corpus and evaluated the performance.",
"cite_spans": [
{
"start": 228,
"end": 250,
"text": "(R\u00f6ttger et al., 2021)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models Generalize to External Corpora",
"sec_num": "4.3"
},
{
"text": "We found that the model exhibited superior predictive performance on the HateCheck corpus (Table 1: top). We obtained accuracies ranging from 0.97 \u2212 0.99 for each identity group, greatly exceeding that of chance, which ranged from 0.7 \u2212 0.86. At a threshold of 0.5, F1 scores were all above 0.90. Meanwhile, AUC scores were well above 0.95 for all identity groups, implying tight control of false positives and false negatives.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models Generalize to External Corpora",
"sec_num": "4.3"
},
{
"text": "We supplemented the above generalization check with the Gab Hate Corpus (GHC), consisting of comments extracted from the social media platform Gab (Kennedy et al., 2022) . The GHC covers a wide range of target group identities that match closely with those of the MHS corpus. Furthermore, it presents a useful test case to evaluate the extent to which the target identity model generalizes to a new distribution of comments. We applied our model to the subset of comments on which the annotators specified a hateful target (see Methods).",
"cite_spans": [
{
"start": 147,
"end": 169,
"text": "(Kennedy et al., 2022)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models Generalize to External Corpora",
"sec_num": "4.3"
},
{
"text": "We found that the model generally performed well on the GHC, but exhibited a slight drop in predictive performance relative to the MHS corpus (Table 1: bottom). The model achieved accuracies ranging from 0.78 \u2212 0.98, well above chance. The model exhibited wide ranging F1 scores, with poor or average performance on the disability, national origin, and political affiliation groups. The ROC AUC and PR AUC scores similarly suggested good predictive performance, but were lower than those on the MHS corpus. Tracking with incidence rate, the model exhibited the best performance on the gender, race, religion, and sexual orientation categories. Overall, these results demonstrate that the predictive models generalize fairly well to novel, out-of-platform data.",
"cite_spans": [],
"ref_spans": [
{
"start": 142,
"end": 151,
"text": "(Table 1:",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Models Generalize to External Corpora",
"sec_num": "4.3"
},
{
"text": "Hate speech can target multiple identity groups, either referencing them as separate targets (e.g., referencing a Black person and woman separately) or as a single, intersectional target (e.g. referencing a Black woman, a single subject with racial and gender identity components). We sought to examine how well the classifier performed in scenarios where two identities were targeted in the same comment, either by annotation or prediction. We first examined the number of comments for each pair of target identity groups in the corpus. We assigned binary labels based on annotator majority voting for each target. Then, for each pair of identity groups, we calculated the number of comments which targeted both identity groups. The distribution of log-counts for each pair of identity groups is shown in Figure 3a . These counts generally aligned with the number of samples for each identity group. For example, (gender, race), the two largest identity groups in the corpus, had among the highest log-counts. However, the relationship between the identity groups also played a role in the observed counts. For example, (race/ethnicity, national origin) and (gender identity, sexual orientation) were the two combinations with the largest number of samples. This likely stems from the topic overlap within each pair.",
"cite_spans": [],
"ref_spans": [
{
"start": 806,
"end": 815,
"text": "Figure 3a",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Model Performance on Multiple Targets",
"sec_num": "4.4"
},
{
"text": "We might expect a classifier to perform well on identity group pairs with a large number of samples. The classifier could, however, produce errors on these pairs by mistaking one identity group for another. Furthermore, the classifier may predict multiple targets when only one target is present. In order to evaluate the performance of the model in these settings, we consolidated a sub-corpus of comments for which (i) annotators identified two targeted identity groups or (ii) the classifier identified two targeted identity groups. Thus, the subcorpus could contain either false negatives (classifier failed to predict both identity groups) or false positives (classifier mistakenly identified multiple identity groups). For each pair of identity groups, we calculated the average F1 score and PR AUC across the pair of labels (weighted by incidence rate). We note that we could only calculate these metrics when the classifier exhibited some false positives. If this did not occur, the F1 score and PR AUC would be undefined. We denote these rare instances with an X in Figure 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 1075,
"end": 1083,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Model Performance on Multiple Targets",
"sec_num": "4.4"
},
{
"text": "We examined the distribution of the F1 score and PR AUC across the pairs of identity groups (Fig. 3b-c) . We found that, generally, the model exhibited worse performance on identity pairs which had the least number of samples, such as (age, disability) and (age, politics). On the other hand, the model generally performed well in cases where there were an abundance of samples, such as (race, gender). However, we observed other interesting relationships. For example, the model exhibited the best performance for identity pairs that were less related to each other, such as (age, sexual orientation), despite these pairs having lower counts. Notably, (origin, politics) exhibited markedly lower predictive performance, despite having more samples than other pairs. Together, these results highlight that performance on samples with multiple identity groups is modulated by the identity group pair under consideration.",
"cite_spans": [],
"ref_spans": [
{
"start": 92,
"end": 103,
"text": "(Fig. 3b-c)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Model Performance on Multiple Targets",
"sec_num": "4.4"
},
{
"text": "We have demonstrated that transformer-based neural network models can achieve good predictive performance on classifying multiple targeted identity groups or sub-groups simultaneously. We additionally validated the models on out-of-corpus data, finding that the results indicated some degree of generalizability. These results largely serve to benchmark this task for future studies, but also raise additional questions on the definition and conceptual framing of \"targeting\" in hate speech corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "We evaluated the performance of the model on multiple targets. However, the survey question prompting for identity targets did not distinguish between a single target with multiple identities, or multiple distinct targets. For example, a secondary analysis may be interested in comments that target Black women (at the intersection of racial and gender identity sub-groups), which are distinct from comments that separately target a Black person and a woman, but would be indistinguishable under the labeling scheme. The distinction is important, as the former setting corresponds to intersectional identity (Crenshaw, 2018) , on which datasets and machine learning algorithms have been demonstrated to exhibit biased coverage or performance (Kim et al., 2020) . Thus, the development of new labeling instruments that ask annotators to make the distinction between intersectional and multiple targets is of interest for future work. For example, Fortuna et al. (2019) developed a hierarchical labeling scheme which allowed for the the identification of intersectional targets in a Portugese dataset.",
"cite_spans": [
{
"start": 608,
"end": 624,
"text": "(Crenshaw, 2018)",
"ref_id": "BIBREF5"
},
{
"start": 742,
"end": 760,
"text": "(Kim et al., 2020)",
"ref_id": "BIBREF17"
},
{
"start": 946,
"end": 967,
"text": "Fortuna et al. (2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "In this work, we considered multi-label networks designed to simultaneously predict either identity groups or sub-groups. However, constructing networks that can simultaneously predict multiple sets of sub-groups is of interest, particularly for identifying intersectional targets in social media content. This can be viewed as multi-task problem, which may require adjustment to network architectures in order to achieve desirable performance (Crawshaw, 2020; Talat et al., 2018) . The development of multi-task networks with identity group specific sub-networks is of interest for future work (Plazadel Arco et al., 2021) . Such networks could, for example, contain sub-networks predicting racial identity sub-groups, gender identity sub-groups, and others, in parallel.",
"cite_spans": [
{
"start": 444,
"end": 460,
"text": "(Crawshaw, 2020;",
"ref_id": "BIBREF4"
},
{
"start": 461,
"end": 480,
"text": "Talat et al., 2018)",
"ref_id": "BIBREF32"
},
{
"start": 595,
"end": 623,
"text": "(Plazadel Arco et al., 2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "We relied on synthesizing annotator responses into a single label for each comment, while incorpo-rating some knowledge of their disagreement. This approach generally falls in line with the weak perspectivist approach in predictive computing (Basile et al., 2021) . However, annotator disagreement on the identity group targets (Appendix C) indicates that there is some subjectivity in identifying targeted groups. Data perspectivist approaches more strongly incorporating different annotator responses are a viable path forward (Basile et al., 2021; Sudre et al., 2019; Uma et al., 2020) . At the same time, continued improvement in labeling instruments could further ameliorate these issues. For example, instruments that allow annotators to explain their reasoning in a structured fashion could shed light on why annotator disagreement is present. Qualitative examination of comments could support additional theorization of the the concept of \"targeting\". In this vein, following Kennedy et al. (2020) , it may be possible to develop a measurement scale for \"targeting\" to facilitate item response theory approaches on this task.",
"cite_spans": [
{
"start": 242,
"end": 263,
"text": "(Basile et al., 2021)",
"ref_id": "BIBREF1"
},
{
"start": 529,
"end": 550,
"text": "(Basile et al., 2021;",
"ref_id": "BIBREF1"
},
{
"start": 551,
"end": 570,
"text": "Sudre et al., 2019;",
"ref_id": "BIBREF31"
},
{
"start": 571,
"end": 588,
"text": "Uma et al., 2020)",
"ref_id": "BIBREF37"
},
{
"start": 984,
"end": 1005,
"text": "Kennedy et al. (2020)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Extensions to this work could facilitate parsing of the sentence to better elucidate the manner in which hateful comments refer to targets. For example, Shvets (2021) develop extraction networks to identify the text corresponding to both the \"target\" of a comment and its \"aspect\", or the characteristic attributed to the target. Such work could facilitate additional qualitative examination of comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "While hate speech is understood to \"target\" a person or group based on a characteristic, the notion of \"targeting\" is slightly different across datasets. For example, we used \"target\" to mean the identity group that a comment is directed toward, whether the comment exhibited positive or negative valence. This was framed in the context of a measurement scale spanning supportive and hateful speech (Kennedy et al., 2020) . However, other corpora limit their definition to content that is strictly hateful. These subtle distinctions limit the ability of out-of-corpus validation on datasets. For example, in this context, we could only use a subset of the GHC for generalization, since many comments were deemed not hateful (and thus did not have targeted identity annotations), despite referencing an identity group. Datasets may also reference the manner in which \"targeting\" occurs, such as calls to violence, usage of profanity, or implicit rhetoric (e.g., sarcasm or irony). Further work is needed to standardize these definitions to better inform the curation of future corpora. Table 2 : Full experimental results. LOD denotes \"log-odds difference\". USE denotes \"Universal Sentence Encoder\". \"H\" denotes the size of the hidden layer. \"B\" denotes batch size. \"D\" denotes dropout rate. Metrics are calculated by averaging across identity groups.",
"cite_spans": [
{
"start": 399,
"end": 421,
"text": "(Kennedy et al., 2020)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 1085,
"end": 1092,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "https://github.com/dlab-projects/ hate_measure 2 https://github.com/dlab-projects/ hate_target",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank members of the D-Lab for useful feedback and discussions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "People with physical disabilities (e.g., use of wheelchair), people with cognitive disorders (e.g., autism) or learning disabilities (e.g., Down syndrome), people with mental health problems (e.g., depression, addiction), visually impaired people, hearing impaired people, no specific disability ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Hate speech detection is not as easy as you may think: A closer look at model validation",
"authors": [
{
"first": "Aym\u00e9",
"middle": [],
"last": "Arango",
"suffix": ""
},
{
"first": "Jorge",
"middle": [],
"last": "P\u00e9rez",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Poblete",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 42nd international acm sigir conference on research and development in information retrieval",
"volume": "",
"issue": "",
"pages": "45--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aym\u00e9 Arango, Jorge P\u00e9rez, and Barbara Poblete. 2019. Hate speech detection is not as easy as you may think: A closer look at model validation. In Proceedings of the 42nd international acm sigir conference on research and development in information retrieval, pages 45-54.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Toward a perspectivist turn in ground truthing for predictive computing",
"authors": [
{
"first": "Valerio",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Federico",
"middle": [],
"last": "Cabitza",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Campagner",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Fell",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2109.04270"
]
},
"num": null,
"urls": [],
"raw_text": "Valerio Basile, Federico Cabitza, Andrea Campagner, and Michael Fell. 2021. Toward a perspectivist turn in ground truthing for predictive computing. arXiv preprint arXiv:2109.04270.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Universal sentence encoder",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Yinfei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Sheng-Yi",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Hua",
"suffix": ""
},
{
"first": "Nicole",
"middle": [],
"last": "Limtiaco",
"suffix": ""
},
{
"first": "Rhomni",
"middle": [],
"last": "St John",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Guajardo-Cespedes",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Tar",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1803.11175"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder. arXiv preprint arXiv:1803.11175.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Emotionally informed hate speech detection: a multi-target perspective",
"authors": [
{
"first": "Patricia",
"middle": [],
"last": "Chiril",
"suffix": ""
},
{
"first": "Farah",
"middle": [],
"last": "Endang Wahyu Pamungkas",
"suffix": ""
},
{
"first": "V\u00e9ronique",
"middle": [],
"last": "Benamara",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Moriceau",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Patti",
"suffix": ""
}
],
"year": 2022,
"venue": "Cognitive Computation",
"volume": "14",
"issue": "1",
"pages": "322--352",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patricia Chiril, Endang Wahyu Pamungkas, Farah Be- namara, V\u00e9ronique Moriceau, and Viviana Patti. 2022. Emotionally informed hate speech detection: a multi-target perspective. Cognitive Computation, 14(1):322-352.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Multi-task learning with deep neural networks: A survey",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Crawshaw",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2009.09796"
]
},
"num": null,
"urls": [],
"raw_text": "Michael Crawshaw. 2020. Multi-task learning with deep neural networks: A survey. arXiv preprint arXiv:2009.09796.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Demarginalizing the intersection of race and sex: A black feminist critique of antidiscrimination doctrine, feminist theory, and antiracist politics",
"authors": [
{
"first": "Kimberl\u00e9",
"middle": [],
"last": "Crenshaw",
"suffix": ""
}
],
"year": 1989,
"venue": "Feminist legal theory",
"volume": "",
"issue": "",
"pages": "57--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kimberl\u00e9 Crenshaw. 2018. Demarginalizing the inter- section of race and sex: A black feminist critique of antidiscrimination doctrine, feminist theory, and antiracist politics [1989]. In Feminist legal theory, pages 57-80. Routledge.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Automated hate speech detection and the problem of offensive language",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Warmsley",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Macy",
"suffix": ""
},
{
"first": "Ingmar",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the International AAAI Conference on Web and Social Media",
"volume": "11",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech de- tection and the problem of offensive language. In Proceedings of the International AAAI Conference on Web and Social Media, volume 11.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The relationship between precision-recall and roc curves",
"authors": [
{
"first": "Jesse",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Goadrich",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 23rd international conference on Machine learning",
"volume": "",
"issue": "",
"pages": "233--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jesse Davis and Mark Goadrich. 2006. The relationship between precision-recall and roc curves. In Proceed- ings of the 23rd international conference on Machine learning, pages 233-240.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Hate me, hate me not: Hate speech detection on facebook",
"authors": [
{
"first": "Fabio",
"middle": [
"Del"
],
"last": "Vigna",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Cimino",
"suffix": ""
},
{
"first": "Felice",
"middle": [],
"last": "Dell'orletta",
"suffix": ""
},
{
"first": "Marinella",
"middle": [],
"last": "Petrocchi",
"suffix": ""
},
{
"first": "Maurizio",
"middle": [],
"last": "Tesconi",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Italian Conference on Cybersecurity (ITASEC17)",
"volume": "",
"issue": "",
"pages": "86--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabio Del Vigna, Andrea Cimino, Felice Dell'Orletta, Marinella Petrocchi, and Maurizio Tesconi. 2017. Hate me, hate me not: Hate speech detection on face- book. In Proceedings of the First Italian Conference on Cybersecurity (ITASEC17), pages 86-95.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Hate lingo: A target-based linguistic analysis of hate speech in social media",
"authors": [
{
"first": "Mai",
"middle": [],
"last": "Elsherief",
"suffix": ""
},
{
"first": "Vivek",
"middle": [],
"last": "Kulkarni",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Belding",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the International AAAI Conference on Web and Social Media",
"volume": "12",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mai ElSherief, Vivek Kulkarni, Dana Nguyen, William Yang Wang, and Elizabeth Belding. 2018. Hate lingo: A target-based linguistic analysis of hate speech in social media. In Proceedings of the Inter- national AAAI Conference on Web and Social Media, volume 12.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A hierarchically-labeled portuguese hate speech dataset",
"authors": [
{
"first": "Paula",
"middle": [],
"last": "Fortuna",
"suffix": ""
},
{
"first": "Joao",
"middle": [],
"last": "Rocha da Silva",
"suffix": ""
},
{
"first": "Leo",
"middle": [],
"last": "Wanner",
"suffix": ""
},
{
"first": "S\u00e9rgio",
"middle": [],
"last": "Nunes",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Third Workshop on Abusive Language Online",
"volume": "",
"issue": "",
"pages": "94--104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paula Fortuna, Joao Rocha da Silva, Leo Wanner, S\u00e9r- gio Nunes, et al. 2019. A hierarchically-labeled por- tuguese hate speech dataset. In Proceedings of the Third Workshop on Abusive Language Online, pages 94-104.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A survey on automatic detection of hate speech in text",
"authors": [
{
"first": "Paula",
"middle": [],
"last": "Fortuna",
"suffix": ""
},
{
"first": "S\u00e9rgio",
"middle": [],
"last": "Nunes",
"suffix": ""
}
],
"year": 2018,
"venue": "ACM Comput. Surv",
"volume": "51",
"issue": "4",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3232676"
]
},
"num": null,
"urls": [],
"raw_text": "Paula Fortuna and S\u00e9rgio Nunes. 2018. A survey on automatic detection of hate speech in text. ACM Comput. Surv., 51(4).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Hate towards the political opponent: A Twitter corpus study of the 2020 US elections on the basis of offensive speech and stance detection",
"authors": [
{
"first": "Lara",
"middle": [],
"last": "Grimminger",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Klinger",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "171--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lara Grimminger and Roman Klinger. 2021. Hate to- wards the political opponent: A Twitter corpus study of the 2020 US elections on the basis of offensive speech and stance detection. In Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 171-180, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "FERMI at SemEval-2019 task 5: Using sentence embeddings to identify hate speech against immigrants and women in Twitter",
"authors": [
{
"first": "Vijayasaradhi",
"middle": [],
"last": "Indurthi",
"suffix": ""
},
{
"first": "Bakhtiyar",
"middle": [],
"last": "Syed",
"suffix": ""
},
{
"first": "Manish",
"middle": [],
"last": "Shrivastava",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Chakravartula",
"suffix": ""
},
{
"first": "Manish",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Vasudeva",
"middle": [],
"last": "Varma",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "70--74",
"other_ids": {
"DOI": [
"10.18653/v1/S19-2009"
]
},
"num": null,
"urls": [],
"raw_text": "Vijayasaradhi Indurthi, Bakhtiyar Syed, Manish Shri- vastava, Nikhil Chakravartula, Manish Gupta, and Vasudeva Varma. 2019. FERMI at SemEval-2019 task 5: Using sentence embeddings to identify hate speech against immigrants and women in Twitter. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 70-74, Minneapo- lis, Minnesota, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Introducing the gab hate corpus: defining and applying hate-based rhetoric to social media posts at scale. Language Resources and Evaluation",
"authors": [
{
"first": "Brendan",
"middle": [],
"last": "Kennedy",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Atari",
"suffix": ""
},
{
"first": "Aida",
"middle": [
"Mostafazadeh"
],
"last": "Davani",
"suffix": ""
},
{
"first": "Leigh",
"middle": [],
"last": "Yeh",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Omrani",
"suffix": ""
},
{
"first": "Yehsong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Kris",
"middle": [],
"last": "Coombs",
"suffix": ""
},
{
"first": "Shreya",
"middle": [],
"last": "Havaldar",
"suffix": ""
},
{
"first": "Gwenyth",
"middle": [],
"last": "Portillo-Wightman",
"suffix": ""
},
{
"first": "Elaine",
"middle": [],
"last": "Gonzalez",
"suffix": ""
}
],
"year": 2022,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brendan Kennedy, Mohammad Atari, Aida Mostafazadeh Davani, Leigh Yeh, Ali Omrani, Yehsong Kim, Kris Coombs, Shreya Havaldar, Gwenyth Portillo-Wightman, Elaine Gonzalez, et al. 2022. Introducing the gab hate corpus: defining and applying hate-based rhetoric to social media posts at scale. Language Resources and Evaluation, pages 1-30.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Constructing interval variables via faceted rasch measurement and multitask deep learning: a hate speech application",
"authors": [
{
"first": "Chris",
"middle": [
"J"
],
"last": "Kennedy",
"suffix": ""
},
{
"first": "Geoff",
"middle": [],
"last": "Bacon",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Sahn",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "von Vacano",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2009.10277"
]
},
"num": null,
"urls": [],
"raw_text": "Chris J Kennedy, Geoff Bacon, Alexander Sahn, and Claudia von Vacano. 2020. Constructing interval variables via faceted rasch measurement and multi- task deep learning: a hate speech application. arXiv preprint arXiv:2009.10277.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Intersectional bias in hate speech and abusive language datasets",
"authors": [
{
"first": "Jae Yeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Ortiz",
"suffix": ""
},
{
"first": "Sarah",
"middle": [],
"last": "Nam",
"suffix": ""
},
{
"first": "Sarah",
"middle": [],
"last": "Santiago",
"suffix": ""
},
{
"first": "Vivek",
"middle": [],
"last": "Datta",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.05921"
]
},
"num": null,
"urls": [],
"raw_text": "Jae Yeon Kim, Carlos Ortiz, Sarah Nam, Sarah Santiago, and Vivek Datta. 2020. Intersectional bias in hate speech and abusive language datasets. arXiv preprint arXiv:2005.05921.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Hurtbert: incorporating lexical features with bert for the detection of abusive language",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Koufakou",
"suffix": ""
},
{
"first": "Endang Wahyu",
"middle": [],
"last": "Pamungkas",
"suffix": ""
},
{
"first": "Valerio",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Patti",
"suffix": ""
}
],
"year": 2020,
"venue": "Fourth Workshop on Online Abuse and Harms",
"volume": "",
"issue": "",
"pages": "34--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna Koufakou, Endang Wahyu Pamungkas, Valerio Basile, Viviana Patti, et al. 2020. Hurtbert: incor- porating lexical features with bert for the detection of abusive language. In Fourth Workshop on On- line Abuse and Harms, pages 34-43. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Locate the hate: Detecting tweets against blacks",
"authors": [
{
"first": "Irene",
"middle": [],
"last": "Kwok",
"suffix": ""
},
{
"first": "Yuzhou",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2013,
"venue": "Twenty-seventh AAAI conference on artificial intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Irene Kwok and Yuzhou Wang. 2013. Locate the hate: Detecting tweets against blacks. In Twenty-seventh AAAI conference on artificial intelligence.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "What do infit and outfit, mean-square and standardized mean. Rasch measurement transactions",
"authors": [
{
"first": "John",
"middle": [
"M"
],
"last": "Linacre",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "16",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John M Linacre et al. 2002. What do infit and outfit, mean-square and standardized mean. Rasch measure- ment transactions, 16(2):878.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Hatexplain: A benchmark dataset for explainable hate speech detection",
"authors": [
{
"first": "Binny",
"middle": [],
"last": "Mathew",
"suffix": ""
},
{
"first": "Punyajoy",
"middle": [],
"last": "Saha",
"suffix": ""
},
{
"first": "Seid Muhie",
"middle": [],
"last": "Yimam",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
},
{
"first": "Pawan",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Animesh",
"middle": [],
"last": "Mukherjee",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2012.10289"
]
},
"num": null,
"urls": [],
"raw_text": "Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, and Animesh Mukher- jee. 2020. Hatexplain: A benchmark dataset for explainable hate speech detection. arXiv preprint arXiv:2012.10289.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Vulnerable community identification using hate speech detection on social media",
"authors": [
{
"first": "Zewdie",
"middle": [],
"last": "Mossie",
"suffix": ""
},
{
"first": "Jenq-Haur",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2020,
"venue": "Information Processing & Management",
"volume": "57",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zewdie Mossie and Jenq-Haur Wang. 2020. Vulnerable community identification using hate speech detection on social media. Information Processing & Manage- ment, 57(3):102087.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Progress and push-back: How the killings of ahmaud arbery, breonna taylor, and george floyd impacted public discourse on race and racism on twitter. SSM-population health",
"authors": [
{
"first": "Thu",
"middle": [
"T"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Shaniece",
"middle": [],
"last": "Criss",
"suffix": ""
},
{
"first": "Eli",
"middle": [
"K"
],
"last": "Michaels",
"suffix": ""
},
{
"first": "Rebekah",
"middle": [
"I"
],
"last": "Cross",
"suffix": ""
},
{
"first": "Jackson",
"middle": [
"S"
],
"last": "Michaels",
"suffix": ""
},
{
"first": "Pallavi",
"middle": [],
"last": "Dwivedi",
"suffix": ""
},
{
"first": "Dina",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Erica",
"middle": [],
"last": "Hsu",
"suffix": ""
},
{
"first": "Krishay",
"middle": [],
"last": "Mukhija",
"suffix": ""
},
{
"first": "Leah",
"middle": [
"H"
],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "15",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thu T Nguyen, Shaniece Criss, Eli K Michaels, Re- bekah I Cross, Jackson S Michaels, Pallavi Dwivedi, Dina Huang, Erica Hsu, Krishay Mukhija, Leah H Nguyen, et al. 2021. Progress and push-back: How the killings of ahmaud arbery, breonna tay- lor, and george floyd impacted public discourse on race and racism on twitter. SSM-population health, 15:100922.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Multi-task learning with sentiment, emotion, and target detection to recognize hate speech and offensive language",
"authors": [
{
"first": "Flor",
"middle": [
"Miriam"
],
"last": "Plaza-Del Arco",
"suffix": ""
},
{
"first": "Sercan",
"middle": [],
"last": "Halat",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Klinger",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2109.10255"
]
},
"num": null,
"urls": [],
"raw_text": "Flor Miriam Plaza-del Arco, Sercan Halat, Sebastian Pad\u00f3, and Roman Klinger. 2021. Multi-task learn- ing with sentiment, emotion, and target detection to recognize hate speech and offensive language. arXiv preprint arXiv:2109.10255.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "HateCheck: Functional tests for hate speech detection models",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "R\u00f6ttger",
"suffix": ""
},
{
"first": "Bertie",
"middle": [],
"last": "Vidgen",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Zeerak",
"middle": [],
"last": "Waseem",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Margetts",
"suffix": ""
},
{
"first": "Janet",
"middle": [],
"last": "Pierrehumbert",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "41--58",
"other_ids": {
"DOI": [
"10.18653/v1/2021.acl-long.4"
]
},
"num": null,
"urls": [],
"raw_text": "Paul R\u00f6ttger, Bertie Vidgen, Dong Nguyen, Zeerak Waseem, Helen Margetts, and Janet Pierrehumbert. 2021. HateCheck: Functional tests for hate speech detection models. In Proceedings of the 59th An- nual Meeting of the Association for Computational Linguistics and the 11th International Joint Confer- ence on Natural Language Processing (Volume 1: Long Papers), pages 41-58, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Defining hate speech",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Sellars",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "16--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Sellars. 2016. Defining hate speech. Berkman Klein Center Research Publication, 2016(20):16-48.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Targets and aspects in social media hate speech",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Shvets",
"suffix": ""
},
{
"first": "Paula",
"middle": [],
"last": "Fortuna",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Soler",
"suffix": ""
},
{
"first": "Leo",
"middle": [],
"last": "Wanner",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021)",
"volume": "",
"issue": "",
"pages": "179--190",
"other_ids": {
"DOI": [
"10.18653/v1/2021.woah-1.19"
]
},
"num": null,
"urls": [],
"raw_text": "Alexander Shvets, Paula Fortuna, Juan Soler, and Leo Wanner. 2021. Targets and aspects in social media hate speech. In Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021), pages 179-190, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "System description for the Com-monGen task with the POINTER model",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Shvets",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021)",
"volume": "",
"issue": "",
"pages": "161--165",
"other_ids": {
"DOI": [
"10.18653/v1/2021.gem-1.15"
]
},
"num": null,
"urls": [],
"raw_text": "Anna Shvets. 2021. System description for the Com- monGen task with the POINTER model. In Pro- ceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021), pages 161-165, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Analyzing the targets of hate in online social media",
"authors": [
{
"first": "Leandro",
"middle": [],
"last": "Silva",
"suffix": ""
},
{
"first": "Mainack",
"middle": [],
"last": "Mondal",
"suffix": ""
},
{
"first": "Denzil",
"middle": [],
"last": "Correa",
"suffix": ""
},
{
"first": "Fabr\u00edcio",
"middle": [],
"last": "Benevenuto",
"suffix": ""
},
{
"first": "Ingmar",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2016,
"venue": "Tenth international AAAI conference on web and social media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leandro Silva, Mainack Mondal, Denzil Correa, Fabr\u00ed- cio Benevenuto, and Ingmar Weber. 2016. Analyzing the targets of hate in online social media. In Tenth international AAAI conference on web and social media.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Let's agree to disagree: Learning highly debatable multirater labelling",
"authors": [
{
"first": "Carole",
"middle": [
"H"
],
"last": "Sudre",
"suffix": ""
},
{
"first": "Beatriz",
"middle": [],
"last": "Gomez Anson",
"suffix": ""
},
{
"first": "Silvia",
"middle": [],
"last": "Ingala",
"suffix": ""
},
{
"first": "Chris",
"middle": [
"D"
],
"last": "Lane",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jimenez",
"suffix": ""
},
{
"first": "Lukas",
"middle": [],
"last": "Haider",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Varsavsky",
"suffix": ""
},
{
"first": "Ryutaro",
"middle": [],
"last": "Tanno",
"suffix": ""
},
{
"first": "Lorna",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "S\u00e9bastien",
"middle": [],
"last": "Ourselin",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Medical Image Computing and Computer-Assisted Intervention",
"volume": "",
"issue": "",
"pages": "665--673",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carole H Sudre, Beatriz Gomez Anson, Silvia In- gala, Chris D Lane, Daniel Jimenez, Lukas Haider, Thomas Varsavsky, Ryutaro Tanno, Lorna Smith, S\u00e9bastien Ourselin, et al. 2019. Let's agree to dis- agree: Learning highly debatable multirater labelling. In International Conference on Medical Image Com- puting and Computer-Assisted Intervention, pages 665-673. Springer.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Bridging the gaps: Multi task learning for domain transfer of hate speech detection",
"authors": [
{
"first": "Zeerak",
"middle": [],
"last": "Talat",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Thorne",
"suffix": ""
},
{
"first": "Joachim",
"middle": [],
"last": "Bingel",
"suffix": ""
}
],
"year": 2018,
"venue": "Online harassment",
"volume": "",
"issue": "",
"pages": "29--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeerak Talat, James Thorne, and Joachim Bingel. 2018. Bridging the gaps: Multi task learning for domain transfer of hate speech detection. In Online harass- ment, pages 29-55. Springer.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Generating counter narratives against online hate speech: Data and strategies",
"authors": [
{
"first": "Serra Sinem",
"middle": [],
"last": "Tekiroglu",
"suffix": ""
},
{
"first": "Yi-Ling",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Guerini",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.04216"
]
},
"num": null,
"urls": [],
"raw_text": "Serra Sinem Tekiroglu, Yi-Ling Chung, and Marco Guerini. 2020. Generating counter narratives against online hate speech: Data and strategies. arXiv preprint arXiv:2004.04216.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Thirty years of research into hate speech: topics of interest and their evolution",
"authors": [
{
"first": "Alice",
"middle": [],
"last": "Tontodimamma",
"suffix": ""
},
{
"first": "Eugenia",
"middle": [],
"last": "Nissi",
"suffix": ""
},
{
"first": "Annalina",
"middle": [],
"last": "Sarra",
"suffix": ""
},
{
"first": "Lara",
"middle": [],
"last": "Fontanella",
"suffix": ""
}
],
"year": 2021,
"venue": "Scientometrics",
"volume": "126",
"issue": "1",
"pages": "157--179",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alice Tontodimamma, Eugenia Nissi, Annalina Sarra, and Lara Fontanella. 2021. Thirty years of research into hate speech: topics of interest and their evolution. Scientometrics, 126(1):157-179.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Kyumin Lee, and Se Rim Park. 2020. HABER-TOR: An efficient and effective deep hatespeech detector",
"authors": [
{
"first": "Thanh",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Yifan",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Changwei",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Yen",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Kyumin",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Se Rim",
"middle": [],
"last": "Park",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "7486--7502",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.606"
]
},
"num": null,
"urls": [],
"raw_text": "Thanh Tran, Yifan Hu, Changwei Hu, Kevin Yen, Fei Tan, Kyumin Lee, and Se Rim Park. 2020. HABER- TOR: An efficient and effective deep hatespeech de- tector. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7486-7502, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Destructive messages: How hate speech paves the way for harmful social movements",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Tsesis",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "27",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Tsesis. 2002. Destructive messages: How hate speech paves the way for harmful social move- ments, volume 27. NYU Press.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "A case for soft loss functions",
"authors": [
{
"first": "Alexandra",
"middle": [],
"last": "Uma",
"suffix": ""
},
{
"first": "Tommaso",
"middle": [],
"last": "Fornaciari",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Silviu",
"middle": [],
"last": "Paun",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the AAAI Conference on Human Computation and Crowdsourcing",
"volume": "8",
"issue": "",
"pages": "173--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandra Uma, Tommaso Fornaciari, Dirk Hovy, Sil- viu Paun, Barbara Plank, and Massimo Poesio. 2020. A case for soft loss functions. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, volume 8, pages 173-177.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Directions in abusive language training data, a systematic review: Garbage in, garbage out",
"authors": [
{
"first": "Bertie",
"middle": [],
"last": "Vidgen",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Derczynski",
"suffix": ""
}
],
"year": 2020,
"venue": "Plos one",
"volume": "15",
"issue": "12",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bertie Vidgen and Leon Derczynski. 2020. Direc- tions in abusive language training data, a system- atic review: Garbage in, garbage out. Plos one, 15(12):e0243300.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Are you a racist or am i seeing things? annotator influence on hate speech detection on twitter",
"authors": [
{
"first": "Zeerak",
"middle": [],
"last": "Waseem",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the first workshop on NLP and computational social science",
"volume": "",
"issue": "",
"pages": "138--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeerak Waseem. 2016. Are you a racist or am i seeing things? annotator influence on hate speech detection on twitter. In Proceedings of the first workshop on NLP and computational social science, pages 138- 142.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Understanding abuse: A typology of abusive language detection subtasks",
"authors": [
{
"first": "Zeerak",
"middle": [],
"last": "Waseem",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Warmsley",
"suffix": ""
},
{
"first": "Ingmar",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1705.09899"
]
},
"num": null,
"urls": [],
"raw_text": "Zeerak Waseem, Thomas Davidson, Dana Warmsley, and Ingmar Weber. 2017. Understanding abuse: A ty- pology of abusive language detection subtasks. arXiv preprint arXiv:1705.09899.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Hateful symbols or hateful people? predictive features for hate speech detection on twitter",
"authors": [
{
"first": "Zeerak",
"middle": [],
"last": "Waseem",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the NAACL student research workshop",
"volume": "",
"issue": "",
"pages": "88--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeerak Waseem and Dirk Hovy. 2016. Hateful symbols or hateful people? predictive features for hate speech detection on twitter. In Proceedings of the NAACL student research workshop, pages 88-93.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Incitement on trial: Prosecuting international speech crimes",
"authors": [
{
"first": "Richard",
"middle": [
"Ashby"
],
"last": "Wilson",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Ashby Wilson. 2017. Incitement on trial: Pros- ecuting international speech crimes. Cambridge Uni- versity Press.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "Remi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-demos.6"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Predicting the type and target of offensive posts in social media",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Noura",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1415--1420",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1144"
]
},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019. Predicting the type and target of offensive posts in social media. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1415-1420, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "Transformer models are predictive of target identity groups. The performance on target group identity prediction across test folds of the MHS corpus as quantified by threshold-dependent and threshold-agnostic metrics.",
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"text": "Models exhibit diverse performance on multi-target samples. a. The log-count of samples for each pair of identity groups in the MHS corpus. b. The macro-F1 score evaluated on sub-corpora containing samples in which each pair of identity groups was targeted (according to annotators) or predicted to be targeted by the classifier. c. The PR AUC on the same sub-corpora, across identity group pairs.",
"uris": null
},
"TABREF1": {
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "Target identity models generalize to out-of-corpus, out-of-platform comments. The test performance of the target identity model (specifically, the model corresponding toFig. 1) on the HateCheck(top table)and Gab Hate Corpus(bottom table). The labels provided by each corpus were reassigned to align with the model's outputs (see Methods). Model predictions for identity groups without a corresponding label (age and political affiliation for HateCheck; age for GHC) were discarded. F1 score is calculated with a threshold of 0.5.",
"num": null
}
}
}
}