{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:09:59.785023Z"
},
"title": "MULTILINGUAL HATECHECK: Functional Tests for Multilingual Hate Speech Detection Models",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "R\u00f6ttger",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Oxford",
"location": {}
},
"email": ""
},
{
"first": "Haitham",
"middle": [],
"last": "Seelawi",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bocconi University",
"location": {}
},
"email": ""
},
{
"first": "Zeerak",
"middle": [],
"last": "Talat",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Simon Fraser University",
"location": {}
},
"email": ""
},
{
"first": "Bertie",
"middle": [],
"last": "Vidgen",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Hate speech detection models are typically evaluated on held-out test sets. However, this risks painting an incomplete and potentially misleading picture of model performance because of increasingly well-documented systematic gaps and biases in hate speech datasets. To enable more targeted diagnostic insights, recent research has thus introduced functional tests for hate speech detection models. However, these tests currently only exist for English-language content, which means that they cannot support the development of more effective models in other languages spoken by billions across the world. To help address this issue, we introduce MULTILINGUAL HATECHECK (MHC), a suite of functional tests for multilingual hate speech detection models. MHC covers 34 functionalities across ten languages, which is more languages than any other hate speech dataset. To illustrate MHC's utility, we train and test a highperforming multilingual hate speech detection model, and reveal critical model weaknesses for monolingual and cross-lingual applications.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Hate speech detection models are typically evaluated on held-out test sets. However, this risks painting an incomplete and potentially misleading picture of model performance because of increasingly well-documented systematic gaps and biases in hate speech datasets. To enable more targeted diagnostic insights, recent research has thus introduced functional tests for hate speech detection models. However, these tests currently only exist for English-language content, which means that they cannot support the development of more effective models in other languages spoken by billions across the world. To help address this issue, we introduce MULTILINGUAL HATECHECK (MHC), a suite of functional tests for multilingual hate speech detection models. MHC covers 34 functionalities across ten languages, which is more languages than any other hate speech dataset. To illustrate MHC's utility, we train and test a highperforming multilingual hate speech detection model, and reveal critical model weaknesses for monolingual and cross-lingual applications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Hate speech detection models play a key role in online content moderation and also enable scientific analysis and monitoring of online hate. Traditionally, models have been evaluated by their performance on held-out test sets. However, this practice risks painting an incomplete and misleading picture of model quality. Hate speech datasets are prone to exhibit systematic gaps and biases due to how they are sampled (Wiegand et al., 2019; Vidgen and Derczynski, 2020; Poletto et al., 2021) and annotated (Talat, 2016; Davidson et al., 2019; Sap et al., 2021) . Therefore, models may perform deceptively well by learning overly simplistic decision rules rather than encoding a generalisable understanding of the task (e.g. Niven and Kao, 2019; Geva et al., 2019; Shah et al., 2020) . Further, aggregate and thus abstract performance metrics such as accuracy and F1 score may obscure more specific model weaknesses (Wu et al., 2019) .",
"cite_spans": [
{
"start": 417,
"end": 439,
"text": "(Wiegand et al., 2019;",
"ref_id": "BIBREF53"
},
{
"start": 440,
"end": 468,
"text": "Vidgen and Derczynski, 2020;",
"ref_id": "BIBREF48"
},
{
"start": 469,
"end": 490,
"text": "Poletto et al., 2021)",
"ref_id": "BIBREF34"
},
{
"start": 505,
"end": 518,
"text": "(Talat, 2016;",
"ref_id": "BIBREF46"
},
{
"start": 519,
"end": 541,
"text": "Davidson et al., 2019;",
"ref_id": "BIBREF11"
},
{
"start": 542,
"end": 559,
"text": "Sap et al., 2021)",
"ref_id": "BIBREF41"
},
{
"start": 723,
"end": 743,
"text": "Niven and Kao, 2019;",
"ref_id": "BIBREF29"
},
{
"start": 744,
"end": 762,
"text": "Geva et al., 2019;",
"ref_id": "BIBREF19"
},
{
"start": 763,
"end": 781,
"text": "Shah et al., 2020)",
"ref_id": "BIBREF43"
},
{
"start": 914,
"end": 931,
"text": "(Wu et al., 2019)",
"ref_id": "BIBREF56"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For these reasons, recent hate speech research has introduced novel test sets and methods that allow for a more targeted evaluation of model functionalities (Calabrese et al., 2021; Kirk et al., 2021; Mathew et al., 2021; R\u00f6ttger et al., 2021b) . However, these novel test sets, like most hate speech datasets so far, focus on English-language content. A lack of effective evaluation hinders the development of higher-quality hate speech detection models for other languages. As a consequence, billions of non-English speakers across the world are given less protection against online hate, and even the largest social media platforms have clear language gaps in their content moderation (Simonite, 2021; Marinescu, 2021) .",
"cite_spans": [
{
"start": 157,
"end": 181,
"text": "(Calabrese et al., 2021;",
"ref_id": "BIBREF8"
},
{
"start": 182,
"end": 200,
"text": "Kirk et al., 2021;",
"ref_id": "BIBREF20"
},
{
"start": 201,
"end": 221,
"text": "Mathew et al., 2021;",
"ref_id": "BIBREF27"
},
{
"start": 222,
"end": 244,
"text": "R\u00f6ttger et al., 2021b)",
"ref_id": "BIBREF38"
},
{
"start": 688,
"end": 704,
"text": "(Simonite, 2021;",
"ref_id": "BIBREF44"
},
{
"start": 705,
"end": 721,
"text": "Marinescu, 2021)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As a step towards closing these language gaps, we introduce MULTILINGUAL HATECHECK (MHC), which extends the English HATECHECK functional test suite for hate speech detection models (R\u00f6ttger et al., 2021b) to ten more languages. Functional testing evaluates models on sets of targeted test cases (Beizer, 1995) . Ribeiro et al. (2020) first applied this idea to structured model evaluation in NLP, and R\u00f6ttger et al. (2021b) used it to diagnose critical model weaknesses in English hate speech detection models. We create novel functional test suites for Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish. 1 To our knowledge, MHC covers more languages than any other hate speech dataset.",
"cite_spans": [
{
"start": 181,
"end": 204,
"text": "(R\u00f6ttger et al., 2021b)",
"ref_id": "BIBREF38"
},
{
"start": 295,
"end": 309,
"text": "(Beizer, 1995)",
"ref_id": "BIBREF5"
},
{
"start": 312,
"end": 333,
"text": "Ribeiro et al. (2020)",
"ref_id": "BIBREF36"
},
{
"start": 401,
"end": 423,
"text": "R\u00f6ttger et al. (2021b)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The functional tests for each language in MHC broadly match those of the original HATECHECK, which were selected based on interviews with civil society stakeholders as well as a review of hate speech research. In each language, there are be-tween 25 and 27 tests for different kinds of hate speech (e.g. dehumanisation and threatening language) as well as contrasting non-hate, which may lexically resemble hate speech but is clearly nonhateful (e.g. counter speech). These contrasts make the test suites particularly challenging to models that rely on overly simplistic decision rules and thus enable more accurate evaluation of model functionalities (Gardner et al., 2020) . For each functional test, native-speaking language experts hand-crafted targeted test cases with clear gold standard labels, using the English cases as a starting point but adapting them to retain realism and cultural compatibility in the target language.",
"cite_spans": [
{
"start": 652,
"end": 674,
"text": "(Gardner et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We demonstrate MHC's utility as a diagnostic tool by evaluating a multilingual XLM-T model (Barbieri et al., 2021) fine-tuned on a combination of three widely-used hate speech datasets in Spanish, Italian and Portuguese. This model achieves strong performance on the respective held-out test sets. However, testing with MHC reveals that the model is 1) overly sensitive to key words and key phrases, 2) biased in its target coverage and 3) error-prone and inconsistent in cross-lingual transfer, in both zero-and many-shot settings. If this model was used to moderate content, these critical weaknesses could cause serious harm, leaving some users unprotected from hate while others are restricted in their freedom of expression. We hope that by revealing such weaknesses, MHC can play a key role in the development of better multilingual hate speech detection models. 2 R\u00f6ttger et al. (2021b) , we define hate speech as abuse that is targeted at a protected group or at its members for being a part of that group. Protected groups are groups based on age, disability, gender identity, race, national or ethnic origins, religion, sex or sexual orientation, which broadly reflects Western legal consensus, particularly the US 1964 Civil Rights Act, the UK's 2010 Equality Act and the EU's Charter of Fundamental Rights. Based on these definitions, we approach hate speech detection as the binary classification of content as either hateful or non-hateful.",
"cite_spans": [
{
"start": 91,
"end": 114,
"text": "(Barbieri et al., 2021)",
"ref_id": "BIBREF3"
},
{
"start": 871,
"end": 893,
"text": "R\u00f6ttger et al. (2021b)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Content Warning This article contains examples of hateful and abusive language. All examples are taken from MHC in their original language, to illustrate MHC's composition. 2 We make MHC available on GitHub.",
"cite_spans": [
{
"start": 173,
"end": 174,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of Hate Speech Following",
"sec_num": null
},
{
"text": "Functional tests in MHC test the ability of hate speech detection models to correctly classify different types of hateful and non-hateful content. The tests are fine-grained, distinct and contrastive. For example, we have separate tests for hate expressed using profanity (e.g. DE:\"F*ck alle Juden\" 3 ) and non-hateful uses of profanity (e.g. DE:\"Ich hatte noch nie so einen guten F*ck\") in every language. Related functional tests are grouped into classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Defining Functional Tests",
"sec_num": "2.1"
},
{
"text": "We selected functional tests for each language in MHC to broadly match those from the original HATECHECK. R\u00f6ttger et al. (2021b) , in turn, motivated their selection of tests based on two factors: 1) a series of 21 interviews with NGO workers from the UK, US and Germany whose work directly relates to online hate, and 2) a review of previous hate speech research, particularly taxonomy work (e.g. Zampieri et al., 2019; Banko et al., 2020; Kurrek et al., 2020) , error analyses (e.g. Davidson et al., 2017; van Aken et al., 2018; and survey articles (e.g. Schmidt and Wiegand, 2017; Fortuna and Nunes, 2018; Vidgen et al., 2019) . All test cases are short text statements, and they are constructed to be clearly hateful or nonhateful according to our definition of hate speech.",
"cite_spans": [
{
"start": 106,
"end": 128,
"text": "R\u00f6ttger et al. (2021b)",
"ref_id": "BIBREF38"
},
{
"start": 398,
"end": 420,
"text": "Zampieri et al., 2019;",
"ref_id": "BIBREF57"
},
{
"start": 421,
"end": 440,
"text": "Banko et al., 2020;",
"ref_id": "BIBREF2"
},
{
"start": 441,
"end": 461,
"text": "Kurrek et al., 2020)",
"ref_id": "BIBREF21"
},
{
"start": 485,
"end": 507,
"text": "Davidson et al., 2017;",
"ref_id": "BIBREF12"
},
{
"start": 508,
"end": 530,
"text": "van Aken et al., 2018;",
"ref_id": "BIBREF47"
},
{
"start": 557,
"end": 583,
"text": "Schmidt and Wiegand, 2017;",
"ref_id": "BIBREF42"
},
{
"start": 584,
"end": 608,
"text": "Fortuna and Nunes, 2018;",
"ref_id": "BIBREF15"
},
{
"start": 609,
"end": 629,
"text": "Vidgen et al., 2019)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Selecting Functional Tests",
"sec_num": "2.2"
},
{
"text": "Overall, there are 27 functional tests grouped into 11 classes for each of the ten languages in MHC, except for Mandarin, which has 25 functional tests. Compared to the 29 functional tests in HATECHECK, we 1) exclude slur homonyms and reclaimed slurs, because they have no direct equivalents in most MHC languages, and 2) adapt functional tests for spelling variations to non-Latin script in Arabic and Mandarin. For Mandarin, there are two fewer tests for spelling variations and thus two fewer tests overall compared to the other nine languages. As in HATECHECK, the tests cover distinct expressions of hate, as well as contrastive non-hate, which shares lexical features with hate but is unambiguously non-hateful. We provide example cases in different languages for each functional test in Appendix A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selecting Functional Tests",
"sec_num": "2.2"
},
{
"text": "Distinct Expressions of Hate MHC tests different types of derogatory hate speech (F1-4) and hate expressed through threatening language (F5/6). It tests hate expressed using slurs (F7) and profanity (F8). MHC also tests hate expressed through pronoun reference (F10/11), negation (F12) and phrasing variants, specifically questions and opinions (F14/15). Lastly, MHC tests hate containing spelling variations such as missing characters or leet speak (F23-34), as well as spelling variations in non-Latin script for Arabic (F28-31) and Mandarin (F32-34). For example, there is an Arabic-specific test for spellings in Arabizi, the Arabic chat alphabet (F30), and a Mandarin-specific test for spellings in Pinyin, Mandarin's romanised version (F34).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selecting Functional Tests",
"sec_num": "2.2"
},
{
"text": "Contrastive Non-Hate MHC tests non-hateful contrasts which use profanity (F9) and negation (F13) as well as protected group identifiers (F16/17). It also tests non-hateful contrasts in which hate speech is quoted or referenced, specifically counter speech, i.e. direct responses to hate speech which seek to act against it (F18/19). Lastly, MHC tests non-hateful contrasts which target outof-scope entities such as objects (F20-22) rather than a protected group.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selecting Functional Tests",
"sec_num": "2.2"
},
{
"text": "All test cases in MHC are hand-crafted by nativespeaking language experts who have prior experience researching and/or annotating hate speech. 4 Each test case is a short statement that corresponds to exactly one gold standard label. HATECHECK's English test cases provide a starting point for MHC, but experts were encouraged to creatively adapt cases rather than providing literal translations, so as to retain relevance and realism. Adapting languagespecific idioms (e.g. \"murder that beat\"), slurs (e.g. \"c*nt\") and profanity (e.g. \"f*ck\") in particular required more creativity.",
"cite_spans": [
{
"start": 143,
"end": 144,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Test Cases",
"sec_num": "2.3"
},
{
"text": "Test cases are generated at scale using templates (Dixon et al., 2018; Garg et al., 2019; Ribeiro et al., 2020) , in which we replace tokens for protected group identifiers (e.g. NL:\"Ik haat [IDENT] .\") and slurs (e.g. NL:\"Voor mij ben je een [SLR] .\"). Compared to HATECHECK, the templates for MHC required more granular placeholders for gender-and case-inflected languages. German templates, for example, were instantiated based on gender and count of the identity group term, as well as its case: the male singular of Jew (\"[male_IDENT_S]\") in the German Akkusativ would be \"den Juden\" whereas its Nominativ would be \"der Jude\". The benefits of the template approach are that 1) MHC has an equal number of cases targeted at different protected groups in each language, and 2) the templates can easily be used to generate more test cases targeted at other protected groups in the future.",
"cite_spans": [
{
"start": 50,
"end": 70,
"text": "(Dixon et al., 2018;",
"ref_id": "BIBREF13"
},
{
"start": 71,
"end": 89,
"text": "Garg et al., 2019;",
"ref_id": "BIBREF18"
},
{
"start": 90,
"end": 111,
"text": "Ribeiro et al., 2020)",
"ref_id": "BIBREF36"
},
{
"start": 191,
"end": 198,
"text": "[IDENT]",
"ref_id": null
},
{
"start": 243,
"end": 248,
"text": "[SLR]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Test Cases",
"sec_num": "2.3"
},
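The template-based generation described in "Generating Test Cases" (§2.3) can be illustrated with a minimal Python sketch. The templates, identifier list and slur placeholders below are illustrative assumptions rather than the actual MHC templates, and the sketch ignores the gender- and case-inflected placeholders (e.g. [male_IDENT_S]) that languages like German require.

```python
# Minimal sketch of template-based test case generation with [IDENT] and [SLR]
# placeholders. Templates, identifiers and slurs are illustrative assumptions.

templates = [
    ("Ik haat [IDENT].", "hateful"),              # hypothetical Dutch template
    ("Voor mij ben je een [SLR].", "hateful"),    # hypothetical Dutch template
    ("Ik haat regenachtige dagen.", "non-hateful"),  # individually crafted case
]

identifiers = ["vrouwen", "moslims", "immigranten"]  # subset of protected groups
slurs = ["<slur_1>", "<slur_2>"]                     # placeholders only


def expand(template: str, label: str):
    """Yield one test case per placeholder filler, with a secondary target label."""
    if "[IDENT]" in template:
        for group in identifiers:
            yield {"text": template.replace("[IDENT]", group),
                   "label": label, "target": group}
    elif "[SLR]" in template:
        for slur in slurs:
            yield {"text": template.replace("[SLR]", slur),
                   "label": label, "target": None}
    else:
        yield {"text": template, "label": label, "target": None}


test_cases = [case for tpl, lab in templates for case in expand(tpl, lab)]
print(len(test_cases), test_cases[0])
```

Because every [IDENT] template is instantiated once per group, this kind of expansion yields an equal number of cases per protected group, which is the property the paper highlights.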
{
"text": "MHC covers seven protected groups in each language. The selection of protected groups varies by language, with the goal of maximising the relevance of MHC to the cultural context of each language. Some groups, like women, are covered in all ten languages, whereas others, like lower caste people (HI:\"shudra\") are only covered in one language. For cases targeted at or referencing a particular protected group, we provide a secondary label for the group that is targeted. For an overview of protected groups covered by MHC in each language, see Appendix B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Test Cases",
"sec_num": "2.3"
},
{
"text": "In total, we generate 36,582 test cases across the ten languages in MHC, of which 25,511 (69.7%) are hateful and 11,071 (30.2%) are non-hateful. There is some variation in the number of test cases between different languages due to different numbers of functional tests and slurs, with Polish having the most cases (3,815) and Mandarin the least (3,378). 88.6% of all test cases are generated from templates. The remaining test cases do not use template tokens (e.g. PT:\"O clima hoje est\u00e1 f*da\") and are crafted individually. The average test case length is 8.5 words (std. dev. = 3.6) or 46.0 characters (std. dev. = 21.3). This resembles the composition of the original English HATECHECK.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Test Cases",
"sec_num": "2.3"
},
{
"text": "Validation To validate gold standard labels in MHC, all 36,582 test cases were reviewed by three native-speaking annotators. For each language we recruited at least 12 annotators, so that in total more than 120 annotators worked on MHC. 5 Annotation was prescriptive (R\u00f6ttger et al., 2021a) , meaning that annotators were tasked with adhering to clear annotation guidelines reflecting our definition of hate speech, rather than sharing their personal view on what is and is not hateful. 6 Compared to the original HATECHECK, where four out of five annotators confirmed the gold label for 99.4% of all test cases, there was more disagreement on MHC (see Appendix D). Upon review, language experts found that the vast majority of disagreement in MHC stemmed from annotators failing to apply the provided guidelines. For example, annotators mislabelled several instances of clearly hateful slurs (e.g. DE:\"Du bist eine F*tze, mehr nicht\"). Therefore, we did not exclude any cases from MHC. To enable further analysis and data filtering, we publish all annotator labels and mark up cases and templates where annotator majority labels differ from the gold labels.",
"cite_spans": [
{
"start": 237,
"end": 238,
"text": "5",
"ref_id": null
},
{
"start": 267,
"end": 290,
"text": "(R\u00f6ttger et al., 2021a)",
"ref_id": "BIBREF37"
},
{
"start": 487,
"end": 488,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Test Cases",
"sec_num": "2.3"
},
{
"text": "3 Testing Models with MHC",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Test Cases",
"sec_num": "2.3"
},
{
"text": "As a suite of functional tests, MHC is broadly applicable across hate speech detection models for the ten languages that it covers. Users can test multilingual models across all ten languages or use a language-specific test suite to test monolingual models. MHC is model agnostic, and can be used to compare different architectures or different datasets in zero-, few-or many-shot settings, and even commercial models for which public information on architecture and training data is limited.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Setup",
"sec_num": "3.1"
},
{
"text": "We test XLM-T (Barbieri et al., 2021) , an XLM-R model (Conneau et al., 2020) pre-trained on an additional 198 million Twitter posts in over 30 languages. 7 XLM-R is a widely-used architecture for multilingual language modelling, which has been shown to achieve near state-of-the-art performance on multilingual hate speech detection (Banerjee et al., 2021; Mandl et al., 2021) . We chose XLM-T over XLM-R after initial experiments showed the former to outperform the latter on several hate speech detection datasets as well as MHC.",
"cite_spans": [
{
"start": 14,
"end": 37,
"text": "(Barbieri et al., 2021)",
"ref_id": "BIBREF3"
},
{
"start": 55,
"end": 77,
"text": "(Conneau et al., 2020)",
"ref_id": "BIBREF10"
},
{
"start": 334,
"end": 357,
"text": "(Banerjee et al., 2021;",
"ref_id": "BIBREF1"
},
{
"start": 358,
"end": 377,
"text": "Mandl et al., 2021)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Transformer Models",
"sec_num": null
},
{
"text": "We fine-tune XLM-T on three widely-used hate speech datasets -one Spanish , one Italian (Sanguinetti et al., 2020) and one Portuguese (Fortuna et al., 2019 ). Accordingly, model performance is many-shot for Spanish, Italian and Portuguese, and zero-shot for all other languages.",
"cite_spans": [
{
"start": 88,
"end": 114,
"text": "(Sanguinetti et al., 2020)",
"ref_id": "BIBREF39"
},
{
"start": 134,
"end": 155,
"text": "(Fortuna et al., 2019",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Transformer Models",
"sec_num": null
},
{
"text": "All three datasets have an explicit label for hate speech that matches our definition of hate ( \u00a71), so that we can collapse all other labels into a single non-hateful label, to match MHC's binary format. We focus our discussion on XTC, an XLM-T model fine-tuned on a combination of these three datasets, which outperforms XLM-T models finetuned on the three datasets individually (see Appendix F). For the Spanish and Portuguese data, we use stratified 80/10/10 train/dev/test splits. For the Italian data, we use the original 91.6/8.4 train/test split, and then split the original training set into 90/10 train/dev portions. On the held-out test sets, XTC achieves 84.7 macro F1 for Spanish, 76.3 for Italian, and 73.3 for Portuguese, which is better than results reported in the original papers. 8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Transformer Models",
"sec_num": null
},
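A minimal sketch of the label collapsing and stratified 80/10/10 splitting described above, assuming the fine-tuning data is available as a pandas DataFrame; the file name and the label value "hateful" are hypothetical stand-ins for the source datasets' actual formats.

```python
# Sketch of collapsing dataset labels to the binary MHC scheme and creating
# stratified 80/10/10 train/dev/test splits. File and label names are assumptions.

import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("spanish_hate_speech.csv")  # hypothetical export of the dataset

# Keep the explicit hate speech label; collapse every other label into non-hateful.
df["binary_label"] = (df["label"] == "hateful").astype(int)

# Stratified 80/10/10: first split off 20%, then halve that into dev and test.
train_df, rest_df = train_test_split(
    df, test_size=0.2, stratify=df["binary_label"], random_state=42)
dev_df, test_df = train_test_split(
    rest_df, test_size=0.5, stratify=rest_df["binary_label"], random_state=42)

print(len(train_df), len(dev_df), len(test_df))
```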
{
"text": "Testing Commercial Models Few commercial models for hate speech detection are available for research use, and only a small subset of them can handle non-English language content. The best candidate for testing is Perspective, a free API built by Google's Jigsaw team. 9 Given an input text, Perspective provides percentage scores for attributes such as \"toxicity\" and \"identity attack\". The \"toxicity\" attribute covers a wide range of languages, including the ten in MHC. However, compared to hate speech, \"toxicity\" is a much broader concept, which includes other forms of abuse and profanity -some of which would be considered contrastive non-hate in the context of MHC. On the other hand, Perspective's \"identity attack\" aims to identify \"negative or hateful comments targeting someone because of their identity\" and thus aligns with our definition of hate speech ( \u00a71), but it is only available for three languages in MHC -German, Italian and Portuguese. For these three languages, XTC consistently outperforms Perspective (see Appendix H).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Transformer Models",
"sec_num": null
},
{
"text": "Performance Across Labels MHC reveals clear gaps in XTC's performance across all ten languages (Table 2) . Overall performance in terms of macro F1 is best on Mandarin (71.5), Italian (69.6) and Spanish (69.5), and worst on Hindi (58.1), Arabic (59.4) and Polish (66.2). F1 scores are higher for hateful cases than for non-hateful cases across all languages, with Hindi and Arabic exhibiting the biggest differences between hate and non-hate (\u223c40pp). For hateful cases, XTC performs best in terms of F1 score on Portuguese (83.5) and worst on Polish (76.1), but performance differences are Performance Across Functional Tests Evaluating XTC on each functional test across languages (Table 1) reveals specific model weaknesses. XTC performs better than a random binarychoice baseline (50% accuracy) on all functional tests for hate, with the exception of Spanish statements with hateful slurs (F7, 43.3% accuracy). Explicit dehumanisation (F3), threatening language (F5/6) and hate expressed using profanity (F8) appear to be the least challenging for the model, with relatively high and consistent accuracy across languages. In comparison, XTC generally performs worse on implicit hate (F4) and spelling variations (F23+). For other hateful functional tests, performance differs noticeably between languages. For example, XTC is very accurate on F10: hate expressed through reference in subsequent clauses in Spanish (94.0%), but much less so on Polish hate of the same kind (65.0%). Performance is worst on hate expressed using slurs (F7), with XTC misclassifying Spanish (43.3%), Polish (51.1%) and Italian statements (52.1%) at particularly high rates.",
"cite_spans": [],
"ref_spans": [
{
"start": 95,
"end": 104,
"text": "(Table 2)",
"ref_id": "TABREF2"
},
{
"start": 682,
"end": 691,
"text": "(Table 1)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.2"
},
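The per-language and per-functional-test results reported above can be computed from MHC with a few lines of scikit-learn. The sketch below assumes model predictions have been merged into a DataFrame; the column names ("language", "functionality", "gold", "pred") and file name are assumptions, not the released data format.

```python
# Sketch of the evaluation behind Tables 1 and 2: macro F1 per language and
# accuracy per functional test. Column and file names are assumptions.

import pandas as pd
from sklearn.metrics import accuracy_score, f1_score

mhc = pd.read_csv("mhc_with_predictions.csv")  # hypothetical merged file

# Macro F1 per language, as reported in Table 2.
per_language = mhc.groupby("language").apply(
    lambda g: f1_score(g["gold"], g["pred"], average="macro"))

# Accuracy per language and functional test, as reported in Table 1.
per_functionality = mhc.groupby(["language", "functionality"]).apply(
    lambda g: accuracy_score(g["gold"], g["pred"]))

print(per_language.round(3))
print(per_functionality.round(3).head())
```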
{
"text": "83.0 - - - - - - - - - F29: AR: Repeated characters hateful 82.9 - - - - - - - - - F30: AR: Arabizi (Arabic chat alphabet) hateful 60.9 - - - - - - - - - F31: AR: Accepted alt. spellings hateful 85.6 - - - - - - - - - F32: ZH: Homophone char. replacement hateful - - - - - - 89.3 - - - F33: ZH: Character decomposition hateful - - - - - - 87.7 - - - F34: ZH: Pinyin spelling hateful - - - - - - 76.5 - - -",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.2"
},
{
"text": "The only non-hateful functional test where performance is better than a random choice baseline across all ten languages is for non-hateful uses of profanity (F9). In contrast, negated hate (F13) and counter speech (F18/19), are misclassified at very high rates in every language. For example, XTC is only 9.6% accurate on Arabic denouncements of hate that make direct reference to it (F22). Performance differs more noticeably across languages for non-hateful uses of protected group identity terms (F16/17) and abuse against non-protected targets (F20-22). For example, XTC is 81.4% accurate on positive statements using group identifiers in Italian, but only 40.0% accurate on the same type of statement in Hindi.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.2"
},
{
"text": "Performance Across Target Groups MHC can show whether models exhibit unintended biases in target coverage (Dixon et al., 2018) by testing their performance on cases generated from [IDENT] templates. These cases are exactly the same, apart from the group that is targeted (e.g. IT: \"Odio i gay\" and IT:\"Odio gli immigrati\"). To illustrate, Table 3 shows model accuracy on these cases for French, Italian and Mandarin, which all cover the same seven protected groups. In French, XTC performs substantially worse on cases targeting gay people (56.4 macro F1) than on equivalent cases targeting women (70.9) or immigrants (66.8). In Italian, XTC performs particularly poorly on cases targeting disabled people (64.4), and in Mandarin on cases targeting black people (66.7), compared to other target groups.",
"cite_spans": [
{
"start": 106,
"end": 126,
"text": "(Dixon et al., 2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 339,
"end": 346,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.2"
},
{
"text": "MHC reveals clear functional weaknesses in the XTC model across all ten languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "3.3"
},
{
"text": "First, XTC is overly sensitive to key words and phrases. XTC does have some ability to distinguish between different uses of potentially hateful key words, as demonstrated by its relatively good performance on both hateful and non-hateful uses of profanity across languages (F8/9). However, in languages like Hindi and Arabic, even entries which refer positively to protected groups (F17) are misclassified as hateful at high rates. Accuracy on non-hateful contrasts to hateful phrases, like negated hate (F13) and counter speech (F18/19), is low across all languages. Thus, XTC appears to rely at least in part on overly simplistic decision rules (e.g. that the use of group identifiers is a signal for hate speech). It does not sufficiently register linguistic signals like negation, that reframe hateful phrases into clearly non-hateful statements (e.g. FR:\"Je ne d\u00e9teste pas les blacks.\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "3.3"
},
{
"text": "Second, XTC is biased in its target coverage across languages, classifying hate directed against some protected groups less accurately than equivalent cases directed at others (Table 3) .",
"cite_spans": [],
"ref_spans": [
{
"start": 176,
"end": 185,
"text": "(Table 3)",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "3.3"
},
{
"text": "These are clear weaknesses that were not apparent from XTC's strong performance on held-out test sets. They match findings from HATECHECK for monolingual English models (R\u00f6ttger et al., 2021b) . If XTC was used to moderate content, these weaknesses could cause serious harm. In particular, misclassifying counter speech risks undermining positive efforts to fight hate speech, and biased target coverage may create and entrench biases in the protections afforded to different groups. However, the multilingual nature of MHC also allows for additional, novel insights.",
"cite_spans": [
{
"start": 169,
"end": 192,
"text": "(R\u00f6ttger et al., 2021b)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "3.3"
},
{
"text": "First, we can evaluate cross-lingual performance in both zero-and many-shot settings (Table 2) . XTC performs particularly well on Italian, Spanish and Portuguese -the languages it was fine-tuned on -but also on French, which is another Romance language. Performance on other European languages is also relatively high. By contrast, Hindi and Arabic clearly stand out as particularly challenging, with substantially lower performance. This suggests that cross-lingual transfer works better across more closely related languages and poses a challenge for more dissimilar languages. 10 Cultural differences across language settings may also affect transferability. We may for example expect hate in Italian and French to be more similar to each other than to hate in Hindi, along such dimensions as who the targets of hate are, which would likely affect the cross-lingual performance of hate speech detection models. Both hypotheses could be explored in future research.",
"cite_spans": [],
"ref_spans": [
{
"start": 85,
"end": 94,
"text": "(Table 2)",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "3.3"
},
{
"text": "Second, we can evaluate differences in language-specific model behaviour, again in zero-as well as many-shot settings. For example, XTC tends to overpredict hate in Hindi and Arabic, both zero-shot, whereas it tends to underpredict hate in many-shot Spanish and zero-shot Polish (Table 1) . XTC also exhibits different target biases across languages, for zero-shot settings like in French and Mandarin as well as many-shot Italian (Table 3) . This suggests that, in addition to accounting for differences in high-level performance, multilingual models may require very different calibration and adaptation across languages, even for languages they were not directly fine-tuned on. Overall, the insights generated by MHC suggest two potential steps towards the development of more effective multilingual hate speech detection models: 1) creating training data in diverse languages to reduce language gaps, even for models with significant cross-lingual transfer abilities, and 2) evaluating and addressing language-specific model biases as well as differences in performance across languages.",
"cite_spans": [],
"ref_spans": [
{
"start": 279,
"end": 288,
"text": "(Table 1)",
"ref_id": "TABREF1"
},
{
"start": 431,
"end": 440,
"text": "(Table 3)",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "3.3"
},
{
"text": "The limitations of the original HATECHECK also apply to MHC. First, MHC diagnoses specific model weaknesses rather than generalisable model strengths, and should be used to complement rather than substitute evaluation on held-out test sets of real-world hate speech. Second, MHC does not test functionalities related to context outside of individual documents or modalities other than text. Third, MHC only covers a limited set of protected groups and slurs across languages, but can easily be expanded using the provided case templates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations",
"sec_num": "4"
},
{
"text": "The multilingual nature of MHC creates additional considerations. First, comparisons of performance between languages are not strictly like-forlike, because cases in different languages are not literal translations of each other. This limitation is compounded for Arabic and Mandarin, which have unique functional tests for spelling variations. Second, even though MHC includes a diverse set of ten languages, these languages still only make up a fraction of languages spoken across the world. To our knowledge, MHC covers more languages than any other hate speech dataset, but hundreds of other languages remain neglected and should be considered for future expansions of MHC. Third, the selection of functional tests in MHC is based on HATECHECK, which was informed in part by interviews in an anglo-centric setting. We worked with native-speaking language experts and created additional tests to account for non-Latin scripts in Arabic and Mandarin, but future research may consider additional interviews or other languagespecific steps to inform expansions of MHC. Lastly, individual languages, like the ten included in MHC, are not monolithic but vary between speakers, especially across geographic regions and sociodemographic groups. We use widely-spoken dialects for the ten languages in MHC (see \u00a71), but cannot cover all variations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations",
"sec_num": "4"
},
{
"text": "Diagnostic Hate Speech Datasets The concept of functional testing from software engineering (Beizer, 1995) was first applied to NLP model evaluation by Ribeiro et al. (2020) . The original HATE-CHECK (R\u00f6ttger et al., 2021b ) then introduced functional tests for hate speech detection models, using hand-crafted test cases to diagnose model weaknesses on different kinds of hate and non-hate. Kirk et al. (2021) applied the same framework to emoji-based hate. Manerba and Tonelli (2021) provide smaller-scale functional test for abuse detection systems. Other research has instead collected real-world examples of hate and annotated them for more fine-grained labels, such as the hate target, to enable more comprehensive error analysis (e.g. Mathew et al., 2021; . Instead of creating a static dataset, Calabrese et al. (2021) devise a hate speech-specific data augmentation technique based on simple heuristics to create additional test cases based on model training data. MHC is the first non-English diagnostic dataset for hate speech detection models.",
"cite_spans": [
{
"start": 92,
"end": 106,
"text": "(Beizer, 1995)",
"ref_id": "BIBREF5"
},
{
"start": 152,
"end": 173,
"text": "Ribeiro et al. (2020)",
"ref_id": "BIBREF36"
},
{
"start": 200,
"end": 222,
"text": "(R\u00f6ttger et al., 2021b",
"ref_id": "BIBREF38"
},
{
"start": 392,
"end": 410,
"text": "Kirk et al. (2021)",
"ref_id": "BIBREF20"
},
{
"start": 742,
"end": 762,
"text": "Mathew et al., 2021;",
"ref_id": "BIBREF27"
},
{
"start": 803,
"end": 826,
"text": "Calabrese et al. (2021)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Non-English Hate Speech Data English is by far the most common language for hate speech datasets, as recent reviews by Vidgen and Derczynski (2020) and Poletto et al. (2021) confirm. Encouragingly, more and more non-English datasets are being created, particularly for shared tasks (e.g. Wiegand et al., 2018; Ptaszynski et al., 2019; Fersini et al., 2020; Zampieri et al., 2020; Mulki and Ghanem, 2021) . However, very few datasets cover more than one language (Ousidhoum et al., 2019; , and to our knowledge no dataset covers as many languages as MHC.",
"cite_spans": [
{
"start": 119,
"end": 147,
"text": "Vidgen and Derczynski (2020)",
"ref_id": "BIBREF48"
},
{
"start": 152,
"end": 173,
"text": "Poletto et al. (2021)",
"ref_id": "BIBREF34"
},
{
"start": 288,
"end": 309,
"text": "Wiegand et al., 2018;",
"ref_id": "BIBREF54"
},
{
"start": 310,
"end": 334,
"text": "Ptaszynski et al., 2019;",
"ref_id": "BIBREF35"
},
{
"start": 335,
"end": 356,
"text": "Fersini et al., 2020;",
"ref_id": "BIBREF14"
},
{
"start": 357,
"end": 379,
"text": "Zampieri et al., 2020;",
"ref_id": null
},
{
"start": 380,
"end": 403,
"text": "Mulki and Ghanem, 2021)",
"ref_id": "BIBREF28"
},
{
"start": 462,
"end": 486,
"text": "(Ousidhoum et al., 2019;",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Multilingual Hate Speech Detection The scarcity of non-English hate speech datasets has motivated research into few-and zero-shot crosslingual hate speech detection, i.e. detection with little or no training data in the target language. However, model performance is generally found to be lacking in such settings (Stappen et al., 2020; Leite et al., 2020; Nozza, 2021) . Others have thus explored data augmentation techniques based on machine translation, which yield limited improvements (Pamungkas et al., 2021; Wang and Banko, 2021) . Overall, multilingual models trained or finetuned directly on the target languages, i.e. in manyshot settings, are still consistently found to perform best (Aluru et al., 2020; Pelicon et al., 2021) . MHC's functional tests are model-agnostic and can be used to evaluate multilingual hate speech detection models trained on any amount of data.",
"cite_spans": [
{
"start": 314,
"end": 336,
"text": "(Stappen et al., 2020;",
"ref_id": "BIBREF45"
},
{
"start": 337,
"end": 356,
"text": "Leite et al., 2020;",
"ref_id": "BIBREF22"
},
{
"start": 357,
"end": 369,
"text": "Nozza, 2021)",
"ref_id": "BIBREF30"
},
{
"start": 490,
"end": 514,
"text": "(Pamungkas et al., 2021;",
"ref_id": "BIBREF32"
},
{
"start": 515,
"end": 536,
"text": "Wang and Banko, 2021)",
"ref_id": "BIBREF52"
},
{
"start": 695,
"end": 715,
"text": "(Aluru et al., 2020;",
"ref_id": "BIBREF0"
},
{
"start": 716,
"end": 737,
"text": "Pelicon et al., 2021)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "In this article, we introduced MULTILINGUAL HATECHECK (MHC), a suite of functional tests for multilingual hate speech detection models. MHC expands the English-language HATECHECK (R\u00f6ttger et al., 2021b) to ten additional languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish. To our knowledge, MHC covers more languages than any other hate speech dataset. Across the languages, native-speaking language experts created 36,582 test cases, which provide contrasts between hateful and non-hateful content. This makes MHC challenging to hate speech detection models and allows for a more effective evaluation of model quality.",
"cite_spans": [
{
"start": 179,
"end": 202,
"text": "(R\u00f6ttger et al., 2021b)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We demonstrated MHC's utility as a diagnostic tool by testing a high-performing multilingual transformer model, which was fine-tuned on three widely-used hate speech datasets in three different languages. MHC revealed the model to be 1) overly sensitive to key words and key phrases, 2) biased in its target coverage and 3) error-prone and inconsistent in cross-lingual transfer, in both zeroand many-shot settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "So far, hate speech research has primarily focused on English-language content and thus neglected billions of non-English speakers across the world. We hope that MHC can contribute to closing this language gap and that by diagnosing specific model weaknesses across languages it can support the development of better multilingual hate speech detection models in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "B. LANGUAGE VARIETY MHC covers ten languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "C. SPEAKER DEMOGRAPHICS All test cases across the ten languages in MHC were handcrafted by native-speaking language experts -one per language. All ten had previously worked on hate speech as researchers and/or annotators. Six out of ten experts identify as women, the rest as men. Four out of ten identify as non-White.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "D. ANNOTATOR DEMOGRAPHICS More than 120 annotators provided annotations on MHC, with at least 12 annotators per language. Annotators were recruited on Appen, a crowdworking provider. Appen gave no demographic information beyond guaranteeing that annotators were native speakers of the languages in which they completed their work. In setting up the annotation task and communicating with annotators, we followed guidance for protecting and monitoring annotator wellbeing provided by Vidgen et al. (2019) . E. SPEECH SITUATION All test cases were created between November 2021 and January 2022.",
"cite_spans": [
{
"start": 483,
"end": 503,
"text": "Vidgen et al. (2019)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "F. TEXT CHARACTERISTICS The composition of the dataset is described in detail in \u00a72.2 and \u00a72.3 of the article.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Annotator disagreement on MHC (Table 5) is higher than on the original HATECHECK (R\u00f6ttger et al., 2021b) , where four out of five annotators agreed on the gold label in 99.4% of cases. There is a lot of variation in disagreement across languages, with most having less than 5% disagreement, and only Mandarin and French more than 10%. Upon review, our language experts found that the vast majority of disagreements stemmed from annotator error, where annotators failed to apply the explicit, prescriptive annotation guidelines they received. For example, hate and more general abuse were often confused, and abuse against non-protected targets was often labelled as hateful. Therefore, we did not exclude any cases from MHC. To enable further analysis and data filtering, we provide annotator labels with the test suite and mark up cases and templates where there is disagreement between the annotator majority labels and the gold labels from our language experts. , which in turn originates in the dataset. The other 4,100 tweets were collected as part of the Italian hate speech monitoring project \"Contro l'Odio\" (Capozzi et al., 2019).",
"cite_spans": [
{
"start": 81,
"end": 104,
"text": "(R\u00f6ttger et al., 2021b)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [
{
"start": 30,
"end": 39,
"text": "(Table 5)",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "D Annotator Disagreement on MHC",
"sec_num": null
},
{
"text": "Annotation The Sanguinetti et al. (2018) tweets were annotated in two phases, first by expert annotators, then by crowdworkers from CrowdFlower. Each tweet was annotated by two to three annotators for six attributes: hate speech, aggressiveness, offensiveness, irony, stereotype, and intensity. For inter-annotator agreement, the authors report a Krippendorff's Alpha of 38% for CrowdFlower, and a Cohen's Kappa of 45% for the expert annotators. The \"Contro l'Odio\" tweets were annotated by crowdworkers, but inter-annotator agreement was not reported. (Sanguinetti et al., 2020) .",
"cite_spans": [
{
"start": 553,
"end": 579,
"text": "(Sanguinetti et al., 2020)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "D Annotator Disagreement on MHC",
"sec_num": null
},
{
"text": "Data We use all 8,100 tweets (41.8% hate). Annotation All tweets in the dataset were annotated as either hateful or non-hateful by 18 non-expert Portuguese native speakers were hired. Each tweet was annotated by three annotators, and inter-annotator agreement was low, with a Cohen's Kappa of 0.17.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D Annotator Disagreement on MHC",
"sec_num": null
},
{
"text": "We use all 5,668 tweets (31.5% hate).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": null
},
{
"text": "Definition of Hate Speech \"Hate speech is language that attacks or diminishes, that incites violence or hate against groups, based on specific characteristics such as physical appearance, religion, descent, national or ethnic origin, sexual orientation, gender identity or other, and it can occur with different linguistic styles, even in subtle forms or when humour is used.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": null
},
{
"text": "Sampling Tweets were sampled using three methods: 1) monitoring potential victims of hate accounts, 2) retrieving tweets from the history of identified haters, and 3) retrieving tweets using neutral and derogatory keywords, polarising hashtags, and stems. This yielded 19,600 tweets, of which 6,600 are in Spanish and the rest in English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E.3 Basile et al. (2019) Spanish Data",
"sec_num": null
},
{
"text": "Annotation The dataset was annotated for three attributes: hate speech, target range (individuals or groups), and aggressiveness. First, all data was annotated by at least three Figure Eight crowdworkers. Inter-annotator agreement on Spanish hate speech was high, with a Cohen's Kappa of 0.89. Second, two experts annotated each tweet. The final label was assigned based on majority vote across the crowd and expert annotators.",
"cite_spans": [],
"ref_spans": [
{
"start": 178,
"end": 190,
"text": "Figure Eight",
"ref_id": null
}
],
"eq_spans": [],
"section": "E.3 Basile et al. (2019) Spanish Data",
"sec_num": null
},
{
"text": "Data We use all 6,600 Spanish tweets, of which 41.5% are labelled as hateful.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E.3 Basile et al. (2019) Spanish Data",
"sec_num": null
},
{
"text": "Definition of Hate Speech \"Any communication that disparages a person or a group on the basis of some characteristic such as race, color, ethnicity, gender, sexual orientation, nationality, religion, or other characteristics.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E.3 Basile et al. (2019) Spanish Data",
"sec_num": null
},
{
"text": "Before using the datasets for fine-tuning, we remove newline and tab characters. We replace URLs and user mentions with [URL] and [USER] tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E.4 Pre-Processing",
"sec_num": null
},
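A minimal sketch of this pre-processing, assuming simple regular expressions for URLs and user mentions; the exact patterns used by the authors are not specified in the paper.

```python
# Minimal sketch of the pre-processing described above: drop newlines and tabs,
# and replace URLs and user mentions with [URL] and [USER] tokens.
# The regular expressions below are assumptions.

import re

URL_RE = re.compile(r"https?://\S+|www\.\S+")
MENTION_RE = re.compile(r"@\w+")


def preprocess(text: str) -> str:
    text = text.replace("\n", " ").replace("\t", " ")
    text = URL_RE.sub("[URL]", text)
    text = MENTION_RE.sub("[USER]", text)
    return " ".join(text.split())  # collapse repeated whitespace


print(preprocess("@user check https://example.com\nthis out"))
# -> "[USER] check [URL] this out"
```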
{
"text": "We denote the three XLM-T models trained on Italian Sanguinetti et al. (2020) , Portuguese Fortuna et al. 2019and Spanish as XLM-IT, XLM-PT and XLM-ES respectively. XTC denotes the XLM-T model trained on the combination of all three datasets, for which we report results in the main body of this article. XTC generally outperforms the monolingual models when compared on the respective held-out test sets (Table 6) as well as MHC (Table 7) . ",
"cite_spans": [
{
"start": 52,
"end": 77,
"text": "Sanguinetti et al. (2020)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [
{
"start": 430,
"end": 439,
"text": "(Table 7)",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "F XLM-T Model Comparison",
"sec_num": null
},
{
"text": "Model Architecture We implemented XLM-T model (Barbieri et al., 2021) using the transformers Python library (Wolf et al., 2020) . XLM-T is an XLM-R (Conneau et al., 2020) model pre-trained on an additional 198 million Twitter posts in over 30 languages. It has 12 layers, a hidden layer size of 768, 12 attention heads and a total of 278 million parameters. For sequence classification, we added a linear layer with softmax output.",
"cite_spans": [
{
"start": 46,
"end": 69,
"text": "(Barbieri et al., 2021)",
"ref_id": "BIBREF3"
},
{
"start": 108,
"end": 127,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF55"
},
{
"start": 148,
"end": 170,
"text": "(Conneau et al., 2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "G XLM-T Model Details",
"sec_num": null
},
{
"text": "Fine-Tuning All models use unweighted crossentropy loss and the AdamW optimiser (Loshchilov and Hutter, 2019 ) with a 5e-5 learning rate and a 0.01 weight decay. For regularisation, we set a 10% dropout probability, and for batch size we use 32. For each model, we train for 50 epochs, with an early stopping strategy with a patience of 5 epochs, with respect to improvements in the binary F1-score on the validation split. We store the checkpoint with the highest binary F1-score and use it as our final model.",
"cite_spans": [
{
"start": 80,
"end": 108,
"text": "(Loshchilov and Hutter, 2019",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "G XLM-T Model Details",
"sec_num": null
},
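The fine-tuning setup described above could be expressed with the HuggingFace Trainer roughly as follows. This is a sketch under the stated hyperparameters, not the authors' training script; the train and dev dataset arguments are assumed to be pre-tokenised HuggingFace datasets with integer labels, and the builder function name is hypothetical.

```python
# Sketch of the fine-tuning setup: AdamW (the Trainer default), lr 5e-5,
# weight decay 0.01, 10% dropout, batch size 32, up to 50 epochs,
# early stopping with patience 5 on binary F1. Not the authors' script.

import numpy as np
from sklearn.metrics import f1_score
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          EarlyStoppingCallback, Trainer, TrainingArguments)

MODEL_NAME = "cardiffnlp/twitter-xlm-roberta-base"


def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"binary_f1": f1_score(labels, preds)}


def build_trainer(train_dataset, dev_dataset):
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(
        MODEL_NAME, num_labels=2, hidden_dropout_prob=0.1)
    args = TrainingArguments(
        output_dir="xtc",
        learning_rate=5e-5,
        weight_decay=0.01,
        per_device_train_batch_size=32,
        num_train_epochs=50,
        evaluation_strategy="epoch",
        save_strategy="epoch",
        load_best_model_at_end=True,
        metric_for_best_model="binary_f1",
    )
    return Trainer(
        model=model,
        args=args,
        train_dataset=train_dataset,   # assumed: pre-tokenised training split
        eval_dataset=dev_dataset,      # assumed: pre-tokenised validation split
        tokenizer=tokenizer,
        compute_metrics=compute_metrics,
        callbacks=[EarlyStoppingCallback(early_stopping_patience=5)],
    )

# trainer = build_trainer(train_dataset, dev_dataset)
# trainer.train()
```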
{
"text": "We ran all computations on an AWS \"g4dn.2xlarge\" server equipped with one NVIDIA T4 GPU card. The average wall time for each each training step was around 3 seconds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computation",
"sec_num": null
},
{
"text": "We make the XTC model available for download on HuggingFace.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Access",
"sec_num": null
},
{
"text": "We test Perspective's \"identity attack\" attribute and convert the percentage score to a binary label using a 50% cutoff. Testing was done in February 2022. On the held-out test sets for Italian (Sanguinetti et al., 2020) and Portuguese (Fortuna et al., 2019) , Perspective scored 70.7 and 64.1 macro F1. Perspective is outperformed on both languages by XTC, which scored 76.3 and 84.7 (Table 6 ).",
"cite_spans": [
{
"start": 194,
"end": 220,
"text": "(Sanguinetti et al., 2020)",
"ref_id": "BIBREF39"
},
{
"start": 236,
"end": 258,
"text": "(Fortuna et al., 2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 385,
"end": 393,
"text": "(Table 6",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "H Google Perspective Results",
"sec_num": null
},
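A sketch of querying Perspective's "identity attack" attribute and applying the 50% cutoff described above. The endpoint and response field names follow Perspective's public documentation and should be treated as assumptions; PERSPECTIVE_API_KEY is a placeholder, not a real key.

```python
# Sketch of scoring a text with Perspective's IDENTITY_ATTACK attribute and
# converting the probability to a binary label with a 50% cutoff.
# Endpoint and field names are assumptions based on the public API docs.

import requests

API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
API_KEY = "PERSPECTIVE_API_KEY"  # placeholder


def identity_attack_label(text: str, language: str) -> int:
    """Return 1 (hateful) if the identity-attack score is >= 0.5, else 0."""
    body = {
        "comment": {"text": text},
        "languages": [language],
        "requestedAttributes": {"IDENTITY_ATTACK": {}},
    }
    response = requests.post(API_URL, params={"key": API_KEY}, json=body)
    response.raise_for_status()
    score = response.json()["attributeScores"]["IDENTITY_ATTACK"]["summaryScore"]["value"]
    return int(score >= 0.5)

# Example (requires a valid API key):
# print(identity_attack_label("some test case text", "de"))
```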
{
"text": "On MHC, for the three languages it supports, Perspective (Table 8) Table 8 : Performance of the Perspective API across the three languages it supports in MHC. F1 score for hateful and non-hateful cases, and overall macro F1 score.",
"cite_spans": [],
"ref_spans": [
{
"start": 57,
"end": 66,
"text": "(Table 8)",
"ref_id": null
},
{
"start": 67,
"end": 74,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "H Google Perspective Results",
"sec_num": null
},
{
"text": "On dialects: we use Egyptian Arabic in Arabic script, European Dutch and French, High German, Standard Italian and Polish, Standard Hindi in Latin script, Standard Mandarin in Chinese script, Brazilian Portuguese and Argentinian Spanish.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We use ISO 639-1 codes to denote the different languages when giving test case examples. SeeTable 2for a legend.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "See Appendix C for more detail on the language experts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "See Appendix C for more detail on annotator recruitment.6 Annotation guidelines are available on GitHub.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We use the XLM-T implementation hosted on Hugging-Face: huggingface.co/cardiffnlp/twitter-xlm-roberta-base.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "See Appendix E for details on each dataset and preprocessing, and Appendix G for details on model training. 9 www.perspectiveapi.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The surprisingly good performance of XTC on Mandarin is a caveat, which may in part be explained by Mandarin being more prevalent than Arabic or Hindi in XLM-R's pre-training corpus(Conneau et al., 2020).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was commissioned from Rewire by Google's Jigsaw team. All authors worked on this project in their capacity as researchers at Rewire. We thank all annotators and language experts for their work, and all reviewers for their constructive feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "See Table 4 on the next page.",
"cite_spans": [],
"ref_spans": [
{
"start": 4,
"end": 11,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Example MHC Test Cases",
"sec_num": null
},
{
"text": "MHC covers seven protected groups per language.Arabic: women / , trans people / , gay people / , black people / , Jews / , disabled people / , immigrants / Dutch: women / vrouwen, trans people / transgenders, gay people / homoseksuelen, black people / zwarte personen, disabled people / gehandicapten, immigrants / immigranten, Muslims / Moslims French: women / femmes, trans people / trans, gay people / homos, black people / blacks, disabled people / handicap\u00e9s, Muslims / musulmans, refugees / r\u00e9fugi\u00e9s German: women / Frauen, trans people / Transmenschen, gay people / Schwule, black people / Schwarze, disabled people / Behinderte Jews / Juden, refugees / Fl\u00fcchtlinge Hindi: women / aurato, trans people / hijro, gay people / gays, North-East Indians / assamese, Muslims / musalmano, Lower Caste / shudro, Black people / negro Italian: women / donne, trans people / trans, gay people / gay, black people / persone di colore, disabled people / disabili, Muslims / musulmani, immigrants / immigrati Mandarin: women / \u5973\u4eba, trans people / \u8de8 \u6027\u522b\u8005, gay people / \u540c\u6027\u604b, black people / \u9ed1 \u4eba, disabled people / \u6b8b\u75be\u4eba, Muslims / \u7a46\u65af\u6797, foreigners / \u5916\u56fd\u4eba Polish: women / kobiety, trans people / osoby transp\u0142ciowe, gay people / geje, Asian people / azjaci, disabled people / niepe\u0142nosprawni, Jews / Zydzi, immigrants / imigranci Portuguese: women / mulheres, black people / negros, gay people / gays, trans people / pessoas trans, indigenous people / ind\u00edgenas, Jews / judeus, disabled people / deficientes Spanish: women / mujeres, black people / negros, gay people / gays, trans people / trans, indigenous people / ind\u00edgenas, Jews / jud\u00edos, disabled people / discapacitados",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Protected Groups in MHC",
"sec_num": null
},
{
"text": "Following Bender and Friedman (2018), we provide a data statement, which documents the generation and provenance of test cases in MHC.A. CURATION RATIONALE The goal of our research was to construct MHC, a multilingual suite of functional tests for hate speech detection models. For this purpose, our team of nativespeaking language experts generated a total of 36,582 short text documents in ten different languages, by hand and by using simple templates for group identifiers and slurs ( \u00a72.3). Each document corresponds to one functional test and a binary gold standard label (hateful or non-hateful).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Data Statement",
"sec_num": null
}
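To make the template-based generation mentioned in the curation rationale concrete, the sketch below expands hypothetical templates with group-identifier placeholders into labelled test cases. The template strings, placeholder syntax, and field names are illustrative assumptions, not the authors' actual templates; the group terms come from Appendix B, and the article placeholder hints at the gender and agreement handling that per-language adaptation requires.

```python
# Minimal sketch of template expansion for MHC-style test cases (Spanish).
# Templates, placeholders and functionality codes are illustrative only;
# group terms are taken from Appendix B.
GROUPS_ES = [
    {"term": "mujeres", "art": "las"},
    {"term": "judíos", "art": "los"},
    {"term": "personas trans", "art": "las"},
    {"term": "discapacitados", "art": "los"},
]

TEMPLATES = [
    # (template, functionality, gold label)
    ("Odio a [ART] [IDENTITY_P].", "derog_neg_emote_h", "hateful"),
    ("No hay nada malo en [ART] [IDENTITY_P].", "counter_ref_nh", "non-hateful"),
]

def expand(templates, groups):
    """Yield one labelled test case per (template, group) combination."""
    for template, functionality, label in templates:
        for group in groups:
            text = (template
                    .replace("[ART]", group["art"])
                    .replace("[IDENTITY_P]", group["term"]))
            yield {"test_case": text,
                   "functionality": functionality,
                   "label_gold": label}

if __name__ == "__main__":
    cases = list(expand(TEMPLATES, GROUPS_ES))
    print(len(cases), "cases;", cases[0]["test_case"])
```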
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A deep dive into multilingual hate speech classification",
"authors": [
{
"first": "Binny",
"middle": [],
"last": "Sai Saketh Aluru",
"suffix": ""
},
{
"first": "Punyajoy",
"middle": [],
"last": "Mathew",
"suffix": ""
},
{
"first": "Animesh",
"middle": [],
"last": "Saha",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mukherjee",
"suffix": ""
}
],
"year": 2020,
"venue": "Machine Learning and Knowledge Discovery in Databases. Applied Data Science and Demo Track: European Conference, ECML PKDD 2020",
"volume": "",
"issue": "",
"pages": "423--439",
"other_ids": {
"DOI": [
"10.1007/978-3-030-67670-4_26"
]
},
"num": null,
"urls": [],
"raw_text": "Sai Saketh Aluru, Binny Mathew, Punyajoy Saha, and Animesh Mukherjee. 2020. A deep dive into multi- lingual hate speech classification. In Machine Learn- ing and Knowledge Discovery in Databases. Applied Data Science and Demo Track: European Confer- ence, ECML PKDD 2020, Ghent, Belgium, Septem- ber 14-18, 2020, Proceedings, Part V, page 423-439, Berlin, Heidelberg. Springer-Verlag.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Exploring transformer based models to identify hate speech and offensive content in english and indo-aryan languages",
"authors": [
{
"first": "Somnath",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Maulindu",
"middle": [],
"last": "Sarkar",
"suffix": ""
},
{
"first": "Nancy",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Punyajoy",
"middle": [],
"last": "Saha",
"suffix": ""
},
{
"first": "Mithun",
"middle": [],
"last": "Das",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2111.13974"
]
},
"num": null,
"urls": [],
"raw_text": "Somnath Banerjee, Maulindu Sarkar, Nancy Agrawal, Punyajoy Saha, and Mithun Das. 2021. Exploring transformer based models to identify hate speech and offensive content in english and indo-aryan lan- guages. arXiv preprint arXiv:2111.13974.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A Unified Taxonomy of Harmful Content",
"authors": [
{
"first": "Michele",
"middle": [],
"last": "Banko",
"suffix": ""
},
{
"first": "Brendon",
"middle": [],
"last": "Mackeen",
"suffix": ""
},
{
"first": "Laurie",
"middle": [],
"last": "Ray",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fourth Workshop on Online Abuse and Harms",
"volume": "",
"issue": "",
"pages": "125--137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michele Banko, Brendon MacKeen, and Laurie Ray. 2020. A Unified Taxonomy of Harmful Content. In Proceedings of the Fourth Workshop on Online Abuse and Harms, pages 125-137. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "XLM-T: A multilingual language model toolkit for twitter",
"authors": [
{
"first": "Francesco",
"middle": [],
"last": "Barbieri",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Espinosa Anke",
"suffix": ""
},
{
"first": "Jose",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2104.12250"
]
},
"num": null,
"urls": [],
"raw_text": "Francesco Barbieri, Luis Espinosa Anke, and Jose Camacho-Collados. 2021. XLM-T: A multilingual language model toolkit for twitter. arXiv preprint arXiv:2104.12250.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Semeval-2019 task 5: Multilingual detection of hate speech against immigrants and women in twitter",
"authors": [
{
"first": "Valerio",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Cristina",
"middle": [],
"last": "Bosco",
"suffix": ""
},
{
"first": "Elisabetta",
"middle": [],
"last": "Fersini",
"suffix": ""
},
{
"first": "Nozza",
"middle": [],
"last": "Debora",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Patti",
"suffix": ""
},
{
"first": "Francisco Manuel Rangel",
"middle": [],
"last": "Pardo",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
},
{
"first": "Manuela",
"middle": [],
"last": "Sanguinetti",
"suffix": ""
}
],
"year": 2019,
"venue": "13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "54--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Valerio Basile, Cristina Bosco, Elisabetta Fersini, Nozza Debora, Viviana Patti, Francisco Manuel Rangel Pardo, Paolo Rosso, Manuela Sanguinetti, et al. 2019. Semeval-2019 task 5: Multilingual detection of hate speech against immigrants and women in twitter. In 13th International Workshop on Semantic Evalua- tion, pages 54-63. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Black-box testing: techniques for functional testing of software and systems",
"authors": [
{
"first": "Boris",
"middle": [],
"last": "Beizer",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Boris Beizer. 1995. Black-box testing: techniques for functional testing of software and systems. John Wi- ley & Sons, Inc.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Data statements for natural language processing: Toward mitigating system bias and enabling better science",
"authors": [
{
"first": "Emily",
"middle": [
"M"
],
"last": "Bender",
"suffix": ""
},
{
"first": "Batya",
"middle": [],
"last": "Friedman",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "587--604",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00041"
]
},
"num": null,
"urls": [],
"raw_text": "Emily M. Bender and Batya Friedman. 2018. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6:587-604.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Overview of the evalita 2018 hate speech detection task",
"authors": [
{
"first": "Cristina",
"middle": [],
"last": "Bosco",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Dell'orletta Felice",
"suffix": ""
},
{
"first": "Manuela",
"middle": [],
"last": "Poletto",
"suffix": ""
},
{
"first": "Tesconi",
"middle": [],
"last": "Sanguinetti",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Maurizio",
"suffix": ""
}
],
"year": 2018,
"venue": "EVALITA 2018-Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian",
"volume": "2263",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cristina Bosco, Dell'Orletta Felice, Fabio Poletto, Manuela Sanguinetti, and Tesconi Maurizio. 2018. Overview of the evalita 2018 hate speech detection task. In EVALITA 2018-Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian, volume 2263, pages 1-9. CEUR.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Aaa: Fair evaluation for abuse detection systems wanted",
"authors": [
{
"first": "Agostina",
"middle": [],
"last": "Calabrese",
"suffix": ""
},
{
"first": "Michele",
"middle": [],
"last": "Bevilacqua",
"suffix": ""
},
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Ross",
"suffix": ""
},
{
"first": "Rocco",
"middle": [],
"last": "Tripodi",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2021,
"venue": "13th ACM Web Science Conference 2021",
"volume": "",
"issue": "",
"pages": "243--252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Agostina Calabrese, Michele Bevilacqua, Bj\u00f6rn Ross, Rocco Tripodi, and Roberto Navigli. 2021. Aaa: Fair evaluation for abuse detection systems wanted. In 13th ACM Web Science Conference 2021, pages 243- 252.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Computational linguistics against hate: Hate speech detection and visualization on social media in the\" contro l'odio\" project",
"authors": [
{
"first": "T",
"middle": [
"E"
],
"last": "Arthur",
"suffix": ""
},
{
"first": "Mirko",
"middle": [],
"last": "Capozzi",
"suffix": ""
},
{
"first": "Valerio",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Manuela",
"middle": [],
"last": "Poletto",
"suffix": ""
},
{
"first": "Cristina",
"middle": [],
"last": "Sanguinetti",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Bosco",
"suffix": ""
},
{
"first": "Giancarlo",
"middle": [],
"last": "Patti",
"suffix": ""
},
{
"first": "Cataldo",
"middle": [],
"last": "Ruffo",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Musto",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Polignano",
"suffix": ""
}
],
"year": 2019,
"venue": "6th Italian Conference on Computational Linguistics",
"volume": "2481",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arthur TE Capozzi, Mirko Lai, Valerio Basile, Fabio Poletto, Manuela Sanguinetti, Cristina Bosco, Vi- viana Patti, Giancarlo Ruffo, Cataldo Musto, Marco Polignano, et al. 2019. Computational linguistics against hate: Hate speech detection and visualization on social media in the\" contro l'odio\" project. In 6th Italian Conference on Computational Linguistics, CLiC-it 2019, volume 2481, pages 1-6. CEUR-WS.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8440--8451",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.747"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Pro- ceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Racial bias in hate speech and abusive language detection datasets",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "Debasmita",
"middle": [],
"last": "Bhattacharya",
"suffix": ""
},
{
"first": "Ingmar",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Third Workshop on Abusive Language Online",
"volume": "",
"issue": "",
"pages": "25--35",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3504"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Davidson, Debasmita Bhattacharya, and Ing- mar Weber. 2019. Racial bias in hate speech and abusive language detection datasets. In Proceedings of the Third Workshop on Abusive Language Online, pages 25-35, Florence, Italy. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Automated hate speech detection and the problem of offensive language",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Warmsley",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Macy",
"suffix": ""
},
{
"first": "Ingmar",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International AAAI Conference on Web and Social Media",
"volume": "",
"issue": "",
"pages": "512--515",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detec- tion and the problem of offensive language. In Pro- ceedings of the 11th International AAAI Conference on Web and Social Media, pages 512-515. Associa- tion for the Advancement of Artificial Intelligence.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Measuring and mitigating unintended bias in text classification",
"authors": [
{
"first": "Lucas",
"middle": [],
"last": "Dixon",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Sorensen",
"suffix": ""
},
{
"first": "Nithum",
"middle": [],
"last": "Thain",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vasserman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society",
"volume": "",
"issue": "",
"pages": "67--73",
"other_ids": {
"DOI": [
"10.1145/3278721.3278729"
]
},
"num": null,
"urls": [],
"raw_text": "Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2018. Measuring and mitigat- ing unintended bias in text classification. In Pro- ceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pages 67-73. Association for Computing Machinery.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "AMI @ EVALITA2020: Automatic misogyny identification",
"authors": [
{
"first": "Elisabetta",
"middle": [],
"last": "Fersini",
"suffix": ""
},
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 7th evaluation campaign of Natural Language Processing and Speech tools for Italian",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elisabetta Fersini, Debora Nozza, and Paolo Rosso. 2020. AMI @ EVALITA2020: Automatic misog- yny identification. In Proceedings of the 7th eval- uation campaign of Natural Language Processing and Speech tools for Italian (EVALITA 2020), Online. CEUR.org.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A survey on automatic detection of hate speech in text",
"authors": [
{
"first": "Paula",
"middle": [],
"last": "Fortuna",
"suffix": ""
},
{
"first": "S\u00e9rgio",
"middle": [],
"last": "Nunes",
"suffix": ""
}
],
"year": 2018,
"venue": "ACM Computing Surveys (CSUR)",
"volume": "51",
"issue": "4",
"pages": "1--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paula Fortuna and S\u00e9rgio Nunes. 2018. A survey on automatic detection of hate speech in text. ACM Computing Surveys (CSUR), 51(4):1-30.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A hierarchically-labeled Portuguese hate speech dataset",
"authors": [
{
"first": "Paula",
"middle": [],
"last": "Fortuna",
"suffix": ""
},
{
"first": "Jo\u00e3o",
"middle": [],
"last": "Rocha Da",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Silva",
"suffix": ""
},
{
"first": "Leo",
"middle": [],
"last": "Soler-Company",
"suffix": ""
},
{
"first": "S\u00e9rgio",
"middle": [],
"last": "Wanner",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nunes",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Third Workshop on Abusive Language Online",
"volume": "",
"issue": "",
"pages": "94--104",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3510"
]
},
"num": null,
"urls": [],
"raw_text": "Paula Fortuna, Jo\u00e3o Rocha da Silva, Juan Soler- Company, Leo Wanner, and S\u00e9rgio Nunes. 2019. A hierarchically-labeled Portuguese hate speech dataset. In Proceedings of the Third Workshop on Abusive Language Online, pages 94-104, Florence, Italy. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Evaluating models' local decision boundaries via contrast sets",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
},
{
"first": "Victoria",
"middle": [],
"last": "Basmov",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Bogin",
"suffix": ""
},
{
"first": "Sihao",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Pradeep",
"middle": [],
"last": "Dasigi",
"suffix": ""
},
{
"first": "Dheeru",
"middle": [],
"last": "Dua",
"suffix": ""
},
{
"first": "Yanai",
"middle": [],
"last": "Elazar",
"suffix": ""
},
{
"first": "Ananth",
"middle": [],
"last": "Gottumukkala",
"suffix": ""
},
{
"first": "Nitish",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Ilharco",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Khashabi",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Jiangming",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": null,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "1307--1323",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.117"
]
},
"num": null,
"urls": [],
"raw_text": "Matt Gardner, Yoav Artzi, Victoria Basmov, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hannaneh Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nel- son F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, and Ben Zhou. 2020. Evaluating models' local decision boundaries via contrast sets. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1307-1323, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Counterfactual fairness in text classification through robustness",
"authors": [
{
"first": "Sahaj",
"middle": [],
"last": "Garg",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Perot",
"suffix": ""
},
{
"first": "Nicole",
"middle": [],
"last": "Limtiaco",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Taly",
"suffix": ""
},
{
"first": "Ed",
"middle": [
"H"
],
"last": "Chi",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Beutel",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, AIES '19",
"volume": "",
"issue": "",
"pages": "219--226",
"other_ids": {
"DOI": [
"10.1145/3306618.3317950"
]
},
"num": null,
"urls": [],
"raw_text": "Sahaj Garg, Vincent Perot, Nicole Limtiaco, Ankur Taly, Ed H. Chi, and Alex Beutel. 2019. Counterfactual fairness in text classification through robustness. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, AIES '19, page 219-226, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Are we modeling the task or the annotator? an investigation of annotator bias in natural language understanding datasets",
"authors": [
{
"first": "Mor",
"middle": [],
"last": "Geva",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "1161--1166",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1107"
]
},
"num": null,
"urls": [],
"raw_text": "Mor Geva, Yoav Goldberg, and Jonathan Berant. 2019. Are we modeling the task or the annotator? an inves- tigation of annotator bias in natural language under- standing datasets. In Proceedings of the 2019 Confer- ence on Empirical Methods in Natural Language Pro- cessing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1161-1166, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Hatemoji: A test suite and adversarially-generated dataset for benchmarking and detecting emoji-based hate",
"authors": [
{
"first": "Hannah",
"middle": [
"Rose"
],
"last": "Kirk",
"suffix": ""
},
{
"first": "Bertram",
"middle": [],
"last": "Vidgen",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "R\u00f6ttger",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Thrush",
"suffix": ""
},
{
"first": "Scott A",
"middle": [],
"last": "Hale",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2108.05921"
]
},
"num": null,
"urls": [],
"raw_text": "Hannah Rose Kirk, Bertram Vidgen, Paul R\u00f6ttger, Tris- tan Thrush, and Scott A Hale. 2021. Hatemoji: A test suite and adversarially-generated dataset for benchmarking and detecting emoji-based hate. arXiv preprint arXiv:2108.05921.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Towards a comprehensive taxonomy and largescale annotated corpus for online slur usage",
"authors": [
{
"first": "Jana",
"middle": [],
"last": "Kurrek",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Haji",
"suffix": ""
},
{
"first": "Derek",
"middle": [],
"last": "Saleem",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ruths",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fourth Workshop on Online Abuse and Harms",
"volume": "",
"issue": "",
"pages": "138--149",
"other_ids": {
"DOI": [
"10.18653/v1/2020.alw-1.17"
]
},
"num": null,
"urls": [],
"raw_text": "Jana Kurrek, Haji Mohammad Saleem, and Derek Ruths. 2020. Towards a comprehensive taxonomy and large- scale annotated corpus for online slur usage. In Pro- ceedings of the Fourth Workshop on Online Abuse and Harms, pages 138-149, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Toxic language detection in social media for Brazilian Portuguese: New dataset and multilingual analysis",
"authors": [
{
"first": "Diego",
"middle": [],
"last": "Jo\u00e3o Augusto Leite",
"suffix": ""
},
{
"first": "Kalina",
"middle": [],
"last": "Silva",
"suffix": ""
},
{
"first": "Carolina",
"middle": [],
"last": "Bontcheva",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Scarton",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "914--924",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jo\u00e3o Augusto Leite, Diego Silva, Kalina Bontcheva, and Carolina Scarton. 2020. Toxic language detec- tion in social media for Brazilian Portuguese: New dataset and multilingual analysis. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Lan- guage Processing, pages 914-924, Suzhou, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Decoupled weight decay regularization",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Loshchilov",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Hutter",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 7th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In Proceedings of the 7th International Conference on Learning Represen- tations.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Overview of the hasoc subtrack at fire 2021: Hate speech and offensive content identification in english and indoaryan languages",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Mandl",
"suffix": ""
},
{
"first": "Sandip",
"middle": [],
"last": "Modha",
"suffix": ""
},
{
"first": "Hiren",
"middle": [],
"last": "Gautam Kishore Shahi",
"suffix": ""
},
{
"first": "Shrey",
"middle": [],
"last": "Madhu",
"suffix": ""
},
{
"first": "Prasenjit",
"middle": [],
"last": "Satapara",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "Tharindu",
"middle": [],
"last": "Sch\u00e4fer",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
},
{
"first": "Durgesh",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nandini",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2112.09301"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Mandl, Sandip Modha, Gautam Kishore Shahi, Hiren Madhu, Shrey Satapara, Prasenjit Majumder, Johannes Sch\u00e4fer, Tharindu Ranasinghe, Marcos Zampieri, Durgesh Nandini, et al. 2021. Overview of the hasoc subtrack at fire 2021: Hate speech and offensive content identification in english and indo- aryan languages. arXiv preprint arXiv:2112.09301.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Finegrained fairness analysis of abusive language detection systems with CheckList",
"authors": [
{
"first": "Marta",
"middle": [],
"last": "Marchiori Manerba",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Tonelli",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021)",
"volume": "",
"issue": "",
"pages": "81--91",
"other_ids": {
"DOI": [
"10.18653/v1/2021.woah-1.9"
]
},
"num": null,
"urls": [],
"raw_text": "Marta Marchiori Manerba and Sara Tonelli. 2021. Fine- grained fairness analysis of abusive language detec- tion systems with CheckList. In Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021), pages 81-91, Online. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Facebook's content moderation language barrier",
"authors": [
{
"first": "Delia",
"middle": [],
"last": "Marinescu",
"suffix": ""
}
],
"year": 2021,
"venue": "New America",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Delia Marinescu. 2021. Facebook's content moderation language barrier. New America.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Hatexplain: A benchmark dataset for explainable hate speech detection",
"authors": [
{
"first": "Binny",
"middle": [],
"last": "Mathew",
"suffix": ""
},
{
"first": "Punyajoy",
"middle": [],
"last": "Saha",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Seid Muhie Yimam",
"suffix": ""
},
{
"first": "Pawan",
"middle": [],
"last": "Biemann",
"suffix": ""
},
{
"first": "Animesh",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mukherjee",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "35",
"issue": "",
"pages": "14867--14875",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, and Animesh Mukher- jee. 2021. Hatexplain: A benchmark dataset for ex- plainable hate speech detection. In Proceedings of the AAAI Conference on Artificial Intelligence, vol- ume 35, pages 14867-14875.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Working notes of the workshop arabic misogyny identification (armi-2021)",
"authors": [
{
"first": "Hala",
"middle": [],
"last": "Mulki",
"suffix": ""
},
{
"first": "Bilal",
"middle": [],
"last": "Ghanem",
"suffix": ""
}
],
"year": 2021,
"venue": "Forum for Information Retrieval Evaluation, FIRE 2021",
"volume": "",
"issue": "",
"pages": "7--8",
"other_ids": {
"DOI": [
"10.1145/3503162.3503178"
]
},
"num": null,
"urls": [],
"raw_text": "Hala Mulki and Bilal Ghanem. 2021. Working notes of the workshop arabic misogyny identification (armi- 2021). In Forum for Information Retrieval Evalu- ation, FIRE 2021, page 7-8, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Probing neural network comprehension of natural language arguments",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Niven",
"suffix": ""
},
{
"first": "Hung-Yu",
"middle": [],
"last": "Kao",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4658--4664",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1459"
]
},
"num": null,
"urls": [],
"raw_text": "Timothy Niven and Hung-Yu Kao. 2019. Probing neu- ral network comprehension of natural language argu- ments. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4658-4664, Florence, Italy. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Exposing the limits of zero-shot cross-lingual hate speech detection",
"authors": [
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Debora Nozza. 2021. Exposing the limits of zero-shot cross-lingual hate speech detection. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Multilingual and multi-aspect hate speech analysis",
"authors": [
{
"first": "Nedjma",
"middle": [],
"last": "Ousidhoum",
"suffix": ""
},
{
"first": "Zizheng",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Hongming",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yangqiu",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Dit-Yan",
"middle": [],
"last": "Yeung",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "4675--4684",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1474"
]
},
"num": null,
"urls": [],
"raw_text": "Nedjma Ousidhoum, Zizheng Lin, Hongming Zhang, Yangqiu Song, and Dit-Yan Yeung. 2019. Multi- lingual and multi-aspect hate speech analysis. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 4675- 4684, Hong Kong, China. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A joint learning approach with knowledge injection for zero-shot cross-lingual hate speech detection",
"authors": [
{
"first": "Valerio",
"middle": [],
"last": "Endang Wahyu Pamungkas",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Patti",
"suffix": ""
}
],
"year": 2021,
"venue": "Information Processing & Management",
"volume": "58",
"issue": "4",
"pages": "",
"other_ids": {
"DOI": [
"10.1016/j.ipm.2021.102544"
]
},
"num": null,
"urls": [],
"raw_text": "Endang Wahyu Pamungkas, Valerio Basile, and Viviana Patti. 2021. A joint learning approach with knowl- edge injection for zero-shot cross-lingual hate speech detection. Information Processing & Management, 58(4):102544.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Investigating crosslingual training for offensive language detection",
"authors": [
{
"first": "Andra\u017e",
"middle": [],
"last": "Pelicon",
"suffix": ""
},
{
"first": "Ravi",
"middle": [],
"last": "Shekhar",
"suffix": ""
},
{
"first": "Bla\u017e",
"middle": [],
"last": "\u0160krlj",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Purver",
"suffix": ""
},
{
"first": "Senja",
"middle": [],
"last": "Pollak",
"suffix": ""
}
],
"year": 2021,
"venue": "PeerJ Computer Science",
"volume": "7",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.7717/peerj-cs.559"
]
},
"num": null,
"urls": [],
"raw_text": "Andra\u017e Pelicon, Ravi Shekhar, Bla\u017e \u0160krlj, Matthew Purver, and Senja Pollak. 2021. Investigating cross- lingual training for offensive language detection. PeerJ Computer Science, 7:e559. Publisher: PeerJ Inc.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Resources and benchmark corpora for hate speech detection: a systematic review. Language Resources and Evaluation",
"authors": [
{
"first": "Fabio",
"middle": [],
"last": "Poletto",
"suffix": ""
},
{
"first": "Valerio",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Manuela",
"middle": [],
"last": "Sanguinetti",
"suffix": ""
},
{
"first": "Cristina",
"middle": [],
"last": "Bosco",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Patti",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "55",
"issue": "",
"pages": "477--523",
"other_ids": {
"DOI": [
"10.1007/s10579-020-09502-8"
]
},
"num": null,
"urls": [],
"raw_text": "Fabio Poletto, Valerio Basile, Manuela Sanguinetti, Cristina Bosco, and Viviana Patti. 2021. Resources and benchmark corpora for hate speech detection: a systematic review. Language Resources and Evalua- tion, 55(2):477-523.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Results of the poleval 2019 shared task 6: First dataset and open shared task for automatic cyberbullying detection in polish twitter",
"authors": [
{
"first": "Michal",
"middle": [],
"last": "Ptaszynski",
"suffix": ""
},
{
"first": "Agata",
"middle": [],
"last": "Pieciukiewicz",
"suffix": ""
},
{
"first": "Pawe\u0142",
"middle": [],
"last": "Dyba\u0142a",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the PolEval 2019 Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michal Ptaszynski, Agata Pieciukiewicz, and Pawe\u0142 Dy- ba\u0142a. 2019. Results of the poleval 2019 shared task 6: First dataset and open shared task for automatic cy- berbullying detection in polish twitter. Proceedings of the PolEval 2019 Workshop, page 89.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Beyond accuracy: Behavioral testing of NLP models with CheckList",
"authors": [
{
"first": "Tongshuang",
"middle": [],
"last": "Marco Tulio Ribeiro",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Guestrin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4902--4912",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.442"
]
},
"num": null,
"urls": [],
"raw_text": "Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Be- havioral testing of NLP models with CheckList. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4902- 4912, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Two contrasting data annotation paradigms for subjective nlp tasks",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "R\u00f6ttger",
"suffix": ""
},
{
"first": "Bertie",
"middle": [],
"last": "Vidgen",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Janet",
"middle": [
"B"
],
"last": "Pierrehumbert",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2112.07475"
]
},
"num": null,
"urls": [],
"raw_text": "Paul R\u00f6ttger, Bertie Vidgen, Dirk Hovy, and Janet B Pierrehumbert. 2021a. Two contrasting data anno- tation paradigms for subjective nlp tasks. arXiv preprint arXiv:2112.07475.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "HateCheck: Functional tests for hate speech detection models",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "R\u00f6ttger",
"suffix": ""
},
{
"first": "Bertie",
"middle": [],
"last": "Vidgen",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Zeerak",
"middle": [],
"last": "Talat",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Margetts",
"suffix": ""
},
{
"first": "Janet",
"middle": [],
"last": "Pierrehumbert",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "41--58",
"other_ids": {
"DOI": [
"10.18653/v1/2021.acl-long.4"
]
},
"num": null,
"urls": [],
"raw_text": "Paul R\u00f6ttger, Bertie Vidgen, Dong Nguyen, Zeerak Ta- lat, Helen Margetts, and Janet Pierrehumbert. 2021b. HateCheck: Functional tests for hate speech detec- tion models. In Proceedings of the 59th Annual Meet- ing of the Association for Computational Linguistics and the 11th International Joint Conference on Natu- ral Language Processing (Volume 1: Long Papers), pages 41-58, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Haspeede 2@ evalita2020: Overview of the evalita 2020 hate speech detection task",
"authors": [
{
"first": "Manuela",
"middle": [],
"last": "Sanguinetti",
"suffix": ""
},
{
"first": "Gloria",
"middle": [],
"last": "Comandini",
"suffix": ""
},
{
"first": "Elisa",
"middle": [
"Di"
],
"last": "Nuovo",
"suffix": ""
},
{
"first": "Simona",
"middle": [],
"last": "Frenda",
"suffix": ""
},
{
"first": "Marco",
"middle": [
"Antonio"
],
"last": "Stranisci",
"suffix": ""
},
{
"first": "Cristina",
"middle": [],
"last": "Bosco",
"suffix": ""
},
{
"first": "Caselli",
"middle": [],
"last": "Tommaso",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Patti",
"suffix": ""
},
{
"first": "Irene",
"middle": [],
"last": "Russo",
"suffix": ""
}
],
"year": 2020,
"venue": "EVALITA 2020 Seventh Evaluation Campaign of Natural Language Processing and Speech Tools for Italian",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manuela Sanguinetti, Gloria Comandini, Elisa Di Nuovo, Simona Frenda, Marco Antonio Stranisci, Cristina Bosco, Caselli Tommaso, Viviana Patti, Irene Russo, et al. 2020. Haspeede 2@ evalita2020: Overview of the evalita 2020 hate speech detection task. In EVALITA 2020 Seventh Evaluation Cam- paign of Natural Language Processing and Speech Tools for Italian, pages 1-9. CEUR.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "An Italian Twitter corpus of hate speech against immigrants",
"authors": [
{
"first": "Manuela",
"middle": [],
"last": "Sanguinetti",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Poletto",
"suffix": ""
},
{
"first": "Cristina",
"middle": [],
"last": "Bosco",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Patti",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Stranisci",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 11th International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manuela Sanguinetti, Fabio Poletto, Cristina Bosco, Viviana Patti, and Marco Stranisci. 2018. An Italian Twitter corpus of hate speech against immigrants. In Proceedings of the 11th International Conference on Language Resources and Evaluation (LREC 2018).",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Annotators with attitudes: How annotator beliefs and identities bias toxic language detection",
"authors": [
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Vianna",
"suffix": ""
},
{
"first": "Xuhui",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maarten Sap, Swabha Swayamdipta, Laura Vianna, Xuhui Zhou, Yejin Choi, and Noah A. Smith. 2021. Annotators with attitudes: How annotator beliefs and identities bias toxic language detection.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "A survey on hate speech detection using natural language processing",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Schmidt",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Wiegand",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {
"DOI": [
"10.18653/v1/W17-1101"
]
},
"num": null,
"urls": [],
"raw_text": "Anna Schmidt and Michael Wiegand. 2017. A survey on hate speech detection using natural language pro- cessing. In Proceedings of the Fifth International Workshop on Natural Language Processing for So- cial Media, pages 1-10, Valencia, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Predictive biases in natural language processing models: A conceptual framework and overview",
"authors": [
{
"first": "",
"middle": [],
"last": "Deven Santosh",
"suffix": ""
},
{
"first": "H",
"middle": [
"Andrew"
],
"last": "Shah",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5248--5264",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.468"
]
},
"num": null,
"urls": [],
"raw_text": "Deven Santosh Shah, H. Andrew Schwartz, and Dirk Hovy. 2020. Predictive biases in natural language processing models: A conceptual framework and overview. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 5248-5264, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Facebook is everywhere; its moderation is nowhere close",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Simonite",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Simonite. 2021. Facebook is everywhere; its mod- eration is nowhere close. Wired.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Cross-lingual zero-and few-shot hate speech detection utilising frozen transformer language models and AXEL. CoRR, abs",
"authors": [
{
"first": "Lukas",
"middle": [],
"last": "Stappen",
"suffix": ""
},
{
"first": "Fabian",
"middle": [],
"last": "Brunn",
"suffix": ""
},
{
"first": "Bj\u00f6rn",
"middle": [
"W"
],
"last": "Schuller",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lukas Stappen, Fabian Brunn, and Bj\u00f6rn W. Schuller. 2020. Cross-lingual zero-and few-shot hate speech detection utilising frozen transformer language mod- els and AXEL. CoRR, abs/2004.13850.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Are you a racist or am I seeing things? Annotator influence on hate speech detection on Twitter",
"authors": [
{
"first": "Zeerak",
"middle": [],
"last": "Talat",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the First Workshop on NLP and Computational Social Science",
"volume": "",
"issue": "",
"pages": "138--142",
"other_ids": {
"DOI": [
"10.18653/v1/W16-5618"
]
},
"num": null,
"urls": [],
"raw_text": "Zeerak Talat. 2016. Are you a racist or am I seeing things? Annotator influence on hate speech detection on Twitter. In Proceedings of the First Workshop on NLP and Computational Social Science, pages 138- 142, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Challenges for toxic comment classification: An in-depth error analysis",
"authors": [
{
"first": "Julian",
"middle": [],
"last": "Betty Van Aken",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Risch",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Krestel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "L\u00f6ser",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd Workshop on Abusive Language Online (ALW2)",
"volume": "",
"issue": "",
"pages": "33--42",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5105"
]
},
"num": null,
"urls": [],
"raw_text": "Betty van Aken, Julian Risch, Ralf Krestel, and Alexan- der L\u00f6ser. 2018. Challenges for toxic comment clas- sification: An in-depth error analysis. In Proceedings of the 2nd Workshop on Abusive Language Online (ALW2), pages 33-42, Brussels, Belgium. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Directions in abusive language training data, a systematic review: Garbage in, garbage out",
"authors": [
{
"first": "Bertie",
"middle": [],
"last": "Vidgen",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Derczynski",
"suffix": ""
}
],
"year": 2020,
"venue": "PLOS ONE",
"volume": "15",
"issue": "12",
"pages": "",
"other_ids": {
"DOI": [
"10.1371/journal.pone.0243300"
]
},
"num": null,
"urls": [],
"raw_text": "Bertie Vidgen and Leon Derczynski. 2020. Direc- tions in abusive language training data, a system- atic review: Garbage in, garbage out. PLOS ONE, 15(12):e0243300.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Detecting East Asian prejudice on social media",
"authors": [
{
"first": "Bertie",
"middle": [],
"last": "Vidgen",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Hale",
"suffix": ""
},
{
"first": "Ella",
"middle": [],
"last": "Guest",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Margetts",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Broniatowski",
"suffix": ""
},
{
"first": "Zeerak",
"middle": [],
"last": "Talat",
"suffix": ""
},
{
"first": "Austin",
"middle": [],
"last": "Botelho",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Rebekah",
"middle": [],
"last": "Tromble",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fourth Workshop on Online Abuse and Harms",
"volume": "",
"issue": "",
"pages": "162--172",
"other_ids": {
"DOI": [
"10.18653/v1/2020.alw-1.19"
]
},
"num": null,
"urls": [],
"raw_text": "Bertie Vidgen, Scott Hale, Ella Guest, Helen Margetts, David Broniatowski, Zeerak Talat, Austin Botelho, Matthew Hall, and Rebekah Tromble. 2020. Detect- ing East Asian prejudice on social media. In Pro- ceedings of the Fourth Workshop on Online Abuse and Harms, pages 162-172, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Challenges and frontiers in abusive content detection",
"authors": [
{
"first": "Bertie",
"middle": [],
"last": "Vidgen",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Harris",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Rebekah",
"middle": [],
"last": "Tromble",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Hale",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Margetts",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Third Workshop on Abusive Language Online",
"volume": "",
"issue": "",
"pages": "80--93",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3509"
]
},
"num": null,
"urls": [],
"raw_text": "Bertie Vidgen, Alex Harris, Dong Nguyen, Rebekah Tromble, Scott Hale, and Helen Margetts. 2019. Challenges and frontiers in abusive content detec- tion. In Proceedings of the Third Workshop on Abu- sive Language Online, pages 80-93, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Learning from the worst: Dynamically generated datasets to improve online hate detection",
"authors": [
{
"first": "Bertie",
"middle": [],
"last": "Vidgen",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Thrush",
"suffix": ""
},
{
"first": "Zeerak",
"middle": [],
"last": "Talat",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1667--1682",
"other_ids": {
"DOI": [
"10.18653/v1/2021.acl-long.132"
]
},
"num": null,
"urls": [],
"raw_text": "Bertie Vidgen, Tristan Thrush, Zeerak Talat, and Douwe Kiela. 2021. Learning from the worst: Dynamically generated datasets to improve online hate detection. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), pages 1667-1682, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Practical Transformer-based Multilingual Text Classification",
"authors": [
{
"first": "Cindy",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Michele",
"middle": [],
"last": "Banko",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Papers",
"volume": "",
"issue": "",
"pages": "121--129",
"other_ids": {
"DOI": [
"10.18653/v1/2021.naacl-industry.16"
]
},
"num": null,
"urls": [],
"raw_text": "Cindy Wang and Michele Banko. 2021. Practical Transformer-based Multilingual Text Classification. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies: Industry Papers, pages 121-129, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Detection of abusive language: The problem of biased datasets",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Wiegand",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Kleinbauer",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "602--608",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1060"
]
},
"num": null,
"urls": [],
"raw_text": "Michael Wiegand, Josef Ruppenhofer, and Thomas Kleinbauer. 2019. Detection of abusive language: The problem of biased datasets. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 602-608, Minneapolis, Min- nesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Overview of the GermEval 2018 Shared Task on the Identification of Offensive Language",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Wiegand",
"suffix": ""
},
{
"first": "Melanie",
"middle": [],
"last": "Siegel",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of GermEval 2018, 14th Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Wiegand, Melanie Siegel, and Josef Ruppen- hofer. 2018. Overview of the GermEval 2018 Shared Task on the Identification of Offensive Language. In Proceedings of GermEval 2018, 14th Conference on Natural Language Processing (KONVENS 2018).",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "Remi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Patrick Von Platen",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Xu",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Scao",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Drame",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-demos.6"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Errudite: Scalable, reproducible, and testable error analysis",
"authors": [
{
"first": "Tongshuang",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Marco",
"middle": [
"Tulio"
],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Heer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Weld",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "747--763",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1073"
]
},
"num": null,
"urls": [],
"raw_text": "Tongshuang Wu, Marco Tulio Ribeiro, Jeffrey Heer, and Daniel Weld. 2019. Errudite: Scalable, reproducible, and testable error analysis. In Proceedings of the 57th Annual Meeting of the Association for Compu- tational Linguistics, pages 747-763, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Predicting the type and target of offensive posts in social media",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Noura",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1415--1420",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1144"
]
},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019. Predicting the type and target of offensive posts in social media. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1415-1420, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Zeses Pitenis, and \u00c7agr\u0131 \u00c7\u00f6ltekin. 2020. SemEval-2020 Task 12: Multilingual Offensive Language Identification in Social Media (Offen-sEval 2020)",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Pepa",
"middle": [],
"last": "Atanasova",
"suffix": ""
},
{
"first": "Georgi",
"middle": [],
"last": "Karadzhov",
"suffix": ""
},
{
"first": "Hamdy",
"middle": [],
"last": "Mubarak",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Derczynski",
"suffix": ""
},
{
"first": "Zeses",
"middle": [],
"last": "Pitenis",
"suffix": ""
},
{
"first": "\u00c7a\u011fr\u0131",
"middle": [],
"last": "\u00c7\u00f6ltekin",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of SemEval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Preslav Nakov, Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Hamdy Mubarak, Leon Derczynski, Zeses Pitenis, and \u00c7agr\u0131 \u00c7\u00f6ltekin. 2020. SemEval-2020 Task 12: Multilingual Offen- sive Language Identification in Social Media (Offen- sEval 2020). In Proceedings of SemEval.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "The Spanish Basile et al. (2019) dataset contains 4,950 tweets, of which 41.5% are labelled as hateful. The Italian Sanguinetti et al. (2020) dataset contains 8,100 tweets, of which 41.8% are labelled as hateful. The Portuguese Fortuna et al. (2019) dataset contains 5,670 tweets, of which 31.5% are labelled as hateful.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF1": {
"content": "<table><tr><td>Language</td><td colspan=\"3\">F1-h F1-nh Mac. F1</td></tr><tr><td colspan=\"2\">79.1 80.1 82.6 82.6 78.5 81.5 81.8 76.1 Portuguese / PT 83.5 Arabic / AR Dutch / NL French / FR German / DE Hindi / HI Italian / IT Mandarin / ZH Polish / PL Spanish / ES 79.9</td><td>39.8 53.3 52.6 55.2 37.7 57.8 61.1 56.4 53.4 59.1</td><td>59.4 66.7 67.6 68.9 58.1 69.6 71.5 66.2 68.5 69.5</td></tr></table>",
"text": "MHC covers 34 functionalities in 11 classes with a total of n = 36,582 test cases. 69.74% of cases (25,511 in 25 functional tests) are labelled hateful, 30.26% (11,071 in 9 functional tests) are labelled non-hateful. The right-most columns report accuracy (%) of the the XTC model ( \u00a73.1) across functional tests for each language. small across languages (< 8pp). For nonhateful cases, on the other hand, performance varies considerably across languages (< 24pp), with XTC performing best on Mandarin (61.1) and worst on Hindi (39.8).",
"type_str": "table",
"num": null,
"html": null
},
"TABREF2": {
"content": "<table/>",
"text": "",
"type_str": "table",
"num": null,
"html": null
},
"TABREF4": {
"content": "<table/>",
"text": "",
"type_str": "table",
"num": null,
"html": null
},
"TABREF6": {
"content": "<table/>",
"text": "Example test cases for each of the 34 functional tests in MHC. Examples were selected at random.",
"type_str": "table",
"num": null,
"html": null
},
"TABREF8": {
"content": "<table><tr><td>E Datasets for Model Fine-Tuning</td></tr><tr><td>E.1 Sanguinetti et al. (2020) Italian Data</td></tr><tr><td>Sampling The authors compiled 8,100 tweets sampled using keywords. 4,000 tweets come from</td></tr><tr><td>HaSpeeDe 2018</td></tr></table>",
"text": "Proportion of entries and absolute number of entries where at least 2/3 annotators disagreed with the expert gold label, for each language in MHC.",
"type_str": "table",
"num": null,
"html": null
},
"TABREF9": {
"content": "<table><tr><td>Sampling Fortuna et al. (2019) initially collected 42,930 tweets based on a search of 29 user pro-</td></tr><tr><td>files, 19 keywords and ten hashtags. They then fil-</td></tr><tr><td>tered the tweets, keeping only Portuguese-language</td></tr><tr><td>tweets, and removing duplicates and retweets, re-</td></tr><tr><td>sulting in 33,890 tweets. Finally, they set a cap of</td></tr><tr><td>a maximum of 200 tweets per search method, to</td></tr><tr><td>create the final dataset of 5,668 tweets.</td></tr></table>",
"text": "Hate Speech \"Language that spreads, incites, promotes or justifies hatred or violence towards the given target, or a message that aims at dehumanizing, delegitimizing, hurting or intimidating the target. The targets are Immigrants, Muslims, and Roma groups, or individual members of such groups.\" E.2Fortuna et al. (2019) Portuguese Data",
"type_str": "table",
"num": null,
"html": null
},
"TABREF11": {
"content": "<table><tr><td colspan=\"5\">Lang. XLM-IT XLM-PT XLM-ES XTC</td></tr><tr><td>AR NL FR DE HI IT ZH PL PT ES</td><td>51.3 59.9 57.5 62.1 48.2 53.6 61.8 57.5 58.6 60.0</td><td>45.8 49.5 50.5 46.9 44.4 47.0 42.7 49.2 64.2 50.1</td><td>51.4 59.6 62.2 59.5 47.4 54.6 53.2 58.2 56.0 64.4</td><td>59.4 66.7 67.6 68.9 58.1 69.6 71.5 66.2 68.5 69.5</td></tr></table>",
"text": "Macro F1 for each fine-tuned model on its respective test set and for XTC on all test sets.",
"type_str": "table",
"num": null,
"html": null
},
"TABREF12": {
"content": "<table/>",
"text": "Macro F1 across languages on MHC for each of our fine-tuned models.",
"type_str": "table",
"num": null,
"html": null
},
"TABREF13": {
"content": "<table><tr><td>Language</td><td colspan=\"3\">F1-h F1-nh Macro F1</td></tr><tr><td>German / DE Italian / IT Portuguese / PT</td><td>84.1 69.6 84.2</td><td>54.9 61.2 47.6</td><td>69.5 65.4 65.9</td></tr></table>",
"text": "performs worse than XTC(Table 2)in terms of macro F1 for Italian and Portuguese, and around equally well for German.",
"type_str": "table",
"num": null,
"html": null
}
}
}
}