{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:10:22.908085Z"
},
"title": "\"Zo Grof !\": A Comprehensive Corpus for Offensive and Abusive Language in Dutch",
"authors": [
{
"first": "Ward",
"middle": [],
"last": "Ruitenbeek",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Groningen",
"location": {
"country": "The Netherlands"
}
},
"email": ""
},
{
"first": "Victor",
"middle": [],
"last": "Zwart",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Groningen",
"location": {
"country": "The Netherlands"
}
},
"email": ""
},
{
"first": "Robin",
"middle": [],
"last": "Van Der Noord",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Groningen",
"location": {
"country": "The Netherlands"
}
},
"email": ""
},
{
"first": "Zhenja",
"middle": [],
"last": "Gnezdilov",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Groningen",
"location": {
"country": "The Netherlands"
}
},
"email": ""
},
{
"first": "Tommaso",
"middle": [],
"last": "Caselli",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Groningen",
"location": {
"country": "The Netherlands"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents a comprehensive corpus for the study of socially unacceptable language in Dutch. The corpus extends and revise an existing resource with more data and introduces a new annotation dimension for offensive language, making it a unique resource in the Dutch language panorama. Each language phenomenon (abusive and offensive language) in the corpus has been annotated with a multilayer annotation scheme modelling the explicitness and the target(s) of the abuse/offence in the message. We have conducted a new set of experiments with different classification algorithms on all annotation dimensions. Monolingual Pre-Trained Language Models prove as the best systems, obtaining a macro-average F1 of 0.828 for binary classification of offensive language, and 0.579 for the targets of offensive messages. Furthermore, the best system obtains a macro-average F1 of 0.637 for distinguishing between abusive and offensive messages.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents a comprehensive corpus for the study of socially unacceptable language in Dutch. The corpus extends and revise an existing resource with more data and introduces a new annotation dimension for offensive language, making it a unique resource in the Dutch language panorama. Each language phenomenon (abusive and offensive language) in the corpus has been annotated with a multilayer annotation scheme modelling the explicitness and the target(s) of the abuse/offence in the message. We have conducted a new set of experiments with different classification algorithms on all annotation dimensions. Monolingual Pre-Trained Language Models prove as the best systems, obtaining a macro-average F1 of 0.828 for binary classification of offensive language, and 0.579 for the targets of offensive messages. Furthermore, the best system obtains a macro-average F1 of 0.637 for distinguishing between abusive and offensive messages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Social Media platforms have become an intrinsic part of the lives of lots of people. A phenomenon that accompanies Social Media platforms, with serious impacts on society, is the presence of socially unacceptable language. Socially unacceptable language is to be regarded as a generic umbrella term comprehending many different user-generated language phenomena such as toxic language (Karan and \u0160najder, 2019; Bhat et al., 2021) , offensive language (Zampieri et al., 2019c; , abusive language (Karan and \u0160najder, 2018; Caselli et al., 2020; Wiegand et al., 2021) , hate speech (Waseem and Hovy, 2016a; Davidson et al., 2019; Basile et al., 2019) , among others. While manually monitoring and flagging these phenomena is impossible, there has been a growing interest in the Computational Linguistics (CL) and Natural Language Processing (NLP) communities to develop automatic systems to flag messages containing these phenomena.",
"cite_spans": [
{
"start": 385,
"end": 410,
"text": "(Karan and \u0160najder, 2019;",
"ref_id": "BIBREF20"
},
{
"start": 411,
"end": 429,
"text": "Bhat et al., 2021)",
"ref_id": null
},
{
"start": 451,
"end": 475,
"text": "(Zampieri et al., 2019c;",
"ref_id": "BIBREF40"
},
{
"start": 495,
"end": 520,
"text": "(Karan and \u0160najder, 2018;",
"ref_id": "BIBREF19"
},
{
"start": 521,
"end": 542,
"text": "Caselli et al., 2020;",
"ref_id": null
},
{
"start": 543,
"end": 564,
"text": "Wiegand et al., 2021)",
"ref_id": "BIBREF36"
},
{
"start": 579,
"end": 603,
"text": "(Waseem and Hovy, 2016a;",
"ref_id": "BIBREF34"
},
{
"start": 604,
"end": 626,
"text": "Davidson et al., 2019;",
"ref_id": "BIBREF10"
},
{
"start": 627,
"end": 647,
"text": "Basile et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Besides the limitations of this type of reactive interventions, previous work (Nozza, 2021) has shown the necessity of language specific resources for these phenomena to properly train systems. This work contributes in this direction by presenting a comprehensive dataset to identify socially unacceptable language in Twitter messages in Dutch. We integrate and extend DALC v1.0 (Caselli et al., 2021) by introducing a new annotation layer for offensive language and expanding the size of the dataset from 8,156 messages to 11,292. The main contribution of this paper can be summarised as follows:",
"cite_spans": [
{
"start": 78,
"end": 91,
"text": "(Nozza, 2021)",
"ref_id": "BIBREF24"
},
{
"start": 379,
"end": 401,
"text": "(Caselli et al., 2021)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 a new release of DALC, DALC v2.0, with a) more than 3k newly annotated messages and b) annotations for the offensive language dimension;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 an extensive set of experiments to model the different annotation dimensions involved;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 an error analysis showing the limits of current models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The annotation guidelines, the data, and the code for the reported experiments, and a data statement (Bender and Friedman, 2018) are publicly available. 1 Examples of offensive messages have been redacted to preserve privacy and explicit offensive lexical items have been obfuscated.",
"cite_spans": [
{
"start": 153,
"end": 154,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Offensive language is a broader language phenomenon when compared to other phenomena and behaviours (e.g., abusive language, hate speech or cyberbullying) and, most importantly, more subjective (Vidgen et al., 2019; Poletto et al., 2021) . In Offensive Language (Zampieri et al., 2019a) Abusive Language (Caselli et al., 2021) Posts containing any form of non-acceptable language (profanity) or a targeted offence, which can be veiled or direct. This includes insults, threats, and posts containing profane language or swear words. Impolite, harsh, or hurtful language (that may contain profanities or vulgar language) that result in a debasement, harassment, threat, or aggression of an individual or a (social) group, but not necessarily of an entity, an institution, an organisations, or a concept. general, the use of offensive language is intrinsically connected to freedom of speech. However, in the context of social media interactions, the presence and use of offensive language towards other users should raise concerns because it may escalate the exchange in deeper verbal hostility (e.g., hate speech) and give rise to highly toxic, and unsafe environments (Chowdhury et al., 2020). While we can identify and list parameters and details that help us to narrow down whether a message is abusive or not, the offensiveness of a message is only partially dependent on its content. Other variables such as the context of occurrence, the background and experience of the reader/annotator play a relevant role. Despite these difficulties, offensive language datasets have been developed in different languages (Sigurbergsson and Derczynski, 2020; Pitenis et al., 2020; \u00c7\u00f6ltekin, 2020; Chowdhury et al., 2020) and used in recent shared tasks (Zampieri et al., 2019c .",
"cite_spans": [
{
"start": 194,
"end": 215,
"text": "(Vidgen et al., 2019;",
"ref_id": "BIBREF32"
},
{
"start": 216,
"end": 237,
"text": "Poletto et al., 2021)",
"ref_id": "BIBREF27"
},
{
"start": 262,
"end": 286,
"text": "(Zampieri et al., 2019a)",
"ref_id": "BIBREF38"
},
{
"start": 304,
"end": 326,
"text": "(Caselli et al., 2021)",
"ref_id": "BIBREF7"
},
{
"start": 1614,
"end": 1650,
"text": "(Sigurbergsson and Derczynski, 2020;",
"ref_id": "BIBREF30"
},
{
"start": 1651,
"end": 1672,
"text": "Pitenis et al., 2020;",
"ref_id": "BIBREF26"
},
{
"start": 1673,
"end": 1688,
"text": "\u00c7\u00f6ltekin, 2020;",
"ref_id": null
},
{
"start": 1689,
"end": 1712,
"text": "Chowdhury et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 1745,
"end": 1768,
"text": "(Zampieri et al., 2019c",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Offensive Language: Why and How",
"sec_num": "2"
},
{
"text": "To maximise resource interoperability and foster the study of offensive language from a multilingual perspective, we adopt the definition of offensive language from Zampieri et al. (2019c) . In Table 1 the full definition is reported and compared with the definition of abusive language adopted in the Dutch Abusive Language Corpus (DALC) v1.0. A key element distinguishing these two language phenomena is the level of detail used to describe them, the different emphasis on the intentions of the producers, the presence/absence of a target, and the effects on the receivers. In particular, target is an essential and compulsory element of abusive language, while it is not the case for offensive messages. On the other hand, given its more generic nature, offensive language can be identified in messages that do not contain any target. This is particularly evident in the use of profanities to express strong (positive or negative) emotions. To better clarify the difference between the two phenomena consider the following examples from DALC v2.0: Example 1 instantiate an offensive message, due to the presence of a profanity. Its perception of being offensive can vary according to the context of use and the receivers of the message. At the same time, the message does not fully comply with the definition of abusive language for multiple reasons: there is not a (human) target and there is no intention to debase or harass an individual/group. Example 2, on the contrary, it is a clear case of abusive language. Here the abusive is express via a stereotype and a debasing act, and with an explicit target realised via a specific identity term. The message is abusive and also offensive.",
"cite_spans": [
{
"start": 165,
"end": 188,
"text": "Zampieri et al. (2019c)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [
{
"start": 194,
"end": 201,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Offensive Language: Why and How",
"sec_num": "2"
},
{
"text": "In this work, we have maintained the multi-layer annotation approach of DALC v1.0, distinguishing between the explicitness of the message and its target. The explicitness and the target layers for the offensive dimension have been refined with subclasses along the existing annotation of abusive language. The explicitness layer distinguishes three subclasses: (i.) EXPLICIT; (ii.) IMPLICIT; and (iii.) NOT. While NOT is used to annotate not offensive messages, the difference between the EXPLICIT and IMPLICIT subclasses mainly rely on the surface forms of the message. Explicit offensive content refers to the presence of profanities or combination of words that unambiguously make the message offensive. Implicit messages are more subtle, lacking any surface markers, thus making the offence hidden (Waseem et al., 2017) . The target layer, on the other hand, extends the classes used for abusive language allowing for the absence of a target. In particular, we have four subclasses defined as follows: (i.) INDIVIDUAL, for messages that are addressed or target a specific person or individual (who could be named or not); (ii.) GROUP, for messages that target a group of people considered as a unity because of ethnicity, gender, political affiliation, religion, disabilities, or other common properties; (iii.) OTHER, for messages that target concepts, institutions and organisations, or non-living entities; and (iv.) NOT, for offensive messages without a target. In Table 2 , we report some redacted examples from the dataset to illustrate the combination of the two layers in the annotation process.",
"cite_spans": [
{
"start": 802,
"end": 823,
"text": "(Waseem et al., 2017)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [
{
"start": 1473,
"end": 1480,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Offensive Language: Why and How",
"sec_num": "2"
},
{
"text": "Data Collection and Annotation DALC v1.0 is a corpus of 8,156 messages from Twitter in Dutch obtained by applying three different collection methods: keywords extraction, message geolocation, and seed users. We have extracted a total of 10k messages using only the keywords and seed users data from DALC v1.0, since these two sources proved to be denser and more suitable for the language phenomenon of interest. Following the settings of DALC v1.0, there is no overlap of messages concerning topic and authors between train and test distributions. Consequently, the 10k messages are equally and independently extracted from the train and test candidates -resulting in 5k messages per distribution. We divided the messages of each distribution in batches of 1k each for the annotation. Given the highly subjective nature of offensive language, all annotations for both layers have been conducted in parallel by four annotators. 2 Annotators were asked to apply the definition of offensive language as reported in Table 1 . Each offensive message was then annotated for the explicitness and the target layers.",
"cite_spans": [],
"ref_spans": [
{
"start": 1013,
"end": 1020,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Offensive Language: Why and How",
"sec_num": "2"
},
{
"text": "The annotation has been conducted in two steps. In the first step, the annotators focused on all 6,267 messages that were marked as not abusive in DALC v1.0. This is a necessary curation phase in order to be compliant with the distinction between offensive and abusive language. In the second steps, we have annotated 5 additional batches for train and 1 batch for test. The final amount of annotated data is 12,251. 3 Table 3 reports the pairwise Cohen's Kappa score for all the four annotators for the explicitness and the target layers. The agreement scores have been computed on all the annotated data. The agreement for explicitness layer ranges between a minimum of 0.330 to a maximum of 0.541, indicating a slight/substantial agreement, with a global Fleiss' Kappa of 0.430. It is worth noting that there is a variation in agreement across the annotators, with A.1 and A.3 being the strongest pair. Kappa scores slightly increase when aggregating the explicitness subclasses into a generic offensive (OFF) label. In this case, the values range between 0.358 (A.2-A.4) and 0.593 (A.1-A.3), with a Fleiss' Kappa of 0.473. The results for the annotation of the target layer are slightly worse, with the minimum agreement being a Cohen's Kappa of 0.250 (A.2-A.3) and a maximum of 0.474 (A.1-A.3). Overall Fleiss's Kappa for the target layer is 0.402.",
"cite_spans": [],
"ref_spans": [
{
"start": 419,
"end": 426,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Offensive Language: Why and How",
"sec_num": "2"
},
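The pairwise and global agreement scores discussed above can be reproduced with standard libraries. The sketch below is a minimal illustration, assuming scikit-learn for Cohen's Kappa and statsmodels for Fleiss' Kappa; the toy annotations stand in for the real parallel annotations, and the label names follow the explicitness scheme described above.

```python
# Sketch of the agreement computation: pairwise Cohen's Kappa plus an overall
# Fleiss' Kappa. Annotator data here is a toy example, not the DALC annotations.
from itertools import combinations
from collections import Counter

from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import fleiss_kappa

# One list of labels per annotator, aligned on the same messages.
annotations = {
    "A.1": ["EXPLICIT", "NOT", "IMPLICIT", "NOT"],
    "A.2": ["EXPLICIT", "NOT", "NOT", "NOT"],
    "A.3": ["EXPLICIT", "IMPLICIT", "IMPLICIT", "NOT"],
    "A.4": ["NOT", "NOT", "IMPLICIT", "NOT"],
}
labels = ["EXPLICIT", "IMPLICIT", "NOT"]

# Pairwise Cohen's Kappa for every annotator pair.
for a, b in combinations(annotations, 2):
    print(f"{a}-{b}: {cohen_kappa_score(annotations[a], annotations[b]):.3f}")

# Fleiss' Kappa expects an items x categories matrix of rating counts.
n_items = len(next(iter(annotations.values())))
table = []
for i in range(n_items):
    counts = Counter(ann[i] for ann in annotations.values())
    table.append([counts.get(lab, 0) for lab in labels])
print(f"Fleiss' Kappa: {fleiss_kappa(table):.3f}")
```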
{
"text": "To better understand these results, we have anal- ysed the pairwise confusion matrices of all the annotators. 4 For the explicitness layer, it clearly appears that the biggest source of disagreement is the offensive status of the message rather than the distinction between explicit or implicit, further supporting the claim that offensiveness is subjective. This has also an impact on the target layer: if a message is not annotated as offensive, the target annotation is ignore.",
"cite_spans": [
{
"start": 110,
"end": 111,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Offensive Language: Why and How",
"sec_num": "2"
},
{
"text": "Explicitness A.1 A.2 A.3 A.4 A.1 -0.457 0.541 0.412 A.2 - -0.373 0.330 A.3 - - -0.471 Target A.1 A.2 A.3 A.4 A.1 -0.391 0.474 0.379 A.2 - -0.304 0.250 A.3 - - -0.457",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Offensive Language: Why and How",
"sec_num": "2"
},
{
"text": "We adopt a majority voting for handling the disagreements and assigning final labels. In all cases where a tie is reached, the examples have been discussed collectively to reach a consensus. However, when subjectivity is an essential property of a language phenomenon, disagreements are more informative than detrimental (Aroyo et al., 2019; Basile, 2020; Leonardelli et al., 2021) . In line with this vision, the final distribution contains the disaggregated annotations to promote further research on the relationship of subjectivity and annotation of natural language phenomena.",
"cite_spans": [
{
"start": 321,
"end": 341,
"text": "(Aroyo et al., 2019;",
"ref_id": "BIBREF1"
},
{
"start": 342,
"end": 355,
"text": "Basile, 2020;",
"ref_id": "BIBREF2"
},
{
"start": 356,
"end": 381,
"text": "Leonardelli et al., 2021)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Handling of disagreements",
"sec_num": null
},
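The aggregation strategy described above (majority voting, with ties set aside for collective adjudication) can be sketched in a few lines. The function name and label values below are illustrative only, not the authors' code.

```python
# Minimal sketch of label aggregation by majority vote, with ties flagged for
# collective adjudication as described above.
from collections import Counter

def aggregate(labels_per_annotator):
    """Return the majority label, or None when the vote is tied."""
    counts = Counter(labels_per_annotator).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None  # tie: resolve by discussion among the annotators
    return counts[0][0]

print(aggregate(["EXPLICIT", "EXPLICIT", "NOT", "IMPLICIT"]))  # EXPLICIT
print(aggregate(["EXPLICIT", "EXPLICIT", "NOT", "NOT"]))       # None (tie)
```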
{
"text": "The annotated corpus contains 11,292 Twitter messages in Dutch, covering a time period between November 2015 and August 2020. For completeness, all messages marked as offensive and containing a target have also been further annotated for abusiveness. For abusive language, we applied the same annotation procedure used in DALC v1.0. Table 4 illustrates the distribution of the data for the abusive and offensive dimensions, and the target layers across the Train/Dev and Test distributions. The unbalanced distribution between the negative and the positive examples for both the abusive and the offensive dimensions is part of the design strategy. While the actual distribution of these classes in social media is unknown, a distribution of 2/3 vs. 1/3 between negative and positive examples appears to be more realistic than a per- fectly balanced dataset and in line with previous work (Basile et al., 2019; Zampieri et al., 2019c .",
"cite_spans": [
{
"start": 888,
"end": 909,
"text": "(Basile et al., 2019;",
"ref_id": "BIBREF3"
},
{
"start": 910,
"end": 932,
"text": "Zampieri et al., 2019c",
"ref_id": "BIBREF40"
}
],
"ref_spans": [
{
"start": 333,
"end": 340,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Data Overview",
"sec_num": "3"
},
{
"text": "Overall, 2,097 messages have been annotated as abusive, with an increase of 208 messages when compared to DALC v1.0. On the other hand, 3,783 messages have been marked as offensive. In both dimensions, the explicit subclass represents the majority, with 62.47% of cases for the abusive dimension and 58.71% for the offensive one. The difference in the distribution of the implicit subclass is striking, with implicit offensive messages being almost the double of the abusive counterpart. A possible explanation can be found in the definitions of the two phenomena and their annotations: offensive messages have been labelled as such either because they contained a profanity, or because the annotators subjectively perceived them as offensive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Overview",
"sec_num": "3"
},
{
"text": "As for the targets, we observe that only a minority of offensive messages does not have a target (6.95%). When compared to other datasets for offensive language, the amount of messages associated with this class varies -for instance, being the majority class in Sigurbergsson and Derczynski (2020) but not the minority in Zampieri et al. (2019b) -suggesting that there may be a dependency of this subclass on the method(s) used for collecting the data. On the other hand, differences in the realisation of the targets are more evident when focusing on the IND and GRP subclasses. Offensive messages have a balanced distribution between these two subclasses corresponding to 28.25% and 28.60% of all the targets, respectively. On the contrary, abusive messages see a majority of cases (55.22%) for the IND subclass, and relatively fewer cases for GRP (34.09%). Lastly, the OTH subclass has been selected more often (19.53%) with offensive messages than with the abusive ones (only 10.68%). This difference can be again explained in the light of the definitions of the two phenomena.",
"cite_spans": [
{
"start": 322,
"end": 345,
"text": "Zampieri et al. (2019b)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Overview",
"sec_num": "3"
},
{
"text": "No significant difference in length has been found between abusive and offensive messages (average length abusive 28.79 words; average length offensive 28.44), 5 while this is not the case for offensive and not offensive messages (average length not offensive 21.93 words; average length offensive 28.44). 6 Similarly to DALC v1.0, significant differences in length between implicit and explicit messages appear only in the Test distribution, where implicit offensive messages have an average of 30.04 words compared to the 23.55 words of the explicit messages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Overview",
"sec_num": "3"
},
{
"text": "To gain better insights into the data and the differences between the two dimensions, we have extracted and compared the top-50 keywords between the Train and Test distributions by collapsing the subclass in the explicitness layer, resulting in OF-FENSIVE, ABUSIVE, NOT (Table 11 in Appendix B illustrates the top-10 keywords). While we observe a lack of overlapping lexical items between Train and Test distributions, and the absence of any topic-specific lexical items, the differences between offensive and abusive language are not as neat as one would imagine. Besides the presence of some profanities or slurs, most of the keywords do not present any specific denotative or connotative markings for offensive and/or abusive language.",
"cite_spans": [],
"ref_spans": [
{
"start": 270,
"end": 279,
"text": "(Table 11",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Data Overview",
"sec_num": "3"
},
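The keyword comparison described above can be approximated by pooling the messages of each class and ranking vocabulary items by TF-IDF. The sketch below uses scikit-learn and toy Dutch snippets; it is an assumption about how such a per-class ranking can be done, not the authors' exact extraction procedure.

```python
# Hedged sketch of per-class keyword extraction with TF-IDF: the messages of
# each class are pooled into one document and the vocabulary is ranked per class.
from sklearn.feature_extraction.text import TfidfVectorizer

class_docs = {
    "OFFENSIVE": "zo grof wat een onzin",
    "ABUSIVE": "jullie zijn allemaal hetzelfde",
    "NOT": "mooi weer vandaag in groningen",
}

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(class_docs.values())
vocab = vectorizer.get_feature_names_out()

for (label, _), row in zip(class_docs.items(), matrix.toarray()):
    top = sorted(zip(row, vocab), reverse=True)[:10]
    print(label, [word for score, word in top if score > 0])
```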
{
"text": "We ran a set of experiments to validate the newly annotated corpus. We first focused on the iden-tification of the offensiveness dimension ( \u00a7 4.1), and then on the target layer ( \u00a7 4.2). We also investigate the ability of systems to distinguish between offensive and abusive dimensions ( \u00a7 4.3). We tested four different architectures: a Linear SVM combining character and word n-gram TF-IDF vectors, a Bi-LSTM model initialised Coosto pre-trained word embeddings, 7 and two monolingual Transformer-based pre-trained Language Models (PTLMs), namely BERTje (de Vries et al., 2019) and RobBERT (Delobelle et al., 2020) . The two PTLMs differ with respect to their architectures (BERT vs. RoBERTa), the size (12GB vs. 39GB) and origin of the data used to generate the models (manually selected data vs. the Dutch section of the automatically derived OSCAR corpus (Su\u00e1rez et al., 2019) ). All models are trained on the Train split and evaluated against the heldout, non-overlapping Test split. The Dev split is used for tuning of the systems' (hyper)parameters. Models are compared using the macro-average F1. However, given the imbalance among the subclasses in the different layers, for each subclass, we also report Precision and Recall. For the offensiveness and the offensive target dimensions, systems are compared against a dummy classifier based on the majority class. In all experiments, a common preprocessing approach is applied. All preprocessing steps and (hyper)parameters are detailed in Appendix A for replicability.",
"cite_spans": [
{
"start": 593,
"end": 617,
"text": "(Delobelle et al., 2020)",
"ref_id": "BIBREF13"
},
{
"start": 861,
"end": 882,
"text": "(Su\u00e1rez et al., 2019)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
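As a concrete illustration of the SVM baseline described above (character and word n-gram TF-IDF features evaluated with macro-averaged F1 against a held-out split), a minimal scikit-learn sketch follows. The n-gram ranges, hyperparameters, and toy data are assumptions and not the exact settings listed in Appendix A.

```python
# Minimal sketch of a Linear SVM baseline with concatenated character and word
# n-gram TF-IDF features, scored with macro-averaged F1. Settings are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.svm import LinearSVC
from sklearn.metrics import classification_report, f1_score

train_texts = ["zo grof !", "mooi weer vandaag", "wat een onzin zeg"]
train_labels = ["OFF", "NOT", "OFF"]
test_texts = ["echt grof dit", "lekker weer"]
test_labels = ["OFF", "NOT"]

model = Pipeline([
    ("features", FeatureUnion([
        ("word", TfidfVectorizer(analyzer="word", ngram_range=(1, 2))),
        ("char", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5))),
    ])),
    ("clf", LinearSVC()),
])
model.fit(train_texts, train_labels)
predictions = model.predict(test_texts)

print("macro-F1:", f1_score(test_labels, predictions, average="macro"))
print(classification_report(test_labels, predictions))
```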
{
"text": "We have first modeled the offensiveness dimension both as a binary classification task, by collapsing the EXPLICIT and IMPLICIT subclasses into a single value, namely OFF(ENSIVE). Given the distribution of the annotated data, the task is already challenging. The second experiment setting follows the fine-grained, tripartite distinction between EXPLICIT, IMPLICIT and NOT. Table 5 presents the results for the binary setting. All models outperform the dummy baseline, with RobBERT achieving the best results (macroaverage F1 of 0.828). Interestingly, the second best system is the Bi-LSTM rather than the other PTLM, BERTje, with a macro-average F1 of 0.823. When comparing the results of these two latter models, we observe that BERTje underpeforms on the OFF label, especially for Recall. A possible ex- planation can be found by taking into account the properties of the embedding representations of the models. The Coosto word embeddings used to initialise the Bi-LSTM have been obtained by using a large amount of messages from social media (624 million messages out of a total of 660 million texts), making them more suitable and inline with the text variety of the dataset. This may also be one of the reasons why RobBERT performs best: the data used to generate its embeddings are also from the Web, although not specifically from social media posts. To further validate the behaviour of the Bi-LSTM model, we ran a further set of experiments using random pre-trained embeddings obtained from the Dutch CoNLL17 corpus 8 (Fares et al., 2017) . The embeddings are smaller than the Coosto ones (100 dimensions vs. 300 dimensions for Coosto), and obtained from a different data distribution. While the results 9 are lower (macro-F1 0.799 0.004 ), they are still competitive, with the macro-F1 falling within the standard deviation of BERTje. All systems achieve very good results on the negative class but suffer on the positive one. This is mainly due to the lack of overlapping elements between the Train/Dev and the Test split, besides the impact of the unbalanced distribution of the data in the training data. This is particularly evident for the Recall of the OFF class of the SVM which is barely above 0.5. Finally, in absolute terms, the results of the top systems are in line with those reported for comparable datasets in other languages .",
"cite_spans": [
{
"start": 1529,
"end": 1549,
"text": "(Fares et al., 2017)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 374,
"end": 381,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Detecting Offensive Language",
"sec_num": "4.1"
},
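The binary offensive experiments with the monolingual PTLMs can be sketched with the Hugging Face transformers Trainer. The model identifier below is assumed to be BERTje's public checkpoint (GroNLP/bert-base-dutch-cased), and the tiny dataset and hyperparameters are placeholders rather than the configuration reported in Appendix A.

```python
# Hedged sketch of fine-tuning a Dutch PTLM for binary offensive classification.
# Model identifier, data, and hyperparameters are assumptions, not the paper's setup.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import Dataset

model_name = "GroNLP/bert-base-dutch-cased"  # BERTje; assumed hub identifier
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy training data: 1 = OFF(ENSIVE), 0 = NOT.
train = Dataset.from_dict({"text": ["zo grof !", "mooi weer"], "label": [1, 0]})
train = train.map(lambda batch: tokenizer(batch["text"], truncation=True,
                                          padding="max_length", max_length=64),
                  batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=train,
)
trainer.train()
```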
{
"text": "The outcome of the fine-grained experiments are detailed in Table 6 . Rather than focusing only on the best systems, we have experimented with all of them to see whether the patterns observed in the binary classification remain valid. The picture that emerges is slightly different. The performances on the EXP and the NOT subclasses are almost unchanged for the neural-based systems, while they dramatically drop for the EXP subclass for the SVM model. All systems struggle to distinguish the IMP subclass, with the Bi-LSTM achieving the best Precision. When compared to the binary classification, the results of the two PTLMs are closer and marginally better than the Bi-LSTM, confirming RobBERT as the best system (macro-average F1 0.667). Interestingly, BERTje has the highest Recall score for the IMP subclass.",
"cite_spans": [],
"ref_spans": [
{
"start": 60,
"end": 67,
"text": "Table 6",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Detecting Offensive Language",
"sec_num": "4.1"
},
{
"text": "Target identification has an important role within the more general task of offensive language identification, especially because it can help to better assess the seriousness of the offence and contribute to the study of more specific phenomena such as hate speech (Waseem et al., 2017; Zampieri et al., 2019b) . In particular, messages containing a target can be further annotated by distinguishing whether they express an insult or stronger forms of degradation (e.g., abusive language, or hate speech), and by refining the types of target (e.g., gender, race/ethnicity, political orientation, disabilities, among others).",
"cite_spans": [
{
"start": 265,
"end": 286,
"text": "(Waseem et al., 2017;",
"ref_id": "BIBREF33"
},
{
"start": 287,
"end": 310,
"text": "Zampieri et al., 2019b)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Detecting the Targets",
"sec_num": "4.2"
},
{
"text": "In these experiments, we have assumed a perfect labelling of the messages for offensiveness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Detecting the Targets",
"sec_num": "4.2"
},
{
"text": "This results in a reduced number of messages that we can use for training and testing our systems. Similarly to the offensiveness dimension, we have compared our results against a dummy classifier that always predicts the most frequent label, i.e., IND. The results are reported in Table 7 . Given the higher number of subclasses and the reduced number of messages useful for training the systems, target identification is more challenging. All systems outperform the dummy baseline, with varying degrees of performance. The first striking result is the (relatively) close performance of the SVM and the Bi-LSTM models, with a macro F1 delta of 0.004. While the Bi-LSTM has a better performance for the IND and GRP subclasses, the SVM obtains better results on the OTH and the NOT. The PTLMs confirm as the best systems and for this task BERTje outperforms RobBERT, with a macro-average F1 of 0.579.",
"cite_spans": [],
"ref_spans": [
{
"start": 282,
"end": 289,
"text": "Table 7",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "Detecting the Targets",
"sec_num": "4.2"
},
{
"text": "Similarly to the offensive dimension, the distribution of the labels in the Train split clearly has an impact on the results of the trained systems (see Table 4 ). Thus, it is not surprising that all systems tend to overgeneralise the IND subclass since it is the most frequent one. When analysing the confusion matrices across all systems, it appears that the most confounded class is OTH. The class tends to be wrongly assigned to the IND and the GRP subclasses. In this section, we present a set of experiments that challenges systems to distinguish between three categories: whether a message is offensive but not abusive (OFF; see example 1), whether a message is abusive (ABU; see example 2), and whether a message is neither (NOT). The task is framed as a multi-class classification problem rather than as a multi-label classification one. This results in a slightly different distribution of the labels, namely in Train we have 1,391 (20.51%) messages marked as ABU, 1,086 (16.01%) messages marked as OFF, and 4,304 (63.47%) messages for NOT. The test split has 463 (14.15%) ABU messages, 404 (12.35%) OFF messages, and 2,403 (73.48%) messages marked as NOT. The distribution between the ABU and OFF classes is unbalanced in favour of the ABU class.",
"cite_spans": [],
"ref_spans": [
{
"start": 153,
"end": 160,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Detecting the Targets",
"sec_num": "4.2"
},
{
"text": "Results for these experiments are illustrated in Table 8 . As the figures show, the imbalance of the classes in the Train split affects the performance of all systems, with the results for the ABU messages being better than those labelled as OFF, but worse than those labelled as NOT. RobBERT qualifies again as the best system followed by the Bi-LSTM, and with the SVM being the worst. The results for BERTje are comparable to those obtained for the offensive experiments in the binary setting (see Table 5 ). Across all systems, we observe a tendency to wrongly classify OFF as NOT, and ABU as OFF. Connecting this with our analysis of the top-keywords per class indicates that the systems trained in this way heavily rely on superficial linguistic cues rather than grasping deeper and more heavily discriminating cues. In addition to this, when focusing on the combination of the explicitness layers and the ABU and OFF classes, we observe that in the Train split the majority of ABU messages (i.e, 62.25%) are marked as EXPLICIT, while this holds only for 49.81% of the OFF messages. It thus appears that with varying degrees all systems have identified a clear shortcut in these experiments whereby messages that are marked as EXPLICIT are then more often associated with the ABU class.",
"cite_spans": [],
"ref_spans": [
{
"start": 49,
"end": 56,
"text": "Table 8",
"ref_id": "TABREF14"
},
{
"start": 500,
"end": 507,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Distinguishing between Offensive and Abusive Language",
"sec_num": "4.3"
},
{
"text": "We have conducted an error analysis for the offensive dimension and the offensive target layer since they represents the new annotations in the dataset. The error analysis has been conducted on the Dev set using the best performing system for each dimension.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5"
},
{
"text": "Offensive Language For the offensive language dimension, we have used the predictions by Rob-BERT in the binary settings. The system wrongly classifies 179 messages, with the majority (101 messages) being OFF messages wrongly labelled as NOT. To gain better insights, we have classified all the errors into six categories:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5"
},
{
"text": "\u2022 criticism: 13.40% of the errors are due to messages expressing some form of criticism; 75% of them are OFF wrongly labelled as NOT;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5"
},
{
"text": "\u2022 obfuscation: only 3.35% of OFF messages wrongly labelled are due to obfuscation or abbreviation of profanities or slurs;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5"
},
{
"text": "\u2022 sarcasm/irony: 8.93% of the errors are due to presence of irony or sarcasm; the majority (62.5%) concerns errors for the OFF subclass wrongly considered as NOT;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5"
},
{
"text": "\u2022 world knowledge: 13.4% of the errors could have been correctly classified by means of some form of world knowledge;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5"
},
{
"text": "\u2022 gold errors: 7.82% of the errors are due to potential annotation mistakes in the gold standard data;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5"
},
{
"text": "\u2022 bias: this category comprises the largest amount of errors, 48.6% of the messages. 60.91% of the errors are False Positives for the OFF subclass containing identity terms (e.g. \"gay\"), names of political parties or politicians, or religious terms; the remainder of the messages are False Negatives for the OFF subclass containing stereotypes or being implicitly offensive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5"
},
{
"text": "Target For targets, 127 messages are wrongly classified. When analysing the confusion matrices across all systems, it appears that the most confounded class is OTH. The class tends to be wrongly assigned to the IND and the GRP subclasses. On the contrary, the errors for the NOT subclass are limited and they seem to be due to lack of training data. The large part of the errors (31.49%) are due to different elements such as mixture of pronouns in the message (e.g., \"jij\" and \"ze\"), presence of collective nouns, or presence of a user's placeholder (i.e., MENTION) but no direct address in the text, and even mentions of concepts. The second largest block of errors, 23.62%, is due to the presence of multiple placeholders in the message, often happening in Twitter when replying to a long conversation but not necessarily addressing all the users involved. 18.11% of the errors could have been avoided by correctly processing the verb form. Given the larger amount of classes, 15.74% of the messages present some errors in the gold datanote, however, that these messages also include the errors in the gold standard for the offensive language dimension. Finally, 11.02% of the targets could have been correctly assigned if some form of commonsense knowledge was available to the system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5"
},
{
"text": "The interest for the development of datasets and systems for the detection of abusive and offensive language phenomena has seen a steep growth in recent years. Different phenomena have been investigated including racism (Waseem and Hovy, 2016b; Davidson et al., , 2019 , hate speech (Alfina et al., 2017; Founta et al., 2018; Mishra et al., 2018; Basile et al., 2019) , toxicity 10 , verbal aggression (Kumar et al., 2018) , and misogyny (Frenda et al., 2018; Pamungkas et al., 2020; Guest et al., 2021) .",
"cite_spans": [
{
"start": 220,
"end": 244,
"text": "(Waseem and Hovy, 2016b;",
"ref_id": "BIBREF35"
},
{
"start": 245,
"end": 268,
"text": "Davidson et al., , 2019",
"ref_id": "BIBREF10"
},
{
"start": 283,
"end": 304,
"text": "(Alfina et al., 2017;",
"ref_id": "BIBREF0"
},
{
"start": 305,
"end": 325,
"text": "Founta et al., 2018;",
"ref_id": "BIBREF15"
},
{
"start": 326,
"end": 346,
"text": "Mishra et al., 2018;",
"ref_id": "BIBREF23"
},
{
"start": 347,
"end": 367,
"text": "Basile et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 402,
"end": 422,
"text": "(Kumar et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 438,
"end": 459,
"text": "(Frenda et al., 2018;",
"ref_id": "BIBREF16"
},
{
"start": 460,
"end": 483,
"text": "Pamungkas et al., 2020;",
"ref_id": "BIBREF25"
},
{
"start": 484,
"end": 503,
"text": "Guest et al., 2021)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Offensive language, as we have detailed in \u00a7 2, is a more general and subjective phenomenon than abusive language. Founta et al. 2018provides an extensive analysis of the correlations between different phenomena and decide to collapse messages labelled as abusive, offensive and aggressive into a single category, namely abusive. Early attempts to annotate offensive language have been conducted in German as part of broader evaluation on hate speech (Wiegand et al., 2018) . The SemEval 2019 Task 6: OffensEval (Zampieri et al., 2019c) has set up a common reference framework for the definition and the annotation of offensive language. The follow-up edition of the task applied the original definition and annotation approach to four additional languages other than English, namely Turkish, Danish, Arabic, Greek. This corpus complements these annotation efforts with a further compatible dataset to fill a gap in the Dutch language resource panorama and to promote the advancement of multilingual approaches.",
"cite_spans": [
{
"start": 451,
"end": 473,
"text": "(Wiegand et al., 2018)",
"ref_id": "BIBREF37"
},
{
"start": 512,
"end": 536,
"text": "(Zampieri et al., 2019c)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "A different direction to the development of multilingual offensive language datasets has been presented with XHATE-99 (Glava\u0161 et al., 2020) . In this case, the authors have semi-automatically translated selected messages from three English datasets into five target languages (Albanian, Croatian, German, Russian, and Turkish). By working with translations, the authors have managed to better disentangle the impact of language versus domain shift in a transfer learning setting. As a matter of fact, the language alignments have ensured that losses observed in the cross-lingual setting are solely due to language shift rather than domain.",
"cite_spans": [
{
"start": 118,
"end": 139,
"text": "(Glava\u0161 et al., 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "This paper has presented DALC v2.0, a corpus for detecting offensive and abusive language in social media for Dutch. The corpus is composed of 11,292 messages manually annotated and it currently represents the largest available resource for these language phenomena in Dutch. Offensive language captures a more subjective dimension when compared to abusive language. For this reason, the data have been annotated in parallel by all annotators. We have applied a multi-layered annotation scheme targeting two key dimensions: the explicitness of the message and the presence of a potential target. For both annotation layers, the final labels have been assigned by means of majority voting. However, in the release of the corpus, we also distribute the disaggregated labels for both layers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "We have conducted a series of experiments by applying different algorithms. We have obtained the best results by using two monolingual PTLMs, namely RobBERT for the offensive dimension, and BERTje for the targets. For the offensive dimension, we have found that a Bi-LSTM architecture is very competitive when compared to the PTLMs also when using non-domain specific embeddings. We have also experimented on the ability of the models to distinguish between abusive and offensive language, obtaining promising results, showing that the distinction between offensive and abusive language is a more complex task than targeting each phenomenon individually.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "Our error analysis has indicated limits of the systems and of the dataset. In particular, it seems that systems heavily rely on surface cues to assign a label to the message, showing a lack of \"comprehension\" of the content of the message and a high sensitivity to the distribution of the data in the training split.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "Future work will focus on further testing the abilities of the dataset to train robust system by applying trained models to dynamic benchmark on the line of the HateCheck approach (R\u00f6ttger et al., 2021) . Furthermore, given the presence of multiple compatible corpora in different languages, we plan to explore the application of multilingual systems to address this task.",
"cite_spans": [
{
"start": 180,
"end": 202,
"text": "(R\u00f6ttger et al., 2021)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "Dual use DALC v2.0 and all the accompanying models are exposed to risks of dual use from malevolent agents. However, by making publicly available the resource and documenting the process behind its creation and the training of the models (including their limitations and errors), we may mitigate such risks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Statement",
"sec_num": null
},
{
"text": "Misrepresentation As the error analysis has shown ( \u00a7 5), even the best system is far from being perfect, with a relatively high number of False Positive for the OFF subclass. We thus recommend caution before deploying such a model without any additional human supervision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Statement",
"sec_num": null
},
{
"text": "Privacy Collection of data from Twitter's users has been conducted in compliance with Twitter's Terms of Service. Given the large amount of users that may be involved, we could not collect informed consent from each of them. To comply with this limitations, we have made publicly available only the tweet IDs. This will protect the users' rights to delete their messages or accounts. However, re-leasing only IDs exposes DALC to fluctuations in terms of potentially available messages, thus making replicability of experiments and comparison with future work impossible. To obviate to this limitation, we make available another version of the corpus, Full Text. This version of the corpus allows users to access to the full text message of all 11,292 tweets. The Full Text dataset is released with a dedicated licence. In this case, we make available only the text, removing any information related to the time periods or seed users. We have also anonymised all users' mentions and external URLs. The licence explicitly prevents users to actively search for the text of the messages in any form. We deem these sufficient steps to protect users' privacy and rights to do research using internet material.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Statement",
"sec_num": null
},
{
"text": "Figures 1 to 12 illustrate the pairwise confusion matrix for each pair of annotators for the offensive explicitness layer and the offensive target layer. Note: for completeness, the target layer contains an extra subclass (NOT OFF) indicating cases where one annotator has marked the message as OFFENSIVE and, consequently, he has annotated also the target while the other has consider the message as not containing any offence. Table 11 illustrates the keywords for the messages labeled as OFFENSIVE, ABUSIVE, and NOT OFFEN-SIVE. The keywords have been extracted using TF-IDF per language phenomenon rather than per subclass by collapsing the explicitness layers (i.e., offensive vs. abusive rather than abusive explicit vs. offensive explicit, and so forth). Figure 13 illustrates the confusion matrix for the offensive language dimension (binary classification), while Figure 14 illustrates the confusion matrix for the target classification (offensive messages only) ",
"cite_spans": [],
"ref_spans": [
{
"start": 429,
"end": 437,
"text": "Table 11",
"ref_id": "TABREF0"
},
{
"start": 761,
"end": 770,
"text": "Figure 13",
"ref_id": "FIGREF0"
},
{
"start": 872,
"end": 881,
"text": "Figure 14",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Ethical Statement",
"sec_num": null
},
{
"text": "https://github.com/tommasoc80/DALC",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The annotators are also authors of this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "15 messages from the last training batch were not annotated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Statistical test: Mann-Whitney Test; p > 0.05 6 Statistical test: Mann-Whitney Test; p < 0.05",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/coosto/ dutch-word-embeddings",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://vectors.nlpl.eu/repository/ 9 OFF Precision: 0.7370.058, OFF Recall: 0.6800.064; NOT Precision: 0.8880.016, NOT Recall: 0.9080.034",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The Toxic Comment Classification Challenge https: //bit.ly/2QuHKD6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Preprocessing All experiments have been conducted with common pre-processing steps, namely:\u2022 lowercasing of all words\u2022 all users' mentions have been substituted with a placeholder (MENTION);\u2022 all URLs have been substituted with a with a placeholder (URL);\u2022 all ordinal numbers have been replaced with a placeholder (NUMBER);\u2022 emojis have been replaced with text (e.g. \u2192 :pleading_face:) using Python emoji package;\u2022 hashtag symbol has been removed from hasthtags (e.g. #kadiricinadalet \u2192 kadiricinadalet);\u2022 extra blank spaces have been replaced with a single space;\u2022 extra blank new lines have been removed.Models' hyperparameters All hyperparamters used for the experiments are reported in Table 9 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 691,
"end": 698,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Appendix A: Replicability",
"sec_num": null
}
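The preprocessing steps listed above can be approximated with a small Python function. The regular expressions and the emoji.demojize call are our reconstruction of the described steps (the exact code is not reproduced here), and the placeholder tokens follow the names given in the list; the number pattern is an approximation of the ordinal-number rule.

```python
# Hedged sketch of the preprocessing pipeline described in Appendix A.
import re
import emoji

def preprocess(text: str) -> str:
    text = text.lower()                                         # lowercase all words
    text = re.sub(r"@\w+", "MENTION", text)                     # user mentions -> placeholder
    text = re.sub(r"https?://\S+", "URL", text)                 # URLs -> placeholder
    text = re.sub(r"\b\d+(st|nd|rd|th|e)?\b", "NUMBER", text)   # (ordinal) numbers -> placeholder
    text = emoji.demojize(text)                                 # emoji -> :short_name:
    text = text.replace("#", "")                                # drop hashtag symbol
    text = re.sub(r"\s+", " ", text).strip()                    # collapse spaces and blank lines
    return text

print(preprocess("Zo GROF! @user1 kijk dit https://t.co/xyz #schandalig 😡"))
```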
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Hate speech detection in the indonesian language: A dataset and preliminary study",
"authors": [
{
"first": "Rio",
"middle": [],
"last": "Ika Alfina",
"suffix": ""
},
{
"first": "Mohamad",
"middle": [
"Ivan"
],
"last": "Mulia",
"suffix": ""
},
{
"first": "Yudo",
"middle": [],
"last": "Fanany",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ekanata",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 International Conference on Advanced Computer Science and Information Systems (ICACSIS)",
"volume": "",
"issue": "",
"pages": "233--238",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ika Alfina, Rio Mulia, Mohamad Ivan Fanany, and Yudo Ekanata. 2017. Hate speech detection in the indone- sian language: A dataset and preliminary study. In 2017 International Conference on Advanced Com- puter Science and Information Systems (ICACSIS), pages 233-238. IEEE.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Crowdsourcing subjective tasks: The case study of understanding toxicity in online discussions",
"authors": [
{
"first": "Lora",
"middle": [],
"last": "Aroyo",
"suffix": ""
},
{
"first": "Lucas",
"middle": [],
"last": "Dixon",
"suffix": ""
},
{
"first": "Nithum",
"middle": [],
"last": "Thain",
"suffix": ""
},
{
"first": "Olivia",
"middle": [],
"last": "Redfield",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Rosen",
"suffix": ""
}
],
"year": 2019,
"venue": "Companion Proceedings of The 2019 World Wide Web Conference, WWW '19",
"volume": "",
"issue": "",
"pages": "1100--1105",
"other_ids": {
"DOI": [
"10.1145/3308560.3317083"
]
},
"num": null,
"urls": [],
"raw_text": "Lora Aroyo, Lucas Dixon, Nithum Thain, Olivia Red- field, and Rachel Rosen. 2019. Crowdsourcing sub- jective tasks: The case study of understanding toxic- ity in online discussions. In Companion Proceedings of The 2019 World Wide Web Conference, WWW '19, page 1100-1105, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "It's the end of the gold standard as we know it. on the impact of pre-aggregation on the evaluation of highly subjective tasks",
"authors": [
{
"first": "Valerio",
"middle": [],
"last": "Basile",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Valerio Basile. 2020. It's the end of the gold standard as we know it. on the impact of pre-aggregation on the evaluation of highly subjective tasks. In DP@AI*IA.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "SemEval-2019 task 5: Multilingual detection of hate speech against immigrants and women in twitter",
"authors": [
{
"first": "Valerio",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Cristina",
"middle": [],
"last": "Bosco",
"suffix": ""
},
{
"first": "Elisabetta",
"middle": [],
"last": "Fersini",
"suffix": ""
},
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Patti",
"suffix": ""
},
{
"first": "Francisco Manuel Rangel",
"middle": [],
"last": "Pardo",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
},
{
"first": "Manuela",
"middle": [],
"last": "Sanguinetti",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "54--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Valerio Basile, Cristina Bosco, Elisabetta Fersini, Debora Nozza, Viviana Patti, Francisco Manuel Rangel Pardo, Paolo Rosso, and Manuela Sanguinetti. 2019. SemEval-2019 task 5: Multilingual detec- tion of hate speech against immigrants and women in twitter. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 54-63, Min- neapolis, Minnesota, USA. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Data statements for natural language processing: Toward mitigating system bias and enabling better science",
"authors": [
{
"first": "M",
"middle": [],
"last": "Emily",
"suffix": ""
},
{
"first": "Batya",
"middle": [],
"last": "Bender",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Friedman",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "587--604",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily M Bender and Batya Friedman. 2018. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6:587-604.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Say 'YES' to positivity: Detecting toxic language in workplace communications",
"authors": [],
"year": 2021,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2021",
"volume": "",
"issue": "",
"pages": "2017--2029",
"other_ids": {
"DOI": [
"10.18653/v1/2021.findings-emnlp.173"
]
},
"num": null,
"urls": [],
"raw_text": "Meghana Moorthy Bhat, Saghar Hosseini, Ahmed Has- san Awadallah, Paul Bennett, and Weisheng Li. 2021. Say 'YES' to positivity: Detecting toxic language in workplace communications. In Findings of the Asso- ciation for Computational Linguistics: EMNLP 2021, pages 2017-2029, Punta Cana, Dominican Republic. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "2020. I feel offended, don't be abusive! implicit/explicit messages in offensive and abusive language",
"authors": [
{
"first": "Tommaso",
"middle": [],
"last": "Caselli",
"suffix": ""
},
{
"first": "Valerio",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Jelena",
"middle": [],
"last": "Mitrovi\u0107",
"suffix": ""
},
{
"first": "Inga",
"middle": [],
"last": "Kartoziya",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Granitzer",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "6193--6202",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tommaso Caselli, Valerio Basile, Jelena Mitrovi\u0107, Inga Kartoziya, and Michael Granitzer. 2020. I feel of- fended, don't be abusive! implicit/explicit messages in offensive and abusive language. In Proceedings of the 12th Language Resources and Evaluation Confer- ence, pages 6193-6202, Marseille, France. European Language Resources Association.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Hylke van der Veen, Gerben Timmerman, and Malvina Nissim",
"authors": [
{
"first": "Tommaso",
"middle": [],
"last": "Caselli",
"suffix": ""
},
{
"first": "Arjan",
"middle": [],
"last": "Schelhaas",
"suffix": ""
},
{
"first": "Marieke",
"middle": [],
"last": "Weultjes",
"suffix": ""
},
{
"first": "Folkert",
"middle": [],
"last": "Leistra",
"suffix": ""
},
{
"first": "Hylke",
"middle": [],
"last": "van der Veen",
"suffix": ""
},
{
"first": "Gerben",
"middle": [],
"last": "Timmerman",
"suffix": ""
},
{
"first": "Malvina",
"middle": [],
"last": "Nissim",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021)",
"volume": "",
"issue": "",
"pages": "54--66",
"other_ids": {
"DOI": [
"10.18653/v1/2021.woah-1.6"
]
},
"num": null,
"urls": [],
"raw_text": "Tommaso Caselli, Arjan Schelhaas, Marieke Weultjes, Folkert Leistra, Hylke van der Veen, Gerben Timmer- man, and Malvina Nissim. 2021. DALC: the Dutch abusive language corpus. In Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021), pages 54-66, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A multi-platform Arabic news comment dataset for offensive language detection",
"authors": [
{
"first": "Hamdy",
"middle": [],
"last": "Shammur Absar Chowdhury",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Mubarak",
"suffix": ""
},
{
"first": "Soon-Gyo",
"middle": [],
"last": "Abdelali",
"suffix": ""
},
{
"first": "Bernard",
"middle": [
"J"
],
"last": "Jung",
"suffix": ""
},
{
"first": "Joni",
"middle": [],
"last": "Jansen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Salminen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "6203--6212",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shammur Absar Chowdhury, Hamdy Mubarak, Ahmed Abdelali, Soon-gyo Jung, Bernard J. Jansen, and Joni Salminen. 2020. A multi-platform Arabic news com- ment dataset for offensive language detection. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 6203-6212, Marseille, France. European Language Resources Association.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A corpus of Turkish offensive language on social media",
"authors": [],
"year": null,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "6174--6184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\u00c7agr\u0131 \u00c7\u00f6ltekin. 2020. A corpus of Turkish offensive language on social media. In Proceedings of the 12th Language Resources and Evaluation Confer- ence, pages 6174-6184, Marseille, France. European Language Resources Association.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Racial bias in hate speech and abusive language detection datasets",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "Debasmita",
"middle": [],
"last": "Bhattacharya",
"suffix": ""
},
{
"first": "Ingmar",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Third Workshop on Abusive Language Online",
"volume": "",
"issue": "",
"pages": "25--35",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3504"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Davidson, Debasmita Bhattacharya, and Ing- mar Weber. 2019. Racial bias in hate speech and abusive language detection datasets. In Proceedings of the Third Workshop on Abusive Language Online, pages 25-35, Florence, Italy. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Automated hate speech detection and the problem of offensive language",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Warmsley",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Macy",
"suffix": ""
},
{
"first": "Ingmar",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the International AAAI Conference on Web and Social Media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech de- tection and the problem of offensive language. In Proceedings of the International AAAI Conference on Web and Social Media.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Bertje: A dutch bert model",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Wietse De Vries",
"suffix": ""
},
{
"first": "Arianna",
"middle": [],
"last": "Van Cranenburgh",
"suffix": ""
},
{
"first": "Tommaso",
"middle": [],
"last": "Bisazza",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Caselli",
"suffix": ""
},
{
"first": "Malvina",
"middle": [],
"last": "Gertjan Van Noord",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nissim",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1912.09582"
]
},
"num": null,
"urls": [],
"raw_text": "Wietse de Vries, Andreas van Cranenburgh, Arianna Bisazza, Tommaso Caselli, Gertjan van Noord, and Malvina Nissim. 2019. Bertje: A dutch bert model. arXiv preprint arXiv:1912.09582.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "RobBERT: a Dutch RoBERTa-based Language Model",
"authors": [
{
"first": "Pieter",
"middle": [],
"last": "Delobelle",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Winters",
"suffix": ""
},
{
"first": "Bettina",
"middle": [],
"last": "Berendt",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "3255--3265",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.292"
]
},
"num": null,
"urls": [],
"raw_text": "Pieter Delobelle, Thomas Winters, and Bettina Berendt. 2020. RobBERT: a Dutch RoBERTa-based Lan- guage Model. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3255-3265, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Word vectors, reuse, and replicability: Towards a community repository of large-text resources",
"authors": [
{
"first": "Murhaf",
"middle": [],
"last": "Fares",
"suffix": ""
},
{
"first": "Andrey",
"middle": [],
"last": "Kutuzov",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Velldal",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 21st Nordic Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "271--276",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Murhaf Fares, Andrey Kutuzov, Stephan Oepen, and Erik Velldal. 2017. Word vectors, reuse, and replica- bility: Towards a community repository of large-text resources. In Proceedings of the 21st Nordic Confer- ence on Computational Linguistics, pages 271-276, Gothenburg, Sweden. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Large scale crowdsourcing and characterization of twitter abusive behavior",
"authors": [
{
"first": "Constantinos",
"middle": [],
"last": "Antigoni Maria Founta",
"suffix": ""
},
{
"first": "Despoina",
"middle": [],
"last": "Djouvas",
"suffix": ""
},
{
"first": "Ilias",
"middle": [],
"last": "Chatzakou",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Leontiadis",
"suffix": ""
},
{
"first": "Gianluca",
"middle": [],
"last": "Blackburn",
"suffix": ""
},
{
"first": "Athena",
"middle": [],
"last": "Stringhini",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Vakali",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Sirivianos",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kourtellis",
"suffix": ""
}
],
"year": 2018,
"venue": "Twelfth International AAAI Conference on Web and Social Media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antigoni Maria Founta, Constantinos Djouvas, De- spoina Chatzakou, Ilias Leontiadis, Jeremy Black- burn, Gianluca Stringhini, Athena Vakali, Michael Sirivianos, and Nicolas Kourtellis. 2018. Large scale crowdsourcing and characterization of twitter abusive behavior. In Twelfth International AAAI Conference on Web and Social Media.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Exploration of misogyny in spanish and english tweets",
"authors": [
{
"first": "Simona",
"middle": [],
"last": "Frenda",
"suffix": ""
},
{
"first": "Ghanem",
"middle": [],
"last": "Bilal",
"suffix": ""
}
],
"year": 2018,
"venue": "Third Workshop on Evaluation of Human Language Technologies for Iberian Languages",
"volume": "2150",
"issue": "",
"pages": "260--267",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simona Frenda, Ghanem Bilal, et al. 2018. Exploration of misogyny in spanish and english tweets. In Third Workshop on Evaluation of Human Language Tech- nologies for Iberian Languages (IberEval 2018), vol- ume 2150, pages 260-267. Ceur Workshop Proceed- ings.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "XHate-999: Analyzing and detecting abusive language across domains and languages",
"authors": [
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
},
{
"first": "Mladen",
"middle": [],
"last": "Karan",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6350--6365",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.559"
]
},
"num": null,
"urls": [],
"raw_text": "Goran Glava\u0161, Mladen Karan, and Ivan Vuli\u0107. 2020. XHate-999: Analyzing and detecting abusive lan- guage across domains and languages. In Proceed- ings of the 28th International Conference on Com- putational Linguistics, pages 6350-6365, Barcelona, Spain (Online). International Committee on Compu- tational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "An expert annotated dataset for the detection of online misogyny",
"authors": [
{
"first": "Ella",
"middle": [],
"last": "Guest",
"suffix": ""
},
{
"first": "Bertie",
"middle": [],
"last": "Vidgen",
"suffix": ""
},
{
"first": "Alexandros",
"middle": [],
"last": "Mittos",
"suffix": ""
},
{
"first": "Nishanth",
"middle": [],
"last": "Sastry",
"suffix": ""
},
{
"first": "Gareth",
"middle": [],
"last": "Tyson",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Margetts",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "1336--1350",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ella Guest, Bertie Vidgen, Alexandros Mittos, Nishanth Sastry, Gareth Tyson, and Helen Margetts. 2021. An expert annotated dataset for the detection of online misogyny. In Proceedings of the 16th Conference of the European Chapter of the Association for Com- putational Linguistics: Main Volume, pages 1336- 1350.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Cross-domain detection of abusive language online",
"authors": [
{
"first": "Mladen",
"middle": [],
"last": "Karan",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "\u0160najder",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd Workshop on Abusive Language Online (ALW2)",
"volume": "",
"issue": "",
"pages": "132--137",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5117"
]
},
"num": null,
"urls": [],
"raw_text": "Mladen Karan and Jan \u0160najder. 2018. Cross-domain detection of abusive language online. In Proceedings of the 2nd Workshop on Abusive Language Online (ALW2), pages 132-137, Brussels, Belgium. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Preemptive toxic language detection in Wikipedia comments using thread-level context",
"authors": [
{
"first": "Mladen",
"middle": [],
"last": "Karan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Third Workshop on Abusive Language Online",
"volume": "",
"issue": "",
"pages": "129--134",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3514"
]
},
"num": null,
"urls": [],
"raw_text": "Mladen Karan and Jan \u0160najder. 2019. Preemptive toxic language detection in Wikipedia comments using thread-level context. In Proceedings of the Third Workshop on Abusive Language Online, pages 129- 134, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Benchmarking aggression identification in social media",
"authors": [
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Atul",
"middle": [],
"last": "Kr",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Ojha",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018)",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ritesh Kumar, Atul Kr. Ojha, Shervin Malmasi, and Marcos Zampieri. 2018. Benchmarking aggression identification in social media. In Proceedings of the First Workshop on Trolling, Aggression and Cyber- bullying (TRAC-2018), pages 1-11, Santa Fe, New Mexico, USA. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Agreeing to disagree: Annotating offensive language datasets with annotators' disagreement",
"authors": [
{
"first": "Elisa",
"middle": [],
"last": "Leonardelli",
"suffix": ""
},
{
"first": "Stefano",
"middle": [],
"last": "Menini",
"suffix": ""
},
{
"first": "Alessio",
"middle": [],
"last": "Palmero Aprosio",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Guerini",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Tonelli",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "10528--10539",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elisa Leonardelli, Stefano Menini, Alessio Palmero Aprosio, Marco Guerini, and Sara Tonelli. 2021. Agreeing to disagree: Annotating offensive language datasets with annotators' dis- agreement. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Process- ing, pages 10528-10539, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Author profiling for abuse detection",
"authors": [
{
"first": "Pushkar",
"middle": [],
"last": "Mishra",
"suffix": ""
},
{
"first": "Marco",
"middle": [
"Del"
],
"last": "Tredici",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Yannakoudakis",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Shutova",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1088--1098",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pushkar Mishra, Marco Del Tredici, Helen Yan- nakoudakis, and Ekaterina Shutova. 2018. Author profiling for abuse detection. In Proceedings of the 27th International Conference on Computational Lin- guistics, pages 1088-1098.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Exposing the limits of zero-shot cross-lingual hate speech detection",
"authors": [
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "907--914",
"other_ids": {
"DOI": [
"10.18653/v1/2021.acl-short.114"
]
},
"num": null,
"urls": [],
"raw_text": "Debora Nozza. 2021. Exposing the limits of zero-shot cross-lingual hate speech detection. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 907-914, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Misogyny detection in twitter: a multilingual and cross-domain study",
"authors": [
{
"first": "Valerio",
"middle": [],
"last": "Endang Wahyu Pamungkas",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Patti",
"suffix": ""
}
],
"year": 2020,
"venue": "Information Processing & Management",
"volume": "57",
"issue": "6",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Endang Wahyu Pamungkas, Valerio Basile, and Viviana Patti. 2020. Misogyny detection in twitter: a multilin- gual and cross-domain study. Information Processing & Management, 57(6):102360.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Offensive language identification in Greek",
"authors": [
{
"first": "Zesis",
"middle": [],
"last": "Pitenis",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "5113--5119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zesis Pitenis, Marcos Zampieri, and Tharindu Ranas- inghe. 2020. Offensive language identification in Greek. In Proceedings of the 12th Language Re- sources and Evaluation Conference, pages 5113- 5119, Marseille, France. European Language Re- sources Association.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Resources and benchmark corpora for hate speech detection: a systematic review",
"authors": [
{
"first": "Fabio",
"middle": [],
"last": "Poletto",
"suffix": ""
},
{
"first": "Valerio",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Manuela",
"middle": [],
"last": "Sanguinetti",
"suffix": ""
},
{
"first": "Cristina",
"middle": [],
"last": "Bosco",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Patti",
"suffix": ""
}
],
"year": 2021,
"venue": "Lang. Resour. Evaluation",
"volume": "55",
"issue": "2",
"pages": "477--523",
"other_ids": {
"DOI": [
"10.1007/s10579-020-09502-8"
]
},
"num": null,
"urls": [],
"raw_text": "Fabio Poletto, Valerio Basile, Manuela Sanguinetti, Cristina Bosco, and Viviana Patti. 2021. Resources and benchmark corpora for hate speech detection: a systematic review. Lang. Resour. Evaluation, 55(2):477-523.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Multilingual offensive language identification with crosslingual embeddings",
"authors": [
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "5838--5844",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.470"
]
},
"num": null,
"urls": [],
"raw_text": "Tharindu Ranasinghe and Marcos Zampieri. 2020. Mul- tilingual offensive language identification with cross- lingual embeddings. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 5838-5844, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "HateCheck: Functional tests for hate speech detection models",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "R\u00f6ttger",
"suffix": ""
},
{
"first": "Bertie",
"middle": [],
"last": "Vidgen",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Zeerak",
"middle": [],
"last": "Waseem",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Margetts",
"suffix": ""
},
{
"first": "Janet",
"middle": [],
"last": "Pierrehumbert",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "41--58",
"other_ids": {
"DOI": [
"10.18653/v1/2021.acl-long.4"
]
},
"num": null,
"urls": [],
"raw_text": "Paul R\u00f6ttger, Bertie Vidgen, Dong Nguyen, Zeerak Waseem, Helen Margetts, and Janet Pierrehumbert. 2021. HateCheck: Functional tests for hate speech detection models. In Proceedings of the 59th An- nual Meeting of the Association for Computational Linguistics and the 11th International Joint Confer- ence on Natural Language Processing (Volume 1: Long Papers), pages 41-58, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Offensive language and hate speech detection for Danish",
"authors": [
{
"first": "Leon",
"middle": [],
"last": "Gudbjartur Ingi Sigurbergsson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Derczynski",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "3498--3508",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gudbjartur Ingi Sigurbergsson and Leon Derczynski. 2020. Offensive language and hate speech detec- tion for Danish. In Proceedings of the 12th Lan- guage Resources and Evaluation Conference, pages 3498-3508, Marseille, France. European Language Resources Association.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Asynchronous pipeline for processing huge corpora on medium to low resource infrastructures",
"authors": [
{
"first": "Pedro Javier Ortiz",
"middle": [],
"last": "Su\u00e1rez",
"suffix": ""
},
{
"first": "Beno\u00eet",
"middle": [],
"last": "Sagot",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Romary",
"suffix": ""
}
],
"year": 2019,
"venue": "7th Workshop on the Challenges in the Management of Large Corpora (CMLC-7). Leibniz-Institut f\u00fcr Deutsche Sprache",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pedro Javier Ortiz Su\u00e1rez, Beno\u00eet Sagot, and Laurent Romary. 2019. Asynchronous pipeline for process- ing huge corpora on medium to low resource infras- tructures. In 7th Workshop on the Challenges in the Management of Large Corpora (CMLC-7). Leibniz- Institut f\u00fcr Deutsche Sprache.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Challenges and frontiers in abusive content detection",
"authors": [
{
"first": "Bertie",
"middle": [],
"last": "Vidgen",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Harris",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Rebekah",
"middle": [],
"last": "Tromble",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Hale",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Margetts",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Third Workshop on Abusive Language Online",
"volume": "",
"issue": "",
"pages": "80--93",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3509"
]
},
"num": null,
"urls": [],
"raw_text": "Bertie Vidgen, Alex Harris, Dong Nguyen, Rebekah Tromble, Scott Hale, and Helen Margetts. 2019. Challenges and frontiers in abusive content detec- tion. In Proceedings of the Third Workshop on Abu- sive Language Online, pages 80-93, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Understanding abuse: A typology of abusive language detection subtasks",
"authors": [
{
"first": "Zeerak",
"middle": [],
"last": "Waseem",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Warmsley",
"suffix": ""
},
{
"first": "Ingmar",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Workshop on Abusive Language Online",
"volume": "",
"issue": "",
"pages": "78--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeerak Waseem, Thomas Davidson, Dana Warmsley, and Ingmar Weber. 2017. Understanding abuse: A typology of abusive language detection subtasks. In Proceedings of the First Workshop on Abusive Lan- guage Online, pages 78-84.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Hateful symbols or hateful people? predictive features for hate speech detection on Twitter",
"authors": [
{
"first": "Zeerak",
"middle": [],
"last": "Waseem",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the NAACL Student Research Workshop",
"volume": "",
"issue": "",
"pages": "88--93",
"other_ids": {
"DOI": [
"10.18653/v1/N16-2013"
]
},
"num": null,
"urls": [],
"raw_text": "Zeerak Waseem and Dirk Hovy. 2016a. Hateful sym- bols or hateful people? predictive features for hate speech detection on Twitter. In Proceedings of the NAACL Student Research Workshop, pages 88-93, San Diego, California. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Hateful symbols or hateful people? predictive features for hate speech detection on twitter",
"authors": [
{
"first": "Zeerak",
"middle": [],
"last": "Waseem",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the NAACL Student Research Workshop",
"volume": "",
"issue": "",
"pages": "88--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeerak Waseem and Dirk Hovy. 2016b. Hateful sym- bols or hateful people? predictive features for hate speech detection on twitter. In Proceedings of the NAACL Student Research Workshop, pages 88-93, San Diego, California. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Implicitly abusive language -what does it actually look like and why are we not getting there?",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Wiegand",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
},
{
"first": "Elisabeth",
"middle": [],
"last": "Eder",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "576--587",
"other_ids": {
"DOI": [
"10.18653/v1/2021.naacl-main.48"
]
},
"num": null,
"urls": [],
"raw_text": "Michael Wiegand, Josef Ruppenhofer, and Elisabeth Eder. 2021. Implicitly abusive language -what does it actually look like and why are we not getting there? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 576-587, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Inducing a lexicon of abusive words-a feature-based approach",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Wiegand",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Schmidt",
"suffix": ""
},
{
"first": "Clayton",
"middle": [],
"last": "Greenberg",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1046--1056",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Wiegand, Josef Ruppenhofer, Anna Schmidt, and Clayton Greenberg. 2018. Inducing a lexicon of abusive words-a feature-based approach. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1046-1056.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Predicting the type and target of offensive posts in social media",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Noura",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1415--1420",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1144"
]
},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019a. Predicting the type and target of offensive posts in social media. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1415-1420, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Predicting the Type and Target of Offensive Posts in Social Media",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Noura",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019b. Predicting the Type and Target of Offensive Posts in Social Media. In Proceedings of NAACL.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "SemEval-2019 task 6: Identifying and categorizing offensive language in social media (Of-fensEval)",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Noura",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "75--86",
"other_ids": {
"DOI": [
"10.18653/v1/S19-2010"
]
},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019c. SemEval-2019 task 6: Identifying and cat- egorizing offensive language in social media (Of- fensEval). In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 75-86, Min- neapolis, Minnesota, USA. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Zeses Pitenis, and \u00c7agr\u0131 \u00c7\u00f6ltekin. 2020. SemEval-2020 task 12: Multilingual offensive language identification in social media",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Pepa",
"middle": [],
"last": "Atanasova",
"suffix": ""
},
{
"first": "Georgi",
"middle": [],
"last": "Karadzhov",
"suffix": ""
},
{
"first": "Hamdy",
"middle": [],
"last": "Mubarak",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Derczynski",
"suffix": ""
},
{
"first": "Zeses",
"middle": [],
"last": "Pitenis",
"suffix": ""
},
{
"first": "\u00c7agr\u0131",
"middle": [],
"last": "\u00c7\u00f6ltekin",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "1425--1447",
"other_ids": {
"DOI": [
"10.18653/v1/2020.semeval-1.188"
]
},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Preslav Nakov, Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Hamdy Mubarak, Leon Derczynski, Zeses Pitenis, and \u00c7agr\u0131 \u00c7\u00f6ltekin. 2020. SemEval-2020 task 12: Multilingual offensive language identification in social media (OffensEval 2020). In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1425-1447, Barcelona (online). International Committee for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Explicitness Layer: A.1-A.2.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Explicitness Layer: A.1-A.3.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Explicitness Layer: A.1-A.4.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "Explicitness Layer: A.2-A.3.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF4": {
"text": "Explicitness Layer: A.2-A.4. 53 Explicitness Layer: A.3-A.4.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF5": {
"text": "Target Layer: A.1-A.2.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF6": {
"text": "Target Layer: A.1-A.3.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF7": {
"text": "Target Layer: A.1-A.4.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF8": {
"text": "Target Layer: A.2-A.3.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF9": {
"text": "Target Layer: A.2-A.4.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF10": {
"text": "Target Layer: A.3-A.4.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF11": {
"text": "Confusion Matrix: Offensive Binary.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF12": {
"text": "Confusion Matrix: Offensive Target.",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF0": {
"html": null,
"text": "Definitions of offensive and abusive language adopted in this work.",
"num": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF3": {
"html": null,
"text": "Examples of the annotation of the explicitness and the target layers. EXP. = EXPLICIT, IMP. = IMPLICIT; IND. = INDIVIDUAL, GRP. = GROUP, OTH. = OTHER. English translations in brackets.",
"num": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF4": {
"html": null,
"text": "Inter-Annotator Agreement for the Explicitness and the Target layers -pairwise Cohen's Kappa.",
"num": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF6": {
"html": null,
"text": "DALC v2.0: Distribution of subclasses in Train, Dev, and Test splits for abusive, offensive dimensions and target layers. Target is split between target of abusive messages and target of offensive messages.",
"num": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF8": {
"html": null,
"text": "DALC v2.0: Offensive language, binary classification. Lower script numbers show standard deviations over 3 different runs. Best scores in bold.",
"num": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF10": {
"html": null,
"text": "",
"num": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF12": {
"html": null,
"text": "",
"num": null,
"type_str": "table",
"content": "<table><tr><td>: DALC v2.0: Target layer classification. Lower script numbers show standard deviations over 3 different runs. Best scores in bold.</td></tr></table>"
},
"TABREF14": {
"html": null,
"text": "",
"num": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF16": {
"html": null,
"text": "DALC v2.0: Top 10 keywords per target phenomenon in Train and Test. Explicitly offensive/abusive content have been masked with *",
"num": null,
"type_str": "table",
"content": "<table/>"
}
}
}
}