{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:52:23.307354Z"
},
"title": "OffendES: A New Corpus in Spanish for Offensive Language Research",
"authors": [
{
"first": "Flor",
"middle": [
"Miriam"
],
"last": "Plaza-Del-Arco",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universidad de Ja\u00e9n",
"location": {
"addrLine": "Campus Las Lagunillas",
"postCode": "23071",
"settlement": "Ja\u00e9n",
"country": "Spain"
}
},
"email": ""
},
{
"first": "Arturo",
"middle": [],
"last": "Montejo-R\u00e1ez",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universidad de Ja\u00e9n",
"location": {
"addrLine": "Campus Las Lagunillas",
"postCode": "23071",
"settlement": "Ja\u00e9n",
"country": "Spain"
}
},
"email": ""
},
{
"first": "L",
"middle": [],
"last": "Alfonso Ure\u00f1a-L\u00f3pez",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universidad de Ja\u00e9n",
"location": {
"addrLine": "Campus Las Lagunillas",
"postCode": "23071",
"settlement": "Ja\u00e9n",
"country": "Spain"
}
},
"email": ""
},
{
"first": "Mar\u00eda-Teresa",
"middle": [],
"last": "Mart\u00edn-Valdivia",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universidad de Ja\u00e9n",
"location": {
"addrLine": "Campus Las Lagunillas",
"postCode": "23071",
"settlement": "Ja\u00e9n",
"country": "Spain"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Offensive language detection and analysis has become a major area of research in Natural Language Processing. The freedom of participation in social media has exposed online users to posts designed to denigrate, insult or hurt them according to gender, race, religion, ideology, or other personal characteristics. Focusing on young influencers from the well-known social platforms of Twitter, Instagram, and YouTube, we have collected a corpus composed of 47,128 Spanish comments manually labeled with pre-defined offensive categories. A subset of the corpus attaches a degree of confidence to each label, so both multi-class classification and multi-output regression studies are possible. In this paper, we introduce the corpus, discuss its building process and novelties, and present some preliminary experiments with it to serve as a baseline for the research community.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Offensive language detection and analysis has become a major area of research in Natural Language Processing. The freedom of participation in social media has exposed online users to posts designed to denigrate, insult or hurt them according to gender, race, religion, ideology, or other personal characteristics. Focusing on young influencers from the well-known social platforms of Twitter, Instagram, and YouTube, we have collected a corpus composed of 47,128 Spanish comments manually labeled with pre-defined offensive categories. A subset of the corpus attaches a degree of confidence to each label, so both multi-class classification and multi-output regression studies are possible. In this paper, we introduce the corpus, discuss its building process and novelties, and present some preliminary experiments with it to serve as a baseline for the research community.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Offensive language is defined as text that uses hurtful, derogatory, or obscene terms directed by one person at another (Wiegand et al., 2019). Related terms in the literature are hate speech (Waseem and Hovy, 2016), cyberbullying (Rosa et al., 2019), toxic language (van Aken et al., 2018), aggressive language (Kumar et al., 2018), or abusive language (Nobata et al., 2016). Although there are subtle differences in meaning, they are all compatible with the above general definition.",
"cite_spans": [
{
"start": 128,
"end": 150,
"text": "(Wiegand et al., 2019)",
"ref_id": "BIBREF28"
},
{
"start": 201,
"end": 224,
"text": "(Waseem and Hovy, 2016)",
"ref_id": "BIBREF26"
},
{
"start": 241,
"end": 260,
"text": "(Rosa et al., 2019)",
"ref_id": "BIBREF23"
},
{
"start": 278,
"end": 301,
"text": "(van Aken et al., 2018)",
"ref_id": "BIBREF0"
},
{
"start": 324,
"end": 344,
"text": "(Kumar et al., 2018)",
"ref_id": "BIBREF12"
},
{
"start": 367,
"end": 388,
"text": "(Nobata et al., 2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Due to the well-acknowledged rise in digital social interactions, in particular on social media platforms, the amount of offensive language is also steadily growing. Unfortunately, this type of prejudiced communication can be extremely harmful and could lead to negative psychological effects among online users, especially among young people, causing anxiety, harassment, and even suicide in extreme cases (Hinduja and Patchin, 2010) .",
"cite_spans": [
{
"start": 407,
"end": 434,
"text": "(Hinduja and Patchin, 2010)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "At the same time, this issue also implicates governments, online communities, and social media platforms. In order to help fight this problem, these stakeholders are continuously taking action to implement laws and policies combating hate speech. For instance, since 2013 the Council of Europe has sponsored the \"No Hate Speech\" movement 1 seeking to mobilize young people to combat hate speech and promote human rights online. In May 2016, the European Commission reached an agreement with Facebook, Microsoft, Twitter, and YouTube to create a \"Code of conduct on countering illegal hate speech online\" 2 . From 2018 to 2020, platforms such as Instagram, Snapchat, and TikTok adopted the Code. According to a 2019 Spanish report on the evolution of hate crimes in Spain 3 , threats, insults, and discrimination are the most frequently recorded criminal acts, with the Internet (54.9%) and social media (17.2%) being the media most widely used to commit them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To help achieve this goal, automatic systems based on Natural Language Processing (NLP) techniques are required. To train these systems, corpora labeled for offensive language are essential. In recent years, the NLP community has invested considerable effort into resource generation. However, most of these resources have been directed towards English, even though offensive language is a global concern and there are important cultural differences depending on the language examined. In addition, most of them have focused on Twitter data, despite the presence of offensive language on other platforms such as YouTube or Instagram, which are more widely used by young people.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To contribute to filling this gap, in this paper 4 we present OffendES, a Spanish collection of comments manually labeled for offensive content using a fine-grained annotation scheme. We collect our data from young influencers on well-known social platforms including Twitter, Instagram, and YouTube. This enables a comparative study of offensive behavior in social media and its relationship with the influencers. Finally, we propose preliminary experiments that show the validity of the corpus and serve as a baseline for the NLP community. The remainder of the paper is organized as follows. Section 2 describes related work on offensive language, including some available datasets. Section 3 introduces our OffendES dataset and some descriptive statistics. Section 4 presents our baseline evaluation of the novel dataset. A discussion is provided in Section 5. Finally, we conclude and outline future studies in Section 6.",
"cite_spans": [
{
"start": 49,
"end": 50,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In recent years, as offensive language continues to spread on the Internet, identifying this type of content in textual information has become increasingly important in the NLP field, with several studies applying different machine learning systems. Most of these studies focus on the detection of offensiveness in social media, usually including a binary classification task to detect the presence of offensive language in the text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Offensive Language Detection",
"sec_num": "2.1"
},
{
"text": "Early studies explored traditional machine learning algorithms including Support Vector Machines, Logistic Regression, Random Forest, or Decision Trees, as well as the combination of different types of syntactic, lexical, semantic, and sentiment features (Chen et al., 2012; Nobata et al., 2016; Or\u0203san, 2018; Plaza-del-Arco et al., 2019).",
"cite_spans": [
{
"start": 255,
"end": 274,
"text": "(Chen et al., 2012;",
"ref_id": "BIBREF6"
},
{
"start": 275,
"end": 295,
"text": "Nobata et al., 2016;",
"ref_id": "BIBREF14"
},
{
"start": 296,
"end": 309,
"text": "Or\u0203san, 2018;",
"ref_id": "BIBREF15"
},
{
"start": 310,
"end": 338,
"text": "Plaza-del-Arco et al., 2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Offensive Language Detection",
"sec_num": "2.1"
},
{
"text": "As neural network architectures have shown promising results, extensive studies have recently explored a variety of deep learning architectures including Recurrent and Convolutional Neural Networks (Ranasinghe et al., 2019; Sharifirad and Matwin, 2019; Georgakopoulos et al., 2018). More recently, Transformer-based models have made significant progress and represent the state of the art in multiple tasks, including offensive language detection (Plaza-del-Arco, Flor Miriam and Molina-Gonz\u00e1lez, M. Dolores and Ure\u00f1a-L\u00f3pez, L. Alfonso and Mart\u00edn-Valdivia, Mar\u00eda-Teresa, 2020; Casula et al., 2020; Wiedemann et al., 2020).",
"cite_spans": [
{
"start": 198,
"end": 223,
"text": "(Ranasinghe et al., 2019;",
"ref_id": "BIBREF22"
},
{
"start": 224,
"end": 252,
"text": "Sharifirad and Matwin, 2019;",
"ref_id": "BIBREF24"
},
{
"start": 253,
"end": 281,
"text": "Georgakopoulos et al., 2018)",
"ref_id": "BIBREF9"
},
{
"start": 657,
"end": 693,
"text": "Mart\u00edn-Valdivia, Mar\u00eda-Teresa, 2020;",
"ref_id": null
},
{
"start": 694,
"end": 714,
"text": "Casula et al., 2020;",
"ref_id": "BIBREF5"
},
{
"start": 715,
"end": 738,
"text": "Wiedemann et al., 2020)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Offensive Language Detection",
"sec_num": "2.1"
},
{
"text": "Several labeled datasets are publicly available and usually include a binary annotation, indicating whether the content is offensive or not. Most of them have been generated in the context of different shared tasks for different languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Available",
"sec_num": "2.2"
},
{
"text": "For instance, the well-known offensive language task OffensEval has held two editions at the International Workshop on Semantic Evaluation (SemEval). In the first edition, Zampieri et al. (2019b) released the OLID dataset, which contains over 14,000 English tweets. It was annotated using a three-level hierarchical annotation model by two people on a crowd-sourcing platform (Zampieri et al., 2019a). In order to retrieve tweets, they selected specific keywords and constructions often included in offensive posts related to Twitter accounts. Following the same annotation scheme, in the second edition Zampieri et al. (2020) introduced multilingual datasets comprising five different languages.",
"cite_spans": [
{
"start": 173,
"end": 196,
"text": "Zampieri et al. (2019b)",
"ref_id": "BIBREF31"
},
{
"start": 379,
"end": 403,
"text": "(Zampieri et al., 2019a)",
"ref_id": "BIBREF30"
},
{
"start": 608,
"end": 630,
"text": "Zampieri et al. (2020)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Available",
"sec_num": "2.2"
},
{
"text": "The Germeval shared task focused on offensive language identification in German tweets (Wiegand and Siegel, 2018). A dataset of over 8,500 annotated tweets was provided, also following a hierarchical annotation. To collect the data, the authors explored the timelines of users who regularly post offensive content. Tweets were manually annotated by one of the three organizers of the task and, to measure inter-annotator agreement, 300 tweets were annotated by the three annotators in parallel. The annotation scheme is similar to that of the previous shared task, but differs in the following aspects: the number of levels in the hierarchy, the labels in the second level, and the language.",
"cite_spans": [
{
"start": 87,
"end": 113,
"text": "(Wiegand and Siegel, 2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Available",
"sec_num": "2.2"
},
{
"text": "Regarding Spanish, most of the datasets within the context of offensive language target hate speech, including AMI (Fersini et al., 2018), HatEval (Basile et al., 2019), and the HaterNet (Pereira-Kohatsu et al., 2019) collections. However, there is a lack of resources regarding offensive language in Spanish. To the best of our knowledge, the first corpus appeared at the 3rd SEPLN Workshop on Evaluation of Human Language Technologies for Iberian Languages (IberEval) (Carmona et al., 2018). This corpus was also used in the next edition of the workshop in 2019 (Arag\u00f3n et al., 2019). The dataset focuses on the Mexican variant of Spanish and contains 10,475 tweets binary-labeled as offensive or non-offensive. This collection has recently been revised (D\u00edaz-Torres et al., 2020). EmoEvent (Plaza-del-Arco, Flor Miriam and Strapparava, Carlo and Ure\u00f1a L\u00f3pez, L. Alfonso and Mart\u00edn-Valdivia, Mar\u00eda-Teresa, 2020) is a multilingual emotion corpus based on different events; it also includes a small proportion of tweets labeled as offensive. Finally, the DETOXIS task 5 recently introduced the first dataset of comments in response to news articles labeled at different toxicity levels. To the best of our knowledge, there is no other Spanish corpus available with fine-grained categories for offensive language focused on young people. As the authors point out in (Arag\u00f3n et al., 2019), the characterization of the offensiveness level found in a text is complex; therefore, there is a need for a more detailed classification of the tweets.",
"cite_spans": [
{
"start": 116,
"end": 138,
"text": "(Fersini et al., 2018)",
"ref_id": "BIBREF8"
},
{
"start": 149,
"end": 170,
"text": "(Basile et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 474,
"end": 496,
"text": "(Carmona et al., 2018)",
"ref_id": "BIBREF4"
},
{
"start": 570,
"end": 591,
"text": "(Arag\u00f3n et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 770,
"end": 796,
"text": "(D\u00edaz-Torres et al., 2020)",
"ref_id": "BIBREF7"
},
{
"start": 1380,
"end": 1401,
"text": "(Arag\u00f3n et al., 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Available",
"sec_num": "2.2"
},
{
"text": "Our dataset, OffendES, differs from existing Spanish offensive language datasets because (i) apart from Twitter, we study the problem of offensive language detection on YouTube and Instagram, platforms that young people are more used to, (ii) we collect the data with a focus on young influencers, and (iii) we propose an annotation scheme with fine-grained classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Available",
"sec_num": "2.2"
},
{
"text": "In this section, we describe the context of the dataset, the methodology followed to collect it, and the annotation scheme proposed to label offensive content. In addition, we give some descriptive statistics and a detailed analysis of the collected data. OffendES is available upon request to the authors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OffendES Dataset",
"sec_num": "3"
},
{
"text": "To understand the rationale behind the design and generation of the corpus, certain contextual information may be useful. As stated in the introduction, dealing with offensive posts on social networks is a growing concern. Several platforms are clear on this issue, as can be read in the rules and policies of Twitter 6 , Instagram 7 , or YouTube 8 . Indeed, YouTube has disabled comments on videos and channels featuring children (The YouTube Team, 2019). But this is a major concern not only for platform providers but also for public administrations, which seek to limit the possible side effects of harmful messaging on more vulnerable communities, such as children or teenagers. With this in mind, the creation of this resource aims to achieve the following long-term goals:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scope of the Dataset",
"sec_num": "3.1"
},
{
"text": "1. Early detection of offensive language use in social media on the Internet, with a special focus on young people.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scope of the Dataset",
"sec_num": "3.1"
},
{
"text": "2. Identifying improvements in protection systems for young people in social networks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scope of the Dataset",
"sec_num": "3.1"
},
{
"text": "3. Studying the feasibility of automatic learning systems for offensive language in Spanish.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scope of the Dataset",
"sec_num": "3.1"
},
{
"text": "4. Creating a reference corpus for the study of language technologies applied to the classification of sexist language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scope of the Dataset",
"sec_num": "3.1"
},
{
"text": "Instagram, YouTube, and Twitter are among the social media platforms most used by people aged 18 to 24 (Jenn Chen, 2020). These three have been selected as the main data sources. A total of 12 controversial influencers with a significant number of followers were identified, and their respective accounts on the three targeted social media platforms were tracked. Table 2 (Appendix) shows the accounts used by the selected influencers in the three selected media. They are Spanish influencers aged 24 to 35; six are men and six are women. The process for collecting comments consisted of two main steps. First, the last 50 posts by each influencer were obtained using the platform API. Then, an ad hoc web scraper was launched to extract user comments on each of the posts obtained (limited to 2,000 replies). This script uses scrolling through JavaScript code commands to retrieve further comments. In the case of YouTube, its API 9 was used instead of the scraper to retrieve comments. Over two months (from February to March 2020), a total of 283,622 comments were collected (see Table 1 for detailed information). The comments were then filtered according to two main constraints: the presence of potentially offensive language and lexical diversity. To avoid building a corpus with few or no offensive comments, we labeled all the comments with flags determining whether the comment contained any of the words found in five different controlled lexicons (Plaza-del-Arco, Flor-Miriam and Molina-Gonz\u00e1lez, M Dolores and Ure\u00f1a-L\u00f3pez, L Alfonso and Mart\u00edn-Valdivia, M. Teresa, 2020). All comments with potentially offensive language were selected (23,788 comments). Since 60,000 comments were to be labeled in the manual annotation phase, we additionally selected 36,212 comments without offensive terms. 
Applying lexical diversity measures proved to be an interesting approach to ensure a diverse set of comments. We first attempted to include those comments that added the highest lexical diversity value to the growing set of collected comments. To that end, we applied the Measure of Textual Lexical Diversity, MTLD (McCarthy and Jarvis, 2010), but the expected time to build the corpus with our implementation was unacceptable. Thus, we used a simpler greedy procedure: at each iteration, we scanned all remaining comments and added to the collection the one that produced the highest increase in vocabulary size. This process was repeated until 60,000 comments were reached.",
"cite_spans": [],
"ref_spans": [
{
"start": 378,
"end": 385,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 1147,
"end": 1154,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "3.2"
},
{
"text": "In order to establish the annotation schema, we followed those defined in (Wiegand and Siegel, 2018; Zampieri et al., 2019a) , while introducing some additional details that we consider important. Namely, we created a new category to include those posts with inappropriate language but no offense intended. For instance, the comment \"eres la puta ama\" (you're the fucking boss) contains inappropriate but non-offensive language and has a positive polarity. Then, we reformulated the definition of offensiveness to not include such posts.",
"cite_spans": [
{
"start": 74,
"end": 100,
"text": "(Wiegand and Siegel, 2018;",
"ref_id": null
},
{
"start": 101,
"end": 124,
"text": "Zampieri et al., 2019a)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Labeling Process",
"sec_num": "3.3"
},
{
"text": "The previous analysis led us to propose a definition of an offensive comment: one where language is used to commit an explicitly or implicitly directed offense that may include insults, threats, profanity, or swearing. Based on this definition, we established the following categories:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Labeling Process",
"sec_num": "3.3"
},
{
"text": "\u2022 Offensive, the target is a person (OFP). Offensive text targeting a specific individual.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Labeling Process",
"sec_num": "3.3"
},
{
"text": "\u2022 Offensive, the target is a group of people or collective (OFG). Offensive text targeting a group of people belonging to the same ethnic group, gender or sexual orientation, political ideology, religious belief, or other common characteristics.",
"cite_spans": [
{
"start": 59,
"end": 64,
"text": "(OFG)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Labeling Process",
"sec_num": "3.3"
},
{
"text": "\u2022 Offensive, the target is different from a person or a group (OFO). Offensive text where the target does not belong to any of the previous categories, e.g., an organization, an event, a place, an issue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Labeling Process",
"sec_num": "3.3"
},
{
"text": "\u2022 Non-offensive, but with expletive language (NOE). A text that contains rude words, blasphemes, or swearwords but without the aim of offending, and usually with a positive connotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Labeling Process",
"sec_num": "3.3"
},
{
"text": "\u2022 Non-offensive (NO). Text that is neither offensive nor contains expletive language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Labeling Process",
"sec_num": "3.3"
},
{
"text": "The annotation of the collected data was performed via Amazon Mechanical Turk (MTurk) 10 , a popular crowdsourcing platform. It provides the option of specifying requirements that human annotators must meet to work on the task, as well as the time allotted per assignment. In our case, we restricted the location to Spain and set the time limit to five minutes due to the presence of some long comments from YouTube. To ensure clear and concise documentation, apart from releasing the annotation scheme with four example instances for each class, we also provided a list of instructions with rules, tips, and FAQs to address any potential problems that could arise during the labeling process. Finally, to ensure the quality of the annotations, we used tracking comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Labeling Process",
"sec_num": "3.3"
},
{
"text": "We first conducted a round of trial annotation for both types of labeling: 4,500 and 1,500 instances with three and ten annotators, respectively. The goals of the trial annotation were (i) to identify any confusion in understanding the annotation schema, (ii) to estimate the average time needed to label the dataset, and (iii) to learn about the platform. These datasets were launched on September 24th, 2020, and it took two weeks to complete the annotation process on both sets. After analyzing the annotations, we observed from the annotators' feedback that the NOE and OFO classes were the most difficult to identify in the comments. For this reason, we improved the definition of each class, providing the annotators with examples that were as clear as possible. The average agreement (kappa coefficient) grew from 36.85% for the trial annotations to 39.37% for the final released comments. Yet, this level of agreement is lower than expected, which reflects the difficulty of discriminating among the proposed classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Labeling Process",
"sec_num": "3.3"
},
{
"text": "Once the trial round was completed, the next step was to release the final dataset. A total of 54,023 instances were released in two subsets: 40,513 labeled by three annotators, and 13,510 labeled by ten annotators. The annotation took place from 17 November 2020 to 2 January 2021. As a result, the three annotators subset covered 44,951 comments and the ten annotators subset 14,989 comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Labeling Process",
"sec_num": "3.3"
},
{
"text": "In order to check the reliability of the annotators, we analyzed their annotations on the tracking comments, i.e. those comments given as examples in the annotation guide. We observed that one of the annotators had an error rate of over 60% on the tracking comments of both types of labeling, so we decided to remove their annotations since they could negatively affect the quality of the dataset. Sadly, this annotator was one of the most prolific, so the removal of their annotations reduced the three annotators subset to 44,951 comments. A sample of the collected data is given in Tables 3 and 4 (Appendix).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-processing",
"sec_num": "3.4"
},
{
"text": "Thus, the final dataset is released divided into two subsets: the three annotators subset (3-Ann), with 44,951 comments, and the ten annotators subset (10-Ann), with 14,989 comments. The former is intended for multi-class classification research and the latter for tackling multi-output regression problems. Only 38 comments belong to both subsets. Comments are compiled without processing, so case, punctuation, and emojis are preserved. Every comment is associated with a social network platform (Instagram, Twitter, or YouTube) and directed at one of the 12 selected influencers as the target. In Table 2 , the number of comments associated with each platform and influencer is depicted. Comments on dalas' posts are the most frequent (over 26% in both subsets). YouTube is the platform where most of the comments were collected (about 75% for both subsets), followed by Instagram (over 18%). Comments from Twitter represent just over 6% of the collection.",
"cite_spans": [],
"ref_spans": [
{
"start": 608,
"end": 615,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Corpus Analysis",
"sec_num": "3.5"
},
{
"text": "For both subsets, the label is the majority class according to the human annotators. For the subset labeled by ten annotators, the majority vote was set to five annotators. An additional None label was used when no agreement was reached between annotators. Table 3 shows the number of comments for each label on both subsets. Noticeably, the 10-Ann subset has a much lower percentage of None labels than the 3-Ann subset: the more annotators involved, the easier it was to decide the final label for a comment. Table 4 shows statistics on comment length (i.e. the number of characters in the text). As expected, YouTube is the platform with the highest average length (about 190 characters for both subsets), with high variance; the average length of Twitter comments is lower (149 characters), with very small variance; and Instagram is the platform where comments tend to be the shortest (average length of 114). Figure 1 shows the distribution of comments among influencers and social media platforms in the 3-Ann subset. YouTube is the most frequent platform, followed by Instagram. The influencer dalas is the target of more than a quarter of the total number of comments. A similar distribution of comments is found in the 10-Ann subset.",
"cite_spans": [],
"ref_spans": [
{
"start": 253,
"end": 260,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 517,
"end": 524,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 913,
"end": 921,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Corpus Analysis",
"sec_num": "3.5"
},
{
"text": "An interesting analysis is to measure label frequency according to each influencer. Figure 2 shows the proportion of influencer-level labels and reflects the differences among these users as targets of offensive comments. In terms of gender, it can be seen that female influencers are subject to a greater number of offensive comments than male accounts. In particular, soyunapringada, miare love, and WindyGirk are the accounts ranked with the most offensive comments. Regarding male influencers, accounts like JaviOliveira and Nauter-Play contain more offensive comments than accounts like WildHater and JPelirrojo. Some influencer profiles may be more controversial than others, or may elicit more negative emotions from their followers. Therefore, it could be interesting to consider the target profile as a source of information in offensive language detection systems. Inter-annotator agreement on the three annotators subset was measured with Cohen's kappa coefficient. The k value is 0.3579 (fair agreement), which is quite low and reflects how difficult it is for humans to agree on the proposed categories. By analyzing annotations on tracking comments, we found that it was a common mistake to label a comment NOE or OFG when it should have been labeled OFO. Figure 3 shows the percentage of consensus per label in the 3-Ann subset, taking the majority vote as consensus (2-annotator agreement and 3-annotator agreement). As can be noticed, the label OFO exhibits the lowest consensus rate, with all three annotators agreeing only 33.72% of the time. We found that many OFO comments were wrongly annotated with the NOE label; this is actually reasonable, since these offenses are not directly targeted at persons or groups and often consist of expletive expressions. Thus, we decided to merge them. After merging the OFO label into the NOE label, the kappa value increases slightly to 0.3837. 
Another feature we analyzed is the lexical diversity of the comments. To this end, we use the MTLD metric already introduced, which gives us insight into lexical variation while avoiding biases due to different text lengths. Table 5 shows the average MTLD values for comments over labels and platforms, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 84,
"end": 92,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 1275,
"end": 1283,
"text": "Figure 3",
"ref_id": null
},
{
"start": 2167,
"end": 2174,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Corpus Analysis",
"sec_num": "3.5"
},
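The kappa computation and label merge described above can be sketched with a small stdlib-only implementation. The annotator labels below are illustrative toy data, not drawn from OffendES.

```python
from collections import Counter

def cohens_kappa(ann_a, ann_b):
    """Cohen's kappa for two annotators labeling the same items."""
    assert len(ann_a) == len(ann_b) and ann_a
    n = len(ann_a)
    # Observed agreement: fraction of items where both annotators agree.
    p_o = sum(a == b for a, b in zip(ann_a, ann_b)) / n
    # Expected agreement from each annotator's label marginals.
    freq_a, freq_b = Counter(ann_a), Counter(ann_b)
    p_e = sum((freq_a[l] / n) * (freq_b[l] / n)
              for l in set(ann_a) | set(ann_b))
    return (p_o - p_e) / (1 - p_e)

def merge(labels, src, dst):
    """Remap one label into another (as done with OFO -> NOE)."""
    return [dst if l == src else l for l in labels]
```

After remapping OFO to NOE in both annotators' label lists with `merge`, the same `cohens_kappa` call yields the post-merge agreement.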
{
"text": "As can be noticed, offensive comments targeted to a person (OFP) have low lexical diversity, as well as for those with expletive language (NOE). When the comment is not offensive at all, the lexical diversity is clearly higher. networks, we would expect the lowest value of diversity in Twitter, as it limits comment length. On the contrary, Twitter is the platform with the highest lexical diversity, followed by YouTube. Instagram is clearly much poorer in terms of the diversity of vocabulary used. These findings are worth exploring, as they could provide more understanding of how language is used across platforms and how it relates to harmful language use, or on the average profile of their communities. To understand MTLD values, we have to consider that a value of 50 is the average lexical diversity of texts for an average adult text (being 80 for academic writings).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Analysis",
"sec_num": "3.5"
},
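A simplified, forward-only sketch of the MTLD computation (McCarthy and Jarvis, 2010) is shown below; the published metric averages a forward and a backward pass, and 0.72 is the standard TTR threshold. Higher values indicate greater lexical diversity.

```python
def mtld_forward(tokens, ttr_threshold=0.72):
    """Forward-pass MTLD: segment the token stream into 'factors'
    (segments whose type-token ratio drops below the threshold),
    then divide the total token count by the factor count."""
    factors = 0.0
    types, count = set(), 0
    ttr = 1.0
    for tok in tokens:
        count += 1
        types.add(tok)
        ttr = len(types) / count
        if ttr < ttr_threshold:
            factors += 1          # completed factor
            types, count = set(), 0
    if count > 0:
        # Partial credit for the unfinished final segment.
        factors += (1 - ttr) / (1 - ttr_threshold)
    # Guard: fully unique token streams never complete a factor.
    return len(tokens) / factors if factors > 0 else float(len(tokens))
```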
{
"text": "In order to establish a baseline for the OffendES corpus, we conducted experiments based on three different approaches:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline System",
"sec_num": "4"
},
{
"text": "Simple majority class model. Our simplest classifier assigns the majority class of the training set, i.e., the NO class, to each instance in the test set. This results in accuracy values of 58.78% and 64.85% respectively for 3-Ann and 10-Ann subsets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline System",
"sec_num": "4"
},
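The majority-class baseline amounts to predicting the most frequent training label everywhere; a minimal sketch with toy labels (not the corpus data):

```python
from collections import Counter

def majority_baseline(train_labels, test_labels):
    """Predict the most frequent training label for every test
    instance and return (majority_label, accuracy)."""
    majority = Counter(train_labels).most_common(1)[0][0]
    correct = sum(label == majority for label in test_labels)
    return majority, correct / len(test_labels)
```

On the corpus itself, the majority label is NO, so the reported accuracies equal the share of NO comments in each test split.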
{
"text": "Lexicon-based model. We also developed a lexicon-based approach using the lexical resources described in Section 3.2. In this approach, we only consider a binary classification scenario: whether the comment is offensive or not. For the 3-Ann subset, we obtained 67.13% of accuracy, 21.27% precision, 83.78% recall, and 33.93% F1. For the 10-Ann subset, the values of accuracy, precision, recall and F1 were, respectively, 71.45%, 35.59%, and 81.60%, 49.56%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline System",
"sec_num": "4"
},
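A lexicon-based binary classifier of the kind described can be sketched as follows, together with the precision/recall/F1 computation used to report it. The tiny lexicon and comments are made-up examples, not the resources from Section 3.2.

```python
def lexicon_classify(comment, lexicon):
    """Flag a comment as offensive (1) if any token appears in the lexicon."""
    return int(any(tok in lexicon for tok in comment.lower().split()))

def binary_scores(gold, pred):
    """Precision, recall, and F1 with 'offensive' (1) as the positive class."""
    tp = sum(g == 1 and p == 1 for g, p in zip(gold, pred))
    fp = sum(g == 0 and p == 1 for g, p in zip(gold, pred))
    fn = sum(g == 1 and p == 0 for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

The high-recall, low-precision pattern reported above is typical of this approach: a match against the lexicon catches most offensive comments but also fires on non-offensive uses of the same words.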
{
"text": "Transformer-based model. Finally, we experimented with a Spanish pre-trained BERT model called BETO (Canete et al., 2020) which has shown promising results in offensive language detection tasks . Details about different configurations of the BETO model and the training process are given in the Appendix. In order to evaluate the model, we sampled from the collection two different sets, for training and evaluation. Measures used to report performance are Precision (P), Recall (R), and F1-score (F1) at class level, and macro and weighted average of these metrics. For the multi-output regression task, since we are not dealing with a multi-class scenario, we used one of the most preferred metrics for regression tasks, the mean squared error (MSE), a risk metric corresponding to the expected value of the squared (quadratic) error or loss.",
"cite_spans": [
{
"start": 100,
"end": 121,
"text": "(Canete et al., 2020)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline System",
"sec_num": "4"
},
{
"text": "This experiment is performed on the 3-Ann subset. All entries labeled as None were discarded (as no final label was assigned to these comments).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-class classification",
"sec_num": "4.1"
},
{
"text": "The set was split into training (95%) and evaluation (5%) partitions, resulting in 30,079 comments in the training set and 3,343 in the evaluation set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-class classification",
"sec_num": "4.1"
},
{
"text": "Transformers (Wolf et al., 2020) library by Huggingface 11 was used to build the BERT network and the tokenizer from available BETO models (uncased variant). A sequence classifier was implemented for this multi-class task, with a final linear layer with four outputs (the logits for each possible label). Training the model took 2 hours and 26 minutes.",
"cite_spans": [
{
"start": 13,
"end": 32,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-class classification",
"sec_num": "4.1"
},
{
"text": "After seven training epochs, the model was evaluated against the evaluation partition. The results obtained are depicted in Table 6 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 124,
"end": 131,
"text": "Table 6",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Multi-class classification",
"sec_num": "4.1"
},
{
"text": "For every sample, a vector of probabilities is computed by counting the number of annotators that selected each label and dividing by the number of annotators. This provides an estimate of the confidence of each label to be assigned to the comment. Training the model took 48 minutes. The 10-Ann dataset was split into training and validation partitions. After training for seven epochs over a partition of 13,020 samples, the model was evaluated against a partition of 685 test samples, obtaining an MSE of 0.0241.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-output regression with BETO",
"sec_num": "4.3"
},
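The regression targets described above can be sketched as follows: each comment's annotator votes become a probability vector over the labels, and MSE compares a predicted vector with that target. The label order and votes below are illustrative.

```python
LABELS = ["NO", "OFP", "OFG", "NOE"]  # label order assumed for illustration

def label_distribution(votes, labels=LABELS):
    """Turn raw annotator votes into a per-label probability vector."""
    return [votes.count(l) / len(votes) for l in labels]

def mse(pred, target):
    """Mean squared error between two equal-length vectors."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
```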
{
"text": "One of the main characteristics of the corpus is its imbalance at all levels: comments are not uniformly distributed across labels, influencers, or social platforms. The corpus size allows for stratified random sampling over those dimensions, but we considered that releasing the full set of comments is the best choice to allow researchers to decide on how to prepare their experiments. That is also the reason why comments with None class have been kept in the corpus, so different studies on the use of language within groups of young users of social networks can be conducted. Also, the None label is of interest by itself, as it reflects the absence of consensus in determining the nature of the comment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Results show that deep learning models, like BERT, are good estimators of the presence of different kinds of offensive language, but that it is still a challenging task to decide whether a comment is directed to a person or not (so cyber-bullying risk could be measured). Despite the fusion of NOE and OFO categories, precision values for all labels different from NO are low.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "In this paper, we described OffendES: the first large-scale Spanish dataset of user comments on influencer posts from Instagram, YouTube and Twitter. It consists of 47,128 comments manually labeled for offensive content using a fine-grained annotation scheme. A subset of the corpus (10-Ann) assigns a confidence degree allowing both multi-class classification and multi-output regression studies. Additionally, a preliminary analysis of offensive behavior in social media and its relationship with the selected influencers is presented. Finally, baselines experiments have been performed, showing the validity of the corpus as well as the difficulty of the task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "A number of challenges remain open. On the one hand, we plan to explore systems trained on OffendES to monitor offensive messages in online channels participated by young people. On the other hand, the gender of the commenters and the subject of the comments have been left out for deeper analysis, so further research could be shed light on these matters. Finally, we believe that this dataset enables future work in the NLP community to tackle these interesting issues regarding Spanish language. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "El que llora siempre en sus videos por haber sido acosado para dar pena ahora acosa a gente... pat\u00e9tico.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "The one who always cries in his videos for having been harassed to get pity now harasses people... pathetic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Twitter OFP",
"sec_num": null
},
{
"text": "3 El feminismo es c\u00e1ncer y las feministas son mierda. Youtube OFG Feminism is cancer and feminists are shit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Twitter OFP",
"sec_num": null
},
{
"text": "4 Yo estoy de puta madre en casa... yo nac\u00ed en cuarentena.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Twitter OFP",
"sec_num": null
},
{
"text": "Youtube NOE I'm doing fucking great at home... I was born in quarantine.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Twitter OFP",
"sec_num": null
},
{
"text": "Si pudiera viajar. Bueno iria a italia. Que tengas un buen dia saludos desde Buenos Aires, Argentina.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "5",
"sec_num": null
},
{
"text": "If I could travel. Well I would go to Italy. Have a nice day. Greetings from Buenos Aires, Argentina. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instagram NO",
"sec_num": null
},
{
"text": "https://cutt.ly/sj5EdJ7 2 https://cutt.ly/Hj5EsAh 3 https://cutt.ly/ej5EgU7 4 NOTE: This paper contains examples of potentially ex-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://cutt.ly/RkrVTQn 6 https://cutt.ly/1j5Eut0 7 https://cutt.ly/yj5Eijc 8 https://cutt.ly/kj5Eo2d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://cutt.ly/JkrVSYv",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.mturk.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://huggingface.co",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Michael Wiegand and Melanie Siegel. 2018. Overview of the germeval 2018 shared task on the identification of offensive language. In Proceedings of KON-VENS 2018.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work has been partially supported by a grant from European Regional Development Fund (FEDER), the LIVING-LANG project [RTI2018-094653-B-C21], and the Ministry of Science, Innovation and Universities (scholarship [FPI-PRE2019-089310]) from the Spanish Government.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "A.1 Model settings Hyper-parameters. In the experiments with Transformer the hyper-parameters used for finetuning BETO are specified in Table 1 . In the multioutput regression task the hyper-parameters are the same, except for the loss function, which is replaced by mean squared error loss, as it is a regression problem.All experiments (training and evaluation) were performed on a node equipped with two Intel Xeon Silver 4208 CPU at 2.10GHz, 192GB RAM, as main processors, and six GPUs NVIDIA GeForce RTX 2080Ti (with 11GB each). Table 2 shows the accounts used by the selected influencers in the three selected media: Instagram, Twitter, and Youtube. Table 3 shows examples of labeled comments in the OffendES dataset by social network.",
"cite_spans": [],
"ref_spans": [
{
"start": 136,
"end": 143,
"text": "Table 1",
"ref_id": null
},
{
"start": 534,
"end": 541,
"text": "Table 2",
"ref_id": null
},
{
"start": 656,
"end": 663,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Challenges for toxic comment classification: An in-depth error analysis",
"authors": [
{
"first": "Julian",
"middle": [],
"last": "Betty Van Aken",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Risch",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Krestel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "L\u00f6ser",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd Workshop on Abusive Language Online (ALW2)",
"volume": "",
"issue": "",
"pages": "33--42",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5105"
]
},
"num": null,
"urls": [],
"raw_text": "Betty van Aken, Julian Risch, Ralf Krestel, and Alexan- der L\u00f6ser. 2018. Challenges for toxic comment clas- sification: An in-depth error analysis. In Proceed- ings of the 2nd Workshop on Abusive Language On- line (ALW2), pages 33-42, Brussels, Belgium. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Overview of MEX-A3T at iberlef 2019: Authorship and aggressiveness analysis in mexican spanish tweets",
"authors": [
{
"first": "Mario",
"middle": [],
"last": "Ezra Arag\u00f3n",
"suffix": ""
},
{
"first": "Miguel\u00e1ngel\u00e1lvarez",
"middle": [],
"last": "Carmona",
"suffix": ""
},
{
"first": "Manuel",
"middle": [],
"last": "Montes-Y-G\u00f3mez",
"suffix": ""
},
{
"first": "Hugo",
"middle": [
"Jair"
],
"last": "Escalante",
"suffix": ""
},
{
"first": "Luis",
"middle": [
"Villase\u00f1or"
],
"last": "Pineda",
"suffix": ""
},
{
"first": "Daniela",
"middle": [],
"last": "Moctezuma",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Iberian Languages Evaluation Forum co-located with 35th Conference of the Spanish Society for Natural Language Processing, IberLEF@SEPLN 2019",
"volume": "2421",
"issue": "",
"pages": "478--494",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mario Ezra Arag\u00f3n, Miguel\u00c1ngel\u00c1lvarez Carmona, Manuel Montes-y-G\u00f3mez, Hugo Jair Escalante, Luis Villase\u00f1or Pineda, and Daniela Moctezuma. 2019. Overview of MEX-A3T at iberlef 2019: Authorship and aggressiveness analysis in mexican spanish tweets. In Proceedings of the Iberian Lan- guages Evaluation Forum co-located with 35th Con- ference of the Spanish Society for Natural Language Processing, IberLEF@SEPLN 2019, Bilbao, Spain, September 24th, 2019, volume 2421 of CEUR Work- shop Proceedings, pages 478-494. CEUR-WS.org.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "SemEval-2019 task 5: Multilingual detection of hate speech against immigrants and women in Twitter",
"authors": [
{
"first": "Valerio",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Cristina",
"middle": [],
"last": "Bosco",
"suffix": ""
},
{
"first": "Elisabetta",
"middle": [],
"last": "Fersini",
"suffix": ""
},
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Patti",
"suffix": ""
},
{
"first": "Francisco Manuel Rangel",
"middle": [],
"last": "Pardo",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
},
{
"first": "Manuela",
"middle": [],
"last": "Sanguinetti",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "54--63",
"other_ids": {
"DOI": [
"10.18653/v1/S19-2007"
]
},
"num": null,
"urls": [],
"raw_text": "Valerio Basile, Cristina Bosco, Elisabetta Fersini, Debora Nozza, Viviana Patti, Francisco Manuel Rangel Pardo, Paolo Rosso, and Manuela San- guinetti. 2019. SemEval-2019 task 5: Multilin- gual detection of hate speech against immigrants and women in Twitter. In Proceedings of the 13th Inter- national Workshop on Semantic Evaluation, pages 54-63, Minneapolis, Minnesota, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Spanish pre-trained bert model and evaluation data",
"authors": [
{
"first": "Jos\u00e9",
"middle": [],
"last": "Canete",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Chaperon",
"suffix": ""
},
{
"first": "Rodrigo",
"middle": [],
"last": "Fuentes",
"suffix": ""
},
{
"first": "Jorge",
"middle": [],
"last": "P\u00e9rez",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jos\u00e9 Canete, Gabriel Chaperon, Rodrigo Fuentes, and Jorge P\u00e9rez. 2020. Spanish pre-trained bert model and evaluation data. PML4DC at ICLR, 2020.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Overview of MEX-A3T at ibereval 2018: Authorship and aggressiveness analysis in mexican spanish tweets",
"authors": [
{
"first": "Miguel\u00e1ngel\u00e1lvarez",
"middle": [],
"last": "Carmona",
"suffix": ""
},
{
"first": "Estefan\u00eda",
"middle": [],
"last": "Guzm\u00e1n-Falc\u00f3n",
"suffix": ""
},
{
"first": "Manuel",
"middle": [],
"last": "Montes-Y-G\u00f3mez",
"suffix": ""
},
{
"first": "Hugo",
"middle": [
"Jair"
],
"last": "Escalante",
"suffix": ""
},
{
"first": "Luis",
"middle": [
"Villase\u00f1or"
],
"last": "Pineda",
"suffix": ""
},
{
"first": "Ver\u00f3nica",
"middle": [],
"last": "Reyes-Meza",
"suffix": ""
},
{
"first": "Antonio",
"middle": [
"Rico"
],
"last": "Sulayes",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Workshop on Evaluation of Human Language Technologies for Iberian Languages (IberEval 2018) co-located with 34th Conference of the Spanish Society for Natural Language Processing",
"volume": "2150",
"issue": "",
"pages": "74--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miguel\u00c1ngel\u00c1lvarez Carmona, Estefan\u00eda Guzm\u00e1n- Falc\u00f3n, Manuel Montes-y-G\u00f3mez, Hugo Jair Es- calante, Luis Villase\u00f1or Pineda, Ver\u00f3nica Reyes- Meza, and Antonio Rico Sulayes. 2018. Overview of MEX-A3T at ibereval 2018: Authorship and ag- gressiveness analysis in mexican spanish tweets. In Proceedings of the Third Workshop on Evaluation of Human Language Technologies for Iberian Lan- guages (IberEval 2018) co-located with 34th Con- ference of the Spanish Society for Natural Language Processing (SEPLN 2018), Sevilla, Spain, Septem- ber 18th, 2018, volume 2150 of CEUR Workshop Proceedings, pages 74-96. CEUR-WS.org.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Fbk-dh at semeval-2020 task 12: Using multi-channel bert for multilingual offensive language detection",
"authors": [
{
"first": "Camilla",
"middle": [],
"last": "Casula",
"suffix": ""
},
{
"first": "Alessio",
"middle": [],
"last": "Palmero Aprosio",
"suffix": ""
},
{
"first": "Stefano",
"middle": [],
"last": "Menini",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Tonelli",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fourteenth Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "1539--1545",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Camilla Casula, Alessio Palmero Aprosio, Stefano Menini, and Sara Tonelli. 2020. Fbk-dh at semeval- 2020 task 12: Using multi-channel bert for multilin- gual offensive language detection. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1539-1545.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Detecting offensive language in social media to protect adolescent online safety",
"authors": [
{
"first": "Ying",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yilu",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Sencun",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2012,
"venue": "International Confernece on Social Computing",
"volume": "",
"issue": "",
"pages": "71--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ying Chen, Yilu Zhou, Sencun Zhu, and Heng Xu. 2012. Detecting offensive language in social media to protect adolescent online safety. In 2012 Inter- national Conference on Privacy, Security, Risk and Trust and 2012 International Confernece on Social Computing, pages 71-80. IEEE.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Automatic detection of offensive language in social media: Defining linguistic criteria to build a Mexican Spanish dataset",
"authors": [
{
"first": "Mar\u00eda Jos\u00e9",
"middle": [],
"last": "D\u00edaz-Torres",
"suffix": ""
},
{
"first": "Paulina",
"middle": [
"Alejandra"
],
"last": "Mor\u00e1n-M\u00e9ndez",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Villasenor-Pineda",
"suffix": ""
},
{
"first": "Manuel",
"middle": [
"Montesy"
],
"last": "G\u00f3mez",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Aguilera",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Meneses-Ler\u00edn",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying",
"volume": "",
"issue": "",
"pages": "132--136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mar\u00eda Jos\u00e9 D\u00edaz-Torres, Paulina Alejandra Mor\u00e1n- M\u00e9ndez, Luis Villasenor-Pineda, Manuel Montes- y G\u00f3mez, Juan Aguilera, and Luis Meneses-Ler\u00edn. 2020. Automatic detection of offensive language in social media: Defining linguistic criteria to build a Mexican Spanish dataset. In Proceedings of the Second Workshop on Trolling, Aggression and Cy- berbullying, pages 132-136, Marseille, France. Eu- ropean Language Resources Association (ELRA).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Overview of the task on automatic misogyny identification at ibereval",
"authors": [
{
"first": "E",
"middle": [],
"last": "Fersini",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Rosso",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Anzovino",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Fersini, P. Rosso, and Maria Anzovino. 2018. Overview of the task on automatic misogyny iden- tification at ibereval 2018. In IberEval@SEPLN.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Convolutional neural networks for toxic comment classification",
"authors": [
{
"first": "",
"middle": [],
"last": "Spiros V Georgakopoulos",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Sotiris",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tasoulis",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Aristidis",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vrahatis",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Vassilis",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Plagianakos",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 10th Hellenic Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Spiros V Georgakopoulos, Sotiris K Tasoulis, Aris- tidis G Vrahatis, and Vassilis P Plagianakos. 2018. Convolutional neural networks for toxic comment classification. In Proceedings of the 10th Hellenic Conference on Artificial Intelligence, pages 1-6.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Bullying, cyberbullying, and suicide. Archives of suicide research",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Hinduja",
"suffix": ""
},
{
"first": "Justin",
"middle": [
"W"
],
"last": "Patchin",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "14",
"issue": "",
"pages": "206--221",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Hinduja and Justin W Patchin. 2010. Bully- ing, cyberbullying, and suicide. Archives of suicide research, 14(3):206-221.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Social media demographics for marketers",
"authors": [
{
"first": "Jenn",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jenn Chen. 2020. 2020 Social media demographics for marketers. https: //sproutsocial.com/insights/ new-social-media-demographics/.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Benchmarking aggression identification in social media",
"authors": [
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Atul",
"middle": [],
"last": "Kr Ojha",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018)",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ritesh Kumar, Atul Kr Ojha, Shervin Malmasi, and Marcos Zampieri. 2018. Benchmarking aggression identification in social media. In Proceedings of the First Workshop on Trolling, Aggression and Cyber- bullying (TRAC-2018), pages 1-11.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Mtld, vocdd, and hd-d: A validation study of sophisticated approaches to lexical diversity assessment",
"authors": [
{
"first": "M",
"middle": [],
"last": "Philip",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jarvis",
"suffix": ""
}
],
"year": 2010,
"venue": "Behavior research methods",
"volume": "42",
"issue": "",
"pages": "381--392",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip M McCarthy and Scott Jarvis. 2010. Mtld, vocd- d, and hd-d: A validation study of sophisticated ap- proaches to lexical diversity assessment. Behavior research methods, 42(2):381-392.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Abusive language detection in online user content",
"authors": [
{
"first": "Chikashi",
"middle": [],
"last": "Nobata",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Tetreault",
"suffix": ""
},
{
"first": "Achint",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Yashar",
"middle": [],
"last": "Mehdad",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 25th international conference on world wide web",
"volume": "",
"issue": "",
"pages": "145--153",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chikashi Nobata, Joel Tetreault, Achint Thomas, Yashar Mehdad, and Yi Chang. 2016. Abusive lan- guage detection in online user content. In Proceed- ings of the 25th international conference on world wide web, pages 145-153.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Aggressive language identification using word embeddings and sentiment features",
"authors": [
{
"first": "Constantin",
"middle": [],
"last": "Or\u0203san",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018)",
"volume": "",
"issue": "",
"pages": "113--119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Constantin Or\u0203san. 2018. Aggressive language iden- tification using word embeddings and sentiment features. In Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC- 2018), pages 113-119, Santa Fe, New Mexico, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Detecting and monitoring hate speech in twitter",
"authors": [
{
"first": "Juan",
"middle": [],
"last": "Carlos Pereira-Kohatsu",
"suffix": ""
},
{
"first": "Lara",
"middle": [],
"last": "Quijano-S\u00e1nchez",
"suffix": ""
},
{
"first": "Federico",
"middle": [],
"last": "Liberatore",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
}
],
"year": 2019,
"venue": "Sensors",
"volume": "19",
"issue": "21",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juan Carlos Pereira-Kohatsu, Lara Quijano-S\u00e1nchez, Federico Liberatore, and Miguel Camacho-Collados. 2019. Detecting and monitoring hate speech in twit- ter. Sensors, 19(21):4654.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "SINAI at SemEval-2019 task 6: Incorporating lexicon knowledge into SVM learning to identify and categorize offensive language in social media",
"authors": [
{
"first": "Flor",
"middle": [
"Miriam"
],
"last": "Plaza-Del-Arco",
"suffix": ""
},
{
"first": "M",
"middle": [
"Dolores"
],
"last": "Molina-Gonz\u00e1lez",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Teresa Mart\u00edn-Valdivia",
"suffix": ""
},
{
"first": "L",
"middle": [
"Alfonso"
],
"last": "Ure\u00f1a-L\u00f3pez",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "735--738",
"other_ids": {
"DOI": [
"10.18653/v1/S19-2129"
]
},
"num": null,
"urls": [],
"raw_text": "Flor Miriam Plaza-del-Arco, M. Dolores Molina- Gonz\u00e1lez, M. Teresa Mart\u00edn-Valdivia, and L. Al- fonso Ure\u00f1a-L\u00f3pez. 2019. SINAI at SemEval- 2019 task 6: Incorporating lexicon knowledge into SVM learning to identify and categorize offensive language in social media. In Proceedings of the 13th International Workshop on Semantic Evalua- tion, pages 735-738, Minneapolis, Minnesota, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Comparing pre-trained language models for spanish hate speech detection. Expert Systems with Applications",
"authors": [
{
"first": "Flor",
"middle": [
"Miriam"
],
"last": "Plaza-Del-Arco",
"suffix": ""
},
{
"first": "M",
"middle": [
"Dolores"
],
"last": "Molina-Gonz\u00e1lez",
"suffix": ""
},
{
"first": "L",
"middle": [
"Alfonso"
],
"last": "Ure\u00f1a-L\u00f3pez",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Teresa Mart\u00edn-Valdivia",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "166",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Flor Miriam Plaza-del-Arco, M. Dolores Molina- Gonz\u00e1lez, L. Alfonso Ure\u00f1a-L\u00f3pez, and M. Teresa Mart\u00edn-Valdivia. 2020. Comparing pre-trained lan- guage models for spanish hate speech detection. Ex- pert Systems with Applications, 166:114120.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Detecting misogyny and xenophobia in spanish tweets using language technologies",
"authors": [
{
"first": "Flor-Miriam",
"middle": [],
"last": "Plaza-Del-Arco",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Molina-Gonz\u00e1lez",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dolores",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Ure\u00f1a-L\u00f3pez",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Alfonso",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mart\u00edn-Valdivia",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Teresa",
"suffix": ""
}
],
"year": 2020,
"venue": "ACM Transactions on Internet Technology (TOIT)",
"volume": "20",
"issue": "2",
"pages": "1--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Plaza-del-Arco, Flor-Miriam and Molina-Gonz\u00e1lez, M Dolores and Ure\u00f1a-L\u00f3pez, L Alfonso and Mart\u00edn- Valdivia, M. Teresa. 2020. Detecting misogyny and xenophobia in spanish tweets using language tech- nologies. ACM Transactions on Internet Technology (TOIT), 20(2):1-19.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "SINAI at SemEval-2020 task 12: Offensive language identification exploring transfer learning models",
"authors": [
{
"first": "Flor",
"middle": [
"Miriam"
],
"last": "Plaza-Del-Arco",
"suffix": ""
},
{
"first": "M",
"middle": [
"Dolores"
],
"last": "Molina-Gonz\u00e1lez",
"suffix": ""
},
{
"first": "L",
"middle": [
"Alfonso"
],
"last": "Ure\u00f1a-L\u00f3pez",
"suffix": ""
},
{
"first": "Mar\u00eda-Teresa",
"middle": [],
"last": "Mart\u00edn-Valdivia",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fourteenth Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "1622--1627",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Plaza-del-Arco, Flor Miriam and Molina-Gonz\u00e1lez, M. Dolores and Ure\u00f1a-L\u00f3pez, L. Alfonso and Mart\u00edn- Valdivia, Mar\u00eda-Teresa. 2020. SINAI at SemEval- 2020 task 12: Offensive language identification ex- ploring transfer learning models. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1622-1627, Barcelona (online). International Committee for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "EmoEvent: A multilingual emotion corpus based on different events",
"authors": [
{
"first": "Flor",
"middle": [
"Miriam"
],
"last": "Plaza-Del-Arco",
"suffix": ""
},
{
"first": "Carlo",
"middle": [],
"last": "Strapparava",
"suffix": ""
},
{
"first": "L",
"middle": [
"Alfonso"
],
"last": "Ure\u00f1a L\u00f3pez",
"suffix": ""
},
{
"first": "Mar\u00eda-Teresa",
"middle": [],
"last": "Mart\u00edn-Valdivia",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "1492--1498",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Plaza-del-Arco, Flor Miriam and Strapparava, Carlo and Ure\u00f1a L\u00f3pez, L. Alfonso and Mart\u00edn-Valdivia, Mar\u00eda-Teresa. 2020. EmoEvent: A multilingual emotion corpus based on different events. In Pro- ceedings of the 12th Language Resources and Eval- uation Conference, pages 1492-1498, Marseille, France. European Language Resources Association.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Brums at hasoc 2019: Deep learning models for multilingual hate speech and offensive language identification",
"authors": [
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Hansi",
"middle": [],
"last": "Hettiarachchi",
"suffix": ""
}
],
"year": 2019,
"venue": "FIRE (Working Notes)",
"volume": "",
"issue": "",
"pages": "199--207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tharindu Ranasinghe, Marcos Zampieri, and Hansi Hettiarachchi. 2019. Brums at hasoc 2019: Deep learning models for multilingual hate speech and of- fensive language identification. In FIRE (Working Notes), pages 199-207.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Automatic cyberbullying detection: A systematic review",
"authors": [
{
"first": "Hugo",
"middle": [],
"last": "Rosa",
"suffix": ""
},
{
"first": "N\u00e1dia",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "Ricardo",
"middle": [],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "Paula",
"middle": [
"Costa"
],
"last": "Ferreira",
"suffix": ""
},
{
"first": "Jo\u00e3o",
"middle": [
"Paulo"
],
"last": "Carvalho",
"suffix": ""
},
{
"first": "Sofia",
"middle": [],
"last": "Oliveira",
"suffix": ""
},
{
"first": "Lu\u00edsa",
"middle": [],
"last": "Coheur",
"suffix": ""
},
{
"first": "Paula",
"middle": [],
"last": "Paulino",
"suffix": ""
},
{
"first": "A",
"middle": [
"M"
],
"last": "Veiga Sim\u00e3o",
"suffix": ""
},
{
"first": "Isabel",
"middle": [],
"last": "Trancoso",
"suffix": ""
}
],
"year": 2019,
"venue": "Computers in Human Behavior",
"volume": "93",
"issue": "",
"pages": "333--345",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hugo Rosa, N\u00e1dia Pereira, Ricardo Ribeiro, Paula Costa Ferreira, Jo\u00e3o Paulo Carvalho, Sofia Oliveira, Lu\u00edsa Coheur, Paula Paulino, AM Veiga Sim\u00e3o, and Isabel Trancoso. 2019. Automatic cyberbullying detection: A systematic review. Computers in Human Behavior, 93:333-345.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Using attention-based bidirectional lstm to identify different categories of offensive language directed toward female celebrities",
"authors": [
{
"first": "Sima",
"middle": [],
"last": "Sharifirad",
"suffix": ""
},
{
"first": "Stan",
"middle": [],
"last": "Matwin",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Workshop on Widening NLP",
"volume": "",
"issue": "",
"pages": "46--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sima Sharifirad and Stan Matwin. 2019. Using attention-based bidirectional lstm to identify differ- ent categories of offensive language directed toward female celebrities. In Proceedings of the 2019 Work- shop on Widening NLP, pages 46-48.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The YouTube Team. 2019. More updates on our actions related to the safety of minors on YouTube",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "2020--2021",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "The YouTube Team. 2019. More updates on our actions related to the safety of minors on YouTube. http://web.archive.org/web/ 20080207010024. Accessed: 2020-01-10.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Hateful symbols or hateful people? predictive features for hate speech detection on twitter",
"authors": [
{
"first": "Zeerak",
"middle": [],
"last": "Waseem",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the NAACL student research workshop",
"volume": "",
"issue": "",
"pages": "88--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeerak Waseem and Dirk Hovy. 2016. Hateful sym- bols or hateful people? predictive features for hate speech detection on twitter. In Proceedings of the NAACL student research workshop, pages 88-93.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Uhh-lt at semeval-2020 task 12: Fine-tuning of pre-trained transformer networks for offensive language detection",
"authors": [
{
"first": "Gregor",
"middle": [],
"last": "Wiedemann",
"suffix": ""
},
{
"first": "Seid",
"middle": [
"Muhie"
],
"last": "Yimam",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fourteenth Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "1638--1644",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gregor Wiedemann, Seid Muhie Yimam, and Chris Biemann. 2020. Uhh-lt at semeval-2020 task 12: Fine-tuning of pre-trained transformer networks for offensive language detection. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1638-1644.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Detection of abusive language: the problem of biased datasets",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Wiegand",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Kleinbauer",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "602--608",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Wiegand, Josef Ruppenhofer, and Thomas Kleinbauer. 2019. Detection of abusive language: the problem of biased datasets. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 602-608.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Transformers: State-of-theart natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Julien Chaumond, Lysandre Debut, Vic- tor Sanh, Clement Delangue, Anthony Moi, Pier- ric Cistac, Morgan Funtowicz, Joe Davison, Sam Shleifer, et al. 2020. Transformers: State-of-the- art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Nat- ural Language Processing: System Demonstrations, pages 38-45.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Predicting the type and target of offensive posts in social media",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Noura",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1415--1420",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1144"
]
},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019a. Predicting the type and target of offensive posts in social media. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1415-1420, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "SemEval-2019 task 6: Identifying and categorizing offensive language in social media (Of-fensEval)",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Noura",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "75--86",
"other_ids": {
"DOI": [
"10.18653/v1/S19-2010"
]
},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019b. SemEval-2019 task 6: Identifying and cat- egorizing offensive language in social media (Of- fensEval). In Proceedings of the 13th Interna- tional Workshop on Semantic Evaluation, pages 75- 86, Minneapolis, Minnesota, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "SemEval-2020 task 12: Multilingual offensive language identification in social media (Offen-sEval 2020)",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Pepa",
"middle": [],
"last": "Atanasova",
"suffix": ""
},
{
"first": "Georgi",
"middle": [],
"last": "Karadzhov",
"suffix": ""
},
{
"first": "Hamdy",
"middle": [],
"last": "Mubarak",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Derczynski",
"suffix": ""
},
{
"first": "Zeses",
"middle": [],
"last": "Pitenis",
"suffix": ""
},
{
"first": "\u00c7a\u011fr\u0131",
"middle": [],
"last": "\u00c7\u00f6ltekin",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fourteenth Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "1425--1447",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Preslav Nakov, Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Hamdy Mubarak, Leon Derczynski, Zeses Pitenis, and \u00c7 agr\u0131 \u00c7\u00f6ltekin. 2020. SemEval-2020 task 12: Multilingual offen- sive language identification in social media (Offen- sEval 2020). In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1425- 1447, Barcelona (online). International Committee for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"text": "Comments distribution by influencer and social media platform in the 3-Ann subset.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF2": {
"text": "Distribution of labels per influencer in the OffendES dataset.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF3": {
"text": "Percentage of consensus per label. Percentage of consensus per label after including OFO label into NOE.",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF1": {
"num": null,
"text": "Presence of offensive terms from lexicons in the retrieve comments.",
"type_str": "table",
"html": null,
"content": "<table/>"
},
"TABREF3": {
"num": null,
"text": "Comments per social media and influencer in the OffendES dataset.",
"type_str": "table",
"html": null,
"content": "<table><tr><td colspan=\"3\">Label 3-Ann 10-Ann</td></tr><tr><td>NO</td><td>26,425</td><td>9,715</td></tr><tr><td>OFP</td><td>4,102</td><td>2,362</td></tr><tr><td>NOE</td><td>2,470</td><td>1,414</td></tr><tr><td colspan=\"2\">None 11,529</td><td>1,283</td></tr><tr><td>OFG</td><td>425</td><td>215</td></tr></table>"
},
"TABREF4": {
"num": null,
"text": "Comments per label in the OffendES dataset. Average Std. dev. Min. Max.",
"type_str": "table",
"html": null,
"content": "<table><tr><td>(3-Ann subset)</td><td colspan=\"4\">Average Std. dev. Min. Max.</td></tr><tr><td>YouTube</td><td>189</td><td>247</td><td colspan=\"2\">3 9,986</td></tr><tr><td>Twitter</td><td>149</td><td>75</td><td>4</td><td>413</td></tr><tr><td>Instagram</td><td>114</td><td>124</td><td colspan=\"2\">3 2,200</td></tr><tr><td>(10-Ann subset) YouTube</td><td>191</td><td>277</td><td colspan=\"2\">4 9,812</td></tr><tr><td>Twitter</td><td>150</td><td>74</td><td>5</td><td>292</td></tr><tr><td>Instagram</td><td>113</td><td>115</td><td colspan=\"2\">3 1,631</td></tr></table>"
},
"TABREF5": {
"num": null,
"text": "Statistics over comments length.",
"type_str": "table",
"html": null,
"content": "<table/>"
},
"TABREF8": {
"num": null,
"text": "Average values of measures of lexical textual comments diversity per social network and label.",
"type_str": "table",
"html": null,
"content": "<table/>"
},
"TABREF10": {
"num": null,
"text": "Multiclass experiment results. Non-offensive, which comprises labels NO and NOE, and Offensive, combining OFP and OFG labels. This results in 28,895 nonoffensive comments and 4,527 offensive comments. Training the model took 2 hours and 16 minutes. The results obtained are depicted inTable 7.",
"type_str": "table",
"html": null,
"content": "<table><tr><td>4.2 Binary classification with BETO</td></tr><tr><td>Same configuration as the previous model, but us-</td></tr><tr><td>ing non-weighted cross-entropy as loss function</td></tr><tr><td>during training. Classes have been merged into two</td></tr></table>"
},
"TABREF11": {
"num": null,
"text": "Binary classification experiment results.",
"type_str": "table",
"html": null,
"content": "<table/>"
},
"TABREF13": {
"num": null,
"text": "Different account identifiers for selected influencers.",
"type_str": "table",
"html": null,
"content": "<table><tr><td>Comment</td></tr></table>"
},
"TABREF14": {
"num": null,
"text": "Examples of comments labeled in OffendES (3-annotators subset), along with English translations.What nonsense. It's an election campaign, of course some of them throw shit at the others.You are an amazing comedian, you always make me smile and forget my problems.",
"type_str": "table",
"html": null,
"content": "<table><tr><td/><td>Comment</td><td colspan=\"6\">Social Network OFP OFG OFO NOE NO</td></tr><tr><td colspan=\"2\">1 Vieja rid\u00edcula.</td><td>Instagram</td><td>1</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td/><td>Ridiculous old woman.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td>Vaya tonter\u00eda. Es campa\u00f1a electoral, eviden-temente unos le tiran mierda a los otros.</td><td>Twitter</td><td>0</td><td>0</td><td>0</td><td>0.7</td><td>0.3</td></tr><tr><td/><td>Eres un c\u00f3mico incre\u00edble siempre consigues</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td>sacarme una sonrisa y se me olvidan las pe-</td><td>Instagram</td><td>0</td><td>0</td><td>0</td><td>0</td><td>1</td></tr><tr><td/><td>nas.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td>Mocosos \"retrasados\", \u00bfa alguien le ha sor-prendido?, creo que no...</td><td>Youtube</td><td>0.1</td><td>0.7</td><td>0</td><td>0</td><td>0.2</td></tr><tr><td/><td>Snotty \"retards\", was anyone surprised? I</td><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>don't think so...</td><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td>Vaya mierda de v\u00eddeo. Deja de hablar sin saber, gracias.</td><td>Youtube</td><td>0.3</td><td>0</td><td>0.5</td><td>0</td><td>0.1</td></tr><tr><td/><td>What a shitty video. Stop talking out of your</td><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>ass, thanks.</td><td/><td/><td/><td/><td/><td/></tr></table>"
},
"TABREF15": {
"num": null,
"text": "Examples of comments labeled in OffendES (10-annotators subset), along with English translations.",
"type_str": "table",
"html": null,
"content": "<table/>"
}
}
}
}