{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:12:07.056227Z"
},
"title": "How Hateful are Movies? A Study and Prediction on Movie Subtitles",
"authors": [
{
"first": "Niklas",
"middle": [],
"last": "Von Boguszewski",
"suffix": "",
"affiliation": {
"laboratory": "Language Technology Group",
"institution": "Universit\u00e4t Hamburg",
"location": {
"country": "Germany"
}
},
"email": "[email protected]"
},
{
"first": "Sana",
"middle": [],
"last": "Moin",
"suffix": "",
"affiliation": {
"laboratory": "Language Technology Group",
"institution": "Universit\u00e4t Hamburg",
"location": {
"country": "Germany"
}
},
"email": "[email protected]"
},
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": "",
"affiliation": {
"laboratory": "Language Technology Group",
"institution": "Universit\u00e4t Hamburg",
"location": {
"country": "Germany"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this research, we investigate techniques to detect hate speech in movies. We introduce a new dataset collected from the subtitles of six movies, where each utterance is annotated either as hate, offensive or normal. We apply transfer learning techniques of domain adaptation and fine-tuning on existing social media datasets, namely from Twitter and Fox News. We evaluate different representations, i.e., Bag of Words (BoW), Bi-directional Long shortterm memory (Bi-LSTM), and Bidirectional Encoder Representations from Transformers (BERT) on 11k movie subtitles. The BERT model obtained the best macro-averaged F1score of 77%. Hence, we show that transfer learning from the social media domain is efficacious in classifying hate and offensive speech in movies through subtitles.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "In this research, we investigate techniques to detect hate speech in movies. We introduce a new dataset collected from the subtitles of six movies, where each utterance is annotated either as hate, offensive or normal. We apply transfer learning techniques of domain adaptation and fine-tuning on existing social media datasets, namely from Twitter and Fox News. We evaluate different representations, i.e., Bag of Words (BoW), Bi-directional Long shortterm memory (Bi-LSTM), and Bidirectional Encoder Representations from Transformers (BERT) on 11k movie subtitles. The BERT model obtained the best macro-averaged F1score of 77%. Hence, we show that transfer learning from the social media domain is efficacious in classifying hate and offensive speech in movies through subtitles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Nowadays, hate speech is becoming a pressing issue and occurs in multiple domains, mostly in the major social media platforms or political speeches. Hate speech is defined as verbal communication that denigrates a person or a community on some characteristics such as race, color, ethnicity, gender, sexual orientation, nationality, or religion (Nockleby et al., 2000; Davidson et al., 2017) . Some examples given by Schmidt and Wiegand (2017) are:",
"cite_spans": [
{
"start": 345,
"end": 368,
"text": "(Nockleby et al., 2000;",
"ref_id": "BIBREF17"
},
{
"start": 369,
"end": 391,
"text": "Davidson et al., 2017)",
"ref_id": "BIBREF5"
},
{
"start": 417,
"end": 443,
"text": "Schmidt and Wiegand (2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Go fucking kill yourself and die already a useless ugly pile of shit scumbag.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 The Jew Faggot Behind The Financial Collapse. * Equal contribution",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Hope one of those bitches falls over and breaks her leg.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Several sensitive comments on social media platforms have led to crime against minorities (Williams et al., 2020) . Hate speech can be considered as an umbrella term that different authors have coined with different names. Xu et al. (2012) ; Hosseinmardi et al. (2015) ; Zhong et al. (2016) referred it by the term cyberbully-ing, while Davidson et al. (2017) used the term offensive language to some expressions that can be strongly impolite, rude or use of vulgar words towards an individual or group that can even ignite fights or be hurtful. Use of words like f**k, n*gga, b*tch is common in social media comments, song lyrics, etc. Although these terms can be treated as obscene and inappropriate, some people also use them in non-hateful ways in different contexts (Davidson et al., 2017) . This makes it challenging for all hate speech systems to distinguish between hate speech and offensive content. Davidson et al. (2017) tried to distinguish between the two classes in their Twitter dataset.",
"cite_spans": [
{
"start": 90,
"end": 113,
"text": "(Williams et al., 2020)",
"ref_id": "BIBREF20"
},
{
"start": 223,
"end": 239,
"text": "Xu et al. (2012)",
"ref_id": "BIBREF22"
},
{
"start": 242,
"end": 268,
"text": "Hosseinmardi et al. (2015)",
"ref_id": "BIBREF11"
},
{
"start": 271,
"end": 290,
"text": "Zhong et al. (2016)",
"ref_id": "BIBREF23"
},
{
"start": 337,
"end": 359,
"text": "Davidson et al. (2017)",
"ref_id": "BIBREF5"
},
{
"start": 771,
"end": 794,
"text": "(Davidson et al., 2017)",
"ref_id": "BIBREF5"
},
{
"start": 909,
"end": 931,
"text": "Davidson et al. (2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "These days due to globalization and online media streaming services, we are exposed to different cultures across the world through movies. Thus, an analysis of the amount of hate and offensive content in the media that we consume daily could be helpful.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Two research questions guided our research:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. RQ 1. What are the limitations of social media hate speech detection models to detect hate speech in movie subtitles?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. RQ 2. How to build a hate and offensive speech classification model for movie subtitles?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To address the problem of hate speech detection in movies, we chose three different models. We have used the BERT (Devlin et al., 2019) model, due to the recent success in other NLP-related fields, a Bi-LSTM (Hochreiter and Schmidhuber, 1997) model to utilize the sequential nature of movie subtitles and a classic Bag of Words (BoW) model as a baseline system. The paper is structured as follows: Section 2 gives an overview of the related work in this topic and Section 3 describes the research methodology and the annotation work, while in Section 4 we discuss the employed datasets and the pre-processing steps. Furthermore, Section 5 describes the implemented models while Section 6 presents the evaluation of the models, the qualitative analysis of the results and the annotation analysis followed by Section 7, which covers the threats to the validity of our research. Finally, we end with the conclusion in Section 8 and propose further work directions in Section 9.",
"cite_spans": [
{
"start": 114,
"end": 135,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 208,
"end": 242,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Some of the existing hate speech detection models classify comments targeted towards certain commonly attacked communities like gay, black, and Muslim, whereas in actuality, some comments did not have the intention of targeting a community (Borkan et al., 2019; Dixon et al., 2018) . Mathew et al. (2021) introduced a benchmark dataset consisting of hate speech generated from two social media platforms, Twitter and Gab. In the social media space, a key challenge is to separately identify hate speech from offensive text. Although they might appear the same way semantically, they have subtle differences. Therefore they tried to solve the bias and interpretability aspect of hate speech and did a three-class classification (i.e., hate, offensive, or normal). They reported the best macro-averaged F1-score of 68.7% on their BERT-HateXplain model. It is also one of the models that we use in our study, as it is one of the 'off-theshelf' hate speech detection models that can easily be employed for the topic at hand.",
"cite_spans": [
{
"start": 240,
"end": 261,
"text": "(Borkan et al., 2019;",
"ref_id": "BIBREF3"
},
{
"start": 262,
"end": 281,
"text": "Dixon et al., 2018)",
"ref_id": "BIBREF8"
},
{
"start": 284,
"end": 304,
"text": "Mathew et al. (2021)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Lexicon-based detection methods have low precision because they classify the messages based on the presence of particular hate speech-related terms, particularly those insulting, cursing, and bullying words. Davidson et al. (2017) used a crowdsourced hate speech lexicon to identify tweets with the occurrence of hate speech keywords to filter tweets. They then used crowdsourcing to label these tweets into three classes: hate speech, offensive language, and neither. In their dataset, the more generic racist and homophobic tweets were classified as hate speech, whereas the ones involving sexist and abusive words were classified as offensive. It is one of the datasets we have used in exploring transfer learning and model finetuning in our study.",
"cite_spans": [
{
"start": 208,
"end": 230,
"text": "Davidson et al. (2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Due to global events, hate speech also plagues online news platforms. In the news domain, context knowledge is required to identify hate speech. Lei and Ruihong (2017) conducted a study on a dataset prepared from user comments on news articles from the Fox News platform. It is the second dataset we have used to explore transfer learning from the news domain to movie subtitles in our study.",
"cite_spans": [
{
"start": 145,
"end": 167,
"text": "Lei and Ruihong (2017)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Several other authors have collected the data from different online platforms and labeled them manually. Some of these data sources are: Twitter (Xiang et al., 2012; Xu et al., 2012) , Instagram (Hosseinmardi et al., 2015; Zhong et al., 2016) , Yahoo! (Nobata et al., 2016; Djuric et al., 2015) , YouTube (Dinakar et al., 2012) and Whisper (Silva et al., 2021) to name a few. Most of the data sources used in the previous studies are based on social media, news, and micro-blogging platforms. However, the notion of the existence of hate speech in movie dialogues has been overlooked. Thus in our study, we first explore how the different existing ML (Machine Learning) models classify hate and offensive speech in movie subtitles and propose a new dataset compiled from six movie subtitles.",
"cite_spans": [
{
"start": 145,
"end": 165,
"text": "(Xiang et al., 2012;",
"ref_id": "BIBREF21"
},
{
"start": 166,
"end": 182,
"text": "Xu et al., 2012)",
"ref_id": "BIBREF22"
},
{
"start": 195,
"end": 222,
"text": "(Hosseinmardi et al., 2015;",
"ref_id": "BIBREF11"
},
{
"start": 223,
"end": 242,
"text": "Zhong et al., 2016)",
"ref_id": "BIBREF23"
},
{
"start": 252,
"end": 273,
"text": "(Nobata et al., 2016;",
"ref_id": "BIBREF16"
},
{
"start": 274,
"end": 294,
"text": "Djuric et al., 2015)",
"ref_id": "BIBREF9"
},
{
"start": 305,
"end": 327,
"text": "(Dinakar et al., 2012)",
"ref_id": "BIBREF7"
},
{
"start": 340,
"end": 360,
"text": "(Silva et al., 2021)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "To investigate the problem of detecting hate and offensive speech in movies, we used different machine learning models trained on social media content such as tweets or discussion thread comments from news articles. Here, the models in our research were developed and evaluated on an indomain 80% train and 20% test split data using the same random state to ensure comparability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Research Methodology",
"sec_num": "3"
},
{
"text": "We have developed six different models: two Bi-LSTM models, two BoW models, and two BERT models. For each pair, one of them has been trained on a dataset consisting of Twitter posts and the other on a dataset consisting of Fox News discussion threads. The trained models have been used to classify movie subtitles to evaluate their performance by domain adaptation from social media content to movies. In addition, another state-of-the-art BERT-based classification model called HateXplain (Mathew et al., 2021) has been used to classify the movies out of the box. While it is also possible to further fine-tune the HateXplain model, we are restricted in reporting the result of the 'off-the-shelf' classification system to new domains, such as movie subtitles.",
"cite_spans": [
{
"start": 490,
"end": 511,
"text": "(Mathew et al., 2021)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Research Methodology",
"sec_num": "3"
},
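The 'off-the-shelf' use of HateXplain can be illustrated with a few lines of code. The sketch below uses the HuggingFace pipeline API; the model identifier is our assumption of the publicly released HateXplain checkpoint and should be replaced by whatever checkpoint is actually used.

```python
# Minimal sketch: running an 'off-the-shelf' hate speech classifier over subtitle lines.
# Assumption: the released HateXplain checkpoint is available on the HuggingFace hub
# under the identifier below; substitute the checkpoint you actually use.
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="Hate-speech-CNERG/bert-base-uncased-hatexplain")

subtitles = ["Hey, what's up?", "Hope one of those bitches falls over."]
for line, result in zip(subtitles, classifier(subtitles)):
    print(f"{line!r} -> {result['label']} ({result['score']:.3f})")
```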
{
"text": "Furthermore, the movie dataset we have collected (see Section 4) is used to train domainspecific BoW, Bi-LSTM, and BERT models using 6-fold cross-validation, where each movie was selected as a fold and report the averaged results. Finally, we have identified the best model trained on social media content based on macroaveraged F1-score and fine-tuned it with the movie dataset using 6-fold cross-validation on that particular model, to investigate fine-tuning and transfer learning capabilities for hate speech on movie subtitles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Research Methodology",
"sec_num": "3"
},
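As a concrete illustration of this evaluation protocol, the following is a minimal sketch of leave-one-movie-out 6-fold cross-validation; the DataFrame column names and the train_and_evaluate callback are our own assumptions standing in for any of the BoW, Bi-LSTM, or BERT training routines.

```python
# Sketch of the 6-fold, leave-one-movie-out cross-validation described above.
# Assumptions: `df` has columns "movie", "text", "label"; `train_and_evaluate`
# is a placeholder that trains a model and returns its macro-averaged F1-score
# on the held-out movie.
import numpy as np
import pandas as pd


def leave_one_movie_out(df: pd.DataFrame, train_and_evaluate) -> float:
    scores = []
    for held_out in df["movie"].unique():       # each fold is exactly one movie
        train = df[df["movie"] != held_out]
        test = df[df["movie"] == held_out]
        scores.append(train_and_evaluate(
            train["text"].tolist(), train["label"].tolist(),
            test["text"].tolist(), test["label"].tolist(),
        ))
    return float(np.mean(scores))               # averaged over the six folds
```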
{
"text": "In our annotation guidelines, we defined hateful speech as a language used to express hatred towards a targeted individual or group or is intended to be derogatory, to humiliate, or to insult the members of the group, based on attributes such as race, religion, ethnic origin, sexual orientation, disability, or gender. Although the meaning of hate speech is based on the context, we provided the above definition agreeing to the definition provided by Nockleby et al. (2000) ; Davidson et al. (2017) . Offensive speech uses profanity, strongly impolite, rude, or vulgar language expressed with fighting or hurtful words to insult a targeted individual or group (Davidson et al., 2017) . We used the same definition also for offensive speech in the guidelines. The remaining subtitles were defined as normal.",
"cite_spans": [
{
"start": 453,
"end": 475,
"text": "Nockleby et al. (2000)",
"ref_id": "BIBREF17"
},
{
"start": 478,
"end": 500,
"text": "Davidson et al. (2017)",
"ref_id": "BIBREF5"
},
{
"start": 662,
"end": 685,
"text": "(Davidson et al., 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Guidelines",
"sec_num": "3.1"
},
{
"text": "For the annotation of movie subtitles, we have used Amazon Mechanical Turk (MTurk) crowdsourcing. Before the main annotation task, we have conducted an annotation pilot study, where 40 subtitles texts were randomly chosen from the movie subtitle dataset. Each of them has included 10 hate speech, 10 offensive, and 20 normal subtitles that are manually annotated by experts. In total, 100 MTurk workers were assigned for the annotation task. We have used the builtin MTurk qualification requirement (HIT approval rate higher than 95% and number of HITs ap- proved larger than 5000) to recruit workers during the Pilot task. Each worker was assessed for accuracy and the 13 workers who have completed the task with the highest annotation accuracy were chosen for the main study task. The rest of the workers were compensated for the task they have completed in the pilot study and blocked from participating in the main annotation task. For each HIT, the workers are paid 40 cents both for the pilot and the main annotation task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Task",
"sec_num": "3.2"
},
{
"text": "For the main task, the 13 chosen MTurk workers were first assigned to one movie subtitle annotation to further look at the annotator agreement as will be described in Section 6.3. Two annotators were replaced during the main annotation task with the next-best workers from the identified workers in the pilot study. This process was repeated after each movie annotation for the remaining five movies. One batch consists of 40 subtitles which were displayed in chronological order to the worker. Each batch has been annotated by three workers. In Figure 1 , you can see the first four questions of a batch out of the movie American History X 1998. ",
"cite_spans": [],
"ref_spans": [
{
"start": 546,
"end": 554,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Annotation Task",
"sec_num": "3.2"
},
{
"text": "The publicly available Fox News corpus 1 consists of 1,528 annotated comments compiled from ten discussion threads that happened on the Fox News website in 2016. The corpus does not differentiate between offensive and hateful comments. This corpus has been introduced by Lei and Ruihong (2017) and has been annotated by two trained native English speakers. We have identified 13 duplicates and two empty comments in this corpus and removed them for accurate training results. The second publicly available corpus we use consists of 24,802 tweets 2 . We identified of them as duplicates and removed them again to achieve accurate training results. The corpus has been introduced by Davidson et al. (2017) and was labeled by CrowdFlower workers as hate speech, offensive, and neither. The last class is referred to as normal in this paper. The distribution of the normal, offensive, and hate classes can be found in Table 1 . The novel movie dataset we introduce consists of six movies. The movies have been chosen based on keyword tags provided by the IMDB website 3 . The tags hate-speech and racism were chosen because we assumed that they were likely to contain a lot of hate and offensive speech. The tag friendship was chosen to get contrary movies containing a lot of normal subtitles, with less hate speech content. In addition, we excluded movie genres like documentations, fantasy, or musicals to keep the movies comparable to each other. Namely we have chosen the movies BlacKkKlansman (2018) which was tagged as hate-speech, Django Unchained (2012), American History X (1998) and Pulp Fiction (1994) which were tagged as racism whereas South Park (1999) as well as The Wolf of Wall Street (2013) were tagged as friendship in",
"cite_spans": [
{
"start": 681,
"end": 703,
"text": "Davidson et al. (2017)",
"ref_id": "BIBREF5"
},
{
"start": 1603,
"end": 1609,
"text": "(1994)",
"ref_id": "BIBREF0"
},
{
"start": 1699,
"end": 1705,
"text": "(2013)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 914,
"end": 921,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4"
},
{
"text": "The goal of the pre-processing step was to make the text of the Tweets and conversational discussions as comparable as possible to the movie subtitles since we assume that this will improve the transfer learning results. Therefore, we did not use pre-processing techniques like stop word removal or lemmatization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing",
"sec_num": "4.1"
},
{
"text": "After performing a manual inspection, we applied certain rules to remove the textual noise from our datasets. The following was the noise observed in each dataset, which we removed for the Twitter and Fox News datasets: (1) repeated punctuation marks, (2) multiple username tags, (3) emoticon character encodings, and (4) website links. For the movie subtitle text dataset: (1) sound expressions, e.g [PEOPLE CHATTERING], [DOOR OPEN-ING], (2) name header of the current speaker, e.g. \"DIANA: Hey, what's up?\" which refers to Diana is about to say something, (3) HTML tags, (4) nonalpha character subtitle, and (5) non-ASCII characters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Cleansing",
"sec_num": "4.2"
},
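A minimal sketch of the subtitle-side cleaning rules is given below; the regular expressions are our own approximation of the rules listed above, not the authors' exact implementation.

```python
# Approximate sketch of the subtitle cleaning rules from Section 4.2 (our assumption,
# not the original implementation).
import re


def clean_subtitle(text: str) -> str:
    text = re.sub(r"\[[^\]]*\]", " ", text)                  # (1) sound expressions, e.g. [DOOR OPENING]
    text = re.sub(r"^[A-Z]+\s*:\s*", "", text)               # (2) speaker name header, e.g. "DIANA: ..."
    text = re.sub(r"<[^>]+>", " ", text)                      # (3) HTML tags such as <i>...</i>
    text = text.encode("ascii", errors="ignore").decode()     # (5) drop non-ASCII characters
    text = re.sub(r"\s+", " ", text).strip()
    return text if re.search(r"[A-Za-z]", text) else ""       # (4) discard non-alpha subtitles


print(clean_subtitle("DIANA: Hey, what's up? [DOOR OPENING]"))  # -> "Hey, what's up?"
```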
{
"text": "The downloaded subtitle files are provided by the website www.opensubtitles.org 4 and are free to use for scientific purposes. The files are available in the SRT-format 5 that have a time duration along with a subtitle, which while watching appears on the screen in a given time frame. We performed the following operations to create the movie dataset: (1) Converted the SRT-format to CSV-format by separating start time, end time, and the subtitle text, (2) Fragmented subtitles which were originally single appearances on the screen and spanned across multiple screen frames were combined, by identifying sentence-ending punctuation marks, (3) Combined single word subtitles with the previous subtitle because single word subtitles tend to be expressions to what has been said before. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtitle format conversion",
"sec_num": "4.3"
},
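The conversion and merging steps can be sketched as follows; this is a simplified illustration under the assumption of well-formed SRT files, not the authors' exact script.

```python
# Sketch of the SRT-to-CSV conversion and subtitle merging from Section 4.3
# (a simplified approximation under the assumption of well-formed .srt files).
import csv
import re


def parse_srt(path):
    """Yield (start, end, text) triples from an SRT file."""
    with open(path, encoding="utf-8-sig") as f:
        blocks = re.split(r"\n\s*\n", f.read().strip())
    for block in blocks:
        lines = block.strip().splitlines()
        if len(lines) < 3:
            continue
        start, end = [t.strip() for t in lines[1].split("-->")]
        yield start, end, " ".join(lines[2:])


def merge_fragments(rows):
    """Merge fragments until sentence-ending punctuation; attach single-word subtitles to the previous one."""
    merged = []
    for start, end, text in rows:
        incomplete = merged and not re.search(r"[.!?]\s*$", merged[-1][2])
        single_word = merged and len(text.split()) == 1
        if incomplete or single_word:
            p_start, _, p_text = merged[-1]
            merged[-1] = (p_start, end, p_text + " " + text)
        else:
            merged.append((start, end, text))
    return merged


def srt_to_csv(srt_path, csv_path):
    with open(csv_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["start", "end", "subtitle"])
        writer.writerows(merge_fragments(list(parse_srt(srt_path))))
```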
{
"text": "The Bi-LSTM models are built using the Keras and the BoW models are built using the PyTorch library while both are trained with a 1e-03 learning rate and categorical cross-entropy loss function. For the development of BERT-based models, we rely on the TFBERTForSequenceClassification algorithm, which is provided by HuggingFace 6 and pre-trained on bert-base-uncased. Learning rate of 3e-06 and sparse categorical cross-entropy loss function was used for this. All the models used the Adam optimizer (Kingma and Ba, 2015). We describe the detailed hyper-parameters for all the models used for all the experiments in the Appendix A.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5"
},
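A minimal sketch of this BERT setup is given below, using the transformers and TensorFlow APIs; the learning rate, loss, and optimizer follow the text, while batch size, epochs, and sequence length are illustrative assumptions (the exact values are listed in Appendix A.1).

```python
# Sketch of the BERT setup from Section 5 (transformers + TensorFlow). Learning rate,
# loss, and optimizer follow the paper; batch size, epochs, and max_len are assumptions.
import tensorflow as tf
from transformers import BertTokenizer, TFBertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)


def make_dataset(texts, labels, max_len=64, batch_size=16):
    enc = tokenizer(texts, truncation=True, padding="max_length",
                    max_length=max_len, return_tensors="tf")
    return tf.data.Dataset.from_tensor_slices((dict(enc), labels)).batch(batch_size)


model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=3e-6),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
# model.fit(make_dataset(train_texts, train_labels),
#           validation_data=make_dataset(val_texts, val_labels), epochs=3)
```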
{
"text": "In this section, we will discuss the different classification results obtained from the various hate speech classification models. We will also briefly present a qualitative exploration of the annotated movie datasets. The model referred in the tables as LSTM refers to Bi-LSTM models used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Annotation Analysis",
"sec_num": "6"
},
{
"text": "We have introduced a new dataset of movie subtitles in the field of hate speech research. A total of six movies are annotated, which consists of sequential subtitles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification results and Discussion",
"sec_num": "6.1"
},
{
"text": "First, we experimented on the HateXplain model (Mathew et al., 2021 ) by testing the model's performance on the movie dataset. We achieved a macro-averaged F1-score of 66% (see Table 3 ). Next, we tried to observe how the different models (BoW, Bi-LSTM, and BERT) perform using transfer learning and how comparable are those results to this state-of-the-art model's results.",
"cite_spans": [
{
"start": 47,
"end": 67,
"text": "(Mathew et al., 2021",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 177,
"end": 184,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Classification results and Discussion",
"sec_num": "6.1"
},
{
"text": "We trained and tested the BERT, Bi-LSTM, and BoW model by applying an 80:20 split on the so- Table 4 ). When applied to the Fox News dataset, we observed that BERT performed better than both BoW and Bi-LSTM with a small margin in terms of macro-averaged F1-score. Hate is detected close to 50% whereas normal is detected close to 80% for all three models on F1score. When applied on the Twitter dataset, results are almost the same for the BoW and Bi-LSTM models, whereas the BERT model performed close to 10% better by reaching a macro-averaged F1-score of 76%. All the models have a high F1-score of above 90% for identifying offensive class. This goes along with the fact that the offensive class is the dominant one in the Twitter dataset (Table 1) .",
"cite_spans": [],
"ref_spans": [
{
"start": 93,
"end": 100,
"text": "Table 4",
"ref_id": "TABREF7"
},
{
"start": 743,
"end": 752,
"text": "(Table 1)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Classification results and Discussion",
"sec_num": "6.1"
},
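For reference, the per-class and macro-averaged F1-scores reported in the tables can be computed as sketched below (a toy example with scikit-learn; the label names follow the three classes used in the paper).

```python
# Toy sketch of how the per-class and macro-averaged F1-scores in the tables can be computed.
from sklearn.metrics import classification_report, f1_score

y_true = ["normal", "offensive", "hate", "normal", "offensive"]
y_pred = ["normal", "offensive", "normal", "normal", "offensive"]

print(classification_report(y_true, y_pred,
                            labels=["hate", "offensive", "normal"], zero_division=0))
print("macro-averaged F1:", f1_score(y_true, y_pred, average="macro", zero_division=0))
```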
{
"text": "Hence, by looking at the macro-averaged F1score values, BERT performed best in the task for training and testing on social media content on both datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification results and Discussion",
"sec_num": "6.1"
},
{
"text": "Next, we train on social media data and test on the six movies (see Table 5 ) to address RQ 1.",
"cite_spans": [],
"ref_spans": [
{
"start": 68,
"end": 75,
"text": "Table 5",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Classification results and Discussion",
"sec_num": "6.1"
},
{
"text": "When trained on the Fox News dataset, BoW and Bi-LSTM performed similarly by poorly detecting hate in the movies. In contrast, BERT identified the hate class more than twice as well by reaching an F1-score of 39%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification results and Discussion",
"sec_num": "6.1"
},
{
"text": "When trained on the Twitter dataset, BERT performed almost double in terms of macro-averaged F1-score than the other two models. Even though the detection for the offensive class was high on the Twitter dataset (see Table 4 ) the models did not perform as well on the six movies, which could be due to the domain change. However, BERT was able to perform better on the hate class, even though it was trained on a small proportion of hate content in the Twitter dataset. The other two models performed very poorly. To address RQ 2, we train new models from scratch on the six movies dataset using 6-fold cross-validation (see Table 6 ). In this setup, each fold represents one movie that is exchanged iteratively during evaluation.",
"cite_spans": [],
"ref_spans": [
{
"start": 216,
"end": 223,
"text": "Table 4",
"ref_id": "TABREF7"
},
{
"start": 625,
"end": 632,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Classification results and Discussion",
"sec_num": "6.1"
},
{
"text": "Compared to the domain adaptation (see Table 5), the BoW and Bi-LSTM models performed better. Bi-LSTM distinguished better than BoW among hate and offensive while maintaining a good identification of the normal class resulting in a better macro-averaged F1-score of 71% as compared to 64% for the BoW model. BERT performed best across all three classes resulting in 10% better results compared to the Bi-LSTM model on macro-averaged F1-score, however, it has similar results when compared to the domain adaptation (see Table 5 ) results.",
"cite_spans": [],
"ref_spans": [
{
"start": 519,
"end": 526,
"text": "Table 5",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Classification results and Discussion",
"sec_num": "6.1"
},
{
"text": "Furthermore, the absolute amount of hateful subtitles in the movies The Wolf of Wall Street (3), South Park (10), and Pulp Fiction 16 Table 6 : In-domain results using models trained on the movie dataset using 6-fold cross-validation minor, hence the cross-validation on these three movies as test set is very sensible of only predicting a few of them wrong since a few of them will already result in a high relative amount.",
"cite_spans": [],
"ref_spans": [
{
"start": 134,
"end": 141,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Classification results and Discussion",
"sec_num": "6.1"
},
{
"text": "We have also tried to improve our BERT model trained on social media content (Table 4) by finetuning it via 6-fold cross-validation using the six movies dataset (see Table 7 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 77,
"end": 86,
"text": "(Table 4)",
"ref_id": "TABREF7"
},
{
"start": 166,
"end": 173,
"text": "Table 7",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "Classification results and Discussion",
"sec_num": "6.1"
},
{
"text": "The macro-averaged F1-score increased compared to the domain adaptation (see Table 5 ) from 64% to 89% for the model trained on the Fox News dataset. For the Twitter dataset the macroaveraged F1-score is comparable to the domain adaptation (see Table 5 ) and in-domain results (see Table 6 ). Compared to the results of the HateXplain model (see Table 3 ) the identification of the normal utterances are comparable whereas the offensive class was identified by our BERT model much better, with an increment of 48%, but the hate class was identified by a decrement of 18%.",
"cite_spans": [],
"ref_spans": [
{
"start": 77,
"end": 84,
"text": "Table 5",
"ref_id": "TABREF9"
},
{
"start": 245,
"end": 252,
"text": "Table 5",
"ref_id": "TABREF9"
},
{
"start": 282,
"end": 289,
"text": "Table 6",
"ref_id": null
},
{
"start": 346,
"end": 353,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Classification results and Discussion",
"sec_num": "6.1"
},
{
"text": "The detailed results of all experiments is given in Appendix A.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification results and Discussion",
"sec_num": "6.1"
},
{
"text": "In this section, we investigate the unsuccessfully classified utterances (see Figure 2 ) of all six movies by the BERT model trained on the Twitter dataset and fine-tuned with the six movies via 6-fold cross-validation (see Table 7 ) to analyze the model addressing RQ 2.",
"cite_spans": [],
"ref_spans": [
{
"start": 78,
"end": 86,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 224,
"end": 231,
"text": "Table 7",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "6.2"
},
{
"text": "The majority of unsuccessfully classified utterances 564 We looked at the individual utterances of the hate class misclassified as normal (37 utterances). We observed that most of them were sarcastic and those did not contain any hate keywords, whereas some could have been indirect or contextdependent, for example, the utterance \"It's just so beautiful. We're cleansing this country of a backwards race of chimpanzees\" indirectly and sarcastically depicts hate speech which our model could not identify. We assume that our model has shortcomings in interpreting those kinds of utterances correctly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "6.2"
},
{
"text": "Furthermore, we analyzed the utterances of the class normal which were misclassified as hate (60 utterances). We observed that around a third of them were actual hate but were misclassified by our annotators as normal, hence those were correctly classified as hate by our model. We noticed that a fifth of them contain the keyword \"Black Power\", which we refer to as normal whereas the BERT model classified them as hate. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "6.2"
},
{
"text": "Using the MTurk crowdsourcing, a total of 10,688 subtitles (from the six movies) are annotated. For each of the three workers involved, 81% agreed to the same class. Out of the total annotations, only 0.7% received disagreement on the classes (where all the three workers chose a different class for each subtitle).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Analysis",
"sec_num": "6.3"
},
{
"text": "To ensure the quality of the classes for the training, we chose majority voting. In the case of disagreement, we took the offensive class as the final class of the subtitle. One reason why workers do disagree might be that they do interpret a scene differently. We think that providing the video and audio clips of the subtitle frames might help to disambiguate such confusions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Analysis",
"sec_num": "6.3"
},
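The label aggregation described above can be expressed in a few lines; the sketch below is a minimal illustration of majority voting with an offensive fallback on full disagreement.

```python
# Sketch of the label aggregation in Section 6.3: majority vote over three annotations,
# falling back to 'offensive' when all three annotators disagree.
from collections import Counter


def aggregate_label(annotations):
    label, freq = Counter(annotations).most_common(1)[0]
    return label if freq >= 2 else "offensive"


print(aggregate_label(["hate", "hate", "normal"]))       # -> hate
print(aggregate_label(["hate", "offensive", "normal"]))  # -> offensive (full disagreement)
```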
{
"text": "Let us consider an example from one of the annotation batches that describes a scene where the shooting of an Afro-American appears to happen. Subtitle 5 in that batch reads out \"Shoot the nigger!\", and subtitle 31 states \"Just shit. Got totally out of control.\", which was interpreted as normal by a worker who might not be sensible to the word shit, as offensive speech by a worker who is, in fact, sensible to the word shit or as hate speech by a worker who thinks that the word shit refers to the Afro-American.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Analysis",
"sec_num": "6.3"
},
{
"text": "The movie Django Unchained 2012 was tagged as racism and has been annotated as the most hateful movie (see Table 2 ) followed by BlacK-kKlansman 2018 and American History X 1998 which where tagged as racism or hateful. This indicates that hate speech and racist comments often go along together. As expected, movies tagged by friendship like The Wolf of Wall Street 2013 and South Park 1999 were less hateful. Surprisingly the percentage of offensive speech increases when the percentage of hate decreases making the movies tagged by friendship most offensive in our movie dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 107,
"end": 114,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Annotation Analysis",
"sec_num": "6.3"
},
{
"text": "1. The pre-processing of the movies or the social media datasets could have deleted crucial parts which would have made a hateful tweet normal, for example. Thus the training on such datasets could impact the training negatively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Threats to Validity",
"sec_num": "7"
},
{
"text": "2. Movies are not real, they are more like a very good simulation. Thus, for this matter, hate speech is simulated and arranged. Maybe documentation movies are better suited since they tend to cover real-case scenarios.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Threats to Validity",
"sec_num": "7"
},
{
"text": "3. The annotations could be wrong since the task of identifying hate speech is subjective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Threats to Validity",
"sec_num": "7"
},
{
"text": "4. Movies might not contain a lot of hate speech, hence the need to detect them is very minor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Threats to Validity",
"sec_num": "7"
},
{
"text": "5. As the annotation process was done batchwise, annotators might lose crucial contextual information when the batch change happens, as it misses the chronological order of the dialogue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Threats to Validity",
"sec_num": "7"
},
{
"text": "6. Only textual data might not provide enough contextual information for the annotators to correctly annotate the dialogues as the other modalities of the movies (audio and video) are not considered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Threats to Validity",
"sec_num": "7"
},
{
"text": "In this paper, we applied different approaches to detect hate and offensive speech in a novel proposed movie subtitle dataset. In addition, we proposed a technique to combine fragments of movie subtitles and made the social media text content more comparable to movie subtitles (for training purposes). For the classification, we used two techniques of transfer learning, i.e., domain adaptation and fine-tuning. The former was used to evaluate three different ML models, namely Bag of Words for a baseline system, transformer-based systems as they are becoming the state-of-the-art classification approaches for different NLP tasks, and Bi-LSTM-based models as our movie dataset represents sequential data for each movie. The latter was performed only on the BERT model and we report our best result by cross-validation on the movie dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "All three models were able to perform well for the classification of the normal class. Whereas when it comes to the differentiation between offensive and hate classes, BERT achieved a substantially higher F1-score as compared to the other two models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "The produced artifacts could have practical significance in the field of movie recommendations. We will release the annotated datasets, keeping all the contextual information (time offsets of the subtitle, different representations, etc.), the fine-tuned and newly trained models, as well as the python source code and pre-processing scripts, to pursue research on hate speech on movie subtitles. 7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "The performance of hate speech detection in movies can be improved by increasing the existing movie dataset with movies that contain a lot of hate speech. Moreover, multi-modal models can also improve performance by using speech or image. In addition, some kind of hate speech can only be detected through the combination of different modals, like some memes in the hateful meme challenge by Facebook (Kiela et al., 2020) e.g. a picture that says look how many people love you whereas the image shows an empty desert. Furthermore, we also did encounter the widely reported sparsity of hate speech content, which can be mitigated by using techniques such as data augmentation, or balanced class distribution. We intentionally did not perform shuffling of all six movies before splitting into k-folds to retain a realistic scenario where a classifier is executed on a new movie.",
"cite_spans": [
{
"start": 401,
"end": 421,
"text": "(Kiela et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Further Work",
"sec_num": "9"
},
{
"text": "Another interesting aspect that can be looked at is the identification of the target groups of the hate speech content in movies and to see the more prevalent target groups. This work can also be extended for automated annotation of movies to investigate the distribution of offensive and hate speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Further Work",
"sec_num": "9"
},
{
"text": "All the models used the Adam optimizer (Kingma and Ba, 2015). Bi-LSTM and BoW used the crossentropy loss function whereas our BERT models used the sparse categorical and cross-entropy loss function. Further values for the hyperparameters for each experiment are shown in Table 8 .",
"cite_spans": [],
"ref_spans": [
{
"start": 271,
"end": 278,
"text": "Table 8",
"ref_id": "TABREF14"
}
],
"eq_spans": [],
"section": "A.1 Hyperparameter values for experiments",
"sec_num": null
},
{
"text": "A.1.1 Bi-LSTM For all the models except for the model trained on the Twitter dataset, the architecture consists of an embedding layer followed by two Bi-LSTM layers stacked one after another. Finally, a Dense layer with a softmax activation function is giving the output class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Hyperparameter values for experiments",
"sec_num": null
},
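A minimal Keras sketch of this architecture follows; vocabulary size, embedding dimension, and LSTM units are illustrative assumptions (the exact values are given in Table 8).

```python
# Sketch of the Bi-LSTM architecture in A.1.1: embedding -> two stacked Bi-LSTM layers
# -> softmax. Vocabulary size, embedding dimension, and LSTM units are assumptions.
import tensorflow as tf


def build_bilstm(vocab_size=20000, embedding_dim=128, lstm_units=64,
                 num_classes=3, max_len=64):
    return tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_len),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(lstm_units, return_sequences=True)),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(lstm_units)),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])


model = build_bilstm()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # 1e-03 as in Section 5
              loss="categorical_crossentropy", metrics=["accuracy"])
```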
{
"text": "For training with Twitter (both in-domain and domain adaptation), a single Bi-LSTM layer is used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Hyperparameter values for experiments",
"sec_num": null
},
{
"text": "The BoW model uses two hidden layers consisting of 100 neurons each.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1.2 BoW",
"sec_num": null
},
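A minimal PyTorch sketch of such a BoW classifier is shown below; the vocabulary size is an illustrative assumption.

```python
# Sketch of the BoW classifier in A.1.2: a feed-forward network with two hidden layers
# of 100 neurons over a bag-of-words vector. The vocabulary size is an assumption.
import torch
import torch.nn as nn


class BoWClassifier(nn.Module):
    def __init__(self, vocab_size, num_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(vocab_size, 100), nn.ReLU(),
            nn.Linear(100, 100), nn.ReLU(),
            nn.Linear(100, num_classes),
        )

    def forward(self, bow_vector):
        return self.net(bow_vector)


model = BoWClassifier(vocab_size=5000)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam with 1e-03 as in Section 5
criterion = nn.CrossEntropyLoss()                           # cross-entropy loss
```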
{
"text": "A.1.3 BERT BERT uses TFBertForSequenceClassification model and BertTokenizer as its tokenizer from the pretrained model bert-base-uncased.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1.2 BoW",
"sec_num": null
},
{
"text": "We report precision, recall, F1-score and macro averaged F1-score for every experiment in Table 9 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 90,
"end": 97,
"text": "Table 9",
"ref_id": "TABREF15"
}
],
"eq_spans": [],
"section": "A.2 Additional Performance Metrics for Experiments",
"sec_num": null
},
{
"text": "https://github.com/sjtuprog/ fox-news-comments 2 https://github.com/t-davidson/ hate-speech-and-offensive-language/ 3 https://www.imdb.com/search/keyword/ December 2020. The detailed distribution of the normal, offensive, and hate classes, movie-wise, can be found inTable 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.opensubtitles.org/ 5 https://en.wikipedia.org/wiki/SubRip",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Movie: Pulp Fiction. Last visited 23.05.2021. 1998. Movie: American History X. Last visited 23.05",
"authors": [],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Movie: Pulp Fiction. Last visited 23.05.2021. 1998. Movie: American History X. Last visited 23.05.2021.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Movie: South Park: Bigger, Longer & Uncut. Last visited 23",
"authors": [],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "1999. Movie: South Park: Bigger, Longer & Uncut. Last visited 23.05.2021. 2012. Movie: Django Unchained. Last visited 23.05.2021.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Movie: The Wolf of Wall Street. Last visited 23.05.2021",
"authors": [],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Movie: The Wolf of Wall Street. Last visited 23.05.2021. 2018. Movie: BlacKkKlansman. Last visited 23.05.2021.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Nuanced Metrics for Measuring Unintended Bias with Real Data for Text Classification",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Borkan",
"suffix": ""
},
{
"first": "Lucas",
"middle": [],
"last": "Dixon",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Sorensen",
"suffix": ""
},
{
"first": "Nithum",
"middle": [],
"last": "Thain",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vasserman",
"suffix": ""
}
],
"year": 2019,
"venue": "Companion of The",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Borkan, Lucas Dixon, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2019. Nuanced Met- rics for Measuring Unintended Bias with Real Data for Text Classification. In Companion of The 2019 7 https://github.com/uhh-lt/hatespeech",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "World Wide Web Conference",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "491--500",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "World Wide Web Conference, WWW, pages 491-500, San Francisco, CA, USA.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Automated Hate Speech Detection and the Problem of Offensive Language",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Warmsley",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Macy",
"suffix": ""
},
{
"first": "Ingmar",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International AAAI Conference on Web and Social Media",
"volume": "",
"issue": "",
"pages": "512--515",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated Hate Speech Detection and the Problem of Offensive Language. In Proceedings of the 11th International AAAI Con- ference on Web and Social Media, pages 512-515, Montr\u00e9al, QC, Canada.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, MN, USA. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Common Sense Reasoning for Detection, Prevention, and Mitigation of Cyberbullying",
"authors": [
{
"first": "Karthik",
"middle": [],
"last": "Dinakar",
"suffix": ""
},
{
"first": "Birago",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Catherine",
"middle": [],
"last": "Havasi",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Lieberman",
"suffix": ""
},
{
"first": "Rosalind",
"middle": [],
"last": "Picard",
"suffix": ""
}
],
"year": 2012,
"venue": "ACM Trans. Interact. Intell. Syst",
"volume": "2",
"issue": "3",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/2362394.2362400"
]
},
"num": null,
"urls": [],
"raw_text": "Karthik Dinakar, Birago Jones, Catherine Havasi, Henry Lieberman, and Rosalind Picard. 2012. Com- mon Sense Reasoning for Detection, Prevention, and Mitigation of Cyberbullying. ACM Trans. Interact. Intell. Syst., 2(3):18:1-18:30.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Measuring and Mitigating Unintended Bias in Text Classification",
"authors": [
{
"first": "Lucas",
"middle": [],
"last": "Dixon",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Sorensen",
"suffix": ""
},
{
"first": "Nithum",
"middle": [],
"last": "Thain",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vasserman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery",
"volume": "",
"issue": "",
"pages": "67--73",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2018. Measuring and Mitigat- ing Unintended Bias in Text Classification. In Pro- ceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Ma- chinery, pages 67-73, New Orleans, LA, USA.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Hate Speech Detection with Comment Embeddings",
"authors": [
{
"first": "Nemanja",
"middle": [],
"last": "Djuric",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Morris",
"suffix": ""
},
{
"first": "Mihajlo",
"middle": [],
"last": "Grbovic",
"suffix": ""
},
{
"first": "Vladan",
"middle": [],
"last": "Radosavljevic",
"suffix": ""
},
{
"first": "Narayan",
"middle": [],
"last": "Bhamidipati",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 24th International Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "29--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nemanja Djuric, Jing Zhou, Robin Morris, Mihajlo Gr- bovic, Vladan Radosavljevic, and Narayan Bhamidi- pati. 2015. Hate Speech Detection with Comment Embeddings. In Proceedings of the 24th Interna- tional Conference on World Wide Web, pages 29-30, Florence, Italy.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Long Short-Term Memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Comput",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {
"DOI": [
"10.1162/neco.1997.9.8.1735"
]
},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long Short-Term Memory. Neural Comput., 9(8):1735- 1780.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Detection of Cyberbullying Incidents on the Instagram Social Network. CoRR",
"authors": [
{
"first": "Homa",
"middle": [],
"last": "Hosseinmardi",
"suffix": ""
},
{
"first": "Sabrina",
"middle": [
"Arredondo"
],
"last": "Mattson",
"suffix": ""
},
{
"first": "Rahat",
"middle": [],
"last": "Ibn Rafiq",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Lv",
"suffix": ""
},
{
"first": "Shivakant",
"middle": [],
"last": "Mishra",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Homa Hosseinmardi, Sabrina Arredondo Mattson, Ra- hat Ibn Rafiq, Richard Han, Qin Lv, and Shivakant Mishra. 2015. Detection of Cyberbullying Inci- dents on the Instagram Social Network. CoRR, abs/1503.03909.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Amanpreet Singh, Pratik Ringshia, and Davide Testuggine. 2020. The Hateful Memes Challenge: Detecting Hate Speech in Multimodal Memes",
"authors": [
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Hamed",
"middle": [],
"last": "Firooz",
"suffix": ""
},
{
"first": "Aravind",
"middle": [],
"last": "Mohan",
"suffix": ""
},
{
"first": "Vedanuj",
"middle": [],
"last": "Goswami",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douwe Kiela, Hamed Firooz, Aravind Mohan, Vedanuj Goswami, Amanpreet Singh, Pratik Ringshia, and Davide Testuggine. 2020. The Hateful Memes Challenge: Detecting Hate Speech in Multimodal Memes. CoRR, abs/2005.04790.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "ADAM: A Method for Stochastic Optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations, ICLR 2015",
"volume": "",
"issue": "",
"pages": "1--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. ADAM: A Method for Stochastic Optimization. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, pages 1-15, San Diego, CA, USA.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Detecting Online Hate Speech Using Context Aware Models",
"authors": [
{
"first": "Gao",
"middle": [],
"last": "Lei",
"suffix": ""
},
{
"first": "Huang",
"middle": [],
"last": "Ruihong",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "260--266",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gao Lei and Huang Ruihong. 2017. Detecting On- line Hate Speech Using Context Aware Models. In Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP, pages 260-266, Varna, Bulgaria.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection",
"authors": [
{
"first": "Binny",
"middle": [],
"last": "Mathew",
"suffix": ""
},
{
"first": "Punyajoy",
"middle": [],
"last": "Saha",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Seid Muhie Yimam",
"suffix": ""
},
{
"first": "Pawan",
"middle": [],
"last": "Biemann",
"suffix": ""
},
{
"first": "Animesh",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mukherjee",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "35",
"issue": "",
"pages": "14867--14875",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Binny Mathew, Punyajoy Saha, Seid Muhie Yi- mam, Chris Biemann, Pawan Goyal, and Animesh Mukherjee. 2021. HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection. Pro- ceedings of the AAAI Conference on Artificial Intel- ligence, 35(17):14867-14875.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Abusive Language Detection in Online User Content",
"authors": [
{
"first": "Chikashi",
"middle": [],
"last": "Nobata",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Tetreault",
"suffix": ""
},
{
"first": "Achint",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Yashar",
"middle": [],
"last": "Mehdad",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 25th International Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "145--153",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chikashi Nobata, Joel Tetreault, Achint Thomas, Yashar Mehdad, and Yi Chang. 2016. Abusive Lan- guage Detection in Online User Content. In Pro- ceedings of the 25th International Conference on World Wide Web., pages 145-153, Montr\u00e9al, QC, Canada.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Encyclopedia of the American Constitution",
"authors": [
{
"first": "John",
"middle": [
"T"
],
"last": "Nockleby",
"suffix": ""
},
{
"first": "Leonard",
"middle": [
"W"
],
"last": "Levy",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [
"L"
],
"last": "Karst",
"suffix": ""
},
{
"first": "Dennis",
"middle": [
"J"
],
"last": "Mahoney",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John T. Nockleby, Leonard W. Levy, Kenneth L. Karst, and Dennis J. Mahoney editors. 2000. Encyclope- dia of the American Constitution. Macmillan, 2nd edition.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A Survey on Hate Speech Detection using Natural Language Processing",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Schmidt",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Wiegand",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {
"DOI": [
"10.18653/v1/W17-1101"
]
},
"num": null,
"urls": [],
"raw_text": "Anna Schmidt and Michael Wiegand. 2017. A Sur- vey on Hate Speech Detection using Natural Lan- guage Processing. In Proceedings of the Fifth Inter- national Workshop on Natural Language Processing for Social Media, pages 1-10, Valencia, Spain. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Analyzing the targets of hate in online social media",
"authors": [
{
"first": "Leandro",
"middle": [],
"last": "Silva",
"suffix": ""
},
{
"first": "Mainack",
"middle": [],
"last": "Mondal",
"suffix": ""
},
{
"first": "Denzil",
"middle": [],
"last": "Correa",
"suffix": ""
},
{
"first": "Fabr\u00edcio",
"middle": [],
"last": "Benevenuto",
"suffix": ""
},
{
"first": "Ingmar",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the International AAAI Conference on Web and Social Media",
"volume": "",
"issue": "",
"pages": "687--690",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leandro Silva, Mainack Mondal, Denzil Correa, Fab- r\u00edcio Benevenuto, and Ingmar Weber. 2021. Ana- lyzing the targets of hate in online social media. In Proceedings of the International AAAI Conference on Web and Social Media, pages 687-690, Cologne, Germany.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Hate in the Machine: Anti-Black and Anti-Muslim Social Media Posts as Predictors of Offline Racially and Religiously Aggravated Crime",
"authors": [
{
"first": "Matthew",
"middle": [
"L"
],
"last": "Williams",
"suffix": ""
},
{
"first": "Pete",
"middle": [],
"last": "Burnap",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Javed",
"suffix": ""
},
{
"first": "Han",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Sefa",
"middle": [],
"last": "Ozalp",
"suffix": ""
}
],
"year": 2020,
"venue": "The British Journal of Criminology",
"volume": "60",
"issue": "1",
"pages": "93--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew L. Williams, Pete Burnap, Amir Javed, Han Liu, and Sefa Ozalp. 2020. Hate in the Machine: Anti-Black and Anti-Muslim Social Media Posts as Predictors of Offline Racially and Religiously Ag- gravated Crime. The British Journal of Criminology, 60(1):93-117.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Detecting Offensive Tweets via Topical Feature Discovery over a Large Scale Twitter Corpus",
"authors": [
{
"first": "Guang",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Ling",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [],
"last": "Rose",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 21st ACM international conference on Information and knowledge management",
"volume": "",
"issue": "",
"pages": "1980--1984",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guang Xiang, Bin Fan, Ling Wang, Jason Hong, and Carolyn Rose. 2012. Detecting Offensive Tweets via Topical Feature Discovery over a Large Scale Twitter Corpus. In Proceedings of the 21st ACM international conference on Information and knowl- edge management, pages 1980-1984, Maui, HI, USA.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Learning from Bullying Traces in Social Media",
"authors": [
{
"first": "Jun-Ming",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kwang-Sung",
"middle": [],
"last": "Jun",
"suffix": ""
},
{
"first": "Xiaojin",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Amy",
"middle": [],
"last": "Bellmore",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "656--666",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun-Ming Xu, Kwang-Sung Jun, Xiaojin Zhu, and Amy Bellmore. 2012. Learning from Bullying Traces in Social Media. In Proceedings of the 2012 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, pages 656-666, Montr\u00e9al, QC, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Content-Driven Detection of Cyberbullying on the Instagram Social Network",
"authors": [
{
"first": "Haoti",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Squicciarini",
"suffix": ""
},
{
"first": "Sarah",
"middle": [],
"last": "Rajtmajer",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Griffin",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Cornelia",
"middle": [],
"last": "Caragea",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16), IJ-CAI'16",
"volume": "",
"issue": "",
"pages": "3952--3958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haoti Zhong, Hao Li, Anna Squicciarini, Sarah Rajt- majer, Christopher Griffin, David Miller, and Cor- nelia Caragea. 2016. Content-Driven Detection of Cyberbullying on the Instagram Social Network. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16), IJ- CAI'16, pages 3952-3958, New York, NY, USA.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Annotation template containing a batch of the movie American History X",
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"num": null,
"text": "Label misclassification on the movie dataset using the BERT model ofTable 7 trained on the Twitter dataset",
"type_str": "figure",
"uris": null
},
"TABREF1": {
"num": null,
"content": "<table/>",
"text": "Class distribution for the different datasets",
"html": null,
"type_str": "table"
},
"TABREF3": {
"num": null,
"content": "<table/>",
"text": "Class distribution on each movie",
"html": null,
"type_str": "table"
},
"TABREF5": {
"num": null,
"content": "<table/>",
"text": "Prediction results using the HateXplain model on the movie dataset (domain adaptation)",
"html": null,
"type_str": "table"
},
"TABREF7": {
"num": null,
"content": "<table/>",
"text": "",
"html": null,
"type_str": "table"
},
"TABREF9": {
"num": null,
"content": "<table/>",
"text": "",
"html": null,
"type_str": "table"
},
"TABREF11": {
"num": null,
"content": "<table><tr><td colspan=\"2\">Dataset Model</td><td>Class</td><td>F1-</td><td>Macro</td></tr><tr><td/><td/><td/><td>Score</td><td>AVG</td></tr><tr><td/><td/><td/><td/><td>F1</td></tr><tr><td/><td>BERT (Fox</td><td>normal hate</td><td colspan=\"2\">0.97 0.89 0.82</td></tr><tr><td>Movies</td><td>News)</td><td/><td/></tr><tr><td/><td>BERT</td><td colspan=\"2\">normal offensive 0.75 0.97</td><td>0.77</td></tr><tr><td/><td>(Twitter)</td><td>hate</td><td>0.59</td></tr></table>",
"text": "are offensive classified as normal and vice versa resulting in 69%. Hate got classified as offensive in 5% of all cases and offensive as hate in 8%. The remaining misclassification is between",
"html": null,
"type_str": "table"
},
"TABREF12": {
"num": null,
"content": "<table><tr><td>: Prediction results using BERT models trained</td></tr><tr><td>on the Twitter and Fox News datasets and fine-tuned</td></tr><tr><td>them with the movie dataset by applying 6-fold cross-</td></tr><tr><td>validation (fine-tuning)</td></tr><tr><td>normal and hate resulting in 18%, which we refer</td></tr><tr><td>to as the most critical for us to analyze further.</td></tr></table>",
"text": "",
"html": null,
"type_str": "table"
},
"TABREF14": {
"num": null,
"content": "<table><tr><td>BoW</td><td>Fox News</td><td>Fox News</td><td>normal</td><td>0.81</td><td>0.84</td><td>0.83</td><td>0.63</td></tr><tr><td>BoW</td><td>Fox News</td><td>Fox News</td><td>hate</td><td>0.45</td><td>0.41</td><td>0.43</td><td>0.63</td></tr><tr><td>BoW</td><td>Twitter</td><td>Twitter</td><td>normal</td><td>0.79</td><td>0.78</td><td>0.78</td><td>0.66</td></tr><tr><td>BoW</td><td>Twitter</td><td>Twitter</td><td colspan=\"2\">offensive 0.90</td><td>0.95</td><td>0.93</td><td>0.66</td></tr><tr><td>BoW</td><td>Twitter</td><td>Twitter</td><td>hate</td><td>0.43</td><td>0.18</td><td>0.26</td><td>0.66</td></tr><tr><td>BoW</td><td>Fox News</td><td>Movies</td><td>normal</td><td>0.84</td><td>0.87</td><td>0.86</td><td>0.51</td></tr><tr><td>BoW</td><td>Fox News</td><td>Movies</td><td>hate</td><td>0.16</td><td>0.13</td><td>0.15</td><td>0.51</td></tr><tr><td>BoW</td><td>Twitter</td><td>Movies</td><td>normal</td><td>0.96</td><td>0.46</td><td>0.62</td><td>0.37</td></tr><tr><td>BoW</td><td>Twitter</td><td>Movies</td><td colspan=\"2\">offensive 0.20</td><td>0.82</td><td>0.32</td><td>0.37</td></tr><tr><td>BoW</td><td>Twitter</td><td>Movies</td><td>hate</td><td>0.11</td><td>0.24</td><td>0.15</td><td>0.37</td></tr><tr><td>BoW</td><td>Movies</td><td>Movies</td><td>normal</td><td>0.93</td><td>0.97</td><td>0.95</td><td>0.64</td></tr><tr><td>BoW</td><td>Movies</td><td>Movies</td><td colspan=\"2\">offensive 0.65</td><td>0.56</td><td>0.59</td><td>0.64</td></tr><tr><td>BoW</td><td>Movies</td><td>Movies</td><td>hate</td><td>0.56</td><td>0.28</td><td>0.37</td><td>0.64</td></tr><tr><td>BERT</td><td>Fox News</td><td>Fox News</td><td>normal</td><td>0.84</td><td>0.87</td><td>0.86</td><td>0.68</td></tr><tr><td>BERT</td><td>Fox News</td><td>Fox News</td><td>hate</td><td>0.57</td><td>0.46</td><td>0.51</td><td>0.68</td></tr><tr><td>BERT</td><td>Twitter</td><td>Twitter</td><td>normal</td><td>0.88</td><td>0.91</td><td>0.89</td><td>0.76</td></tr><tr><td>BERT</td><td>Twitter</td><td>Twitter</td><td colspan=\"2\">offensive 0.94</td><td>0.97</td><td>0.95</td><td>0.76</td></tr><tr><td>BERT</td><td>Twitter</td><td>Twitter</td><td>hate</td><td>0.59</td><td>0.34</td><td>0.43</td><td>0.76</td></tr><tr><td>BERT</td><td>Fox News</td><td>Movies</td><td>normal</td><td>0.88</td><td>0.90</td><td>0.89</td><td>0.64</td></tr><tr><td>BERT</td><td>Fox News</td><td>Movies</td><td>hate</td><td>0.40</td><td>0.37</td><td>0.39</td><td>0.64</td></tr><tr><td>BERT</td><td>Twitter</td><td>Movies</td><td>normal</td><td>0.98</td><td>0.92</td><td>0.95</td><td>0.77</td></tr><tr><td>BERT</td><td>Twitter</td><td>Movies</td><td colspan=\"2\">offensive 0.63</td><td>0.90</td><td>0.74</td><td>0.77</td></tr><tr><td>BERT</td><td>Twitter</td><td>Movies</td><td>hate</td><td>0.63</td><td>0.63</td><td>0.63</td><td>0.77</td></tr><tr><td>BERT</td><td>Movies</td><td>Movies</td><td>normal</td><td>0.97</td><td>0.98</td><td>0.97</td><td>0.81</td></tr><tr><td>BERT</td><td>Movies</td><td>Movies</td><td colspan=\"2\">offensive 0.80</td><td>0.76</td><td>0.78</td><td>0.81</td></tr><tr><td>BERT</td><td>Movies</td><td>Movies</td><td>hate</td><td>0.79</td><td>0.68</td><td>0.68</td><td>0.81</td></tr><tr><td>BERT</td><td colspan=\"2\">Fox News and Movies Movies</td><td>normal</td><td>0.97</td><td>0.97</td><td>0.97</td><td>0.89</td></tr><tr><td>BERT</td><td colspan=\"2\">Fox News and Movies Movies</td><td>hate</td><td>0.83</td><td>0.81</td><td>0.82</td><td>0.89</td></tr><tr><td>BERT</td><td>Twitter and 
Movies</td><td>Movies</td><td>normal</td><td>0.97</td><td>0.97</td><td>0.97</td><td>0.77</td></tr><tr><td>BERT</td><td>Twitter and Movies</td><td>Movies</td><td colspan=\"2\">offensive 0.76</td><td>0.76</td><td>0.75</td><td>0.77</td></tr><tr><td>BERT</td><td>Twitter and Movies</td><td>Movies</td><td>hate</td><td>0.57</td><td>0.73</td><td>0.59</td><td>0.77</td></tr><tr><td>Bi-LSTM</td><td>Fox News</td><td>Fox News</td><td>normal</td><td>0.83</td><td>0.72</td><td>0.77</td><td>0.62</td></tr><tr><td>Bi-LSTM</td><td>Fox News</td><td>Fox News</td><td>hate</td><td>0.39</td><td>0.55</td><td>0.46</td><td>0.62</td></tr><tr><td>Bi-LSTM</td><td>Twitter</td><td>Twitter</td><td>normal</td><td>0.74</td><td>0.78</td><td>0.76</td><td>0.66</td></tr><tr><td>Bi-LSTM</td><td>Twitter</td><td>Twitter</td><td colspan=\"2\">offensive 0.91</td><td>0.91</td><td>0.91</td><td>0.66</td></tr><tr><td>Bi-LSTM</td><td>Twitter</td><td>Twitter</td><td>hate</td><td>0.31</td><td>0.31</td><td>0.31</td><td>0.66</td></tr><tr><td>Bi-LSTM</td><td>Fox News</td><td>Movies</td><td>normal</td><td>0.85</td><td>0.81</td><td>0.83</td><td>0.51</td></tr><tr><td>Bi-LSTM</td><td>Fox News</td><td>Movies</td><td>hate</td><td>0.17</td><td>0.20</td><td>0.18</td><td>0.51</td></tr><tr><td>Bi-LSTM</td><td>Twitter</td><td>Movies</td><td>normal</td><td>0.96</td><td>0.50</td><td>0.66</td><td>0.38</td></tr><tr><td>Bi-LSTM</td><td>Twitter</td><td>Movies</td><td colspan=\"2\">offensive 0.22</td><td>0.79</td><td>0.34</td><td>0.38</td></tr><tr><td>Bi-LSTM</td><td>Twitter</td><td>Movies</td><td>hate</td><td>0.10</td><td>0.33</td><td>0.16</td><td>0.38</td></tr><tr><td>Bi-LSTM</td><td>Movies</td><td>Movies</td><td>normal</td><td>0.94</td><td>0.97</td><td>0.95</td><td>0.71</td></tr><tr><td>Bi-LSTM</td><td>Movies</td><td>Movies</td><td colspan=\"2\">offensive 0.67</td><td>0.60</td><td>0.63</td><td>0.71</td></tr><tr><td>Bi-LSTM</td><td>Movies</td><td>Movies</td><td>hate</td><td>0.73</td><td>0.49</td><td>0.56</td><td>0.71</td></tr><tr><td colspan=\"2\">HateXplain -</td><td>Movies</td><td>normal</td><td>0.88</td><td>0.98</td><td>0.93</td><td>0.66</td></tr><tr><td colspan=\"2\">HateXplain -</td><td>Movies</td><td colspan=\"2\">offensive 0.62</td><td>0.17</td><td>0.27</td><td>0.66</td></tr><tr><td colspan=\"2\">HateXplain -</td><td>Movies</td><td>hate</td><td>0.89</td><td>0.68</td><td>0.77</td><td>0.66</td></tr></table>",
"text": "Detailed setups of all applied experiments Model Train-Dataset Test-Dataset Category Precision Recall F1-Score Macro AVG F1",
"html": null,
"type_str": "table"
},
"TABREF15": {
"num": null,
"content": "<table/>",
"text": "Detailed results of all applied experiments",
"html": null,
"type_str": "table"
}
}
}
}