{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:52:27.642520Z"
},
"title": "The Role of Computational Stylometry in Identifying (Misogynistic) Aggression in English Social Media Texts",
"authors": [
{
"first": "Antonio",
"middle": [],
"last": "Pascucci",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Raffaele",
"middle": [],
"last": "Manna",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Vincenzo",
"middle": [],
"last": "Masucci",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Johanna",
"middle": [],
"last": "Monti",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we describe UniOr ExpSys team participation in TRAC-2 (Trolling, Aggression and Cyberbullying) shared task, a workshop organized as part of LREC 2020. TRAC-2 shared task is organized in two sub-tasks: Aggression Identification (a 3-way classification between \"Overtly Aggressive\", \"Covertly Aggressive\" and \"Non-aggressive\" text data) and Misogynistic Aggression Identification (a binary classifier for classifying the texts as \"gendered\" or \"non-gendered\"). Our approach is based on linguistic rules, stylistic features extraction through stylometric analysis and Sequential Minimal Optimization algorithm in building the two classifiers.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we describe UniOr ExpSys team participation in TRAC-2 (Trolling, Aggression and Cyberbullying) shared task, a workshop organized as part of LREC 2020. TRAC-2 shared task is organized in two sub-tasks: Aggression Identification (a 3-way classification between \"Overtly Aggressive\", \"Covertly Aggressive\" and \"Non-aggressive\" text data) and Misogynistic Aggression Identification (a binary classifier for classifying the texts as \"gendered\" or \"non-gendered\"). Our approach is based on linguistic rules, stylistic features extraction through stylometric analysis and Sequential Minimal Optimization algorithm in building the two classifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The spread of offensive and hate speech on social media is one of the issues that mostly concerns the scientific community. The number of hate and offensive posts and comments on social media is growing day by day and the measures adopted by social media managers are often not enough. Most of the time, haters'accounts are simply temporarily blocked, and no other effective measures to combat the phenomenon are taken. In this paper, we describe our participation in TRAC-2 (Ritesh Kumar and Zampieri, 2020) workshop shared task and the results we achieved. TRAC-2 workshop shared task (now in its second edition), focuses on trolling, aggression and cyberbullying detection in a given corpus built ad hoc by the task organizers and is organized in two sub-tasks: Aggression Identification task and Misogynistic Aggression Identification task. TRAC-2 workshop shared task includes texts in three different languages: Bangla, Hindi and English for both sub-tasks. The participants are allowed to compete for the tasks and the languages they prefer. Considering the importance of linguistic knowledge in our approach, we decided to participate only in the two English sub-tasks (since we don't have linguistic knowledge in Bangla and Hindi). The method we use for text data classification, indeed, is based on a hybrid approach of Computational Stylometry, Machine Learning and Linguistic Rules. This research has been carried out in the context of two innovative industrial PhD projects in cooperation between the \"L'Orientale\" University of Naples and Expert System Corp. (a semantic intelligence company that creates artificial intelligence, cognitive computing and semantic technology software). That's the reason why we chose the name \"UniOr ExpSys\" for our team. The paper is organized as follows: in Section 2 we show Related work in Hate and Offensive speech detection. Section 3 focuses on methodology and data. Results are in Section 4 and Conclusions are in Section 5.",
"cite_spans": [
{
"start": 483,
"end": 508,
"text": "Kumar and Zampieri, 2020)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Over the last few years, hate speech (HS) and offensive speech (OS) detection, has generated interest in scholars (for a survey, see (Schmidt and Wiegand, 2017) and (Fortuna and Nunes, 2018) ). The advent of social media represents the main cause of the HS and OS spread. Social networks are an extremely efficient means of communication, but, unfortunately, not everyone makes proper use of them. Increasing vulgarity in online conversations has emerged as a relevant issue in society as well as in science (Ramakrishnan et al., 2019) . The difference between HS and OS is subtle but significant and can be summarized as: HS is deemed to be harmful on the basis of defined protected attributes such as race, disability, sexuality and so on. In other words, HS is the intention to denigrate \"a person or persons on the basis of (alleged) membership in a social group identified by attributes such as race, ethnicity, gender, sexual orientation, religion, age, physical or mental disability, and others\" (Britannica, 2015); instead, OS can be described as a speech that \"Causes someone to feel hurt, angry, or upset : rude or insulting\" 1 . Research on detecting HS presence in social media has been carried out by (Malmasi and Zampieri, 2017) . The scholars investigated the dataset built by (Davidson et al., 2017) , composed of 14,509 English tweets annotated by three annotators into one of the following three classes: HATE (tweets containing HS), OFFENSIVE (tweets containing OS) and OK (non-offensive tweets). (Malmasi and Zampieri, 2017) used a linear Support Vector Machine to perform multi-class classification and achieved the best performance of 0.78 of text correctly classified with character 4-grams feature. A very ambitious project is Contro l'odio (literally Against hate), a web platform for monitoring and contrasting discrimination and HS against immigrants in Italy (Capozzi et al., 2019) . The classifier they built is trained with the Italian Hate Speech Corpus (IHSC) (Sanguinetti et al., 2018) , a collection of about 6,000 HS tweets. Contro l'odio project extends the research outcomes that emerged from the Italian Hate Map project (Musto et al., 2016) , combining computational linguistics methods that 1 https://www.merriam-webster.com/ dictionary/offend allow users to access a huge amount of information through interactive maps. (De Smedt et al., 2018) proposed a report on multilingual cross-domain (Extremism, Jidahism, Sexism and Racism) perspectives on online HS detection to identify common features of HS across domains. The scholars exploited different techniques (sentiment analysis, text classification, keyword extraction, and collocation extraction) and argued that it is hard to come up with a linguistic definition of HS, because there is no standardized \"list of bad words\", and if there is, then perpetrators are very creative in coining new offensive terminology. Cyberbullying is also part of HS and OS, especially if we consider that social media represent real breeding grounds in which new and increasingly sophisticated forms of cyberbullying are being developed. The detection and classification of textual cyberbullying on social media has been well investigated in (Dinakar et al., 2011) , (Xu et al., 2012) , (Dadvar et al., 2013) , and (Burnap and Williams, 2015). 
With the aim of monitoring the presence of cyberbullying in online texts, CREEP's project (Menini et al., 2019) main goal is to support supervising persons (e.g., educators) at identifying potential cases of cyberbullying. Stylistic features extraction in cyberbullying texts has been also investigated in (Pascucci et al., 2019 ) with a focus on features that belong to ten different cyberbullying categories characterized by text. Interesting research has been carried out by (Sprugnoli et al., 2018) , who built a corpus of What-sApp chats through a role-play by three classes of students aged 12 and 13 made of 14,600 tokens. In their corpus, the scholars distinguish four cyberbullying roles (Harasser, Victim, Bystander-defender, Bystander-assistant) and different classes of insults or discrimination, such as Body Shame, Sexism, Racism and Sexual Harassment. Their data have been annotated by two annotators and 1,203 cyberbullying expressions have been identified, corresponding to almost 6,000 tokens (41.1% of the whole corpus). Italian scientific community pays a great deal of attention to HS and OS detection shared task, and a few linguistic resources (Sanguinetti et al., 2018) , (Poletto et al., 2017) , and (Del Vigna et al., 2017) have been developed regarding HS Facebook and Twitter comments in Italian. The following is a short and certainly not exhaustive list that includes HS and OS shared tasks organized in the last few years:",
"cite_spans": [
{
"start": 133,
"end": 160,
"text": "(Schmidt and Wiegand, 2017)",
"ref_id": "BIBREF27"
},
{
"start": 165,
"end": 190,
"text": "(Fortuna and Nunes, 2018)",
"ref_id": "BIBREF9"
},
{
"start": 508,
"end": 535,
"text": "(Ramakrishnan et al., 2019)",
"ref_id": "BIBREF23"
},
{
"start": 1214,
"end": 1242,
"text": "(Malmasi and Zampieri, 2017)",
"ref_id": "BIBREF15"
},
{
"start": 1292,
"end": 1315,
"text": "(Davidson et al., 2017)",
"ref_id": "BIBREF3"
},
{
"start": 1516,
"end": 1544,
"text": "(Malmasi and Zampieri, 2017)",
"ref_id": "BIBREF15"
},
{
"start": 1887,
"end": 1909,
"text": "(Capozzi et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 1992,
"end": 2018,
"text": "(Sanguinetti et al., 2018)",
"ref_id": "BIBREF26"
},
{
"start": 2159,
"end": 2179,
"text": "(Musto et al., 2016)",
"ref_id": "BIBREF17"
},
{
"start": 2361,
"end": 2384,
"text": "(De Smedt et al., 2018)",
"ref_id": "BIBREF4"
},
{
"start": 3221,
"end": 3243,
"text": "(Dinakar et al., 2011)",
"ref_id": "BIBREF7"
},
{
"start": 3246,
"end": 3263,
"text": "(Xu et al., 2012)",
"ref_id": "BIBREF30"
},
{
"start": 3266,
"end": 3287,
"text": "(Dadvar et al., 2013)",
"ref_id": "BIBREF1"
},
{
"start": 3413,
"end": 3434,
"text": "(Menini et al., 2019)",
"ref_id": "BIBREF16"
},
{
"start": 3629,
"end": 3651,
"text": "(Pascucci et al., 2019",
"ref_id": "BIBREF18"
},
{
"start": 3801,
"end": 3825,
"text": "(Sprugnoli et al., 2018)",
"ref_id": "BIBREF28"
},
{
"start": 4490,
"end": 4516,
"text": "(Sanguinetti et al., 2018)",
"ref_id": "BIBREF26"
},
{
"start": 4519,
"end": 4541,
"text": "(Poletto et al., 2017)",
"ref_id": "BIBREF21"
},
{
"start": 4553,
"end": 4572,
"text": "Vigna et al., 2017)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2."
},
{
"text": "\u2022 HaSpeeDe (Bosco et al., 2018), a shared task on HS detection, based on two datasets from two different online social platforms differently featured from the linguistic and communicative point of view. The shared task has been organized in the context of EVALITA 2018 (a periodic evaluation campaign of natural language processing and speech tools for the Italian language);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2."
},
{
"text": "\u2022 Germeval (Wiegand et al., 2018) , classification of German tweets from Twitter. It included a coarse-grained binary classification task and a fine-grained multiclass classification task;",
"cite_spans": [
{
"start": 11,
"end": 33,
"text": "(Wiegand et al., 2018)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2."
},
{
"text": "\u2022 AMI (Fersini et al., 2018) , a shared task on automatic misogyny identification divided in two subtasks: Sub-task A on misogyny identification and Subtask B about misogynistic behaviour categorization and target classification. AMI shared task has been organized in the context of EVALITA 2018;",
"cite_spans": [
{
"start": 6,
"end": 28,
"text": "(Fersini et al., 2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2."
},
{
"text": "\u2022 Hateval (Basile et al., 2019), a shared task on multilingual detection of HS against immigrants and women in twitter organized as part of SemEval 2019. The shared task involved a total of 74 participants to detect HS in the dataset and to distinguish if the incitement was against an individual rather than a group;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2."
},
{
"text": "\u2022 Offenseval (Zampieri et al., 2019b) , also organized in the context of SemEval 2019, focuses on identifying and categorizing OS in social media. The task was based on a dataset (OLID -Offensive Language Identification Dataset) (Zampieri et al., 2019a) built ad hoc for this occasion. Offenseval was organized in three sub-tasks: in sub-task A, the goal was to discriminate between offensive and non-offensive posts. In subtask B, the focus was on the type of offensive content in the post, and in sub-task C, systems had to detect the target of the offensive posts. The 2020 Offenseval edition will be held as part of COLING 2020.",
"cite_spans": [
{
"start": 13,
"end": 37,
"text": "(Zampieri et al., 2019b)",
"ref_id": "BIBREF32"
},
{
"start": 229,
"end": 253,
"text": "(Zampieri et al., 2019a)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2."
},
{
"text": "\u2022 TRAC-1 (Kumar et al., 2018a) , the first workshop on trolling, aggression and cyberbullying. TRAC-1 shared task (Kumar et al., 2018b) has been organized as part of COLING 2018 conference. TRAC-1 included a shared task on Aggression Identification (Kumar et al., 2018a ",
"cite_spans": [
{
"start": 9,
"end": 30,
"text": "(Kumar et al., 2018a)",
"ref_id": "BIBREF11"
},
{
"start": 114,
"end": 135,
"text": "(Kumar et al., 2018b)",
"ref_id": "BIBREF12"
},
{
"start": 249,
"end": 269,
"text": "(Kumar et al., 2018a",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2."
},
{
"text": "In this section, we describe our approach to text classification and TRAC-2 shared task data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology and Data",
"sec_num": "3."
},
{
"text": "Our approach to text analysis and features extraction is a hybrid approach of Computational Stylometry (CS), Machine Learning (ML) and Linguistic Rules (LR). CS can be described as a set of techniques that allow scholars to find out information about the authors of texts through an automatic linguistic analysis of texts. One of the main assumptions in CS is that each author operates choices which are influenced by sociological (age, gender and education level) and psychological (personality, mental health and being a native speaker or not) factors (Daelemans, 2013) which determine a unique writing style. With this in mind, it is natural that stylistic features play a fundamental role in detecting author's traits. Considering that stylistic features detected over the years by the scholars are at least one hundred, we summarize in a short list some main stylistic features studied in literature: sentence length (Argamon et al., 2003) , vocabulary richness (De Vel et al., 2001) , word length distributions (Zheng et al., 2006) , punctuation (Baayen et al., 1996) , use of a specific class of verbs or adjectives, use of first/third person, n-grams, readability index (Lucisano and Piemontese, 1988) , use of metaphors. Concerning ML, it is known that there are so many definitions, but the most exhaustive and concise is: ML is the computer ability to learn from data and consists in making predictions on unknown data on the basis of parameters identified during the training process. Lastly, the LR writing process is carried out thanks to COGITO c , Expert System's semantic intelligence software, by which it is possible to write rules to process the texts and extract all the characteristics. An important aspect of the software is that it allows to perform word-sense disambiguation, that is crucial in text analysis, exploiting the power of its semantic network. Our standard approach to text analysis consists of the following steps:",
"cite_spans": [
{
"start": 922,
"end": 944,
"text": "(Argamon et al., 2003)",
"ref_id": null
},
{
"start": 971,
"end": 988,
"text": "Vel et al., 2001)",
"ref_id": "BIBREF5"
},
{
"start": 1017,
"end": 1037,
"text": "(Zheng et al., 2006)",
"ref_id": "BIBREF33"
},
{
"start": 1052,
"end": 1073,
"text": "(Baayen et al., 1996)",
"ref_id": null
},
{
"start": 1178,
"end": 1209,
"text": "(Lucisano and Piemontese, 1988)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3.1."
},
{
"text": "\u2022 Linguistic Definition of Stylometric Features: since each author operates grammatical choices when writing a text, we organize all the grammatical characteristics of the texts under study in a taxonomy to detect the authorial fingerprint based on the grammatical choices done. This first step is carried out thanks to COGITO c , that allows us to write LR;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3.1."
},
{
"text": "\u2022 Semantic Engine Development: we train the semantic engine to extract the features from the analyzed texts. The semantic engine is implemented thanks to COGITO c 's semantic network (Sensigrafo) -that can operate word-sense disambiguation -with the addition of the rules we built;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3.1."
},
{
"text": "\u2022 Training Set Analysis: the training set is analysed and all features (based on the grammatical choices done by the writer) are extracted;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3.1."
},
{
"text": "\u2022 ML: In the last step, we exploit the features extracted to train the model to detect these features in the dataset. ML process is carried out exploiting WEKA platform (Hall et al., 2009 ) (a software with machine learning tools and algorithms for data analysis) thanks to which it is possible to build a classifier with the support of one of the algorithms available.",
"cite_spans": [
{
"start": 169,
"end": 187,
"text": "(Hall et al., 2009",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3.1."
},
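The following sketch illustrates the kind of surface stylometric features listed in Section 3.1 (sentence length, word length distribution, punctuation and vocabulary richness). It is not the COGITO\u00a9 rule-based extractor used in our pipeline, which relies on proprietary linguistic rules and the Sensigrafo semantic network; it is only a minimal, self-contained approximation that assumes naive whitespace and punctuation splitting is acceptable for short English social media posts.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/** Minimal stylometric feature sketch; not the COGITO(c) rule-based extractor. */
public final class StylometricFeatures {

    /** Average sentence length in tokens, using a naive sentence split. */
    public static double avgSentenceLength(String text) {
        String[] sentences = text.split("[.!?]+\\s*");
        String[] tokens = text.split("\\s+");
        return sentences.length == 0 ? 0.0 : (double) tokens.length / sentences.length;
    }

    /** Average word length in characters. */
    public static double avgWordLength(String text) {
        String[] tokens = text.toLowerCase().split("\\W+");
        long chars = 0;
        int n = 0;
        for (String t : tokens) {
            if (!t.isEmpty()) { chars += t.length(); n++; }
        }
        return n == 0 ? 0.0 : (double) chars / n;
    }

    /** Punctuation marks per token. */
    public static double punctuationRate(String text) {
        long punct = text.chars().filter(c -> "!?.,;:'\"".indexOf(c) >= 0).count();
        int tokens = text.split("\\s+").length;
        return tokens == 0 ? 0.0 : (double) punct / tokens;
    }

    /** Vocabulary richness as a simple type-token ratio. */
    public static double typeTokenRatio(String text) {
        String[] tokens = text.toLowerCase().split("\\W+");
        List<String> clean = new ArrayList<>();
        for (String t : tokens) if (!t.isEmpty()) clean.add(t);
        Set<String> types = new HashSet<>(clean);
        return clean.isEmpty() ? 0.0 : (double) types.size() / clean.size();
    }

    public static void main(String[] args) {
        // Hypothetical post used only to show the feature values being computed.
        String post = "You people never learn. Honestly, why do you even bother?";
        System.out.printf("avgSentLen=%.2f avgWordLen=%.2f punct=%.2f ttr=%.2f%n",
                avgSentenceLength(post), avgWordLength(post),
                punctuationRate(post), typeTokenRatio(post));
    }
}
```

In a setup like this, each text would be mapped to a fixed-length numeric vector of such feature values before being passed to the learning algorithm described in the last step above.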
{
"text": "TRAC-2 workshop shared task (now in its second edition), focuses on trolling, aggression and cyberbullying detection in a given corpus build ad hoc by the task organizers and is organized in two sub-tasks:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task description and Data",
"sec_num": "3.2."
},
{
"text": "\u2022 Sub-task-A: Aggression Identification task, for which participant have to build a 3-way classifier to detect if the texts are (OAG), (CAG), or (NAG);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task description and Data",
"sec_num": "3.2."
},
{
"text": "\u2022 Sub-task-B: Misogynistic Aggression Identification task, for which participants have to build a binary classifier for classifying texts as Gendered (GEN) or Non-Gendered (NGEN).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task description and Data",
"sec_num": "3.2."
},
{
"text": "As we reported, TRAC-2 shared task included also a second SubTask (Misogynistic Aggression Identification), as opposed to TRAC-1, which included only the Aggression Identification SubTask. TRAC-2 shared task includes texts in three different languages: Bangla, Hindi and English (as opposed to TRAC-1, which didn't include Bangla) for both sub-tasks (Bhattacharya et al., 2020). The participants are allowed to compete for the tasks and the languages they prefer. As we mentioned in Section 3.1, building ad hoc LR and exploiting our semantic network plays a crucial role in our approach, so considering that we have no linguistic knowledge in Bangla and Hindi, we decided to take part only in the two English sub-tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task description and Data",
"sec_num": "3.2."
},
{
"text": "The systems submitted to TRAC-2 shared task have been evaluated on the basis of weighted macro-averaged Fscores. It means that the individual F-score of each class has been weighted by the proportion of the concerned class in the test set. The final F-score represents the average of these individual F-scores of each class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "3.2.1."
},
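As a worked illustration of this metric, the sketch below computes the weighted macro-averaged F-score from parallel arrays of gold and predicted labels. The arrays, labels and values are hypothetical and only show how each per-class F1 is weighted by the proportion of that class in the test set.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Weighted macro-averaged F-score: per-class F1 weighted by class support. */
public final class WeightedF1 {

    public static double weightedF1(String[] gold, String[] pred) {
        Map<String, int[]> stats = new LinkedHashMap<>(); // label -> {tp, fp, fn, support}
        for (int i = 0; i < gold.length; i++) {
            stats.computeIfAbsent(gold[i], k -> new int[4]);
            stats.computeIfAbsent(pred[i], k -> new int[4]);
            stats.get(gold[i])[3]++;                                   // class support
            if (gold[i].equals(pred[i])) stats.get(gold[i])[0]++;      // true positive
            else { stats.get(pred[i])[1]++; stats.get(gold[i])[2]++; } // false positive, false negative
        }
        double weighted = 0.0;
        for (int[] s : stats.values()) {
            double p = s[0] + s[1] == 0 ? 0.0 : (double) s[0] / (s[0] + s[1]);
            double r = s[0] + s[2] == 0 ? 0.0 : (double) s[0] / (s[0] + s[2]);
            double f1 = p + r == 0 ? 0.0 : 2 * p * r / (p + r);
            weighted += f1 * s[3] / gold.length;                       // weight by class proportion
        }
        return weighted;
    }

    public static void main(String[] args) {
        // Hypothetical SubTask B style labels, just to exercise the computation.
        String[] gold = {"GEN", "NGEN", "NGEN", "GEN", "NGEN"};
        String[] pred = {"GEN", "NGEN", "GEN",  "GEN", "NGEN"};
        System.out.printf("weighted F1 = %.3f%n", weightedF1(gold, pred));
    }
}
```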
{
"text": "As usual in social media text data analysing, we cleaned the texts before analysying them. We removed @ symbol (it means that we also removed all mentions), we also removed hashtags (#), URLs, and emojis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3.2.2."
},
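A minimal sketch of this cleaning step, assuming plain regular expressions are an acceptable stand-in for the tooling actually used; the patterns (in particular the emoji range) are rough illustrative approximations, not the exact rules applied to the TRAC-2 data.

```java
import java.util.regex.Pattern;

/** Minimal cleaning sketch for social media posts, as described in Section 3.2.2. */
public final class Preprocess {

    private static final Pattern URL = Pattern.compile("https?://\\S+|www\\.\\S+");
    private static final Pattern MENTION = Pattern.compile("@\\w+");  // drops "@" and the mention
    private static final Pattern HASH = Pattern.compile("#");         // removes the "#" symbol only
    // Rough emoji filter: miscellaneous symbols plus the supplementary pictograph range.
    private static final Pattern EMOJI = Pattern.compile("[\\p{So}\\x{1F000}-\\x{1FAFF}]");

    public static String clean(String text) {
        String t = URL.matcher(text).replaceAll(" ");
        t = MENTION.matcher(t).replaceAll(" ");
        t = HASH.matcher(t).replaceAll("");
        t = EMOJI.matcher(t).replaceAll("");
        return t.replaceAll("\\s+", " ").trim();   // collapse leftover whitespace
    }

    public static void main(String[] args) {
        // Hypothetical post; "\uD83D\uDE02" is the face-with-tears-of-joy emoji.
        System.out.println(clean("@user1 check this out \uD83D\uDE02 #fail https://t.co/abc"));
        // -> "check this out fail"
    }
}
```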
{
"text": "TRAC-2 English shared task training set is composed of 4,217 text data labelled both for SubTask A and for Sub-Task B. Besides this, a Dev set composed of 1,064 text data even those labelled for both SubTasks was also delivered. In order to detect the best performing algorithm between Random Forest (RF) (Liaw et al., 2002) , Simple Logistic (SL) (Peng et al., 2002) , and Sequential Minimal Optimization (SMO) (Platt, 1998) , we built three different classifiers. Firstly, we train the three different model with the Training set for both SubTasks and we tested it on the Dev set. The results are shown in Table 2 (SubTask A) and Table 3 (Sub-Task B).",
"cite_spans": [
{
"start": 305,
"end": 324,
"text": "(Liaw et al., 2002)",
"ref_id": "BIBREF13"
},
{
"start": 348,
"end": 367,
"text": "(Peng et al., 2002)",
"ref_id": "BIBREF19"
},
{
"start": 412,
"end": 425,
"text": "(Platt, 1998)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 608,
"end": 615,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 632,
"end": 639,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Training set and Dev set analysis",
"sec_num": "3.2.3."
},
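A minimal sketch of this comparison using the WEKA Java API, under the assumption that the extracted features have already been exported to ARFF files (train.arff and dev.arff are hypothetical file names, with the class label as the last attribute). RandomForest and SimpleLogistic can be swapped in for SMO in the same way; the last lines run the 10-fold cross-validation discussed in Section 3.2.4.

```java
import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.functions.SMO;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

/** Train an SMO classifier on the extracted features and report weighted scores. */
public final class TrainSmo {

    public static void main(String[] args) throws Exception {
        Instances train = DataSource.read("train.arff");   // hypothetical feature files
        Instances dev = DataSource.read("dev.arff");
        train.setClassIndex(train.numAttributes() - 1);    // class label as last attribute
        dev.setClassIndex(dev.numAttributes() - 1);

        SMO smo = new SMO();                                // Platt's Sequential Minimal Optimization
        smo.buildClassifier(train);

        // Evaluate on the dev set (cf. the weighted dev-set figures in Tables 2 and 3).
        Evaluation devEval = new Evaluation(train);
        devEval.evaluateModel(smo, dev);
        System.out.printf("Dev: P=%.3f R=%.3f F=%.3f%n",
                devEval.weightedPrecision(), devEval.weightedRecall(),
                devEval.weightedFMeasure());

        // 10-fold cross-validation on the training set (cf. Tables 4 and 5).
        Evaluation cvEval = new Evaluation(train);
        cvEval.crossValidateModel(new SMO(), train, 10, new Random(1));
        System.out.printf("CV:  P=%.3f R=%.3f F=%.3f%n",
                cvEval.weightedPrecision(), cvEval.weightedRecall(),
                cvEval.weightedFMeasure());
    }
}
```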
{
"text": "Cross-validation is a method used to test the performance of a model. The 10-folds cross-validation phase also confirmed that SMO classifier performances were better than those of the classifiers trained with the other two algorithms Table 5 : 10-folds Cross-validation on SubTask B Training set, where all performances reported should be read as weighted",
"cite_spans": [],
"ref_spans": [
{
"start": 234,
"end": 241,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Cross-validation",
"sec_num": "3.2.4."
},
{
"text": "Considering the performances achieved in both Dev set evaluation tests and in the two 10-folds cross-validation tests, we decided to analyze the Test set with the classifier we built with the support of the SMO algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-validation",
"sec_num": "3.2.4."
},
{
"text": "The Test set developed by Trac-2 shared task organizers is composed of 1,200 text data to be labelled in both Sub-Tasks. As we mentioned above, in SubTask A it is possible to label text data as: OAG, CAG, or NAG. In SubTask B texts can be labelled as GEN or NGEN. Despite each team was allowed to submit up to three systems for evaluation, we decided to submit just one for both SubTasks. The decision originated from the fact that the SMO algorithm was the best performing algorithm since the analysis TRAC-2 training and dev set. As shown above, other classifiers trained with other algorithms achieved worse performances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRAC-2 Test set",
"sec_num": "3.2.5."
},
{
"text": "In this section, we show the results achieved by UniOr ExpSys in both SubTasks. In the following few lines, we describe our hybrid approach of CS, ML and LR. Thanks to COGITO c we are able to build ad hoc linguistic rules to recognize stylistic features in texts. After this process, we train a semantic engine to extract the aforementioned features. The semantic engine is implemented thanks to the semantic network with the addition of the rules we built. Then, the training set is analysed and all features are extracted. In the last step, we exploit the features extracted to train the model to detect these features in the dataset. For the ML process, we exploit the WEKA platform and we built a classifier with the support of the SMO Algorithm. Please note that our system is trained with TRAC-2 training set and TRAC -1 dataset with regard to SubTask A and only with TRAC-2 training set with regard to SubTask B. The results achieved in TRAC-2 SubTask A (Aggression Identification task) and TRAC-2 SubTask B (Misogynistic Aggression Identification task) are shown in Table 6 and Table 7 respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 1074,
"end": 1081,
"text": "Table 6",
"ref_id": null
},
{
"start": 1086,
"end": 1093,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4."
},
{
"text": "CS-LR-SMO 0.6291 0.62 Table 6 : Results for Sub-task EN-A.",
"cite_spans": [],
"ref_spans": [
{
"start": 22,
"end": 29,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "System F1 (weighted) Accuracy",
"sec_num": null
},
{
"text": "CS-LR-SMO 0.6733 0.6183 Table 7 : Results for Sub-task EN-B.",
"cite_spans": [],
"ref_spans": [
{
"start": 24,
"end": 31,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "System F1 (weighted) Accuracy",
"sec_num": null
},
{
"text": "It is important to highlight that our approach pays close attention to linguistic and stylistic aspects. Each feature is extracted thanks to the linguistic analysis of texts. In several instances, it has not been possible to extract stylistic features characterizing that specific category of texts (especially because texts were too short). Another fundamental aspect required by our approach is represented by balanced data, both in the training set and in the test set. Balanced data would have allowed a better training phase, with positive effects also on the classifier performances. Nevertheless, we are happy about the results we achieved in TRAC-2 participation and we thank the task organizers for the exciting competition in which we participated. In the future, exploring deep learning techniques for classifying these kinds of text data is certainly necessary. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "4.1."
},
{
"text": "In this paper, we have shown the results achieved during the participation at TRAC-2 shared task workshop, organized as part of LREC 2020. The shared task is organized in two SubTasks: Aggression Identification task, for which participant have to build a 3-way classifier to detect if the texts are i) Overtly Aggressive (OAG), ii) Covertly Aggressive (CAG), or iii) Non-Aggressive (NAG) and Sub-task-B: Misogynistic Aggression Identification task, for which participants have to build a binary classifier for classifying texts as i) Gendered (GEN) or ii) Non-Gendered (NGEN). We use a hybrid approach based on CS, ML and LR, which focuses on stylistic features extraction to identify the features that characterize texts belonging to the different categories. With regard to Aggression Identification task we achieved 0.629072 of F1-weighted, and with regard to Misogynistic Aggression Identification task we achieved 0.673321.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5."
},
{
"text": "This research has been partly supported by the PON Ricerca e Innovazione 2014-20 and the POR Campania FSE 2014-2020 funds. Authorship contribution is as follows: Antonio Pascucci is author of Sections 1, 2, and 3. Sections 4 and 5 are in common between Antonio Pascucci and Raffaele Manna. This research has been developed in the framework of two Innovative Industrial PhD projects in Computational Stylometry (CS) by \"L'Orientale\" University of Naples in cooperation with Expert System Corp. We are grateful to Vincenzo Masucci and Expert System Corp. for providing COGITO c for research and to Prof. Johanna Monti for supervising the research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "6."
},
{
"text": "Argamon, S., \u0160ari\u0107, M., and Stein, S. S. (2003) . Style mining of electronic messages for multiple authorship discrimination: first results. In Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 475-480. ACM. Aroyehun, S. T. and Gelbukh, A. (2018). Aggression detection in social media: Using deep neural networks, data augmentation, and pseudo labeling. In Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018), pages 90-97. Baayen, H., Van Halteren, H., and Tweedie, F. (1996) .",
"cite_spans": [
{
"start": 13,
"end": 47,
"text": "\u0160ari\u0107, M., and Stein, S. S. (2003)",
"ref_id": null
},
{
"start": 528,
"end": 568,
"text": "Van Halteren, H., and Tweedie, F. (1996)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bibliographical References",
"sec_num": "7."
},
{
"text": "Outside the cave of shadows: Using syntactic annotation to enhance authorship attribution. Literary and Linguistic Computing, 11(3):121-132. Basile, V., Bosco, C., Fersini, E., Nozza, D., Patti, V., Pardo, F. M. R., Rosso, P., and Sanguinetti, M. (2019) . Semeval-2019 task 5: Multilingual detection of hate speech against immigrants and women in twitter. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 54-63. Bhattacharya, S., Singh, S., Kumar, R., Bansal, A., Bhagat, A., Dawer, Y., Lahiri, B., and Ojha, A. K. (2020) . Developing a multilingual annotated corpus of misogyny and aggression. Bosco, C., Dell'Orletta, F., Poletto, F., Sanguinetti, M., and Tesconi, M. (2018) . Overview of the evalita 2018 hate speech detection task. In EVALITA 2018-Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian, volume 2263, pages 1-9. CEUR. Britannica, E. (2015). Britannica academic. Encyclopaedia Britannica Inc. Burnap, P. and Williams, M. L. (2015). Cyber hate speech on twitter: An application of machine classification and statistical modeling for policy and decision making. Policy & Internet, 7(2):223-242.",
"cite_spans": [
{
"start": 164,
"end": 253,
"text": "Fersini, E., Nozza, D., Patti, V., Pardo, F. M. R., Rosso, P., and Sanguinetti, M. (2019)",
"ref_id": null
},
{
"start": 472,
"end": 552,
"text": "Kumar, R., Bansal, A., Bhagat, A., Dawer, Y., Lahiri, B., and Ojha, A. K. (2020)",
"ref_id": null
},
{
"start": 655,
"end": 707,
"text": "Poletto, F., Sanguinetti, M., and Tesconi, M. (2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bibliographical References",
"sec_num": "7."
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Computational linguistics against hate: Hate speech detection and visualization on social media in the\" contro l'odio\" project",
"authors": [
{
"first": "A",
"middle": [
"T"
],
"last": "Capozzi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Poletto",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Sanguinetti",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Bosco",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Patti",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Ruffo",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Musto",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Polignano",
"suffix": ""
}
],
"year": 2019,
"venue": "6th Italian Conference on Computational Linguistics",
"volume": "2019",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Capozzi, A. T., Lai, M., Basile, V., Poletto, F., Sanguinetti, M., Bosco, C., Patti, V., Ruffo, G., Musto, C., Polignano, M., et al. (2019). Computational linguistics against hate: Hate speech detection and visualization on social me- dia in the\" contro l'odio\" project. In 6th Italian Confer- ence on Computational Linguistics, CLiC-it 2019, vol- ume 2481, pages 1-6. CEUR-WS.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Improving cyberbullying detection with user context",
"authors": [
{
"first": "M",
"middle": [],
"last": "Dadvar",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Trieschnigg",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Ordelman",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Jong",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Information Retrieval",
"volume": "",
"issue": "",
"pages": "693--696",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dadvar, M., Trieschnigg, D., Ordelman, R., and de Jong, F. (2013). Improving cyberbullying detection with user context. In Advances in Information Retrieval, pages 693-696. Springer.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Explanation in computational stylometry",
"authors": [
{
"first": "W",
"middle": [],
"last": "Daelemans",
"suffix": ""
}
],
"year": 2013,
"venue": "International Conference on Intelligent Text Processing and Computational Linguistics",
"volume": "",
"issue": "",
"pages": "451--462",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daelemans, W. (2013). Explanation in computational sty- lometry. In International Conference on Intelligent Text Processing and Computational Linguistics, pages 451- 462. Springer.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Automated hate speech detection and the problem of offensive language",
"authors": [
{
"first": "T",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Warmsley",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Macy",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2017,
"venue": "Eleventh international aaai conference on web and social media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Davidson, T., Warmsley, D., Macy, M., and Weber, I. (2017). Automated hate speech detection and the prob- lem of offensive language. In Eleventh international aaai conference on web and social media.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Multilingual cross-domain perspectives on online hate speech",
"authors": [
{
"first": "T",
"middle": [],
"last": "De Smedt",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Jaki",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Kotz\u00e9",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Saoud",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gw\u00f3\u017ad\u017a",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "De Pauw",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Daelemans",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1809.03944"
]
},
"num": null,
"urls": [],
"raw_text": "De Smedt, T., Jaki, S., Kotz\u00e9, E., Saoud, L., Gw\u00f3\u017ad\u017a, M., De Pauw, G., and Daelemans, W. (2018). Multilingual cross-domain perspectives on online hate speech. arXiv preprint arXiv:1809.03944.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Mining e-mail content for author identification forensics",
"authors": [
{
"first": "De",
"middle": [],
"last": "Vel",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Corney",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mohay",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2001,
"venue": "ACM Sigmod Record",
"volume": "30",
"issue": "4",
"pages": "55--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "De Vel, O., Anderson, A., Corney, M., and Mohay, G. (2001). Mining e-mail content for author identification forensics. ACM Sigmod Record, 30(4):55-64.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Hate me, hate me not: Hate speech detection on facebook",
"authors": [
{
"first": "Del",
"middle": [],
"last": "Vigna",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Cimino",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Dell'orletta",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Petrocchi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Tesconi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Italian Conference on Cybersecurity (ITASEC17)",
"volume": "",
"issue": "",
"pages": "86--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Del Vigna, F., Cimino, A., Dell'Orletta, F., Petrocchi, M., and Tesconi, M. (2017). Hate me, hate me not: Hate speech detection on facebook. In Proceedings of the First Italian Conference on Cybersecurity (ITASEC17), pages 86-95.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Modeling the detection of textual cyberbullying",
"authors": [
{
"first": "K",
"middle": [],
"last": "Dinakar",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Lieberman",
"suffix": ""
}
],
"year": 2011,
"venue": "The Social Mobile Web",
"volume": "",
"issue": "",
"pages": "11--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dinakar, K., Reichart, R., and Lieberman, H. (2011). Mod- eling the detection of textual cyberbullying. In The So- cial Mobile Web, pages 11-17.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Overview of the evalita 2018 task on automatic misogyny identification (ami). EVALITA Evaluation of NLP and Speech Tools for Italian",
"authors": [
{
"first": "E",
"middle": [],
"last": "Fersini",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Nozza",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Rosso",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "12",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fersini, E., Nozza, D., and Rosso, P. (2018). Overview of the evalita 2018 task on automatic misogyny identifi- cation (ami). EVALITA Evaluation of NLP and Speech Tools for Italian, 12:59.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A survey on automatic detection of hate speech in text",
"authors": [
{
"first": "P",
"middle": [],
"last": "Fortuna",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Nunes",
"suffix": ""
}
],
"year": 2018,
"venue": "ACM Computing Surveys (CSUR)",
"volume": "51",
"issue": "4",
"pages": "1--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fortuna, P. and Nunes, S. (2018). A survey on automatic detection of hate speech in text. ACM Computing Sur- veys (CSUR), 51(4):1-30.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The weka data mining software: an update",
"authors": [
{
"first": "M",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Holmes",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Pfahringer",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Reutemann",
"suffix": ""
},
{
"first": "I",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
}
],
"year": 2009,
"venue": "ACM SIGKDD explorations newsletter",
"volume": "11",
"issue": "1",
"pages": "10--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reute- mann, P., and Witten, I. H. (2009). The weka data min- ing software: an update. ACM SIGKDD explorations newsletter, 11(1):10-18.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Benchmarking aggression identification in social media",
"authors": [
{
"first": "R",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "A",
"middle": [
"K"
],
"last": "Ojha",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018)",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kumar, R., Ojha, A. K., Malmasi, S., and Zampieri, M. (2018a). Benchmarking aggression identification in so- cial media. In Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018), pages 1-11, Santa Fe, New Mexico, USA, August. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Benchmarking Aggression Identification in Social Media",
"authors": [
{
"first": "R",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "A",
"middle": [
"K"
],
"last": "Ojha",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbulling (TRAC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kumar, R., Ojha, A. K., Malmasi, S., and Zampieri, M. (2018b). Benchmarking Aggression Identification in So- cial Media. In Proceedings of the First Workshop on Trolling, Aggression and Cyberbulling (TRAC), Santa Fe, USA.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Classification and regression by randomforest. R news",
"authors": [
{
"first": "A",
"middle": [],
"last": "Liaw",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Wiener",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "2",
"issue": "",
"pages": "18--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liaw, A., Wiener, M., et al. (2002). Classification and re- gression by randomforest. R news, 2(3):18-22.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Gulpease: una formula per la predizione della difficolt\u00e0 dei testi in lingua italiana",
"authors": [
{
"first": "P",
"middle": [],
"last": "Lucisano",
"suffix": ""
},
{
"first": "M",
"middle": [
"E"
],
"last": "Piemontese",
"suffix": ""
}
],
"year": 1988,
"venue": "Scuola e citt\u00e0",
"volume": "3",
"issue": "31",
"pages": "110--124",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucisano, P. and Piemontese, M. E. (1988). Gulpease: una formula per la predizione della difficolt\u00e0 dei testi in lin- gua italiana. Scuola e citt\u00e0, 3(31):110-124.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Detecting hate speech in social media",
"authors": [
{
"first": "S",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1712.06427"
]
},
"num": null,
"urls": [],
"raw_text": "Malmasi, S. and Zampieri, M. (2017). Detect- ing hate speech in social media. arXiv preprint arXiv:1712.06427.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A system to monitor cyberbullying based on message classification and social network analysis",
"authors": [
{
"first": "S",
"middle": [],
"last": "Menini",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Moretti",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Corazza",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Cabrio",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Tonelli",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Villata",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Third Workshop on Abusive Language Online",
"volume": "",
"issue": "",
"pages": "105--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Menini, S., Moretti, G., Corazza, M., Cabrio, E., Tonelli, S., and Villata, S. (2019). A system to monitor cyber- bullying based on message classification and social net- work analysis. In Proceedings of the Third Workshop on Abusive Language Online, pages 105-110.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Modeling community behavior through semantic analysis of social data: The italian hate map experience",
"authors": [
{
"first": "C",
"middle": [],
"last": "Musto",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Semeraro",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "De Gemmis",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Lops",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on User Modeling Adaptation and Personalization",
"volume": "",
"issue": "",
"pages": "307--308",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Musto, C., Semeraro, G., de Gemmis, M., and Lops, P. (2016). Modeling community behavior through seman- tic analysis of social data: The italian hate map experi- ence. In Proceedings of the 2016 Conference on User Modeling Adaptation and Personalization, pages 307- 308.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Computational stylometry and machine learning for gender and age detection in cyberbullying texts",
"authors": [
{
"first": "A",
"middle": [],
"last": "Pascucci",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Masucci",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Monti",
"suffix": ""
}
],
"year": 2019,
"venue": "8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pascucci, A., Masucci, V., and Monti, J. (2019). Compu- tational stylometry and machine learning for gender and age detection in cyberbullying texts. In 2019 8th Inter- national Conference on Affective Computing and Intelli- gent Interaction Workshops and Demos (ACIIW), pages 1-6. IEEE.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "An introduction to logistic regression analysis and reporting",
"authors": [
{
"first": "C.-Y",
"middle": [
"J"
],
"last": "Peng",
"suffix": ""
},
{
"first": "K",
"middle": [
"L"
],
"last": "Lee",
"suffix": ""
},
{
"first": "G",
"middle": [
"M"
],
"last": "Ingersoll",
"suffix": ""
}
],
"year": 2002,
"venue": "The journal of educational research",
"volume": "96",
"issue": "1",
"pages": "3--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng, C.-Y. J., Lee, K. L., and Ingersoll, G. M. (2002). An introduction to logistic regression analysis and reporting. The journal of educational research, 96(1):3-14.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Sequential minimal optimization: A fast algorithm for training support vector machines",
"authors": [
{
"first": "J",
"middle": [],
"last": "Platt",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Platt, J. (1998). Sequential minimal optimization: A fast algorithm for training support vector machines.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Hate speech annotation: Analysis of an italian twitter corpus",
"authors": [
{
"first": "F",
"middle": [],
"last": "Poletto",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Stranisci",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Sanguinetti",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Patti",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Bosco",
"suffix": ""
}
],
"year": 2006,
"venue": "4th Italian Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Poletto, F., Stranisci, M., Sanguinetti, M., Patti, V., and Bosco, C. (2017). Hate speech annotation: Analysis of an italian twitter corpus. In 4th Italian Conference on Computational Linguistics, CLiC-it 2017, volume 2006, pages 1-6. CEUR-WS.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Fully connected neural network with advance preprocessor to identify aggression over facebook and twitter",
"authors": [
{
"first": "K",
"middle": [],
"last": "Raiyani",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Gon\u00e7alves",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Quaresma",
"suffix": ""
},
{
"first": "V",
"middle": [
"B"
],
"last": "Nogueira",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018)",
"volume": "",
"issue": "",
"pages": "28--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raiyani, K., Gon\u00e7alves, T., Quaresma, P., and Nogueira, V. B. (2018). Fully connected neural network with ad- vance preprocessor to identify aggression over facebook and twitter. In Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018), pages 28-41.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Uva wahoos at semeval-2019 task 6: Hate speech identification using ensemble machine learning",
"authors": [
{
"first": "M",
"middle": [],
"last": "Ramakrishnan",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Zadrozny",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Tabari",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "806--811",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramakrishnan, M., Zadrozny, W., and Tabari, N. (2019). Uva wahoos at semeval-2019 task 6: Hate speech identi- fication using ensemble machine learning. In Proceed- ings of the 13th International Workshop on Semantic Evaluation, pages 806-811.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Aggression identification using deep learning and data augmentation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Risch",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Krestel",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018)",
"volume": "",
"issue": "",
"pages": "150--158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Risch, J. and Krestel, R. (2018). Aggression identification using deep learning and data augmentation. In Proceed- ings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018), pages 150-158.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Evaluating aggression identification in social media",
"authors": [
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Atul",
"middle": [],
"last": "Kr",
"suffix": ""
},
{
"first": "S",
"middle": [
"M"
],
"last": "Ojha",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ritesh Kumar, Atul Kr. Ojha, S. M. and Zampieri, M. (2020). Evaluating aggression identification in social media. In Ritesh Kumar, et al., editors, Proceedings of the Second Workshop on Trolling, Aggression and Cy- berbullying (TRAC-2020), Paris, France, may. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "An italian twitter corpus of hate speech against immigrants",
"authors": [
{
"first": "M",
"middle": [],
"last": "Sanguinetti",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Poletto",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Bosco",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Patti",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Stranisci",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanguinetti, M., Poletto, F., Bosco, C., Patti, V., and Stranisci, M. (2018). An italian twitter corpus of hate speech against immigrants. In Proceedings of the Eleventh International Conference on Language Re- sources and Evaluation (LREC 2018).",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A Survey on Hate Speech Detection Using Natural Language Processing",
"authors": [
{
"first": "A",
"middle": [],
"last": "Schmidt",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Wiegand",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schmidt, A. and Wiegand, M. (2017). A Survey on Hate Speech Detection Using Natural Language Processing. In Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media. Associ- ation for Computational Linguistics, pages 1-10, Valen- cia, Spain.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Creating a whatsapp dataset to study pre-teen cyberbullying",
"authors": [
{
"first": "R",
"middle": [],
"last": "Sprugnoli",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Menini",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Tonelli",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Oncini",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Piras",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd Workshop on Abusive Language Online (ALW2)",
"volume": "",
"issue": "",
"pages": "51--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sprugnoli, R., Menini, S., Tonelli, S., Oncini, F., and Piras, E. (2018). Creating a whatsapp dataset to study pre-teen cyberbullying. In Proceedings of the 2nd Workshop on Abusive Language Online (ALW2), pages 51-59.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Overview of the GermEval 2018 Shared Task on the Identification of Offensive Language",
"authors": [
{
"first": "M",
"middle": [],
"last": "Wiegand",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Siegel",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of GermEval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wiegand, M., Siegel, M., and Ruppenhofer, J. (2018). Overview of the GermEval 2018 Shared Task on the Identification of Offensive Language. In Proceedings of GermEval.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Learning from bullying traces in social media",
"authors": [
{
"first": "J.-M",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "K.-S",
"middle": [],
"last": "Jun",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bellmore",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 conference of the North American chapter of the association for computational linguistics: Human language technologies",
"volume": "",
"issue": "",
"pages": "656--666",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xu, J.-M., Jun, K.-S., Zhu, X., and Bellmore, A. (2012). Learning from bullying traces in social media. In Pro- ceedings of the 2012 conference of the North American chapter of the association for computational linguistics: Human language technologies, pages 656-666. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Predicting the type and target of offensive posts in social media",
"authors": [
{
"first": "M",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1902.09666"
]
},
"num": null,
"urls": [],
"raw_text": "Zampieri, M., Malmasi, S., Nakov, P., Rosenthal, S., Farra, N., and Kumar, R. (2019a). Predicting the type and tar- get of offensive posts in social media. arXiv preprint arXiv:1902.09666.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Semeval-2019 task 6: Identifying and categorizing offensive language in social media (offenseval)",
"authors": [
{
"first": "M",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1903.08983"
]
},
"num": null,
"urls": [],
"raw_text": "Zampieri, M., Malmasi, S., Nakov, P., Rosenthal, S., Farra, N., and Kumar, R. (2019b). Semeval-2019 task 6: Iden- tifying and categorizing offensive language in social me- dia (offenseval). arXiv preprint arXiv:1903.08983.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "A framework for authorship identification of online messages: Writing-style features and classification techniques",
"authors": [
{
"first": "R",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of the American society for information science and technology",
"volume": "57",
"issue": "3",
"pages": "378--393",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zheng, R., Li, J., Chen, H., and Huang, Z. (2006). A framework for authorship identification of online mes- sages: Writing-style features and classification tech- niques. Journal of the American society for information science and technology, 57(3):378-393.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Figure 2show the confusion matrices of both SubTasks classifiers. As we can see in the SubTask A confusion matrix (Figure 1), CAG class text data are well classified, with the only exception of 15 instances incorrectly classified. The class that achieved the worst performance is NAG, which includes Non-Aggressive texts, but 156 have been classified as CAG and even 74 as OAG. With regard to SubTask B confusion matrix(Figure 2), GEN text data are quite well classified, while there is a big issue with NGEN: slightly more than Sub-task EN-A, confusion matrix of the CS-LR-SMO model",
"num": null,
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"text": "Sub-task EN-B, confusion matrix of the CS-LR-SMO model half text data have been correctly classified, and this has undermined the performance of our binary classifier.",
"num": null,
"uris": null
},
"TABREF0": {
"html": null,
"content": "<table><tr><td/><td colspan=\"3\">). The task was to develop a</td></tr><tr><td colspan=\"4\">classifier that could make a 3-way classification be-</td></tr><tr><td colspan=\"4\">tween Overtly Aggressive (OAG), Covertly Aggres-</td></tr><tr><td colspan=\"4\">sive (CAG), or Non-Aggressive (NAG) text data in</td></tr><tr><td colspan=\"4\">Hindi and English. It involved 130 teams, but only</td></tr><tr><td colspan=\"4\">30 of these submitted their systems. Besides, only</td></tr><tr><td colspan=\"4\">20 teams decided to submit their system description</td></tr><tr><td colspan=\"4\">paper. TRAC-1 shared task organizers provided two</td></tr><tr><td colspan=\"4\">test sets for Hindi and English: the first one was com-</td></tr><tr><td colspan=\"4\">posed of 916 English Facebook comments and 970</td></tr><tr><td colspan=\"4\">Hindi Facebook comments. Additionally, 1,257 En-</td></tr><tr><td colspan=\"4\">glish tweets and 1,194 saroyehun Julian vista.ue</td></tr><tr><td colspan=\"2\">Facebook Test set 0.642</td><td>0.601</td><td>0.581</td></tr><tr><td>Surprise Test set</td><td>0.592</td><td>0.599</td><td>0.600</td></tr><tr><td colspan=\"4\">Table 1: Performances achieved by the three TRAC-1 best</td></tr><tr><td colspan=\"4\">teams on the TRAC-1 Facebook test set and the Surprise</td></tr><tr><td colspan=\"2\">test set for English language</td><td/><td/></tr><tr><td colspan=\"3\">TRAC-2 takes its cue from TRAC-1 workshop.</td><td/></tr></table>",
"type_str": "table",
"text": "Hindi tweets have been provided as the surprise test set. The three best performing teams in English language in TRAC-1 shared task are: vista.ue(Raiyani et al., 2018), Julian(Risch and Krestel, 2018), and saroyehun (Aroyehun and Gelbukh, 2018). InTable 1the three systems performances are reported in terms of F1-weighted.",
"num": null
},
"TABREF2": {
"html": null,
"content": "<table><tr><td colspan=\"4\">Classifier Precision Recall F-Measure</td></tr><tr><td>RF</td><td>0.659</td><td>0.618</td><td>0.616</td></tr><tr><td>SL</td><td>0.630</td><td>0.595</td><td>0.594</td></tr><tr><td>SMO</td><td>0.663</td><td>0.630</td><td>0.630</td></tr></table>",
"type_str": "table",
"text": "Evaluation on SubTask A Dev set using SubTask A Training set as training, where all performances reported should be read as weighted",
"num": null
},
"TABREF3": {
"html": null,
"content": "<table><tr><td colspan=\"4\">: Evaluation on SubTask B Dev set using SubTask</td></tr><tr><td colspan=\"4\">A Training set as training, where all performances reported</td></tr><tr><td colspan=\"2\">should be read as weighted</td><td/><td/></tr><tr><td colspan=\"4\">(RF and SL). The results of the 10-folds cross-validation</td></tr><tr><td colspan=\"4\">test on both SubTasks Training sets are shown in Table 4</td></tr><tr><td colspan=\"3\">(SubTask A) and Table 5 (SubTask B).</td><td/></tr><tr><td colspan=\"4\">Classifier Precision Recall F-Measure</td></tr><tr><td>RF</td><td>0.510</td><td>0.508</td><td>0.501</td></tr><tr><td>SL</td><td>0.503</td><td>0.505</td><td>0.496</td></tr><tr><td>SMO</td><td>0.569</td><td>0.523</td><td>0.527</td></tr></table>",
"type_str": "table",
"text": "",
"num": null
},
"TABREF4": {
"html": null,
"content": "<table><tr><td colspan=\"4\">: 10-folds Cross-validation on SubTask A Train-</td></tr><tr><td colspan=\"4\">ing set, where all performances reported should be read as</td></tr><tr><td>weighted</td><td/><td/><td/></tr><tr><td colspan=\"4\">Classifier Precision Recall F-Measure</td></tr><tr><td>RF</td><td>0.595</td><td>0.592</td><td>0.589</td></tr><tr><td>SL</td><td>0.645</td><td>0.644</td><td>0.642</td></tr><tr><td>SMO</td><td>0.642</td><td>0.642</td><td>0.642</td></tr></table>",
"type_str": "table",
"text": "",
"num": null
}
}
}
}