{
"paper_id": "S19-1031",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:46:38.748224Z"
},
"title": "Incivility Detection in Online Comments",
"authors": [
{
"first": "Farig",
"middle": [],
"last": "Sadeque",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Arizona Tucson",
"location": {
"postCode": "85721",
"region": "AZ"
}
},
"email": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Rains",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Yotam",
"middle": [],
"last": "Shmargad",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Arizona Tucson",
"location": {
"postCode": "85721",
"region": "AZ"
}
},
"email": ""
},
{
"first": "Kate",
"middle": [],
"last": "Kenski",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Kevin",
"middle": [],
"last": "Coe",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Incivility in public discourse has been a major concern in recent times as it can affect the quality and tenacity of the discourse negatively. In this paper, we present neural models that can learn to detect name-calling and vulgarity from a newspaper comment section. We show that in contrast to prior work on detecting toxic language, fine-grained incivilities like namecalling cannot be accurately detected by simple models like logistic regression. We apply the models trained on the newspaper comments data to detect uncivil comments in a Russian troll dataset, and find that despite the change of domain, the model makes accurate predictions.",
"pdf_parse": {
"paper_id": "S19-1031",
"_pdf_hash": "",
"abstract": [
{
"text": "Incivility in public discourse has been a major concern in recent times as it can affect the quality and tenacity of the discourse negatively. In this paper, we present neural models that can learn to detect name-calling and vulgarity from a newspaper comment section. We show that in contrast to prior work on detecting toxic language, fine-grained incivilities like namecalling cannot be accurately detected by simple models like logistic regression. We apply the models trained on the newspaper comments data to detect uncivil comments in a Russian troll dataset, and find that despite the change of domain, the model makes accurate predictions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Online harassment, colloquially known as cyberbullying or cyber harassment, has been rampant since the introduction of the Internet to the general population. It has been a major cause of concern since the mid-and late-90's, and is a thoroughly researched topic in the fields of social science, behavioral science, network science and computer security. Cyberbullying is a form of harassment that is carried out using electronic modes of communication like computer, phone, and in almost all the cases in recent years, the Internet. Cyberbullying is defined as a \"willful and repeated harm inflicted through the medium of electronic text\" by Patchin and Hinduja (2006) -but this phenomenon goes far beyond the scope of just electronic text. A more comprehensive definition of cyberbullying can be found in one of their later works, where they defined cyberbullying as \"a form of harassment using electronic mode of communication\" (Hinduja and Patchin, 2008) . Fauman (2008) described cyberbullying as \"bullying through the use of technology such as the Internet and cellular phones\".",
"cite_spans": [
{
"start": 642,
"end": 668,
"text": "Patchin and Hinduja (2006)",
"ref_id": "BIBREF18"
},
{
"start": 930,
"end": 957,
"text": "(Hinduja and Patchin, 2008)",
"ref_id": "BIBREF10"
},
{
"start": 960,
"end": 973,
"text": "Fauman (2008)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The spectrum of online harassment is vast; hence, we focus on one segment of this phenomenon: online incivility. Incivility has been rampant in American society for quite some time. Incivility is described as features of discussion that convey an unnecessarily disrespectful tone toward the discussion forum, its participants, or its topics (Coe et al., 2014) . While it is often said that incivility is \"very much in the eye of the beholder\" and what is civil to someone may be uncivil to another , some are universal nevertheless. One study has suggested that 69% of Americans believe that incivility in public discourse has become a rampant problem, and only 6% do not identify it as a problem (Shandwick, 2018) . The average number of incivility encounters per week has also risen drastically in both the physical world and cyberspace. Social media encounters are especially alarming: a person who encountered any form of incivility anywhere, had on average 5.4 uncivil encounters per week in online social media platforms in 2018, which is almost double the amount from late 2016.",
"cite_spans": [
{
"start": 341,
"end": 359,
"text": "(Coe et al., 2014)",
"ref_id": "BIBREF5"
},
{
"start": 697,
"end": 714,
"text": "(Shandwick, 2018)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present machine learning models that can identify two prominent forms of incivility, name-calling and vulgarity, based on usergenerated contents from public discourse platforms. We focused trained recurrent neural network models on an annotated newspaper comment section and showed that our model outperforms several baselines, including a state-of-the-art model based on pre-trained contextual embeddings. We applied our newspaper-comments-trained model to a datsaets of Russian troll tweets to observe how the model generalizes from one platform to another. divided incivility into several different forms, including name-calling, vulgarity, lying accusation, pejorative, and aspersion. They took comments posted by regulars in a newspaper website, and annotated these for the various forms of incivility. Their research focused mostly on the demographics and other individual attributes of readers of these comments and how they perceived incivility in these comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Rains et al. 2017focused more on the perpetrators of incivility rather than the readers. They researched a handful of news articles published in the Arizona Daily Star newspaper website and the comments posted about these articles, then manually annotated these comments and their posters for their incivility and political orientation. The authors found that conservatives were significantly less likely to be uncivil in these public discussions compared to liberals, and the likelihood of liberals being uncivil increased with the presence of conservatives in the same discussion. Liberals were also found to be more repercussive compared to the conservatives.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "Recent work has focused on particular forms of incivility, as described in the following sections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "Reynolds et al. (2011) developed machine learning models that can detect cyberbullying by identifying curse and insult words in social media posts. They have collected a small set of posts from a website named formspring.me and used various non-sequential learning algorithms on this dataset to build a binary classifier for cyberbullying detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generic incivility",
"sec_num": "2.1"
},
{
"text": "Cachola et al. (2018) used a vulgarity score for better sentiment prediction from a collection of 6800 tweets. They found that vulgarity interacts with key demographic variables like gender, age, religiosity, etc. Other research has also identified demographic keys closely associated with vulgarity: Wang et al. (2014) presented a quantitative analysis on the frequency of curse word usage in Twitter and their variation with certain demographics, and Gauthier et al. (2015) analyzed the usage of swear words based on Tweeter users' age and gender. As none of these papers present any machine learning model that can be used for vulgarity detection, claim their work to be the first in vulgarity prediction. They classified functionality of vulgarity in five different cohorts: aggression, emotion expression, emphasis, auxiliary and signalling group identity; and used binary logistic regression classifiers to identify vulgar texts. They also showed the correlation among demographic variables and vulgarity and found that age, faith, and political ideology have significant correlation with vulgarity usage.",
"cite_spans": [
{
"start": 301,
"end": 319,
"text": "Wang et al. (2014)",
"ref_id": "BIBREF26"
},
{
"start": 453,
"end": 475,
"text": "Gauthier et al. (2015)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Vulgarity",
"sec_num": "2.2"
},
{
"text": "Waseem and Hovy (2016) has presented machine learning models that can be used to detect racism and sexism in social media. They have collected and annotated a set of almost 17000 tweets, and used them to build character based n-gram models for offensive tweet detection. They have provided an extensive list of criteria that identify a tweet as racially and sexually offensive, and showed that demographic information does not add much performance to a character-level model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Racism/sexism",
"sec_num": "2.3"
},
{
"text": "Wulczyn et al. (2017) introduced a methodology to generate annotations for personal attacks. They have used crowdsourcing to identify a set of Wikipedia comments, and used a machine learning model to imitate this annotation on a much larger scale. Agrawal and Awekar (2018) have developed deep neural models that can detect cyberbullying (Reynolds et al., 2011) , racism/sexism (Waseem and Hovy, 2016) , and personal attacks (Wulczyn et al., 2017) in multiple social media platforms. They claim that theirs is the first work to systematically analyze cyberbullying in social media towards building deep prediction models. They have shown that hand-crafted features using lexicons is not a good idea as abusive word vocabularies vary a lot from one social media platform to another, and swear words are not always considered to be uncivil in social media.",
"cite_spans": [
{
"start": 248,
"end": 273,
"text": "Agrawal and Awekar (2018)",
"ref_id": "BIBREF0"
},
{
"start": 338,
"end": 361,
"text": "(Reynolds et al., 2011)",
"ref_id": "BIBREF21"
},
{
"start": 378,
"end": 401,
"text": "(Waseem and Hovy, 2016)",
"ref_id": "BIBREF27"
},
{
"start": 425,
"end": 447,
"text": "(Wulczyn et al., 2017)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Personal attacks",
"sec_num": "2.4"
},
{
"text": "Habernal et al. (2018) analyzed ad hominem attacks in Change My View, a \"good faith\" argumentation platform that is hosted on Reddit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Name-calling",
"sec_num": "2.5"
},
{
"text": "They identified posts that Reddit moderators had marked as violating the forum's rules against ad hominem atacks. To identify such posts, they used stacked bidirectional Long-Short Term Memory networks (LSTMs) and Convolutional Neural Networks (CNNs), and achieved 78% and 81% accuracy, respectively. One of their most interesting findings was that in 48.6% of the cases, ad hominem attacks are in the last comment of the thread, which shows that personal attacks and name-callings can affect user participation in public discourses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Name-calling",
"sec_num": "2.5"
},
{
"text": "Works that closely resemble what we are trying to do have one major issue with the datasets that have been used-they are often annotated by mechanical turks (Wulczyn et al., 2017; Reynolds et al., 2011) . Incivility is based on the perception of the person in the receiving end, and this perception varies wildly from person to person. Using turkers that we know almost nothing about is not ideal-as difference in perception may introduce unintended bias in the dataset. Hence, we need a dataset that is annotated by experts who have extensive knowledge on incivility detection. Coe et al. (2014) presents one such dataset, and we plan to use this for our incivility detection task (more on this in Section 4).",
"cite_spans": [
{
"start": 157,
"end": 179,
"text": "(Wulczyn et al., 2017;",
"ref_id": "BIBREF29"
},
{
"start": 180,
"end": 202,
"text": "Reynolds et al., 2011)",
"ref_id": "BIBREF21"
},
{
"start": 579,
"end": 596,
"text": "Coe et al. (2014)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Name-calling",
"sec_num": "2.5"
},
{
"text": "For our work, we will use the incivility classification presented by Coe et al. (2014) : name-calling, vulgarity, aspersion, lying accusations and pejorative for speech. We focus on the two most prevalent forms of these in Coe et al. (2014)'s data: namecalling and vulgarity.",
"cite_spans": [
{
"start": 69,
"end": 86,
"text": "Coe et al. (2014)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Incivility Classification and Definitions",
"sec_num": "3"
},
{
"text": "name-calling Ad hominem attacks. Although ad hominem attacks are often used to derail a conversation by using derogatory terms towards another person, the authors have included every instances of derogatory remarks, irrespective of target and intention. For example, At least the morons in the state capital no longer have control of this process! is identified as a name-calling comment as it has the word moron in it .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incivility Classification and Definitions",
"sec_num": "3"
},
{
"text": "vulgarity Contents that include any sort of curse words, including minor ones such as damn . For example, I hope the voters will kick that politician out on his pompous ass next election. is marked as vulgar, as it contains the word ass in it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incivility Classification and Definitions",
"sec_num": "3"
},
{
"text": "4 Data Coe et al. (2014) graciously shared with us the data that they collected from the comment section of the Arizona Daily Star newspaper. They collected articles and comments between 17 October 2011 and 6 November 2011 from eight news sections: Business, Entertainment, Lifestyles, Local News, Nation and World, Opinion, Sports, and State News. All their data was downloaded and saved manually by one research assistant one day after the articles were posted to provide enough time for the article to garner comments, yet not long enough for the article to be deleted. At the end of the data collection period, a total of 706 articles and 6535 comments were collected, out of which they coded 6444 for further analysis. They used three teams of 3-5 research assistants to code articles and comments for incivility. The teams had extensive training on the coding procedures (Coe et al., 2014) . The coding process took approximately six weeks, and chance-corrected intercoder reliability was established prior to the coding, which ranged between 0.61 to 1.0 Krippendorff's alpha score for different codes. In addition to coding the incivilities present in the comments, they also coded a variety of other metadata, e.g., the author's name, reactions received for other readers (thumbs up or thumbs down), word counts, etc. All the results of the coding procedure were saved in a metadata file created using Microsoft Excel. Comments were saved in separate PDF files named based on the news sections, articles and dates.",
"cite_spans": [
{
"start": 7,
"end": 24,
"text": "Coe et al. (2014)",
"ref_id": "BIBREF5"
},
{
"start": 877,
"end": 895,
"text": "(Coe et al., 2014)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Incivility Classification and Definitions",
"sec_num": "3"
},
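{
"text": "For readers who want to reproduce the reliability computation, chance-corrected agreement of this kind can be computed with the third-party krippendorff Python package. The sketch below is a minimal example; the coder-by-comment matrix shown is hypothetical, since the raw coder data is not distributed in this form.\n\nimport numpy as np\nimport krippendorff\n\n# Rows are coders, columns are comments; np.nan marks a comment a coder skipped.\n# The binary name-calling codes below are hypothetical.\nreliability_data = np.array([\n    [1, 0, 0, 1, np.nan, 0],\n    [1, 0, 1, 1, 0, 0],\n    [1, 0, 0, 1, 0, np.nan],\n])\nalpha = krippendorff.alpha(reliability_data=reliability_data,\n                           level_of_measurement='nominal')\nprint(f'Krippendorff alpha: {alpha:.2f}')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4"
},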
{
"text": "As we have mentioned before that incivility is in the eye of the beholder, it is sometimes challenging to identify what can be unequivocally considered as uncivil interaction. Informed by the Coe et al. (2014) data, the following sections discuss some of these challenges.",
"cite_spans": [
{
"start": 192,
"end": 209,
"text": "Coe et al. (2014)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Challenges in Identifying incivilities from User Contents",
"sec_num": "5"
},
{
"text": "Although researchers have identified incivilities being rampant in public discourse (Shandwick, 2018) , it is still minuscule compared to regular civil discourses in any social platform. As most of our identification and prediction techniques are data-driven, it is difficult to create a model that can identify incivilities from this small number of examples.",
"cite_spans": [
{
"start": 84,
"end": 101,
"text": "(Shandwick, 2018)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Frequency",
"sec_num": "5.1"
},
{
"text": "Oftentimes people refrain from using an exact version of an uncivil phrase and use an abbreviation or spelling variation of that phrase instead. For example, in All BS, just like the politicians -the same crap, the term BS is clearly an abbreviation of the word bullshit. However, there are also instances in the data where BS is used to abbreviate a person's name, which clearly is not an example of uncivil comment. Also, people often like to write uncivil words in spellings that are a derivative form. For example, people often use sh!t instead of shit, which clearly are the same thing in a public discourse. Hundreds of these variations may exist, making for a challenging identification problem. Another challenge in identifying incivilities is that people can be really creative when they try to attack someone. This often happens when someone tries to indulge in ad hominem attacks with plausible deniability. For example, we have observed people using the word DemocRat instead of Democrat to identify someone with a democratic political orientation. Although these two words look similar, and sound exactly the same, Demo-cRat indicates that the target democrat is also a rat, a colloquial word for a spy, or a dishonest person. There are many other examples of this kind of variation, e.g. democraps. This phenomenon is sometimes referred as Obscenity Obfuscation, and researchers have found that it is becoming increasingly common in user generated contents in all sorts of social media platforms (Rojas-Galeano, 2017).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Variations and Creativity",
"sec_num": "5.2"
},
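{
"text": "We do not propose a de-obfuscation algorithm here, but a minimal character-normalization sketch illustrates both how far simple rules go and where they stop: a substitution table recovers sh!t, while plain case folding erases the very camel-case cue that distinguishes DemocRat from Democrat. The substitution table below is a small hypothetical sample, not an exhaustive resource.\n\nimport re\n\n# Hypothetical leet-style substitutions; real obfuscation is far more varied.\nSUBSTITUTIONS = str.maketrans({'!': 'i', '1': 'i', '0': 'o', '@': 'a', '$': 's', '3': 'e'})\n\ndef normalize(token: str) -> str:\n    '''Map common character-level obfuscations back to plain spellings.'''\n    token = token.lower().translate(SUBSTITUTIONS)\n    # Collapse runs of three or more repeated letters, e.g. 'craaaap' -> 'craap'.\n    return re.sub(r'(.)\\1{2,}', r'\\1\\1', token)\n\nprint(normalize('sh!t'))      # -> shit\nprint(normalize('DemocRat'))  # -> democrat (the camel-case signal is lost)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Variations and Creativity",
"sec_num": "5.2"
},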
{
"text": "It is sometimes difficult to understand whether a word or a phase is used in an uncivil manner without understanding the context. For example, the word lazy can be used to describe the state of something that is actually slow or ineffective (e.g., lazy algorithms), or it can be used as an ad hominem attack on someone (e.g., the lazy politicians have ruined this country). As understanding the context of a content in a public discourse is difficult, separating these cases based on their contexts is challenging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Difficulty in Comprehension",
"sec_num": "5.3"
},
{
"text": "In this section, we focus on our attempt to create a machine learning model that can be used as an incivility filter for moderators in social media plat-forms. Our model will exclusively use features obtained from the contents and reciprocations in the platform, while avoiding the demographic information that was used heavily by prior work. This will allow our models to be used on the large portion of online discourse where such demographic information is unavailable, e.g., where users are anonymous.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incivility Prediction",
"sec_num": "6"
},
{
"text": "We will train our incivility prediction models on the Coe et al. (2014) data discussed in section 4. However, that data were designed for use in social science research, not natural language processing research, and thus there were several challenges in working with the data as they were collected, including:",
"cite_spans": [
{
"start": 54,
"end": 71,
"text": "Coe et al. (2014)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data preparation",
"sec_num": "6.1"
},
{
"text": "\u2022 The comments were saved in PDFs, and the metadata referenced each comment by a number that was drawn (not typed) into the PDF beside the comment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data preparation",
"sec_num": "6.1"
},
{
"text": "\u2022 The naming conventions for the files were inconsistent (spelling variations, variable length identifiers, etc.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data preparation",
"sec_num": "6.1"
},
{
"text": "\u2022 Dates were saved using multiple formats (ddmmyy, dd-mm-yy, etc.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data preparation",
"sec_num": "6.1"
},
{
"text": "\u2022 There were no specific markers in the text that identified the start and end of a comment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data preparation",
"sec_num": "6.1"
},
{
"text": "\u2022 Many comments contained quotations from other comments, also with no consistent markers of where quotes began or ended.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data preparation",
"sec_num": "6.1"
},
{
"text": "We solved these problems using a combination of regular expressions (e.g., for normalizing dates), brute-force techniques (e.g., quotations were identified by comparing against all previous comments), and manual revision (e.g., renaming the files whose names were too inconsistent to be resolved automatically). The resulting set of annotated comments were saved in JSON format for further computational analysis. We ended up with 6175 comments from the original set of 6444 comments after the extraction and cleaning process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data preparation",
"sec_num": "6.1"
},
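{
"text": "As an example of the regular-expression step, the following sketch normalizes the two file-name date formats named above to ISO form; the real cleanup handled more variants than these two patterns.\n\nimport re\n\ndef normalize_date(raw: str) -> str:\n    '''Normalize ddmmyy or dd-mm-yy file-name dates to yyyy-mm-dd.'''\n    match = re.fullmatch(r'(\\d{2})-?(\\d{2})-?(\\d{2})', raw)\n    if match is None:\n        raise ValueError(f'unrecognized date format: {raw!r}')\n    dd, mm, yy = match.groups()\n    return f'20{yy}-{mm}-{dd}'  # all dates in this corpus fall in 2011\n\nassert normalize_date('171011') == '2011-10-17'\nassert normalize_date('06-11-11') == '2011-11-06'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data preparation",
"sec_num": "6.1"
},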
{
"text": "Our main focus was to build a prediction model that can work as a filter for incivility in public discourse. We were also interested in how a model trained on public discourse data would work on a social media platform. We first divided our dataset into three smaller sets: train, development and test sets. Comments are randomly assigned to sets, and we ended up with 3950 comments in the training set, 989 comments in the validation set and 1236 comments in the test set. We set the the test set aside for our final evaluation, and worked only on the training and validation dataset to find the best model that can fit the problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Task",
"sec_num": "6.2"
},
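{
"text": "A minimal sketch of this split, matching the sizes reported above (the random seed is an assumption, as none is reported):\n\nimport random\n\ndef split_comments(comments, seed=0, n_train=3950, n_dev=989):\n    '''Randomly partition comments into train/development/test sets.'''\n    shuffled = list(comments)\n    random.Random(seed).shuffle(shuffled)\n    train = shuffled[:n_train]\n    dev = shuffled[n_train:n_train + n_dev]\n    test = shuffled[n_train + n_dev:]  # 1236 comments when len(comments) == 6175\n    return train, dev, test",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Task",
"sec_num": "6.2"
},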
{
"text": "We found a similar task in Kaggle 1 (Wulczyn et al., 2017 ) that tries to identify toxicity of comments in the discourse section of Wikipedia. In that task, the best performing model was a recurrent neural network model with gated recurrent units (GRUs; Cho et al., 2014) , but some simple non-sequential models (logistic regressions and support vector machines) also performed almost as well as the sequential model on that task.",
"cite_spans": [
{
"start": 36,
"end": 57,
"text": "(Wulczyn et al., 2017",
"ref_id": "BIBREF29"
},
{
"start": 254,
"end": 271,
"text": "Cho et al., 2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "6.3"
},
{
"text": "For our baseline, we used two non-sequential machine learning techniques: logistic regression and support vector machines, using TF-IDF vectors obtained over words in the comments. We also considered a state-of-the-art out-of-the-box text classification model as a baseline, the Flair text classification model (Akbik et al., 2018) , which uses GloVe word embeddings (Pennington et al., 2014) and pre-trained contextual word embeddings derived from two character-level language models. Flair achieved state-of-the-art performance in partof-speech tagging and named-entity recognition tasks, and we thought that the character-based nature of the Flair model might be helpful in the face of the linguistic variation and creativity challenges we discussed earlier.",
"cite_spans": [
{
"start": 311,
"end": 331,
"text": "(Akbik et al., 2018)",
"ref_id": "BIBREF1"
},
{
"start": 367,
"end": 392,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "6.3"
},
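{
"text": "The two non-sequential baselines can be sketched in a few lines of scikit-learn; the vectorizer and classifier hyperparameters shown are illustrative assumptions, and the training comments are hypothetical.\n\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.svm import LinearSVC\n\n# Hypothetical comments with binary name-calling labels.\ntrain_texts = ['you are a moron', 'nice article, thanks']\ntrain_labels = [1, 0]\n\n# TF-IDF vectors over words feed each non-sequential classifier.\nlogreg = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))\nsvm = make_pipeline(TfidfVectorizer(), LinearSVC())\nlogreg.fit(train_texts, train_labels)\nsvm.fit(train_texts, train_labels)\nprint(logreg.predict(['what a pathetic idiot']))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "6.3"
},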
{
"text": "Our model was inspired by the top performing systems in the Kaggle competition, and started with FastText embeddings (Joulin et al., 2016) for each of the words in a comment. These word vectors were fed to a recurrent layer consisting of bidirectional GRUs. The outputs of the GRUs were fed to an average pooling layer and a max pooling layer, which were then concatenated 2 . The output of the pooling was then fed through a sigmoid layer to produce the outputs. To avoid overfitting, we used a dropout layer (Srivastava et al., 2014) with 0.2 probability in between the input and hidden layer. We set the maximum length of input to 500 words for each comment, as this garnered the best validation performance in our preliminary analysis. We set class weights based on the frequency of namecalling and vulgarity: non-name-calling comments are 7 times more common than the name-calling ones, and non-vulgar comments are 35 times more common than vulgar ones, so we used a weighting scheme of 1:7 for name-calling and 1:35 for vulgarity. The model was trained with the Adam optimizer (Kingma and Ba, 2015) on mini-batches of size 32, with other hyperparameters set to their defaults. We trained each instance of this model for at most 500 epochs, with the option of early stopping if the validation accuracy did not improve for 10 consecutive epochs. A general structure of this model is shown in figure 1.",
"cite_spans": [
{
"start": 117,
"end": 138,
"text": "(Joulin et al., 2016)",
"ref_id": "BIBREF12"
},
{
"start": 510,
"end": 535,
"text": "(Srivastava et al., 2014)",
"ref_id": "BIBREF25"
},
{
"start": 1083,
"end": 1104,
"text": "(Kingma and Ba, 2015)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "6.4"
},
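{
"text": "A Keras sketch of this architecture follows. The GRU width, vocabulary size, and the exact placement of the dropout layer (here SpatialDropout1D) are assumptions; the FastText inputs, bidirectional GRUs, average and max pooling, sigmoid output, 0.2 dropout, 1:7/1:35 class weights, Adam optimizer, batch size 32, and early stopping with patience 10 are as described above.\n\nfrom tensorflow.keras import layers, Model\nfrom tensorflow.keras.callbacks import EarlyStopping\n\nMAX_LEN, VOCAB, EMB_DIM, GRU_UNITS = 500, 50000, 300, 64\n\ntokens = layers.Input(shape=(MAX_LEN,), dtype='int32')\n# In our model the embedding layer would be initialized with FastText vectors.\nx = layers.Embedding(VOCAB, EMB_DIM)(tokens)\nx = layers.SpatialDropout1D(0.2)(x)  # dropout between input and hidden layers\nx = layers.Bidirectional(layers.GRU(GRU_UNITS, return_sequences=True))(x)\npooled = layers.concatenate([layers.GlobalAveragePooling1D()(x),\n                             layers.GlobalMaxPooling1D()(x)])\noutput = layers.Dense(1, activation='sigmoid')(pooled)\n\nmodel = Model(tokens, output)\nmodel.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])\nearly_stop = EarlyStopping(monitor='val_accuracy', patience=10, restore_best_weights=True)\n# model.fit(X_train, y_train, validation_data=(X_dev, y_dev), epochs=500,\n#           batch_size=32, class_weight={0: 1.0, 1: 7.0}, callbacks=[early_stop])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "6.4"
},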
{
"text": "To further improve our model, we wanted to incorporate any metadata that were available to use. Coe et al. (2014) found that the thumbs up and thumbs downs received by a comment, the section of the article, and the author of the article all had some significance regarding incivility in the forum. So we introduced these metadata as features in our model. We created normalized feature vectors built on these attributes, and introduced them as auxiliary features right before the sigmoid layer, by concatenating them with the output of the pooling layers.",
"cite_spans": [
{
"start": 96,
"end": 113,
"text": "Coe et al. (2014)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "6.4"
},
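{
"text": "The metadata branch amounts to a second input concatenated with the pooled text representation just before the sigmoid; a sketch follows, where the metadata dimensionality is an assumption.\n\nfrom tensorflow.keras import layers, Model\n\nMAX_LEN, VOCAB, EMB_DIM, N_META = 500, 50000, 300, 10\n\ntokens = layers.Input(shape=(MAX_LEN,), dtype='int32')\nmeta = layers.Input(shape=(N_META,))  # normalized thumbs up/down, section, author features\nx = layers.Embedding(VOCAB, EMB_DIM)(tokens)\nx = layers.Bidirectional(layers.GRU(64, return_sequences=True))(x)\npooled = layers.concatenate([layers.GlobalAveragePooling1D()(x),\n                             layers.GlobalMaxPooling1D()(x)])\ncombined = layers.concatenate([pooled, meta])  # auxiliary features join here\noutput = layers.Dense(1, activation='sigmoid')(combined)\nmodel = Model([tokens, meta], output)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "6.4"
},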
{
"text": "We also explored external resources that could improve our model. We created a pretrained model on the Kaggle dataset discussed earlier, as it had a large amount of annotated comments (over 160 thousand comments obtained from Wikipedia contributor's community). We used the same RNN model to train on the Kaggle data until it reached convergence, then retrained the model using our Arizona Daily Star data. The only portion of the model that was not shared between the pre-training (on Kaggle) and the training (on Arizona Daily Star) was the output sigmoid layers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "6.4"
},
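{
"text": "In framework terms, the transfer corresponds to keeping every weight up to the pooling layer and attaching a fresh sigmoid head before retraining; a minimal sketch follows, assuming the pooled representation is the penultimate layer, as in the architecture sketched above.\n\nfrom tensorflow.keras import layers, Model\n\ndef with_new_head(base_model):\n    '''Share everything up to the pooled representation; replace only the output sigmoid.'''\n    pooled = base_model.layers[-2].output\n    output = layers.Dense(1, activation='sigmoid', name='incivility_out')(pooled)\n    return Model(base_model.input, output)\n\n# kaggle_model = ... train on the Kaggle toxicity data until convergence ...\n# star_model = with_new_head(kaggle_model)  # then retrain on Arizona Daily Star data",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "6.4"
},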
{
"text": "The performance of the different models can be seen in instances of vulgarity in the development dataset, hence, Flair automatically outperformed these two. But our GRU-based model easily outperformed the Flair model (51.13 vs. 36.55 F 1 in name-calling, and 48.00 vs. 11.43 F 1 in vulgarity). These results stand in contrast to the Kaggle competition on toxicity detection, where such baselines performed nearly as well as the best (GRU-based) model, and all models achieved high levels of performance (>0.98 area under receiver operating characteristic curve). This suggests that the finer-grained incivility detection formulated by Coe et al. (2014) is more challenging than simple toxicity detection.",
"cite_spans": [
{
"start": 635,
"end": 652,
"text": "Coe et al. (2014)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "6.5"
},
{
"text": "Adding the auxiliary features (upvotes, etc.) to the GRU-based model had virtually zero effect, with slight improvement on the model's precision but a slight drop in recall for name-calling, and absolutely no change for vulgarity. Using the Kaggle dataset to pre-train our GRU-based model before training on the Arizona Daily Star data yielded very high precisions, but at the cost of very low recalls. This suggests that while there is some overlap between the two tasks (toxicity detection and incivility detection), the differences between the tasks make it difficult to directly leverage the data from one task in the other. Since the GRU model with no auxiliary features or pre-training performed best on the development set, we evaluated the performance of this model on the test set. It achieved 48.07 F-measure for namecalling and 52.77 for vulgarity, scores roughly similar to what we had seen on the development data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "6.5"
},
{
"text": "Though we built our models to detect incivilities in newspaper comments, we were interested in how well they would perform in other domains of social media. Karan and\u0160najder (2018) has showed that cross-domain adaptation for detecting abusive language is possible-hence we would like to observe how well our model performs on a set of tweets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incivility Prediction in Twitter",
"sec_num": "7"
},
{
"text": "In June 2018, The United States House Intelligence Committee released a list of 3841 Twitter account names that were human-operated troll accounts associated with Russia's Internet Research Agency. Darren Linvill and Patrick Warren from Clemson University collected all the tweets published since June 2015 from these accounts, cleaned them, and published a set of almost 3 million tweets (Linvill and Warren, 2018) . These tweets are publicly available in FiveThirtyEight's Github page 3 .",
"cite_spans": [
{
"start": 389,
"end": 415,
"text": "(Linvill and Warren, 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Incivility Prediction in Twitter",
"sec_num": "7"
},
{
"text": "As prior research suggest that trolls are a big source of incivility in social media platforms (Fauman, 2008; Hinduja and Patchin, 2008) , we took this opportunity to observe how our model performs on this dataset. We downloaded all the tweet texts and ran our GRU-based model on these texts. Results of this experiment can be found in the au-3 https://github.com/fivethirtyeight/ russian-troll-tweets thor's GitHub repository 4 .",
"cite_spans": [
{
"start": 95,
"end": 109,
"text": "(Fauman, 2008;",
"ref_id": "BIBREF7"
},
{
"start": 110,
"end": 136,
"text": "Hinduja and Patchin, 2008)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Incivility Prediction in Twitter",
"sec_num": "7"
},
{
"text": "Our model identified 13% of all tweets as namecalling and 1.7% as vulgarity. These are roughly similar to the Arizona Daily Star training data, which had 14% name-calling and and 2.8% vulgarity. Though we do not have access to the expert annotators used by Coe et al. (2014) , but we can nonetheless get an approximate measure of our model's performance by sampling predictions from our model and estimating the true label following the Coe et al. (2014) annotation guidelines.",
"cite_spans": [
{
"start": 257,
"end": 274,
"text": "Coe et al. (2014)",
"ref_id": "BIBREF5"
},
{
"start": 437,
"end": 454,
"text": "Coe et al. (2014)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Observations",
"sec_num": "7.1"
},
{
"text": "To measure our model's precision, we took the 250 tweets that our model was most certain contained name-calling, and the 250 tweets that our model was most certain contained vulgarity. We manually reviewed each of these 500 tweets, and found only 7 instances of mistakenly tagged namecalling and 5 instances of mistakenly tagged vulgarity. To get a rough sense of our model's recall, we looked at the other end of the model's prediction spectrum. Based on a manual review of the model's prediction, the model almost never makes a mistake when the prediction score is below 10%; we found only one instance of mistaken name-calling, and no instance of mistaken vulgarity in the bottom 250 tweets that we manually annotated. Table 2 shows some example tweets and the prediction scores from our model. The bottom two examples under name-calling and the bottom one example under vulgarity represent mistakes. In the first name-calling error, the model is confident (probability 0.979) that there is a name-calling, perhaps because the terms GOP and POTUS frequently appear with name-calling in our training data. In the second name-calling error, the model is confident (probability 0.989) that there is a namecalling, likely because of the presence of the word pathetic, which is an aspersion, attacking an idea, not a name-calling, attacking a person. In the vulgarity error, hell has not been used to reference the religious concept of hell, but the word strongly associated with vulgarity in the training data. The table also shows some examples of reasonable successes of the model, for example, handling vulgar abbreviations like BS (short for bullshit) and WTH (short for Who the hell).",
"cite_spans": [],
"ref_spans": [
{
"start": 722,
"end": 729,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Observations",
"sec_num": "7.1"
},
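{
"text": "Operationally, this precision check reduces to ranking tweets by predicted probability and manually reviewing the top k; a sketch follows, where review stands in for the hypothetical manual-annotation step.\n\nimport heapq\n\ndef most_confident(tweets, scores, k=250):\n    '''Return the k (score, tweet) pairs the model is most certain about.'''\n    return heapq.nlargest(k, zip(scores, tweets))\n\n# scores = model.predict(tweet_texts) for one label, e.g. name-calling\n# for score, tweet in most_confident(tweet_texts, scores):\n#     review(tweet)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Observations",
"sec_num": "7.1"
},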
{
"text": "Our work here aims towards keeping a civil environment in public discourse forums and social media platforms. Our goal was to build a filtering system that could work alongside human moderators to reduce their workload, be objective and independent of user reporting, and perform well on previously unseen social media streams. There is much work to do in this area: annotation of a large random sample of the troll tweets can give a more thorough estimate of model performance, and various forms of domain adaptation like selftraining might be applied to improve the performance of the model. We have used word n-grams for features in our baseline models, which can be improved by using features obtained from domainspecific lexicons. There are lexicons of abusive words (Wiegand et al., 2018) -which can be used to create non-sequential models with smaller feature sets. Whether these simpler models are better is yet to be proven -as Agrawal and Awekar (2018) has shown that vocabulary of words used for cyberbullying varies significantly from one social media platform to another. They have also showed that swear words are not necessary to be uncivil in online social media-hence these types of detection techniques should not rely on such hand-crafted features.",
"cite_spans": [
{
"start": 772,
"end": 794,
"text": "(Wiegand et al., 2018)",
"ref_id": "BIBREF28"
},
{
"start": 937,
"end": 962,
"text": "Agrawal and Awekar (2018)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future Works and Conclusion",
"sec_num": "8"
},
{
"text": "One research question that follows this work is to observe whether incivility affects user engagement in social media. Prior research has observed that receiving replies can have effects in a user's engagement Sadeque et al., 2015) , and the language of these replies can also have consequences (Arguello et al., 2006) . Habernal et al. (2018) has showed that 48% of comments that included ad hominem attacks ended the argument -which is indicative of lower engagement by the entire community. Hence, we believe that incivility has a significant influence on user engagement, and in turn may contribute to a community's sustainability. This is yet to be proven, and more work needs to be performed to prove or disprove this hypothesis.",
"cite_spans": [
{
"start": 210,
"end": 231,
"text": "Sadeque et al., 2015)",
"ref_id": "BIBREF23"
},
{
"start": 295,
"end": 318,
"text": "(Arguello et al., 2006)",
"ref_id": "BIBREF2"
},
{
"start": 321,
"end": 343,
"text": "Habernal et al. (2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future Works and Conclusion",
"sec_num": "8"
},
{
"text": "In this paper, we have presented a recurrent neural that can identify incivilities in public discourse. Though trained on a corpus of newspaper comments, we have initial evidence that it also performs well in detecting incivilities in Twitter. We believe our model will be able to serve as a wide-range incivility filter in other social media platforms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Works and Conclusion",
"sec_num": "8"
},
{
"text": "https://www.kaggle.com/c/jigsawtoxic-comment-classification-challenge 2 This type of pooling worked well forDemidov (2018), and also performed well in our preliminary analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "anonymized",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Deep learning for detecting cyberbullying across multiple social media platforms",
"authors": [
{
"first": "Sweta",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Amit",
"middle": [],
"last": "Awekar",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sweta Agrawal and Amit Awekar. 2018. Deep learn- ing for detecting cyberbullying across multiple so- cial media platforms. CoRR, abs/1801.06482.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Contextual string embeddings for sequence labeling",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Akbik",
"suffix": ""
},
{
"first": "Duncan",
"middle": [],
"last": "Blythe",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Vollgraf",
"suffix": ""
}
],
"year": 2018,
"venue": "COLING 2018, 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1638--1649",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In COLING 2018, 27th International Con- ference on Computational Linguistics, pages 1638- 1649.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Talk to me: Foundations for successful individual-group interactions in online communities",
"authors": [
{
"first": "Jaime",
"middle": [],
"last": "Arguello",
"suffix": ""
},
{
"first": "Brian",
"middle": [
"S"
],
"last": "Butler",
"suffix": ""
},
{
"first": "Elisabeth",
"middle": [],
"last": "Joyce",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Kraut",
"suffix": ""
},
{
"first": "Kimberly",
"middle": [
"S"
],
"last": "Ling",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [],
"last": "Ros\u00e9",
"suffix": ""
},
{
"first": "Xiaoqing",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '06",
"volume": "",
"issue": "",
"pages": "959--968",
"other_ids": {
"DOI": [
"10.1145/1124772.1124916"
]
},
"num": null,
"urls": [],
"raw_text": "Jaime Arguello, Brian S. Butler, Elisabeth Joyce, Robert Kraut, Kimberly S. Ling, Carolyn Ros\u00e9, and Xiaoqing Wang. 2006. Talk to me: Foundations for successful individual-group interactions in on- line communities. In Proceedings of the SIGCHI Conference on Human Factors in Computing Sys- tems, CHI '06, pages 959-968, New York, NY, USA. ACM.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Expressively vulgar: The socio-dynamics of vulgarity and its effects on sentiment analysis in social media",
"authors": [
{
"first": "Isabel",
"middle": [],
"last": "Cachola",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Holgate",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Preo\u0163iuc-Pietro",
"suffix": ""
},
{
"first": "Junyi Jessy",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2927--2938",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Isabel Cachola, Eric Holgate, Daniel Preo\u0163iuc-Pietro, and Junyi Jessy Li. 2018. Expressively vulgar: The socio-dynamics of vulgarity and its effects on sen- timent analysis in social media. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2927-2938.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "On the properties of neural machine translation: Encoder-decoder approaches",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merrienboer",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation (SSST-8)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder ap- proaches. In Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation (SSST-8).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Online and uncivil? patterns and determinants of incivility in newspaper website comments",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Coe",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Kenski",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A"
],
"last": "Rains",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Communication",
"volume": "64",
"issue": "4",
"pages": "658--679",
"other_ids": {
"DOI": [
"10.1111/jcom.12104"
]
},
"num": null,
"urls": [],
"raw_text": "Kevin Coe, Kate Kenski, and Stephen A. Rains. 2014. Online and uncivil? patterns and determinants of in- civility in newspaper website comments. Journal of Communication, 64(4):658-679.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Kernel submission for kaggle toxic classification challenge",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Demidov",
"suffix": ""
}
],
"year": 2018,
"venue": "Last Accessed",
"volume": "",
"issue": "",
"pages": "2018--2030",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vladimir Demidov. 2018. Kernel submission for kaggle toxic classification challenge. https: //www.kaggle.com/yekenot/pooled- gru-fasttext? Last Accessed: 2018-12-02.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Cyber bullying: Bullying in the digital age",
"authors": [
{
"first": "Michael",
"middle": [
"A"
],
"last": "Fauman",
"suffix": ""
}
],
"year": 2008,
"venue": "American Journal of Psychiatry",
"volume": "165",
"issue": "6",
"pages": "780--781",
"other_ids": {
"DOI": [
"10.1176/appi.ajp.2008.08020226"
]
},
"num": null,
"urls": [],
"raw_text": "Michael A. Fauman. 2008. Cyber bullying: Bullying in the digital age. American Journal of Psychiatry, 165(6):780-781.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Text mining and twitter to analyze british swearing habits",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Gauthier",
"suffix": ""
},
{
"first": "Adrien",
"middle": [],
"last": "Guille",
"suffix": ""
},
{
"first": "Fabien",
"middle": [],
"last": "Rico",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Deseille",
"suffix": ""
}
],
"year": 2015,
"venue": "Handbook of Twitter for Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Gauthier, Adrien Guille, Fabien Rico, and An- thony Deseille. 2015. Text mining and twitter to ana- lyze british swearing habits. In Handbook of Twitter for Research.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Before name-calling: Dynamics and triggers of ad hominem fallacies in web argumentation",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Habernal",
"suffix": ""
},
{
"first": "Henning",
"middle": [],
"last": "Wachsmuth",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.06613"
]
},
"num": null,
"urls": [],
"raw_text": "Ivan Habernal, Henning Wachsmuth, Iryna Gurevych, and Benno Stein. 2018. Before name-calling: Dy- namics and triggers of ad hominem fallacies in web argumentation. arXiv preprint arXiv:1802.06613.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Cyberbullying: An exploratory analysis of factors related to offending and victimization",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Hinduja",
"suffix": ""
},
{
"first": "Justin",
"middle": [
"W"
],
"last": "Patchin",
"suffix": ""
}
],
"year": 2008,
"venue": "Deviant Behavior",
"volume": "29",
"issue": "2",
"pages": "129--156",
"other_ids": {
"DOI": [
"10.1080/01639620701457816"
]
},
"num": null,
"urls": [],
"raw_text": "Sameer Hinduja and Justin W. Patchin. 2008. Cyber- bullying: An exploratory analysis of factors related to offending and victimization. Deviant Behavior, 29(2):129-156.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Why swear? analyzing and inferring the intentions of vulgar expressions",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Holgate",
"suffix": ""
},
{
"first": "Isabel",
"middle": [],
"last": "Cachola",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Preo\u0163iuc-Pietro",
"suffix": ""
},
{
"first": "Junyi Jessy",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4405--4414",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Holgate, Isabel Cachola, Daniel Preo\u0163iuc-Pietro, and Junyi Jessy Li. 2018. Why swear? analyzing and inferring the intentions of vulgar expressions. In Proceedings of the 2018 Conference on Empiri- cal Methods in Natural Language Processing, pages 4405-4414.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Bag of tricks for efficient text classification",
"authors": [
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. CoRR, abs/1607.01759.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Predicting continued participation in newsgroups",
"authors": [
{
"first": "Elisabeth",
"middle": [],
"last": "Joyce",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"E"
],
"last": "Kraut",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of Computer-Mediated Communication",
"volume": "11",
"issue": "3",
"pages": "723--747",
"other_ids": {
"DOI": [
"10.1111/j.1083-6101.2006.00033.x"
]
},
"num": null,
"urls": [],
"raw_text": "Elisabeth Joyce and Robert E. Kraut. 2006. Predict- ing continued participation in newsgroups. Journal of Computer-Mediated Communication, 11(3):723- 747.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Cross-domain detection of abusive language online",
"authors": [
{
"first": "Mladen",
"middle": [],
"last": "Karan",
"suffix": ""
},
{
"first": "Jan\u0161najder",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd Workshop on Abusive Language Online (ALW2)",
"volume": "",
"issue": "",
"pages": "132--137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mladen Karan and Jan\u0160najder. 2018. Cross-domain detection of abusive language online. In Proceed- ings of the 2nd Workshop on Abusive Language On- line (ALW2), pages 132-137, Brussels, Belgium. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Perceptions of uncivil discourse online: An examination of types and predictors. Communication Research",
"authors": [
{
"first": "Kate",
"middle": [],
"last": "Kenski",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Coe",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A"
],
"last": "Rains",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "0",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1177/0093650217699933"
]
},
"num": null,
"urls": [],
"raw_text": "Kate Kenski, Kevin Coe, and Stephen A. Rains. 2017. Perceptions of uncivil discourse online: An exami- nation of types and predictors. Communication Re- search, 0(0):0093650217699933.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Adam: A method for stochastic optimization. International Conference on Learning Representation",
"authors": [
{
"first": "Diederik",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. International Conference on Learning Representation.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Troll factories: The internet research agency and statesponsored agenda building",
"authors": [
{
"first": "Darren",
"middle": [
"L"
],
"last": "Linvill",
"suffix": ""
},
{
"first": "Patrcik",
"middle": [
"L"
],
"last": "Warren",
"suffix": ""
}
],
"year": 2018,
"venue": "Publications/ Academic-sources/Troll-Factories-The-Internet-\\Research-Agency-and-State-Sponsored\\-Agenda-Building",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Darren L. Linvill and Patrcik L. Warren. 2018. Troll factories: The internet research agency and state- sponsored agenda building. https://www. rcmediafreedom.eu/Publications/ Academic-sources/Troll-Factories- The-Internet-\\Research-Agency-and- State-Sponsored\\-Agenda-Building.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Bullies move beyond the schoolyard: A preliminary look at cyberbullying",
"authors": [
{
"first": "Justin",
"middle": [
"W"
],
"last": "Patchin",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Hinduja",
"suffix": ""
}
],
"year": 2006,
"venue": "Youth Violence and Juvenile Justice",
"volume": "4",
"issue": "2",
"pages": "148--169",
"other_ids": {
"DOI": [
"10.1177/1541204006286288"
]
},
"num": null,
"urls": [],
"raw_text": "Justin W. Patchin and Sameer Hinduja. 2006. Bullies move beyond the schoolyard: A preliminary look at cyberbullying. Youth Violence and Juvenile Justice, 4(2):148-169.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word rep- resentation. In Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Incivility and political identity on the internet: Intergroup factors as predictors of incivility in discussions of news online",
"authors": [
{
"first": "Stephen",
"middle": [
"A"
],
"last": "Rains",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Kenski",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Coe",
"suffix": ""
},
{
"first": "Jake",
"middle": [],
"last": "Harwood",
"suffix": ""
}
],
"year": 2017,
"venue": "Journal of Computer-Mediated Communication",
"volume": "22",
"issue": "4",
"pages": "163--178",
"other_ids": {
"DOI": [
"10.1111/jcc4.12191"
]
},
"num": null,
"urls": [],
"raw_text": "Stephen A. Rains, Kate Kenski, Kevin Coe, and Jake Harwood. 2017. Incivility and political identity on the internet: Intergroup factors as predictors of incivility in discussions of news online. Journal of Computer-Mediated Communication, 22(4):163- 178.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Using machine learning to detect cyberbullying",
"authors": [
{
"first": "Kelly",
"middle": [],
"last": "Reynolds",
"suffix": ""
}
],
"year": 2011,
"venue": "2011 10th International Conference on Machine learning and applications and workshops",
"volume": "2",
"issue": "",
"pages": "241--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kelly Reynolds, April Kontostathis, and Lynne Ed- wards. 2011. Using machine learning to detect cy- berbullying. In 2011 10th International Conference on Machine learning and applications and work- shops, volume 2, pages 241-244. IEEE.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "On obstructing obscenity obfuscation",
"authors": [
{
"first": "Sergio",
"middle": [],
"last": "Rojas-Galeano",
"suffix": ""
}
],
"year": 2017,
"venue": "ACM Trans. Web",
"volume": "11",
"issue": "2",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3032963"
]
},
"num": null,
"urls": [],
"raw_text": "Sergio Rojas-Galeano. 2017. On obstructing obscenity obfuscation. ACM Trans. Web, 11(2):12:1-12:24.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Predicting continued participation in online health forums",
"authors": [
{
"first": "Farig",
"middle": [],
"last": "Sadeque",
"suffix": ""
},
{
"first": "Thamar",
"middle": [],
"last": "Solorio",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Pedersen",
"suffix": ""
},
{
"first": "Prasha",
"middle": [],
"last": "Shrestha",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
}
],
"year": 2015,
"venue": "SIXTH INTERNATIONAL WORKSHOP ON HEALTH TEXT MINING AND INFORMATION ANALYSIS (LOUHI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Farig Sadeque, Thamar Solorio, Ted Pedersen, Prasha Shrestha, and Steven Bethard. 2015. Predict- ing continued participation in online health fo- rums. In SIXTH INTERNATIONAL WORKSHOP ON HEALTH TEXT MINING AND INFORMATION ANALYSIS (LOUHI), page 12.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Civility in america 2018: Civility at work and in our public squares",
"authors": [
{
"first": "Weber",
"middle": [],
"last": "Shandwick",
"suffix": ""
}
],
"year": 2018,
"venue": "Last Accessed",
"volume": "",
"issue": "",
"pages": "2018--2024",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weber Shandwick. 2018. Civility in america 2018: Civility at work and in our public squares. https://www.webershandwick.com/wp- content/uploads/2018/06/Civility- in-America-VII-FINAL.pdf. Last Ac- cessed: 2018-06-11.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Dropout: a simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Machine Learning Research",
"volume": "15",
"issue": "1",
"pages": "1929--1958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Re- search, 15(1):1929-1958.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Cursing in english on twitter",
"authors": [
{
"first": "Wenbo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Krishnaprasad",
"middle": [],
"last": "Thirunarayan",
"suffix": ""
},
{
"first": "Amit",
"middle": [
"P"
],
"last": "Sheth",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing, CSCW '14",
"volume": "",
"issue": "",
"pages": "415--425",
"other_ids": {
"DOI": [
"10.1145/2531602.2531734"
]
},
"num": null,
"urls": [],
"raw_text": "Wenbo Wang, Lu Chen, Krishnaprasad Thirunarayan, and Amit P. Sheth. 2014. Cursing in english on twit- ter. In Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & So- cial Computing, CSCW '14, pages 415-425, New York, NY, USA. ACM.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Hateful symbols or hateful people? predictive features for hate speech detection on twitter",
"authors": [
{
"first": "Zeerak",
"middle": [],
"last": "Waseem",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the NAACL Student Research Workshop",
"volume": "",
"issue": "",
"pages": "88--93",
"other_ids": {
"DOI": [
"10.18653/v1/N16-2013"
]
},
"num": null,
"urls": [],
"raw_text": "Zeerak Waseem and Dirk Hovy. 2016. Hateful sym- bols or hateful people? predictive features for hate speech detection on twitter. In Proceedings of the NAACL Student Research Workshop, pages 88-93, San Diego, California. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Inducing a lexicon of abusive words -a feature-based approach",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Wiegand",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Schmidt",
"suffix": ""
},
{
"first": "Clayton",
"middle": [],
"last": "Greenberg",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1046--1056",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1095"
]
},
"num": null,
"urls": [],
"raw_text": "Michael Wiegand, Josef Ruppenhofer, Anna Schmidt, and Clayton Greenberg. 2018. Inducing a lexicon of abusive words -a feature-based approach. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1046-1056, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Ex machina: Personal attacks seen at scale",
"authors": [
{
"first": "Ellery",
"middle": [],
"last": "Wulczyn",
"suffix": ""
},
{
"first": "Nithum",
"middle": [],
"last": "Thain",
"suffix": ""
},
{
"first": "Lucas",
"middle": [],
"last": "Dixon",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 26th International Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "1391--1399",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellery Wulczyn, Nithum Thain, and Lucas Dixon. 2017. Ex machina: Personal attacks seen at scale. In Pro- ceedings of the 26th International Conference on World Wide Web, pages 1391-1399. International World Wide Web Conferences Steering Committee.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"type_str": "table",
"text": "Flair outperformed both of the other two baselines (36.55 vs. 23.35 and 18.46 F 1 in name-calling. Logistic regression and support vector machine models failed to detect single",
"content": "<table><tr><td/><td/><td/><td colspan=\"3\">Civil/Uncivil</td></tr><tr><td>Sigmoid</td><td/><td/><td/><td/><td/></tr><tr><td>Concatenation</td><td/><td/><td/><td/><td/></tr><tr><td>Auxiliary features</td><td/><td/><td/><td/><td/></tr><tr><td>Max pooling</td><td/><td/><td/><td/><td/></tr><tr><td>Average pooling</td><td/><td/><td/><td/><td/></tr><tr><td>Bidirectional GRU (160)</td><td/><td/><td/><td/><td/></tr><tr><td>Embedding (500)</td><td/><td/><td/><td/><td/></tr><tr><td>Input</td><td>lazy</td><td colspan=\"2\">politicians</td><td>ruined</td><td/><td>this</td><td>country</td></tr><tr><td colspan=\"7\">Figure 1: General structure of the RNN model. Auxiliary features are optional.</td></tr><tr><td/><td/><td/><td>Validation</td><td/><td/></tr><tr><td/><td/><td/><td>Name-calling</td><td/><td/><td>Vulgarity</td></tr><tr><td/><td/><td>Prec</td><td>Rec</td><td>F 1</td><td>Prec</td><td>Rec</td><td>F 1</td></tr><tr><td>Logistic regression</td><td/><td colspan=\"3\">56.13 11.05 18.46</td><td>-</td><td>0.00</td><td>0.00</td></tr><tr><td>Support vector machine</td><td/><td colspan=\"3\">54.10 14.89 23.35</td><td>-</td><td>0.00</td><td>0.00</td></tr><tr><td>Flair</td><td/><td colspan=\"4\">52.17 28.12 36.55 25.00</td><td>7.41</td><td>11.43</td></tr><tr><td>GRU</td><td/><td colspan=\"5\">43.65 61.72 51.13 37.50 66.67 48.00</td></tr><tr><td colspan=\"7\">GRU with auxiliary features 44.38 59.85 50.96 37.50 66.67 48.00</td></tr><tr><td>GRU with pretraining</td><td/><td colspan=\"5\">69.44 19.53 29.79 50.00 11.11 18.03</td></tr><tr><td/><td/><td/><td>Test</td><td/><td/></tr><tr><td/><td/><td/><td>Name-calling</td><td/><td/><td>Vulgarity</td></tr><tr><td/><td/><td>Prec</td><td>Rec</td><td>F 1</td><td>Prec</td><td>Rec</td><td>F 1</td></tr><tr><td>GRU</td><td/><td colspan=\"5\">45.76 50.63 48.07 48.72 57.57 52.77</td></tr></table>",
"num": null,
"html": null
},
"TABREF1": {
"type_str": "table",
"text": "",
"content": "<table/>",
"num": null,
"html": null
},
"TABREF3": {
"type_str": "table",
"text": "Examples of the GRU-based model predictions on the Russian troll Twitter data.",
"content": "<table/>",
"num": null,
"html": null
}
}
}
}
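
Note: the TABREF0 entry above sketches the model of Figure 1 as Input -> Embedding (500) -> Bidirectional GRU (160) -> max/average pooling -> concatenation -> optional auxiliary features -> sigmoid. The snippet below is a minimal reconstruction of that stack, assuming tf.keras; only the layer widths and the pooling/concatenation scheme come from the table, while VOCAB_SIZE, MAX_LEN, and AUX_DIM are hypothetical placeholders rather than values from the paper.

# A minimal sketch of the GRU classifier outlined in Figure 1 (TABREF0),
# assuming tf.keras. The 500-dim embedding, 160-unit bidirectional GRU,
# dual max/average pooling, and sigmoid output follow the figure; the
# constants below are hypothetical, not taken from the paper.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = 20000  # hypothetical vocabulary size
MAX_LEN = 200       # hypothetical maximum comment length, in tokens
AUX_DIM = 16        # hypothetical number of auxiliary features

tokens = layers.Input(shape=(MAX_LEN,), dtype="int32", name="tokens")
aux = layers.Input(shape=(AUX_DIM,), name="auxiliary_features")

# Embedding (500) followed by a bidirectional GRU (160), as in the figure.
x = layers.Embedding(VOCAB_SIZE, 500)(tokens)
x = layers.Bidirectional(layers.GRU(160, return_sequences=True))(x)

# Max pooling and average pooling over the time axis, then concatenation.
pooled = layers.Concatenate()([
    layers.GlobalMaxPooling1D()(x),
    layers.GlobalAveragePooling1D()(x),
])

# Auxiliary features are optional; drop this line for the plain GRU variant.
pooled = layers.Concatenate()([pooled, aux])

# Single sigmoid unit for the binary civil/uncivil decision.
uncivil = layers.Dense(1, activation="sigmoid")(pooled)

model = tf.keras.Model(inputs=[tokens, aux], outputs=uncivil)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

Combining the two pooling paths lets the sigmoid see both the single strongest token signal (useful for an isolated slur or vulgarity) and the averaged tone of the whole comment.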